Selling T-shirts and Time Shares in the Cloud
Daniel Gmach, HP Labs, Palo Alto, CA, USA; Jerry Rolia, HP Labs, Palo Alto, CA, USA; Ludmila Cherkasova, HP Labs, Palo Alto, CA, USA

Abstract: Cloud computing has emerged as a new and alternative approach for providing computing services. Customers acquire and release resources by requesting and returning virtual machines to the cloud. Different service models and pricing schemes are offered by cloud service providers. This can make it difficult for customers to compare cloud services and select an appropriate solution. Cloud Infrastructure-as-a-Service vendors offer a t-shirt approach for Virtual Machines (VMs) on demand: customers can select from a set of fixed-size VMs and vary the number of VMs as their demands change. Private clouds often offer another alternative, called time-sharing, where the capacity of each VM is permitted to change dynamically. With this approach each virtual machine is allocated a dynamic amount of CPU and memory resources over time to better utilize available resources. We present a tool that can help customers make informed decisions about which approach works most efficiently for their workloads in aggregate and for each workload separately. A case study using data from an enterprise customer with 312 workloads demonstrates the use of the tool. It shows that for the given set of workloads the t-shirt model requires almost twice the number of physical servers as the time share model. The costs for such infrastructure must ultimately be passed on to the customer in terms of monetary costs or performance risks. We conclude that private and public clouds should consider offering both resource sharing models to meet the needs of customers.

Keywords: resource sharing, virtualization, virtual machine sizing, cloud service model, data center efficiency

I. INTRODUCTION

Organizations are increasingly obtaining some or all of their IT resources through service providers that host public and private clouds.
When IT resources are obtained as-a-service from such clouds then fees must be charged for the resources. There are two popular models for resource sharing with VMs. We refer to them as the t-shirt and time-share models. The t-shirt model offers a fixed set of static configurations/sizes for VMs. The customer is charged on a per-VM/hr basis with greater charges for VMs with more resources: the larger the VM you need to acquire, the more you pay. Some resource providers only offer a static solution, where a workload is allocated a fixed number of VMs to satisfy the application's demand. With this resource management approach, a workload owner must select the instance size and number of VM instances that best support the service. This static approach may result in significant resource overprovisioning, and hence higher resource costs. A more advanced resource management approach supports a dynamic, time-varying number of VMs such that the customer only pays for what is needed. Customers can manage their costs by acquiring and releasing VMs according to their own specific load patterns. Workloads must be horizontally scalable to exploit the dynamic approach. In effect, customers can change their wardrobe of resources based on need. This paper offers a tool for evaluating the efficiency of the t-shirt sizing model (with both the static and dynamic resource management approaches) and compares it with a finer degree of resource sharing we call the time share model. In the time share model, a VM may use resources up to certain limits according to its load. The workloads are consolidated onto servers based on their workload patterns. We refer to this as time share sizing. However, the support of such an approach in an infrastructure-as-a-service environment requires additional resource management methods.
Automated workload consolidation methods (such as [5] and [24]) are needed to find and assign workloads with complementary demand patterns to servers, improving utilization while reducing the likelihood of service level violations. In addition, in environments with VM migration support, a migration controller can move VMs from overloaded servers to servers that have available capacity to proactively reduce the likelihood of service level violations [25], [26]. Customers need a tool that helps them to make informed decisions about which model is appropriate for their workloads in aggregate and for each workload separately. This tool should determine the quantity of infrastructure, and hence cost, needed to support customer workloads for the t-shirt and time-share models. The t-shirt approach charges customers on a per hour basis for resources they may not fully use. The time-share model takes into account the burstiness in customer workloads. In both cases service providers must either pass on the costs of unused resources to the customers or over-book physical resources to improve utilization. Over-booking resources can lead to service level violations and performance risks for customers. In this work, we offer a tool for determining the quantity of infrastructure needed to satisfy service levels under different cloud service models. We perform a case study that employs the designed tool. Using data from an enterprise customer with 312 workloads we show that for a given workload set a service provider must have significantly more hardware to support customers with t-shirt sizing alone than if a finer degree of resource sharing and workload management is employed. The costs for extra hardware must ultimately be borne by customers monetarily or in terms of performance risks. We do note that some workloads cost less to host with the t-shirt approach and explain why. This paper is organized as follows. Section II presents the t-shirt sizing approach as offered by infrastructure-as-a-service
cloud vendors today and Section III describes time share sizing in more detail. Section IV presents a tool that reports on how many physical servers are needed by a given set of workloads for both approaches. Section V offers a study that compares the hardware requirements of the two approaches, followed by related work in Section VI. Summary and concluding remarks are offered in Section VII along with a description of our next steps.

II. T-SHIRT RESOURCE MANAGEMENT

This section describes t-shirt model infrastructure-as-a-service offerings and resource management methods. We consider t-shirt offerings from Amazon Elastic Compute Cloud and Rackspace. These service providers also offer variants of this model, such as being able to pay up front for some instances to reduce ongoing fees, a spot market to stimulate demand for otherwise unused resources, and the hosting of VMs on dedicated hardware for more predictable performance. We do not consider these alternatives in this paper. Amazon refers to its VMs as instances. In general instances can be acquired and released on demand. Pricing is per hour, whether the full hour is used or not and regardless of CPU utilization. Each instance has a 32 bit or 64 bit address space, an OS and application image, e.g., Linux or Windows, a certain number of virtual CPUs, each with an expected capacity, and a certain amount of memory and disk space. Bandwidth charges are extra. There are micro, small, large, and extra large instances. In addition, there are high memory and high CPU instances. Finally, there are also special instances for high performance computing. Rackspace is similar to Amazon, but it distinguishes its VM sizes by memory size. Seven sizes are offered. The greater the size of memory, the greater the number of virtual CPUs and disk space. The greater the size of memory, the greater the relative weight of the virtual CPUs when competing for physical cores. Pricing is by the hour per VM whether CPUs are utilized or not.
Finally, Rackspace provides an API to change a VM from one size to another, but the application must be stopped and the change takes several minutes. In both cases the price per hour seems roughly proportional to the quantity of resources in the instance. There does not appear to be any disproportionately increasing or decreasing cost for having smaller or larger instances. Both Amazon and Rackspace offer auto-scaling and load balancing services that can be configured to add/remove instances based on a customer application's needs. Neither offers any specific assurances that physical resources will not be overbooked.

A. T-shirt Model Resource Management

Resource management for the t-shirt model is straightforward. Each VM instance and each server has a size in terms of CPU and memory. An additional VM instance can be placed on a server if there are sufficient CPU and memory resources available.

B. T-shirt Costs

The total cost of a resource pool includes the acquisition costs for facilities, physical IT equipment, and software; power costs for operating the physical machines and facilities; and administration costs. In this paper we focus on server costs alone. Acquisition costs are often considered with respect to a three year time horizon and reclaimed according to an assumed rate for each costing interval. In our case study, we consider a three week costing interval and apportion costs based on a three year time horizon for the IT infrastructure. The cost of a resource pool is the product of the number of servers required for the pool and the cost per server. The resource pool has a certain capacity in terms of CPU and memory. For the static t-shirt resource management approach, the cost per hour of a VM corresponds to its portion of the pool's capacity according to its specified capacity attribute, e.g., one half a core and 2 GB of memory. For the dynamic t-shirt resource management approach, fewer servers are required for the pool, so the costs per VM we compute are reduced.
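The placement and charging rules above are simple enough to sketch directly. The following Python sketch uses hypothetical instance sizes and a 24-core, 96 GB server (the configuration used later in the case study); it places fixed-size VMs first-fit and charges each VM its proportional share of server capacity:

```python
# Hypothetical t-shirt sizes as (CPU cores, GB memory); the names and
# values are illustrative, not a provider's actual catalog.
SIZES = {"small": (0.5, 2), "medium": (1, 4), "large": (2, 8), "xlarge": (4, 16)}

def place_first_fit(vms, server_cpu=24, server_mem=96):
    """T-shirt placement: a VM can go on a server whenever enough CPU
    and memory remain. Returns the number of servers used."""
    servers = []  # remaining [cpu, mem] per server
    for size in vms:
        cpu, mem = SIZES[size]
        for free in servers:
            if free[0] >= cpu and free[1] >= mem:
                free[0] -= cpu
                free[1] -= mem
                break
        else:
            servers.append([server_cpu - cpu, server_mem - mem])
    return len(servers)

def vm_hourly_cost(size, server_cost_per_hour, server_cpu=24):
    """Static t-shirt charging: each VM pays its proportional share of
    server capacity, so a VM twice the size costs twice as much."""
    return server_cost_per_hour * SIZES[size][0] / server_cpu
```

For example, 48 small VMs (24 cores, 96 GB in total) exactly fill one such server, while a 7th extra-large VM forces a second server.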
III. TIME SHARE MODEL

This section describes resource management for the time share model. The approach we describe is an integrated closed loop approach with management actions that occur at multiple time scales. It is more complex than resource management for the t-shirt model both in terms of resource allocation and in terms of apportioning costs. However, the approach is automated and the additional complexity might be justified by significant capacity savings.

A. Time Share Resource Management

For time share resource management, we employ a consolidation engine presented in [5] that minimizes the number of servers needed to host all workloads while satisfying their time varying resource demand requirements. The workload consolidation engine uses historical CPU and memory resource demand traces and arranges VM workloads within a server pool, i.e., a set of physical servers located in close physical proximity, based on resource requests. The consolidation engine operates in configurable planning intervals, e.g., every four hours, and dynamically initiates workload (virtual machine) migrations at the start of each interval, consolidating servers such that their predicted resource demands will be met. In each interval it will consolidate workloads onto a minimum set of servers and power down unused servers to save power and associated cooling costs. A more detailed description can be found in [5], [6], [8].

B. Time Share Costs

When multiple virtual machines with different resource requirements are deployed on the servers in a resource pool, one of the challenging questions to address is how to design a chargeback model and apportion the cost of the server resources among the hosted workloads.
A common sense approach for establishing the cost of providing a service is to extend the usage-based model, i.e., from virtualization layer monitoring information one can derive average resource usage per workload for a costing interval, e.g., three weeks, and then the physical server costs can be split up respectively. Currently, many service providers employ such simplified usage based accounting models [1], [4]. However, the relationship between workloads and costs is actually more complex. Some workloads may have a large peak to mean ratio for demands
upon server resources. We refer to such workloads as bursty. For example, a workload may have a peak CPU demand of 5 CPU cores but a mean demand of 0.5 of a CPU core. Such ratios may have an impact on shared resource pools. A pool that aims to consistently satisfy the demands of bursty workloads will have to limit the number of workloads assigned to each server. This affects the number of servers needed for a resource pool. Further, server resources are rarely fully utilized even when workloads are tightly consolidated and all servers are needed. Even though many services can be assigned to a server, some portion of the server's resources may remain unused over time. The amount of unused resources may depend on workload placement/consolidation choices and these choices may change frequently. The costs of such unallocated resources must be apportioned across workloads, but it should be done in a fair and predictable way. Gaining visibility and accurately determining the cost of shared resources used by collocated services is essential for implementing a proper chargeback approach in cloud environments. In this paper, for the time share approach we use a model introduced and analyzed in [15]. The proposed pool-burst cost model supports a reliable and predictable cost apportioning for workloads in the shared compute environment, and can also be useful in support of more elaborate pricing models. This section briefly describes the proposed model. Below, we define three categories of resource usage that can be tracked separately for each server resource, e.g., CPU and memory, for each costing interval. To simplify the notation, the equations that we present consider only one server resource at a time, e.g., CPU, memory, or software licensing costs, for one costing interval. Then the corresponding costs over all resources are summed up to give a total cost for all server resources for each costing interval.
Final costs are the sum of costs for all resources, e.g., CPU, memory, or software licensing costs, over all costing intervals. The three categories of resource usage are:

- Direct resource consumption by a workload: the notation d_{s,w} represents the average utilization of a server s by a workload w. The values of d_{s,w} are in [0,1]. Note that d_{s,w} is 0 if a workload w does not use a server s.
- Burstiness for a workload and for a server: the notation b_{s,w} represents the difference between the peak utilization of a server s by workload w and its average utilization represented by d_{s,w}. The values of b_{s,w} are in [0,1]. Additionally, b_s represents the difference between the peak utilization u_s^b of a server s and its average utilization u_s^d. The values of b_s are in [0,1].
- Unallocated resource for a server: the notation a_s represents unallocated (unused) server capacity; it is defined as the difference between 1 and the peak utilization u_s^b of server s. The values of a_s are in [0,1]. The notation a refers to unallocated resources.

To take into account burstiness and unallocated resources we partition the server cost C_s into three components C_s^d, C_s^b, and C_s^a, where C_s^d corresponds to the cost associated with the average utilization of server s, and C_s^b and C_s^a correspond to the difference between the peak and average utilization of the resource, and the difference between 100% and the peak utilization of the resource, respectively:

C_s^d = C_s * u_s^d,  C_s^b = C_s * (u_s^b - u_s^d),  C_s^a = C_s * (1 - u_s^b),

with the average server utilization u_s^d <= 1 and the peak utilization u_s^d <= u_s^b <= 1. To provide a more robust cost estimate, we use the following pool-burst model [15] that attributes burstiness costs and unallocated resources using measures for the S servers in the resource pool instead of the individual servers.
The server pool-burst cost share of a workload w is defined as:

C_w^{pool-burst-temp} = Σ_{s'=1}^{S} [ C_{s'}^d * d_{s',w} / u_{s'}^d + C_{s'}^b * (ε + b_{s',w}) / Σ_{w'=1}^{W} (ε + b_{s',w'}) ]   (1)

C_w^{pool-burst} = C_w^{pool-burst-temp} + Σ_{s'=1}^{S} C_{s'}^a * C_w^{pool-burst-temp} / Σ_{w'=1}^{W} C_{w'}^{pool-burst-temp}   (2)

Here W is the number of workloads and ε is a small value that prevents the denominators from being 0. This approach attributes all the unallocated resources in the server pool in a fair way among all the workloads and makes the costs less dependent on the workload placement decisions.

IV. A TOOL FOR INFRASTRUCTURE SIZING UNDER T-SHIRTS AND TIME SHARES

We now introduce a tool that estimates how many physical servers are needed by a given set of workloads under the t-shirt and time-share models and reports the costs per workload. The tool has the following components:

- Historical workload trace DB: stores traces of CPU, memory, and other demands for customer workloads. Typically, it includes months of measurement traces at 5-minute granularity. Both models use this component.
- Workload trace simulator: replays workload traces while managing simulated workload assignments to hosts.
- Auto-scaler: dynamically varies the number of VMs associated with a workload based on the t-shirt size and the workload's time varying load. The number of VMs is altered hourly.
- Consolidation engine: uses historical trace information to automatically assign workloads to simulated hosts to minimize the number of hosts while reducing the likelihood of service level violations.
- Migration controller: uses recent information from simulated hosts to migrate workloads between hosts to further minimize service level violations.
- Cost attribution engine: partitions the server costs to workloads based on their use of resources.

This section presents how the tool is used to evaluate the impact of the models on customer workloads. Physical server configurations are chosen such that the memory requirements of VMs can be satisfied for both models.
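The pool-burst apportioning described above can be sketched in a few lines of Python. This is only an illustration of the model from [15], not the cost attribution engine itself; the d and b matrices are assumed inputs derived from monitoring, and the server peak is approximated as the sum of the average and burst terms:

```python
EPS = 1e-6  # prevents division by zero when no workload on a server is bursty

def pool_burst_costs(C, d, b):
    """Sketch of pool-burst cost apportioning (after [15]).
    C[s]: cost of server s; d[s][w]: average utilization of server s by
    workload w; b[s][w]: peak-minus-average utilization of s caused by w.
    Returns one cost per workload; the unallocated capacity of the whole
    pool is spread in proportion to each workload's direct+burst cost."""
    S, W = len(C), len(d[0])
    temp = [0.0] * W
    unallocated = 0.0
    for s in range(S):
        u_d = sum(d[s])                      # average server utilization
        u_b = u_d + sum(b[s])                # approximate peak utilization
        burst_norm = sum(EPS + b[s][w] for w in range(W))
        unallocated += C[s] * (1.0 - u_b)    # C_s^a component
        for w in range(W):
            temp[w] += C[s] * d[s][w]        # direct share: C_s^d * d/u_d
            temp[w] += C[s] * (u_b - u_d) * (EPS + b[s][w]) / burst_norm
    total = sum(temp)
    return [t + unallocated * t / total for t in temp]
```

A useful sanity property: the per-workload costs always sum to the total pool cost, and of two workloads with equal averages, the burstier one pays more.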
For the t-shirt model, we use the tool in the following way. For each VM size offered by a service provider, we employ our Auto-scaler to determine the time varying number of VMs to be associated with the workload. The peak number of VMs needed is also reported. The Auto-scaler assumes the workload is horizontally scalable and does not add any extra workload overhead. The number of physical servers is determined based on the peak number of VMs over the simulated time. Additionally, we permit a user specified limit on the number of VM instances for a workload. For each workload, the tool finds the smallest VM size that satisfies its requirements. If a workload is too large then it will use as many of the largest VM size as is necessary. For the time-share model, we employ the workload simulator, consolidation engine, and the migration controller. The tool walks forward over the historical trace data, consolidating and migrating workloads as necessary. We report the maximum number of physical servers needed to support the workloads. For both approaches we employ our new cost attribution engine. It partitions the cost of physical servers (which additionally may include shared costs such as power, management, license costs, etc.) across the workloads. For the t-shirt approach, the cost is partitioned across VMs in a straightforward manner: a VM that is twice the size of another will have twice the hourly cost. For the time-share approach we utilize a novel cost apportioning model [15]: it takes into account the impact of a workload's burstiness on the number of physical machines needed and computes robust (per workload) costs regardless of workload placement decisions.

V. CASE STUDY

The goal of the study is to demonstrate the tool usage for estimating and comparing the quantity of infrastructure and per-workload costs incurred for both the t-shirt and time share models. With such information a customer can decide which model is most appropriate for a set of workloads and/or for each workload separately.
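A minimal sketch of the Auto-scaler's per-workload logic, assuming hourly peak demand traces and a single hypothetical instance size given in CPU shares and GB of memory:

```python
from math import ceil

def autoscale(hourly_shares, hourly_mem, inst_shares, inst_mem):
    """Sketch of the Auto-scaler for one horizontally scalable workload:
    each hour, run just enough fixed-size instances to cover that hour's
    peak demand; billing is per instance-hour.
    hourly_*: per-hour peak demands; inst_*: one VM's capacity."""
    counts = [max(ceil(c / inst_shares), ceil(m / inst_mem))
              for c, m in zip(hourly_shares, hourly_mem)]
    # per-hour counts, peak (drives the server estimate), billed instance-hours
    return counts, max(counts), sum(counts)
```

For instance, a workload with hourly peaks of 300, 100, and 500 shares on an instance of 110 shares and 2 GB would run 3, 1, and 5 instances, paying for 9 instance-hours while its peak of 5 determines the infrastructure needed.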
We consider the following cases:

- Static t-shirt sizing with a small set of fixed size VMs. The number of VMs per workload is chosen to satisfy the workload's peak demand.
- Dynamic t-shirt sizing. The number of fixed size VMs can vary over time on an hourly basis.
- Static time share sizing with a fixed number of VMs that have time varying capacity. Only one consolidation is performed for the costing interval of three weeks. It aims to minimize the number of servers while minimizing service level violations.
- Dynamic time share sizing with a fixed number of VMs that have time varying capacity. Consolidation with automated workload migration is performed every 4 hours within the costing interval to reduce the peak number of servers while minimizing service level violations and VM migrations.

The case study makes several simplifying assumptions. We assume that all workload demands are known in advance. For t-shirt sizing this lets us know exactly how many instances are needed. For time share sizing this yields consolidation workload placements that are known to avoid service level violations. We note that in real systems workload prediction methods [7] and additional unallocated server headroom [18] are often used to overcome unexpected rises in demand. Furthermore, we assume that the workloads we consider can be partitioned in a way that they are supported by multiple VM instances. We do not consider additional resource demands introduced by the operating system or the virtualization layers, or by inter-instance communication when partitioning a workload across many VM instances. These overheads might lead to underestimated costs for the t-shirt model compared to the costs of the time share model. However, these assumptions do not affect the overall conclusions of the paper. Section V.A characterizes the workloads in the study. The considered servers and resource pools are described in Section V.B.
Sections V.C, D, and E characterize infrastructure sizes for the static t-shirt, dynamic t-shirt, and time share cases. Per workload costs are considered in Section V.F.

A. Workload Characterization

To evaluate the effectiveness of our tool for comparing the cloud service models, we obtained three weeks of data for 312 workloads from one HP customer data center. Each workload was hosted on its own server, so we use resource demand measurements for the server to characterize its workload's demand trace. Each trace describes resource usage, e.g., processor and memory demand, as measured every 5 minutes. We define CPU capacity and CPU demand in units of CPU shares. A CPU share denotes one percent of utilization of a processor with a clock rate of 1 GHz. A scale factor adjusts for the capacity between servers with different processor speeds or architectures. For example, the servers with 2.2 GHz CPUs in our case study were assigned 220 shares per CPU. We note that the scaling factors are only approximate; the calculation of more precise scale factors is beyond the scope of this paper. The memory usage is measured in GB.

Figure 1: Workload memory usage

Figures 1 and 2 summarize the memory and CPU usage for the workloads under study. Figure 1 shows the mean and maximum memory usage for each workload. We order workloads by their peak resource demand for presentation purposes. Figure 2 shows the mean and maximum CPU usage of the corresponding workloads. The figures show that workloads are very bursty and that some workloads require much more resources than others. Different studies have reported a similar burstiness in other enterprise workloads [7]. Such bursty workloads are good candidates for consolidation.
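Why bursty workloads consolidate well can be seen in a greatly simplified, greedy illustration of trace-based consolidation (the engine of [5] is considerably more sophisticated): workloads are placed first-fit decreasing by peak, and a server accepts a workload only if the summed demand trace stays within capacity at every time step, so complementary bursts can share a server:

```python
def consolidate(traces, capacity=1.0):
    """Greedy illustration of trace-based consolidation: place workloads
    first-fit decreasing by peak demand; a server accepts a workload only
    if the element-wise sum of demand traces stays within capacity at
    every time step. Complementary bursts can thus share a server.
    traces: equal-length demand traces, normalized to server capacity."""
    servers = []  # each entry is the summed trace of its workloads
    for trace in sorted(traces, key=max, reverse=True):
        for agg in servers:
            combined = [a + t for a, t in zip(agg, trace)]
            if max(combined) <= capacity:
                agg[:] = combined
                break
        else:
            servers.append(list(trace))
    return len(servers)
```

Two anti-correlated bursty workloads (peaks at different times) fit on one server, while the same two workloads with synchronized peaks need two; peak-based sizing cannot tell these cases apart.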
Figure 2: Workload CPU demand

B. Resource Pool and Server Configurations

In the study, we consider the following resource pool configuration: each server consists of 24 x 2.2-GHz processor cores, 96 GB of memory, and two dual 1 Gb/s Ethernet network interface cards for network traffic and virtualization management traffic, respectively. The acquisition cost for each of these servers was estimated as $11,450. Using linear depreciation and assuming a lifetime of three years, the cost for three weeks is $220 per server. We now explain how the workloads are associated with instances for both the t-shirt and time share cases. For the t-shirt sizing cases we chose four instance configurations: small, medium, large, and extra large. A small instance has 2048 MB of main memory and half a physical CPU assigned, which corresponds to 110 CPU shares. The medium, large, and extra large instances have twice, four times, and eight times the amount of resources assigned, respectively. Table 1 reflects the CPU and memory resources of the considered instance sizes in the case study. We note that the considered physical servers are able to run 48 small, 24 medium, 12 large, or 6 extra-large instances.

Table 1: CPU and memory resources of VM instances.

             Small   Medium   Large   Extra Large
Phys. CPUs   0.5     1        2       4
CPU shares   110     220      440     880
Memory       2 GB    4 GB     8 GB    16 GB

For the time share cases each workload is associated with a single instance. The instance is given a time varying capacity that is sufficient to handle its CPU and memory load. As described in Section III.A, time share resource management achieves this by consolidating workloads with complementary workload patterns so that each VM has access to capacity when it needs it.

C. Static T-shirt Sizing

For the static t-shirt case, the number of required instances for each workload is chosen based on the workload's peak CPU and memory demands over the three weeks.
For example, a workload with peak resource demands of 300 CPU shares and 4 GB of memory requires three small instances. In general, the last instance is never fully utilized. The larger the instance size, the more likely it is that overall resource utilization will be low. The number of hosts that is needed to support the t-shirt instances is estimated by summing up the CPU and memory demands of the instances, e.g., as reported in Table 1, and dividing these by the capacity of the hosts. This may be slightly optimistic: for some combinations of VM instance sizes it may be difficult to fully pack servers.

Figure 3: Number of physical servers required for static t-shirt sizing

Figure 3 shows the number of servers required for the static t-shirt case when different instance sizes are used. The number of VM instances for each case can be estimated by multiplying the number of servers with the number of instances that fit per server. The static approach does not really share resources and represents a worst case scenario, as there is no benefit achieved from statistical multiplexing. The larger the instance size that is used, the more wasted resources there are in the resource pool. For example, many workloads exhibit small capacity requirements and do not even need one full extra large instance. Those workloads that do need more than one VM instance do not need all the resources in the last VM instance.

D. Dynamic T-shirt Sizing

For the dynamic t-shirt cases we adjust the number of instances on an hourly basis. Computing resources are often rented by the hour in public clouds. That means when renting a VM the consumer often pays for the hour, whether the VM is used for the full hour or not. The number of required instances for each workload per hour is determined by the peak resource demand of the workload in that hour, as explained for the static t-shirt model. We assume that additional instances can be fired up and removed dynamically during run-time.
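The instance- and host-count arithmetic used in the static case (and reused per hour in the dynamic case) can be sketched as follows, with the case-study capacities as defaults:

```python
from math import ceil

# Case-study capacities: a small instance (110 shares, 2 GB) and a host
# with 24 x 220 = 5280 CPU shares and 96 GB of memory.
def instances_needed(peak_shares, peak_mem, inst_shares=110, inst_mem=2):
    """Static t-shirt sizing: enough fixed-size instances to cover the
    workload's peak CPU and memory demands."""
    return max(ceil(peak_shares / inst_shares), ceil(peak_mem / inst_mem))

def hosts_needed(n_instances, inst_shares=110, inst_mem=2,
                 host_shares=5280, host_mem=96):
    """Optimistic host estimate: sum the instances' demands and divide by
    host capacity (perfect packing is assumed, as noted in the text)."""
    return max(ceil(n_instances * inst_shares / host_shares),
               ceil(n_instances * inst_mem / host_mem))
```

With these numbers, the example workload above (300 shares, 4 GB) needs three small instances, and 48 small instances exactly fill one host.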
Figure 4 shows the aggregate time varying peak and average CPU demand per hour in shares for the considered workloads. In general, the difference between the peak and mean is 44% of the average demand. This large difference suggests there may be gains to be made by sharing resources. Figure 5 shows the numbers of servers needed per hour when the workloads under study exploit different VM instance sizes. The difference between peak and mean for all cases is smallest for the extra large instance case (1.4%) and greatest for the small instance case (31.9%). Not surprisingly, with the small t-shirt size the required physical infrastructure is better able to follow the load than with larger instance sizes. Figure 6 illustrates the average CPU utilization across the allocated VM instances. We can explicitly see how low the resource utilization under the large and extra large VM sizes is. We note that the memory utilization looks similar to the CPU utilization. There are a lot of wasted resources that come from using the large/extra large instances for all the workloads.

Figure 4: Total workload CPU demand for the three weeks
Figure 5: Number of active servers for the three weeks
Figure 6: Average server utilization for the three weeks

The utilization of larger instances is apparently quite low for the considered workloads. One may argue with the single size approach across all the workloads which we demonstrated in the above evaluations. The criticism may come from two simplifying assumptions. First, many large enterprise workloads can't realistically have a large number of small instances without incurring too much overhead. Moreover, there are workloads that are not horizontally scalable at all, i.e., it might be impossible to serve the workload's demands by using multiple small instances, and the workload should be allocated large or extra large VMs. Second, some small workloads should be hosted in the smaller sized VMs. To deal with these issues we make the assumption that, if possible, a workload should have 4 or fewer instances. We now introduce a mixed case where each workload uses the smallest instance size such that it does not have more than four instances. However, several workloads still need 5 extra large instances. Figure 7 gives the mapping of workloads to instance sizes and the number of required peak instances.

Figure 7: VM instances per workload for the mixed scenario

The difference in mean and peak number of required servers for the cases over the three week period is shown in Figure 8. The smaller the instance size, the bigger the difference between peak and mean. The mixed case has a slightly lower peak and average number of servers than the large instance case.
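The mixed-case sizing rule can be sketched as follows; the instance capacities are the case-study values, and the four-instance cap is the assumption stated above:

```python
from math import ceil

# Illustrative sizes as (name, CPU shares, GB memory), smallest first.
MIXED_SIZES = [("small", 110, 2), ("medium", 220, 4),
               ("large", 440, 8), ("xlarge", 880, 16)]

def pick_mixed_size(peak_shares, peak_mem, max_instances=4):
    """Mixed-case rule: choose the smallest instance size that covers the
    workload's peak with at most max_instances instances; if even the
    largest size needs more, allocate as many extra-large as required."""
    for name, shares, mem in MIXED_SIZES:
        n = max(ceil(peak_shares / shares), ceil(peak_mem / mem))
        if n <= max_instances:
            return name, n
    name, shares, mem = MIXED_SIZES[-1]
    return name, max(ceil(peak_shares / shares), ceil(peak_mem / mem))
```

A 300-share, 4 GB workload gets three small instances, while a workload peaking at 4000 shares exceeds the cap at every size and falls back to five extra-large instances.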
Though we assume it is infeasible for all the workloads to use small instances, the allocation of resources associated with small instances yields the lowest peak and average number of servers.

Figure 8: Required physical servers in the dynamic t-shirt sizing approach

Finally, comparing Figures 3 and 8 we see that the dynamic mixed case uses almost half of the resources of the static small instance case. The static allocation of VMs is not an effective way to improve resource utilization and is not considered further.

E. Time Share

For the time share cases each workload is associated with a single VM instance. The instance has a time varying CPU capacity and memory sufficient to handle its load. For the static time share case workloads are consolidated onto a number of servers once for the three week interval such that the likelihood of service violations is negligible. For the dynamic time share case, the consolidation is repeated every 4 hours; again, such that the likelihood of service violations is negligible and such that we minimize VM migrations. The number of hosts needed for the time share approach is the maximum number of hosts needed for the consolidation exercise in any 4 hour interval. Figure 9 introduces results for the two time share cases: the static time share case and the dynamic time share case (denoted as time share 4 hours) where workloads are re-consolidated every 4 hours. The results of Figure 9 show that the static time share case has a lower peak, and slightly higher average, number of servers than the small instance case. It uses at peak 11% fewer servers than the small instance case and 40% fewer servers than the more realistic mixed case. This is because the time share model shares server resources at the finest level of granularity. The peak and average for the time share case are the same because consolidation and workload placement only take place once for the three week period. The time share 4 hour case (dynamic time share) uses 60% fewer servers than the mixed case. It also uses significantly fewer resources than the small instance case because workload migration enables additional opportunities to exploit the benefits of statistical multiplexing.

Figure 9: Capacity required for the time-share approach

Finally, we note that these consolidation results are optimistic since they use historic workloads over the past time periods for consolidation. Accommodating unexpected future rises and variability in demands typically requires 10-20% more capacity as shown in [8]. This applies to both the t-shirt sizing and time share sizing resource management methods, but likely more so to the time share sizing method. Given that the time share case and time share 4 hour case require 40% and 60% fewer servers than the t-shirt sizing mixed case, respectively, we do not expect this assumption to affect the comparison.

F. Costs per Workload

We now consider apportioning the costs of the servers among the workloads. For each case, the cost of the servers is computed as the product of the peak number of servers needed over the three week period and the cost per server. For the t-shirt cases the cost of an instance per hour is described in Section II.B. For the time share cases we employ the cost apportioning method described in Section III.B. Figure 10 shows the costs associated with different workloads for the various cases. As expected, the mixed t-shirt case has costs that are similar to small instance costs for small workloads and large instance costs for large workloads. The time share case corresponds to the scenario where workloads are only consolidated once for the three weeks.
Even so, it generally offers lower costs than the other cases for most workloads. When compared to the more realistic mixed case, it almost always yields lower costs. Some workloads have t-shirt sizing costs similar to time share costs. This is the case for those workloads whose loads are well matched to the offered instance sizes, including CPU and memory. For the time share case, some workloads do incur higher costs. These correspond to bursty workloads that cause the need for more servers than their utilization alone implies. We argue that for the time share case a burstiness cost is fair, since these workloads cause the need for a larger resource pool of servers.

[Figure 10: Costs per workload — costs over 3 weeks in $ for the small, mixed, medium, extra, and time share cases]

VI. RELATED WORK

The Grid computing community has paid special attention to developing a variety of useful service provisioning models and pricing schemes [19], [20], [21]. In [23], the authors present a general pricing scheme for Grid computing services that supports arbitrary pricing functions and allows a uniform expression of prices. Our work focuses on defining and comparing costs for hosting a service using different resource provisioning models in public and private clouds. Cloud computing has emerged as a new and alternative approach for providing computing services. There is a variety of service models and pricing schemes employed by cloud service providers. This can make it difficult for customers to compare cloud computing services and select an appropriate solution. In [22], the authors propose a decision making model for selecting the software licensing model that is most appropriate for the customer. They compare the software-as-a-service and perpetual software licensing models. Key cost factors for the software-as-a-service model are the initial costs for hardware and software and maintenance costs. Cloud computing providers can use our models to determine the costs incurred by workloads.
In [27], the authors present a metric for sizing in public clouds. The metric relates the execution time of parameter sweep applications to the number and size of virtual machines used. This helps to optimize cost and performance when outsourcing the application to a public cloud.

Recently, there have been efforts to analyze cloud service provider models and perform comparison studies. Garfinkel [9] and Walker [10] devote their attention to the performance analysis of Amazon AWS [1]. Other industry reports [11], [12] present a comparison of cloud service features across different companies such as Amazon and Rackspace. An interesting new approach is proposed in [13], where the authors design a set of benchmarks that can be used for performance assessment of different cloud services in order to make a more informed choice for hosting a particular customer application. A related problem is considered in [14], where the authors analyze the properties of applications and the accompanying costs to help decide whether or not to move an application into the cloud. The authors show that specific application characteristics such as workload intensity, growth rate, storage capacity, and software licensing cost create a complex, intricate cost function influencing the decision outcome. One of the important decision factors is an application's horizontal scalability. A complementary perspective on the related problem is offered in
[15], where a novel cost model and related analysis is proposed to enable comparison of the costs for virtualized vs. non-virtualized applications. Ward [16] highlights the differences between public (AWS) and private cloud offerings (Ubuntu Enterprise Cloud). An interesting related discussion is offered in [17], where the authors discuss QoS issues and advanced resource allocation mechanisms (resembling the time-share approach that we promote in this paper) proposed as efficient management solutions in private clouds. The authors discuss what it would take to make these mechanisms available in public clouds.

VII. SUMMARY AND CONCLUSIONS

In this work, we offer an automated tool that helps customers evaluate and compare different service models from the perspective of the infrastructure that is needed to support the customer workloads and their QoS. Many service (infrastructure) providers support t-shirt sizing, i.e., a certain number of fixed capacity VM configurations. Historically, enterprise computing environments have exploited a finer degree of resource sharing for CPU and memory that we refer to as time share sizing. For some enterprise workloads, time share resource management may be more efficient. The case study performed with our new tool shows that 40% fewer servers are needed to support the considered enterprise workloads compared to the dynamic mixed t-shirt sizing model. This comparison assumes a management method that re-consolidates workloads every three weeks. Typically, such a process could be governed manually by operations staff. We also show that if an automated consolidation process is employed every 4 hours, then 60% fewer servers would be required compared to the dynamic mixed t-shirt sizing case. Though this would be a more complex management solution, it would be automated and, once mature, would lead to lower costs overall. The results we present assume that cloud providers do not over-subscribe the servers.
If they do, then customer workloads will not always get the resources they expect and may incur performance degradations and lower QoS. Our next steps include enhancing the proposed tool with additional service models for comparison. Additional monetary costs, e.g., power, cooling, and management, should be taken into account along with different charging models. More attention should be paid to the overheads associated with virtualizing and parallelizing workloads.

REFERENCES

[1] Amazon Web Services.
[2] IBM Tivoli Usage and Accounting Manager Virtualization Edition.
[3] HP Insight Dynamics: HP Virtual Server Environment.
[4] VMware: Virtualize Your Business Infrastructure.
[5] J. Rolia, L. Cherkasova, M. Arlitt, and A. Andrzejak: A Capacity Management Service for Resource Pools. In Proc. of the 5th Intl. Workshop on Software and Performance (WOSP), Palma, Illes Balears, Spain, 2005.
[6] L. Cherkasova and J. Rolia: R-Opus: A Composite Framework for Application Performability and QoS in Shared Resource Pools. In Proc. of the Int. Conf. on Dependable Systems and Networks (DSN), Philadelphia, USA, 2006.
[7] D. Gmach, J. Rolia, L. Cherkasova, and A. Kemper: Workload Analysis and Demand Prediction of Enterprise Data Center Applications. In Proc. of the 2007 IEEE International Symposium on Workload Characterization (IISWC), Boston, MA, USA, September 27-29, 2007.
[8] D. Gmach, J. Rolia, L. Cherkasova, and A. Kemper: Resource Pool Management: Reactive versus Proactive or Let's Be Friends. Computer Networks, 2009.
[9] S. Garfinkel: An Evaluation of Amazon's Grid Computing Services: EC2, S3 and SQS. Harvard University, Tech. Rep. TR-08-07.
[10] E. Walker: Benchmarking Amazon EC2 for High-Performance Scientific Computing. USENIX ;login:, 2008.
[11] Rackspace Cloud Servers versus Amazon EC2: Performance Analysis.
[12] VPS Performance Comparison.
[13] A. Li, X. Yang, S. Kandula, and M. Zhang: CloudCmp: Shopping for a Cloud Made Easy. In Proc. of HotCloud'10, 2010.
[14] B. C. Tak, B. Urgaonkar, and A.
Sivasubramaniam: To Move or Not to Move: The Economics of Cloud Computing. HotCloud, 2011.
[15] D. Gmach, J. Rolia, and L. Cherkasova: Resource and Virtualization Costs up in the Cloud: Models and Design Choices. In Proc. of DSN 2011.
[16] J. S. Ward: A Performance Comparison of Clouds: Amazon EC2 and Ubuntu Enterprise Cloud. SICSA DemoFEST, 2009.
[17] A. Gulati, G. Shanmuganathan, A. Holler, and I. Ahmad: Cloud Scale Resource Management: Challenges and Techniques. HotCloud, 2011.
[18] D. Gmach, J. Rolia, L. Cherkasova, G. Belrose, T. Turicchi, and A. Kemper: An Integrated Approach to Resource Pool Management: Policies, Efficiency and Quality Metrics. In Proc. of the 38th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), Anchorage, Alaska, USA, 2008.
[19] J. Altmann and D. J. Veit: Grid Economics and Business Models. LNCS 4685, 2007.
[20] R. Buyya, D. Abramson, H. Stockinger, and J. Giddy: Economic Models for Resource Management and Scheduling in Grid Computing. Concurrency and Computation: Practice and Experience, 2002.
[21] E. Afgan and P. Bangalore: Computation Cost in Grid Computing Environments. In Proc. of the 1st Int. Workshop on the Economics of Software and Computation, 2007.
[22] J. Altmann and J. Rohitratana: Software Resource Management Considering the Interrelation between Explicit Cost, Energy Consumption, and Implicit Cost: A Decision Support Model for IT Managers. TEMEP Discussion Papers 2936, 2009.
[23] A. Caracas and J. Altmann: A Pricing Information Service for Grid Computing. In Proc. of the 5th Int. Workshop on Middleware for Grid Computing, 2007.
[24] HP Capacity Advisor Consolidation Software. hp.com/products/servers/management/capad/index.html
[25] Y. Chen, D. Gmach, C. Hyser, Z. Wang, C. Bash, C. Hoover, and S. Singhal: Integrated Management of Application Performance, Power and Cooling in Data Centers. In Proc. of the 12th IEEE/IFIP Network Operations and Management Symposium, 2010.
[26] X. Zhu, D. Young, B. Watson, Z. Wang, J. Rolia, S. Singhal, B. McKee, C. Hyser, D.
Gmach, R. Gardner, T. Christian, and L. Cherkasova: 1000 Islands: An Integrated Approach to Resource Management for Virtualized Data Centers. Cluster Computing: The Journal of Networks, Software Tools and Applications, 2009.
[27] J. L. Vazquez-Poletti, G. Barderas, I. M. Llorente, and P. Romero: A Model for Efficient Onboard Actualization of an Instrumental Cyclogram for the Mars MetNet Mission on a Public Cloud Infrastructure. In Proc. of PARA 2010, Reykjavik, Iceland, June 2010. Lecture Notes in Computer Science, Volume 7133, pages 33-42, 2012.