What are you paying for? Performance benchmarking for Infrastructure-as-a-Service offerings


2011 IEEE 4th International Conference on Cloud Computing

What are you paying for? Performance benchmarking for Infrastructure-as-a-Service offerings

Alexander Lenk, Michael Menzel, Johannes Lipsky, Stefan Tai, FZI Forschungszentrum Informatik, Haid-und-Neu-Str., Karlsruhe, Germany
Philipp Offermann, Deutsche Telekom Laboratories, Ernst-Reuter-Platz, Berlin, Germany, philipp.offermann@telekom.de

Abstract—As part of the Cloud Computing stack, Infrastructure-as-a-Service (IaaS) offerings are becoming more and more widespread. They allow users to deploy and run virtual machines in remote data centers (the Cloud), paying by use. However, the performance specifications for virtual machines given by providers are not coherent and sometimes not even sufficient to predict the actual performance of a deployment. To measure hardware performance, hardware benchmarks are available. For measuring the performance of virtual machines in IaaS offerings, these benchmarks are not sufficient, as they do not take into account the IaaS provisioning model, where the host hardware is unknown and can change. We have designed a new performance measuring method for Infrastructure-as-a-Service offerings. The method takes into account the type of service running in a virtual machine. By using the method, the actual performance of the virtual machines running a specific IaaS service is measured. This measurement can be used to better compare prices between different providers, but also to evaluate the performance actually available on a certain IaaS platform. We have evaluated the method on several Cloud infrastructure offerings of the Amazon EC2, Flexiscale and Rackspace platforms to validate its utility. We show that already on EC2 the performance indicators given by the provider, namely Amazon's Elastic Compute Unit, are not sufficient to determine the actual performance of a virtual machine. I.
INTRODUCTION With the advent of Cloud Computing, more and more providers offer Infrastructure-as-a-Service (IaaS) platforms. IaaS allows the customer to create a virtual machine (VM), i.e. a bootable file that can be executed by a virtual machine hypervisor emulating a physical computer, upload it to the provider and run the VM in the provider's data center [1]. The customer is unaware of the underlying hardware and pays per used resources. Consumption can include any kind of resource, e.g. storage used, runtime of a VM or bandwidth usage. The runtime costs often depend on the number of virtual CPUs presented to the VM by the hypervisor, the speed of the CPUs and the random access memory (RAM) available to the VM. For choosing the best IaaS provider for running a VM, the user needs a good understanding of what performance he is getting when he starts a virtual machine. Based on this performance ratio and the user's use case he can decide whether it is worth using an IaaS offering or not. However, calculating cost/performance ratios requires precise performance indicators to assess the performance of the same VM deployed on different platforms. Still, due to techniques like resource overcommitment and varying underlying hardware, the indicators given by providers are not necessarily decisive. For physical machines, widely accepted benchmarks are available. For VMs running on IaaS platforms, it is not common to use these benchmarks to compare different Cloud offerings. Therefore the research question of this paper is: Are the performance indicators presented by IaaS providers sufficient to compare the actual performance available to VMs on the platform? And if not, can standard benchmarks be used to make different IaaS Cloud offerings more comparable and to assist the user in deciding which Cloud offering to use? The remainder of the paper is structured as follows: First we introduce the state of the art for IaaS and benchmarking, focusing on IaaS benchmarks.
Then we present our method for benchmarking IaaS offerings and provide an example of our benchmark, including the surprising results. Finally, we explain our lessons learned and present a conclusion. II. RELATED WORK Several benchmarking approaches exist that aim specifically at measuring the performance available in the Cloud. The Bitsource online magazine [2] conducted a benchmarking of the infrastructure services offered by Rackspace and Amazon, focusing on CPU and storage performance only. A method to support the benchmarking process is not presented in this work, and the results were gathered with a single benchmark each for CPU and storage performance. Furthermore, Walker [3] and Jackson [4] have benchmarked the high-performance computing capabilities of the Amazon Cloud, but do not provide a method either. Schadt et al. [5] neither provide a method nor benchmark providers other than Amazon, but they present a very detailed analysis of the Amazon EC2 cloud and come to results similar to those of our benchmark runs in section IV.

However, approaches that describe a method for benchmarking the Cloud have been developed recently. Binning et al. [6] propose a Cloud benchmark that measures cloud services in terms of the metrics scalability, cost, peak-load handling and fault tolerance. The benchmarking approach focuses on Web-based applications, similar to the TPC-W benchmark [7], [8], and considers different consistency settings on the database layer of a Web-based application. Nevertheless, the proposed benchmarking approach does not reflect aspects such as varying hardware architectures, server locations or time of execution. Sobel et al. [9] developed CloudStone, a Cloud benchmark that measures on the application layer. Benchmarking results gathered in a cost-per-user-per-month metric show different costs per served user when running an application on different Amazon EC2 virtual machine types and with different software configurations. The benchmarking results also show the maximum number of users that can be served with one setup consisting of a virtual machine type and software configuration. This cloud benchmarking approach, however, only reflects the overall transaction performance of one specific application run in the Cloud. Making predictions about the performance offered by an infrastructure service is not possible, as this approach observes an application that comprises several application servers and a database server and thus restricts implications to the application level. CloudHarmony [10] offers a service that can be used to view detailed benchmarking results of several cloud providers. The offering can be used in conjunction with our method (e.g. to have reference values) but does not replace it. It does not give the user control over the benchmarks' parameters and the time they run, which is necessary in many use cases.
Besides benchmarking approaches customized for Cloud systems, there are benchmarks that focus on measuring the effects of server consolidation, and therefore in particular the performance of virtual machines and the underlying physical host. Makhija et al. [11] propose VMmark, a benchmark that determines the capacity of a physical host running a VMware hypervisor by using multiple sets of six different, separated virtual machines that stress the system resources with workloads along diverse dimensions including CPU load, RAM load and I/O load, i.e. storage and network traffic. The final score considers the number of virtual machine sets, each consisting of a mail, java, standby, web, database, and file server, and the benchmarking results of each virtual machine in all sets. Similarly, Casazza et al. [12] define vConsolidate, a benchmark that also aims at measuring the performance of different workloads on a consolidated server. Mei et al. [13] propose another approach that focuses on network traffic as the constraining dimension in server consolidation and uses web server workloads only. All virtualization-related benchmarking approaches offer comprehensive methods to measure the performance of consolidated Cloud servers, but they are restricted to a specific hypervisor and aim at measuring the capability of a physical host to provide virtual resources instead of measuring the performance of a Cloud infrastructure service. III. METHOD A growing number of providers and offerings open the chance to leverage the virtually infinite computing resources of the Cloud. However, offerings for virtual machine resources vary in price and performance attributes. Commonly, providers specify their offerings with different performance indicators measured by themselves. The given indicators often lack comparability and do not provide comprehensive information about the overall performance of a virtual machine offering or about its performance regarding specific tasks.
Transforming the performance characteristics of multiple offerings into comparable metrics requires measuring benchmarking results with identical benchmarks regarding chosen dimensions of interest, such as decryption processing speed in the case of a decryption application or average RAM read performance in the case of an in-memory database system. Therefore, we propose a method to develop a custom benchmark suite that consists of multiple benchmarks to gain reliable and comparable results. The selection of benchmarks is heavily influenced by the related Cloud software project that requires comparable results to identify a suitable IaaS provider to deploy the application to. Thus, building a custom tailored benchmark suite to compare Cloud offerings of virtual machines is a process that should be part of every project. To support building a custom tailored benchmark suite from preparation to execution, we introduce a method that contains a process structuring all involved steps. Figure 1 depicts the top-level process in Business Process Modeling Notation (BPMN), describing the general steps to develop, execute and evaluate a custom tailored Cloud benchmark suite, and the roles involved in the process [14]. In the following sections we describe the essential steps of this process. A. Use Case and Requirement Definition At the beginning of the process the use case of the benchmark has to be defined. The stakeholders involved in the project must define technical and non-technical requirements for the use case. Generally, requirements differ whenever our method is used. On the one hand, technical requirements, such as the operating system or the system architecture of choice (32bit or 64bit), narrow the extent of considerable Cloud offerings. On the other hand, the location of the datacenter or the price are also very important factors to determine which offering is used. B.
Provider Selection After the use case and its requirements have been identified, all virtual machine Cloud offerings that will be part of the roadmap for the custom tailored benchmark suite can be selected in this step. First, available Cloud offerings have to be listed by choosing offerings that are to be compared. Then, by subsequently applying each requirement as a filter, the initial number of Cloud offerings is narrowed down to a subset of offerings that satisfy all requirements, e.g. only offerings holding data centers in Europe that support 64bit architectures.
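The filtering step just described can be sketched in a few lines. The offering records and the requirement predicates below are illustrative assumptions for the sketch, not data from our experiment:

```python
# Sketch: narrowing Cloud offerings down by subsequently applying
# each requirement as a filter. Attributes are illustrative assumptions.
offerings = [
    {"provider": "A", "arch": 64, "dc_region": "EU", "price_cent_h": 17},
    {"provider": "B", "arch": 32, "dc_region": "EU", "price_cent_h": 8},
    {"provider": "C", "arch": 64, "dc_region": "US", "price_cent_h": 34},
]

# Each requirement is a predicate; surviving offerings must satisfy all of them.
requirements = [
    lambda o: o["arch"] == 64,          # 64bit architecture required
    lambda o: o["dc_region"] == "EU",   # datacenter located in Europe
]

candidates = offerings
for req in requirements:                # apply each filter subsequently
    candidates = [o for o in candidates if req(o)]

print([o["provider"] for o in candidates])  # → ['A']
```

Any offering failing a single requirement drops out early, which keeps the later benchmarking effort focused on a small candidate subset.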

Fig. 1. Process that describes how to implement a custom tailored benchmark that can be used for testing different Cloud providers.

C. Benchmark Suite Definition A benchmark suite is a selection of benchmarks, their parameters and their execution order, chosen according to the Use Case and Requirement Definition (see III-A). The suite is defined by the benchmark professional, since he knows which benchmarks are suitable to test the relevant aspects (like floating point operations, high RAM throughput, etc.) of the test case. D. Benchmark Suite Implementation The previously defined benchmark suite has to be implemented in this step. The benchmark suite is implemented as a batch process that executes the standard benchmarks the same way every time. Instead of implementing everything from scratch, frameworks like Phoronix [15] can be used and only have to be configured. E. Roadmap Definition Besides the benchmark suite, which ensures that the version, attributes and execution order of each single run are the same, a roadmap must be defined that plans the executions over time. How exactly a roadmap needs to be defined depends again on the use case defined in the first step of the process. Roadmaps typically result in repetitive benchmark suite executions to gather results over e.g. one day or a whole year. A roadmap can also define whether a benchmark suite should be executed at the same time on different virtual machines in parallel (bulk start) or subsequently (e.g. hourly start). F. Roadmap Implementation We propose several best practices for the implementation of a roadmap: 1) Common Storage: To be as flexible as possible, the VMs should store as little data as possible. We recommend having a common storage that holds all the scripts, programs and data necessary for the benchmark suite executions. Depending on the circumstances, a common storage could be a cloud storage like Amazon S3, but also a single server using NFS or FTP.
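A roadmap of the kind described above can be captured as plain data and later replayed by a scheduler. The following sketch is a minimal illustration only; the field names and the two helper functions are our own assumptions, not a format prescribed by the method:

```python
# Sketch: a roadmap as a list of scheduled benchmark suite executions.
# Field names ("start", "count") are illustrative assumptions.
from datetime import datetime, timedelta

def hourly_starts(first, hours):
    """Subsequent starts: one VM launched every hour."""
    return [{"start": first + timedelta(hours=h), "count": 1} for h in range(hours)]

def bulk_start(at, count):
    """Bulk start: many VMs launched at the same time."""
    return [{"start": at, "count": count}]

# One day of hourly starts plus one bulk start of 20 machines.
roadmap = (hourly_starts(datetime(2010, 8, 1, 0, 0), hours=24)
           + bulk_start(datetime(2010, 8, 2, 12, 0), count=20))

print(sum(entry["count"] for entry in roadmap))  # → 44 VM launches in total
```

Keeping the roadmap as data separates the planning decision (when and how many VMs) from the execution machinery that actually launches them.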
2) Basic Setup of the Prototype VM: A basic VM of the target size and OS is used to set up the individual benchmarks, the startup script and the environment variable script. After installing all benchmarks on the prototype VM, we propose to use the standard packaging tool of the OS to build packages of all benchmarks that install all dependencies automatically and unattended. While the environment variable script ensures that the VM has access to all necessary keys and sets up the OS's environment variables, the startup script starts the unattended installation, executes the benchmark suite, uploads the results and shuts down the machine afterwards. The following steps should be regarded in the startup script: get installation packages and additional scripts from the common storage; execute the environment setup script; install all downloaded installation packages; execute the benchmark suite; bundle all results of the benchmarks and name the results according to the end time; upload the results to the common storage; shut down the VM. Note: if different VM types (e.g. 32bit and 64bit architecture) are tested, this step has to be taken for each type. 3) Upload: The scripts and packages from the prototype VM are uploaded to the common storage and the VM is shut down afterwards. 4) Prepare Templates: On each target cloud, a machine template is prepared to properly execute the benchmark suite. The machine first gets the startup script from the common storage and executes it. This requires that the VM has access to the common storage in this step. Since it is not secure to store passwords or keys in a virtual machine template, we propose to pass the access credentials to the virtual machine at boot time. Some vendors like Amazon offer a mechanism called user data to pass data to the VM's OS. If this mechanism is not present, temporary credentials can be stored inside the template. After preparing the template it can be saved using the cloud vendor's standard mechanism.
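The seven startup script steps could be organized as in the following structural sketch. Each step body is a placeholder (a real script would shell out to download, install and upload commands); only the step ordering is taken from the text:

```python
# Structural sketch of the startup script run inside each benchmarking VM.
# Step bodies are placeholders; only the ordering follows the described method.
import time

def make_startup(log):
    def step(name):
        # A real implementation would execute shell commands here.
        log.append(name)

    def startup():
        step("get installation packages and scripts from common storage")
        step("execute environment setup script")
        step("install downloaded packages")
        step("execute benchmark suite")
        step("bundle results named " + time.strftime("results-%Y%m%d-%H%M%S"))
        step("upload results to common storage")
        step("shut down VM")

    return startup

executed = []
make_startup(executed)()
print(len(executed))  # → 7 (the steps, executed in order)
```

Because the VM shuts itself down as the final step, the controlling scheduler never needs to track running machines, only to start them.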

G. Execution For the execution of a roadmap we propose to have a dedicated machine running a script that takes care of starting the scheduled benchmark suite executions according to the roadmap definition. Since the benchmarking VMs always terminate themselves, the script only has to take care of starting the VMs. If the tested Cloud vendor does not destroy all the data after stopping a machine, the script also has to ensure that new machines are cloned only from the plain template and are destroyed after uploading the results to the common storage. H. Result Evaluation Since the result data of each test is very different, a data extraction mechanism has to be developed. If an existing benchmark suite is used, this suite often already collects the data in a standardized way. For further analysis the data has to be parsed and aggregated into a single dataset (e.g. a comma-separated values (CSV) text file). IV. EXAMPLE As stated in the introduction, it is most valuable for an IaaS consumer to find the provider with the best cost per performance ratio for running a VM. In the following experiment, we investigated the performance of different IaaS providers within a given price range. In order to come to an informed decision, different factors have to be taken into consideration. Not only the actual performance and pricing model of the provider are important, but also the individual requirements of the given use case. Since pricing models and information about performance differ heavily among providers and cannot easily be matched to the client's use case, we propose a defined process that enables a sound comparison of providers. In the following sections we describe our instantiation of the method introduced in section III. A. Use Case and Requirement Definition In our use case we decided to compare only VMs running the operating system Ubuntu 10.04, but we were interested in the 32bit version as well as the 64bit version.
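The aggregation into a single CSV dataset described in step III-H can be sketched as follows. The per-run result records and column names are illustrative assumptions, since the concrete fields depend on the benchmark suite chosen:

```python
# Sketch: aggregating heterogeneous per-VM results into one CSV dataset.
# Record structure and column names are illustrative assumptions.
import csv, io

runs = [
    {"vm_id": "vm-1", "provider": "A", "openssl": 6.45, "opstone": 1.2},
    {"vm_id": "vm-2", "provider": "A", "openssl": 6.70, "opstone": 0.8},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["vm_id", "provider", "openssl", "opstone"])
writer.writeheader()
writer.writerows(runs)

print(buf.getvalue().splitlines()[0])  # → vm_id,provider,openssl,opstone
```

Once all runs share one flat table like this, the standard statistical metrics (average, median, standard deviation, min/max) can be computed per provider and benchmark with any spreadsheet or analysis tool.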
Additionally, our main concern was the CPU performance and the price of those virtual machines. Since it is not reasonable to compare the performance of 32bit and 64bit architectures directly, we decided to conduct scheduled runs for two different categories of VMs: cheap instances and expensive instances. Cheap instances resemble VMs that run on a 32bit architecture and have access to moderate hardware resources, namely one or two virtual cores and at least 1 GB of RAM. As the name suggests, cheap instances should be low-priced and therefore cost no more than 20 cent per hour. Expensive instances run on a 64bit architecture, offer at least two virtual cores and approximately 8 GB of RAM, and may cost 21 to 50 cent per hour. B. Provider Selection With regard to the constraints stated in the requirement definition, we decided to use three different IaaS providers: Amazon EC2 [16], Flexiscale [17] and Rackspace [18]. The cheap instances category holds VMs from Amazon and Flexiscale. Rackspace was not taken into consideration here because only 64bit instances are available from this provider. Amazon EC2 offers its VMs in distinct classes ranging from small over medium to large instances. Small and medium instances both use 32bit operating systems and cost 8.5 cent (small) or 17 cent (medium) per hour. At Amazon, every instance of a specific class has access to a fixed amount of processing power measured in EC2 Compute Units (ECU) and a certain amount of RAM. According to the Amazon website, one such unit equals the CPU capacity of an Opteron or Xeon processor of 2007 with approximately 1.0 to 1.2 GHz [16]. Small instances have 1 ECU and 1700 MB of RAM, while medium instances have access to 5 ECU and also 1700 MB of RAM. Flexiscale offers its customers a variety of different VMs, but specific constraints apply. One VM can have up to eight virtual cores and up to 8 GB of RAM, but not all possible combinations are allowed.
An instance with one core, for example, can be started with a maximum of 2 GB RAM, whereas an instance with 8 GB RAM has to have at least 4 virtual cores. Pricing is based on units that have to be bought in advance, and each VM costs a certain amount of units per hour based on the resources that it consumes. The price of one unit also depends on the amount of units that are bought. In contrast to EC2, additional charges for hard disk space and IO operations per hour apply. For the cheap instance category we decided to run instances that have one virtual core and 2 GB of RAM. The cost for such a VM was between 7 and 15 cent per hour, with the actual price strongly depending on the amount of IO operations performed. The expensive instance category contains VMs from all three providers. On Amazon, the large instance class with 2 virtual cores, 4 ECU and 7.5 GB RAM was chosen. Large instances cost 34 cent per hour. All other classes at Amazon that offer 64bit instances have more resources and exceed the price limit of 50 cent/h. At Flexiscale we chose to use instances with 4 cores and 8 GB of RAM. The price for such an instance varied for our test setup between 28 and 40 cent an hour, again strongly dependent on the amount of IO operations performed per hour. Rackspace, finally, makes no distinction between the amount of computing power or cores that one particular instance can use. Pricing is applied by the amount of RAM that will be available for a VM. We also chose virtual machines with 8 GB of RAM here; the price at Rackspace for such an instance was 48 cent per hour. C. Benchmark Suite Definition For our use case, our benchmark professional selected the benchmarks in figure 2. Since all these benchmarks are available within the Phoronix Test Suite, we decided to use

Fig. 2. List of standard benchmarks with standard parameters used in our example.

Fig. 3. Evaluated results of our benchmark suite runs on several cheap Cloud providers.

Phoronix to build our benchmark suite. We used the standard configuration parameters of the test suite. D. Roadmap Definition While our main goal was to compare the performance between different IaaS providers, we also wanted to see whether a provider would deliver constant performance for a given type of VM over time. We believe that multiple factors like different underlying hardware, the utilization of the hosting system and the overall utilization of the datacenter affect the performance of one specific VM. In our experiment, we focused attention on the following factors: datacenter location, long term utilization and short term utilization. Since Amazon gives the user control over all these parameters, we decided to use Amazon to perform these tests. 1) Datacenter Location: We decided to test the Amazon datacenters US-East-1a and US-East-1c. 2) Long Term Utilization: Long term utilization describes the fact that the overall utilization of a datacenter could change over longer time periods and will therefore slowly influence the performance of a single VM. To cover this aspect, we scheduled runs in August 2010 and November 2010. 3) Short Term Utilization: Short term utilization characterizes fluctuations in datacenter utilization during the day. To test whether the actual time of day influences the performance, we either started VMs continuously each hour or started 20 machines at once in a bulk start. E. Benchmark Suite Implementation For each prototype VM (32bit and 64bit) we used a minimal installation of the target operating system and installed Phoronix. It is very important to keep track of all installed packages in order for this installation routine to be automated. F. Roadmap Implementation For storing our results we selected Amazon's Simple Storage Service (S3).
It provides high availability and is easy to use. According to the method, we equipped each VM with a script that gets the common storage access credentials and downloads all scripts and packages. In the case of EC2 we used the user data mechanism to provide the credentials to the machine; in all other cases we stored them in the template. The startup script executes the unattended installation, and the environment script defines the name of the S3 bucket where the results are uploaded to. Afterwards, Phoronix is executed, the results are uploaded, and the VM finally shuts down. This procedure ensures re-usability of prototype VMs and reproducible conditions for each VM. G. Execution A virtual machine in our local network was running to start pre-configured machines on the different clouds according to our roadmap. Since our roadmap was not very complex, we used a simple Python script for the entire process. H. Result Evaluation The database for our result evaluation is formed by the XML files generated by Phoronix, which creates one file per VM and benchmark suite run. Every XML file holds all benchmark results for a benchmark suite execution on a VM, with every benchmark represented by its average result as a floating point value calculated from three runs. To aggregate all XML result files and accordingly all benchmarking results of all VMs, we developed a software tool in Java that consolidates all XML files into a single CSV file. Further analysis of the collected data is based on the CSV files converted with our tool.

Fig. 4. Evaluated results of our benchmark suite runs on several expensive Cloud providers.

Aggregated benchmarking results of the cheap, low-performance and expensive, high-performance categories of Cloud IaaS offerings are depicted in Figures 3 and 4 respectively. For result aggregation the statistical standard

metrics average, median, standard deviation, and min- and max-values have been used. Since not every benchmark exploits multi-core CPU features, we focused on the OpenSSL (OSSL) benchmark results to compare different IaaS offerings. Results gathered with the OpenSSL benchmark showed an obviously suspicious distribution that indicated different performance behaviors for the systems under test. For a detailed analysis of the results we plotted the histogram chart depicted in Figure 5.

Fig. 5. Histogram of the OpenSSL results. Area under the curve equals 1.

The histogram chart reveals that the results peak at two different values, one of them at 6.45. A deeper analysis of the environmental conditions disproved suspected strong influences of datacenter location and utilization. In particular, variation of datacenter location, start type and execution time of the benchmarks showed no influence on the benchmarking results. Further investigations incorporating a second benchmark led to the results depicted as a bar chart in Figure 6.

Fig. 6. Subset of the OpenSSL and Opstone Vector Scalar Product results, US-East-1a, August, bulk start. Runs with high Opstone values have low OpenSSL values and the other way around.

The bar chart shows benchmarking results gathered with the OpenSSL and Opstone Vector Scalar Product benchmarks run in the Amazon region US-East-1a in August of 2010 using the bulk start type. A comparison of the benchmarking results unfolds an opposed performance behavior of the tested machines regarding each benchmark. To be specific, machines performing well in the OpenSSL benchmark disappoint in the Opstone benchmark and vice versa. These findings required a detailed inspection of the hardware characteristics of the virtual machines.

Fig. 7. CPU performance according to the Passmark CPU Mark [19], CPU type used, and the odds of getting this CPU type when starting an Amazon EC2 small, medium or large instance. The sum of each column adds to 100%.

Fig. 8. CPU performance according to the Passmark CPU Mark [19], CPU type used, and the odds of getting this CPU type when starting a Flexiscale (1 Core), Flexiscale (2 Core) or Rackspace (2 Core) instance. The sum of each column adds to 100%.

Mapping the hardware characteristics to the gathered benchmarking results pointed out a correlation between benchmark and CPU type and manifested a strong relation between CPU type and benchmarking results. The correlation shows that machines running on an Intel Xeon E5430 CPU perform better in most of the benchmarks, while machines running on an AMD Opteron 2218 HE CPU perform better only in a few of the benchmarks. This is an interesting result, in particular as both CPU types are purchased in the same IaaS offering for the same price. A variation of CPU type and, hence, of the performance of a machine can be found for most of the offerings of all providers. However, more constant performance results can be observed for the expensive offerings. Figures 7 and 8 show the odds of getting a certain hardware type when purchasing a machine from different providers. V. LESSONS LEARNED AND DISCUSSION We showed in the last sections that the method can be used to compare IaaS offerings. By implementing this method we learned several things that we present in this section. First we describe our experiences and lessons learned, afterwards we discuss the whole approach critically. A. Lessons Learned One conclusion we can draw from our results is that it is not advisable to completely trust the performance information given out by the vendor itself. The vendor's interest does not lie in providing exact information about the performance of VMs or in giving out guarantees on the quality or robustness of the computing power provided.
If you want real performance data, you have to collect the information and evaluate the platform yourself. We were also lucky that Phoronix collected data like the CPU type and other system-related data we probably would not have collected had we written the benchmark suite from scratch. In general, we can recommend collecting as much data as possible, since the expensive part is the runtime and not the storage.

By doing so you will not only get the data you are actually interested in but may also come across unexpected performance indicators. Therefore we suggest taking a close look at the data you collected. As seen on Amazon, performance values can vary from one booked VM to another by the sole fact that they run on different platforms, which is, in contrast to different datacenters or times of the day, a factor that a client cannot influence himself. Furthermore, time-related dimensions like time of day or month did not seem to have any effect on the actual performance of one VM. B. Discussion While in-depth benchmarking of cloud computing providers will undoubtedly provide valuable information, there are some downsides. Performing benchmarks consumes time and VMs and will therefore produce considerable costs (in our case about $800). Those costs depend not only on the amount of benchmarks performed but also on their type. Benchmarking IO performance will generally take longer than benchmarking CPU or RAM. Time and cost for the process rise with the number of aspects of a system that are tested, the amount of benchmarks performed for each aspect and also the number of VMs used during the process. An important question that arises from these constraints is: even if one can find the perfect provider and service class for a given problem by extensive benchmarking, will it be cheaper and faster to do so instead of just choosing a provider and launching the job? Additionally, benchmarking will delay the time when you can actually start using the service. One could on the one hand argue that benchmarking can be performed in advance, but on the other hand collected results will also have a decay time. Today the cloud landscape is changing at a steady pace; new providers emerge and existing ones alter their services, upgrade hardware or even introduce new pricing models and service types.
Amazon for instance is constantly adjusting its own platform and pricing models. For example, new service types that were introduced within the last year are spot and micro instances. Portability of the obtained results is also an important issue. Benchmarks are, for the most part, synthetic. They measure distinct features of a platform but do not necessarily reflect the performance or behavior of a real world application. One can assume that the more synthetic the problem itself is, the more accurate benchmarking predictions for it will be. Number crunching, for example for de- or encryption of large datasets, is a problem closely related to hardware performance that can, on top, be distributed on a large number of machines that do not have to be in contact with each other. In contrast, a mail server would not show these characteristics, because it strongly depends on user interaction and load and has to communicate with other systems. In our test setup, network capabilities were not of interest and therefore not measured. This of course depended strongly on our use case. Looking back at the mail server example, networking performance would of course have been an important performance factor. One can also argue that for a mail server the differences in CPU performance of Intel and AMD based host systems might have been almost meaningless. VI. CONCLUSION The research question of this paper was whether the performance indicators presented by IaaS providers are sufficient to calculate a cost/performance ratio and, if not, which parameters should be presented. Our measurements show that the performance indicators currently presented are not sufficient. For example, whenever starting a new instance on Amazon EC2, even though the same performance class is specified, the VM shows a different performance behavior, depending on the underlying CPU architecture.
However, the allocation of CPU architecture to VM is outside the control of the user, making it impossible to know the performance of a VM beforehand. Due to that behavior, it is not only impossible to reliably calculate a cost/performance ratio; there is even uncertainty regarding the available performance when restarting a VM. Based on the experiences from our measurements, we recommend creating standardized cloud performance measurement VMs that are used to measure and compare performance between different providers. Because performance requirements depend on the specific application, we recommend providing a set of VMs configured to measure performance for different generalized application requirements. The performance measurements should include RAM read/write, computation, and HDD read/write in varying combinations according to the generalized application requirements. We see two directions for further research. For the cost/performance ratio, it is important to also compare costs between different IaaS providers. Due to different pricing schemes, this comparison is currently very difficult. If a set of standardized performance measurement VMs is defined, the cost of running such a VM for a certain amount of time can be used to compare the costs incurred by a specific application. Further research might follow this line to make costs more comparable. Also, the performance of the virtual machine only measures the production part of an IaaS platform. To evaluate the actual usage experience, the delivery performance from a user to the VM also has to be taken into account. Further research might measure the network performance and compare network properties when accessing different IaaS platforms.

Cloud Analysis: Performance Benchmarks of Linux & Windows Environments Cloud Analysis: Performance Benchmarks of Linux & Windows Environments Benchmarking comparable offerings from HOSTING, Amazon EC2, Rackspace Public Cloud By Cloud Spectator July 2013 Table of Contents

More information

Monitoring Databases on VMware

Monitoring Databases on VMware Monitoring Databases on VMware Ensure Optimum Performance with the Correct Metrics By Dean Richards, Manager, Sales Engineering Confio Software 4772 Walnut Street, Suite 100 Boulder, CO 80301 www.confio.com

More information

Dimension Data Enabling the Journey to the Cloud

Dimension Data Enabling the Journey to the Cloud Dimension Data Enabling the Journey to the Cloud Grant Morgan General Manager: Cloud 14 August 2013 Client adoption: What our clients were telling us The move to cloud services is a journey over time and

More information

System Requirements Table of contents

System Requirements Table of contents Table of contents 1 Introduction... 2 2 Knoa Agent... 2 2.1 System Requirements...2 2.2 Environment Requirements...4 3 Knoa Server Architecture...4 3.1 Knoa Server Components... 4 3.2 Server Hardware Setup...5

More information

Cloud Vendor Benchmark 2015. Price & Performance Comparison Among 15 Top IaaS Providers Part 1: Pricing. April 2015 (UPDATED)

Cloud Vendor Benchmark 2015. Price & Performance Comparison Among 15 Top IaaS Providers Part 1: Pricing. April 2015 (UPDATED) Cloud Vendor Benchmark 2015 Price & Performance Comparison Among 15 Top IaaS Providers Part 1: Pricing April 2015 (UPDATED) Table of Contents Executive Summary 3 Estimating Cloud Spending 3 About the Pricing

More information

Performance of the Cloud-Based Commodity Cluster. School of Computer Science and Engineering, International University, Hochiminh City 70000, Vietnam

Performance of the Cloud-Based Commodity Cluster. School of Computer Science and Engineering, International University, Hochiminh City 70000, Vietnam Computer Technology and Application 4 (2013) 532-537 D DAVID PUBLISHING Performance of the Cloud-Based Commodity Cluster Van-Hau Pham, Duc-Cuong Nguyen and Tien-Dung Nguyen School of Computer Science and

More information

Best Practices for Optimizing Your Linux VPS and Cloud Server Infrastructure

Best Practices for Optimizing Your Linux VPS and Cloud Server Infrastructure Best Practices for Optimizing Your Linux VPS and Cloud Server Infrastructure Q1 2012 Maximizing Revenue per Server with Parallels Containers for Linux www.parallels.com Table of Contents Overview... 3

More information

Intel Cloud Builder Guide to Cloud Design and Deployment on Intel Xeon Processor-based Platforms

Intel Cloud Builder Guide to Cloud Design and Deployment on Intel Xeon Processor-based Platforms Intel Cloud Builder Guide to Cloud Design and Deployment on Intel Xeon Processor-based Platforms Enomaly Elastic Computing Platform, * Service Provider Edition Executive Summary Intel Cloud Builder Guide

More information

Permanent Link: http://espace.library.curtin.edu.au/r?func=dbin-jump-full&local_base=gen01-era02&object_id=154091

Permanent Link: http://espace.library.curtin.edu.au/r?func=dbin-jump-full&local_base=gen01-era02&object_id=154091 Citation: Alhamad, Mohammed and Dillon, Tharam S. and Wu, Chen and Chang, Elizabeth. 2010. Response time for cloud computing providers, in Kotsis, G. and Taniar, D. and Pardede, E. and Saleh, I. and Khalil,

More information

Basics in Energy Information (& Communication) Systems Virtualization / Virtual Machines

Basics in Energy Information (& Communication) Systems Virtualization / Virtual Machines Basics in Energy Information (& Communication) Systems Virtualization / Virtual Machines Dr. Johann Pohany, Virtualization Virtualization deals with extending or replacing an existing interface so as to

More information

Last time. Data Center as a Computer. Today. Data Center Construction (and management)

Last time. Data Center as a Computer. Today. Data Center Construction (and management) Last time Data Center Construction (and management) Johan Tordsson Department of Computing Science 1. Common (Web) application architectures N-tier applications Load Balancers Application Servers Databases

More information

System and Storage Virtualization For ios (AS/400) Environment

System and Storage Virtualization For ios (AS/400) Environment Date: March 10, 2011 System and Storage Virtualization For ios (AS/400) Environment How to take advantage of today s cost-saving technologies for legacy applications Copyright 2010 INFINITE Corporation.

More information

Estimating the Cost of a GIS in the Amazon Cloud. An Esri White Paper August 2012

Estimating the Cost of a GIS in the Amazon Cloud. An Esri White Paper August 2012 Estimating the Cost of a GIS in the Amazon Cloud An Esri White Paper August 2012 Copyright 2012 Esri All rights reserved. Printed in the United States of America. The information contained in this document

More information

Hadoop in the Hybrid Cloud

Hadoop in the Hybrid Cloud Presented by Hortonworks and Microsoft Introduction An increasing number of enterprises are either currently using or are planning to use cloud deployment models to expand their IT infrastructure. Big

More information

Virtualizing Apache Hadoop. June, 2012

Virtualizing Apache Hadoop. June, 2012 June, 2012 Table of Contents EXECUTIVE SUMMARY... 3 INTRODUCTION... 3 VIRTUALIZING APACHE HADOOP... 4 INTRODUCTION TO VSPHERE TM... 4 USE CASES AND ADVANTAGES OF VIRTUALIZING HADOOP... 4 MYTHS ABOUT RUNNING

More information

IBM Platform Computing Cloud Service Ready to use Platform LSF & Symphony clusters in the SoftLayer cloud

IBM Platform Computing Cloud Service Ready to use Platform LSF & Symphony clusters in the SoftLayer cloud IBM Platform Computing Cloud Service Ready to use Platform LSF & Symphony clusters in the SoftLayer cloud February 25, 2014 1 Agenda v Mapping clients needs to cloud technologies v Addressing your pain

More information

Chapter 14 Virtual Machines

Chapter 14 Virtual Machines Operating Systems: Internals and Design Principles Chapter 14 Virtual Machines Eighth Edition By William Stallings Virtual Machines (VM) Virtualization technology enables a single PC or server to simultaneously

More information