CloudCmp: Comparing Cloud Providers
Raja Abhinay Moparthi
Outline
- Motivation
- Cloud Computing
- Service Models
- Charging Schemes
- Cloud Common Services
- Goals
- CloudCmp Working
- Challenges in Designing Benchmark Tasks
- Results
Motivation
Which cloud provider should a customer choose?
Cloud Computing
Definition: a model for delivering information technology services in which resources are retrieved from the Internet through web-based tools and applications, rather than through a direct connection to a server.
Service Models
- Infrastructure-as-a-Service (IaaS): a guest operating system runs on a virtual machine; applications run on the virtual machines using OS APIs.
- Platform-as-a-Service (PaaS): applications run in a sandboxed environment using APIs specified by the cloud provider.
Charging Schemes
Charging depends on the service model (a cost sketch follows below):
- Infrastructure-as-a-Service (IaaS): charged by the number of virtual instances and the time they run.
- Platform-as-a-Service (PaaS): charged by the number of CPU cycles consumed by the application, reported through APIs on the cloud servers.
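A minimal sketch of the two charging models. The rates and usage figures below are hypothetical illustrations, not taken from any provider's price list.

public class ChargingSchemes {
    // IaaS: pay per instance-hour.
    static double iaasCost(int instances, double hoursPerInstance, double ratePerInstanceHour) {
        return instances * hoursPerInstance * ratePerInstanceHour;
    }

    // PaaS: pay per CPU cycles consumed, as reported by the provider's API.
    static double paasCost(long cpuCycles, double ratePerGigacycle) {
        return (cpuCycles / 1e9) * ratePerGigacycle;
    }

    public static void main(String[] args) {
        System.out.printf("IaaS: $%.2f%n", iaasCost(4, 10.0, 0.10));          // 4 instances * 10 h * $0.10
        System.out.printf("PaaS: $%.4f%n", paasCost(5_000_000_000L, 0.001));  // 5 Gcycles * $0.001/Gcycle
    }
}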
Cloud Common Services
- Elastic computing: virtual instances that run application code.
- Persistent storage: stores application data and state.
- Intra-cloud network: connects application instances within the cloud.
- Wide-area network: connects the cloud to end users at different geographical locations.
Goals
- Help customers pick the cloud provider that best fits their requirements.
- Estimate the cost and performance of an application on a cloud without actually deploying it.
- Suggest to providers where their services can be improved.
CloudCmp Working
The performance of each service is measured using a set of metrics.
Elastic computing (a timing sketch follows below):
- Benchmark finishing time
- Benchmark task cost
- Scaling latency
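A minimal sketch of how a benchmark finishing time could be measured; runTask() is a hypothetical stand-in for an actual benchmark task, not the paper's workload.

public class FinishingTime {
    // Hypothetical placeholder for a benchmark task (CPU, memory, or disk I/O bound).
    static void runTask() {
        double acc = 0;
        for (int i = 1; i < 50_000_000; i++) acc += Math.sqrt(i);
        if (acc < 0) System.out.println(acc); // keep the JIT from eliminating the loop
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        runTask();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Benchmark finishing time: " + elapsedMs + " ms");
    }
}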
CloudCmp Working (continued)
Persistent storage (a consistency-probe sketch follows below):
- Operation response time
- Time to consistency
- Cost per operation
Intra-cloud network and wide-area network:
- Throughput
- Latency
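A minimal sketch of the time-to-consistency measurement: write a new value, then repeatedly read until the write becomes visible. KeyValueStore is a hypothetical stand-in for a provider's storage API.

interface KeyValueStore {
    void put(String key, String value);
    String get(String key);
}

public class TimeToConsistency {
    static long measure(KeyValueStore store, String key) throws InterruptedException {
        String marker = "v-" + System.nanoTime(); // unique value for this probe
        store.put(key, marker);
        long start = System.nanoTime();
        // Poll until a read returns the value we just wrote.
        while (!marker.equals(store.get(key))) {
            Thread.sleep(10);
        }
        return (System.nanoTime() - start) / 1_000_000; // ms until the write is visible
    }
}

For a strongly consistent store the loop exits almost immediately; for an eventually consistent one, the elapsed time reflects the propagation delay.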
Selecting Benchmark Tasks
Benchmark tasks should be fair and representative. Challenges:
- Multi-core vs. single-core performance: use both multi-threaded and single-threaded benchmark tasks (see the sketch after this list).
- Accuracy vs. measurement cost: focus on the providers that account for most of the customers.
- Evaluating wide-area network performance: use PlanetLab instances at different geographical locations.
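A minimal sketch of running the same task single-threaded and multi-threaded to expose the gap between single-core and multi-core performance; the CPU-bound loop is a hypothetical workload.

import java.util.ArrayList;
import java.util.List;

public class ThreadScaling {
    static void cpuWork() {
        double acc = 0;
        for (int i = 1; i < 20_000_000; i++) acc += Math.sqrt(i);
        if (acc < 0) System.out.println(acc); // defeat dead-code elimination
    }

    // Run the workload on n threads and return the wall-clock time in ms.
    static long runWithThreads(int n) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) threads.add(new Thread(ThreadScaling::cpuWork));
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("1 thread:  " + runWithThreads(1) + " ms");
        System.out.println(cores + " threads: " + runWithThreads(cores) + " ms");
    }
}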
Designing Benchmark Tasks: Computation
- Tasks need to stress all aspects: CPU, memory, and disk I/O (see the sketch after this list).
- Java is supported by most cloud providers, so the benchmark tasks are Java-based.
- Cost-per-task measurement:
  - Pay per instance-hour: running time and number of instances.
  - Pay per CPU cycle: cycle counts are reported by the cloud's APIs.
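A minimal sketch of Java-based tasks stressing the three resources; these are illustrative workloads, not the tasks used in the paper.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Random;

public class ComputationBenchmarks {
    // CPU-bound: floating-point arithmetic in a tight loop.
    static double cpuTask() {
        double acc = 0;
        for (int i = 1; i < 30_000_000; i++) acc += Math.sqrt(i);
        return acc;
    }

    // Memory-bound: random reads over a large array.
    static long memoryTask() {
        int[] data = new int[16 * 1024 * 1024];
        Random rnd = new Random(42);
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) sum += data[rnd.nextInt(data.length)];
        return sum;
    }

    // Disk-bound: write and read back a temporary file.
    static void diskTask() throws IOException {
        Path tmp = Files.createTempFile("bench", ".dat");
        Files.write(tmp, new byte[64 * 1024 * 1024]);
        Files.readAllBytes(tmp);
        Files.delete(tmp);
    }

    public static void main(String[] args) throws IOException {
        long t = System.nanoTime();
        cpuTask(); memoryTask(); diskTask();
        System.out.println("All tasks: " + (System.nanoTime() - t) / 1_000_000 + " ms");
    }
}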
Designing Benchmark Tasks: Storage
Storage service types:
- Tables: databases for structured data
- Blobs: unstructured data
- Queues: message-passing systems
Benchmark tasks exercise the common storage operations (get, put, query) against different data sizes; the cost and latency of each operation are measured (see the sketch below).
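A minimal sketch of timing put and get operations across payload sizes; BlobStore is a hypothetical stand-in for a provider's blob API.

import java.util.Random;

interface BlobStore {
    void put(String key, byte[] data);
    byte[] get(String key);
}

public class StorageBenchmark {
    static void measure(BlobStore store) {
        Random rnd = new Random(7);
        for (int sizeKb : new int[] {1, 10, 100, 1024}) { // vary the data size
            byte[] payload = new byte[sizeKb * 1024];
            rnd.nextBytes(payload);

            long t0 = System.nanoTime();
            store.put("bench-" + sizeKb, payload);
            long putMs = (System.nanoTime() - t0) / 1_000_000;

            t0 = System.nanoTime();
            store.get("bench-" + sizeKb);
            long getMs = (System.nanoTime() - t0) / 1_000_000;

            System.out.printf("%4d KB: put %d ms, get %d ms%n", sizeKb, putMs, getMs);
        }
    }
}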
Designing Benchmark Tasks: Network
- Intra-cloud: data is sent between two randomly chosen instances in the same datacenter.
- Wide-area: PlanetLab instances serve as vantage points.
In both cases, the TCP throughput and network latency are measured (a throughput sketch follows below).
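A minimal sketch of a TCP throughput probe between two instances; the peer address is hypothetical, and a sink process on the receiving instance is assumed to read and discard the bytes.

import java.io.OutputStream;
import java.net.Socket;

public class TcpThroughput {
    public static void main(String[] args) throws Exception {
        // Hypothetical peer instance running a sink that discards incoming bytes.
        try (Socket socket = new Socket("10.0.0.2", 9000);
             OutputStream out = socket.getOutputStream()) {
            byte[] chunk = new byte[64 * 1024];
            long total = 64L * 1024 * 1024; // send 64 MB
            long start = System.nanoTime();
            for (long sent = 0; sent < total; sent += chunk.length) {
                out.write(chunk);
            }
            out.flush();
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("Throughput: %.1f Mbit/s%n", total * 8 / 1e6 / seconds);
        }
    }
}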
Results: Computation
At approximately the same cost, the performance of the three clouds shows major diversity.
Results: Storage
Even though cloud X performs well on computation, its storage service response is quite slow.
Results: Wide-Area Network
The average latency of cloud X's wide-area network is 80% shorter than that of the others.
Conclusion
- Evaluating the performance of clouds with diverse technologies and costs is difficult.
- CloudCmp attempts to compare cloud providers without actually deploying the application.
- Both performance and cost are estimated.
- Settling on common metrics for evaluating cloud performance is a challenging task.
- Clouds show large variation in both performance and cost.
Queries