CBUD Micro: A Micro Benchmark for Performance Measurement and Resource Management in IaaS Clouds


Vivek Shrivastava 1, D. S. Bhilare 2
1 International Institute of Professional Studies, Devi Ahilya University, Indore, India
2 Computer Centre, Devi Ahilya University, Indore, India

Abstract

Cloud computing provides processing power in the form of virtual machines. This processing power can be delivered to both large and small devices; likewise, devices with small or large processing capacity can be used to supply it. Such devices can also be ubiquitous computing devices. Ubiquitous computing devices present in the environment may communicate with each other and share load with each other, and their spare processing power can also be used for cloud computing. An old, less powerful computing device that is capable of connecting to the Internet can thus avail the processing power of newer computing devices. The problem with using spare processing power is the lack of a common benchmark for the different types of ubiquitous computing devices: to assign a proper workload in terms of processing power, devices must be tested against one common benchmark. This paper presents CBUDMicro (Common Benchmark for Ubiquitous Computing Devices Micro), an extendable common benchmark for evaluating the performance of ubiquitous computing devices so that they can be used in a cloud computing environment. CBUDMicro can be used at both the cloud host and the consumer side for performance and resource management by supporting scheduling decisions. This paper describes the vision and architecture of CBUDMicro in detail, along with the core components implemented.

Keywords: Benchmark, Cloud computing, Ubiquitous computing, Processing power, CBUDMicro, Workload.

I. INTRODUCTION

Ubiquitous computing is the new computing era, in which various computing devices may be present everywhere in the environment [1]. Collective use of the processing power of these computing devices can provide information processing services equivalent to those of high-end computers [2], [3]. Ubiquitous computing (Ubicomp) devices are now very helpful for presenting and processing information everywhere, and their processing power can be used by the Infrastructure as a Service (IaaS) model of cloud computing. Cloud computing is the delivery of computing as a scalable, on-demand service on a pay-per-use basis [4], [5], [6]. These services include software, platform, and infrastructure provided to consumers as a metered service over a network. Any cloud-compatible computing device capable of connecting to the Internet can avail these services [7], [8], [9], [10].

One common benchmark is required for every type of Ubicomp device. A benchmark suite is a set of programs used to measure the performance of different machines; standard benchmarking provides the run-times of given programs on given machines [11]. Analyzing the results of a benchmark suite on different machines helps designers improve the performance of future machines and helps users tune their applications to better utilize existing machines. This paper proposes the benchmark suite CBUDMicro, which can be used to measure the performance of Ubicomp device categories such as tabs, pads, and boards. These devices have dissimilar features: some devices have all the facilities and interfaces, while others do not [11].
CBUDMicro can be used to measure the processing power of all types of Ubicomp devices, and can also be used for load scheduling and resource management in cloud environments. Because of the wide variety of devices, CBUDMicro does not emphasize metrics such as network throughput, database performance, or operating system performance. CBUDMicro can evaluate the performance of all kinds of Java-enabled Ubicomp devices; it is platform independent and can run on any device on which a JRE can be installed.
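Since CBUDMicro restricts itself to computation-bound tasks and targets any JRE-capable device, its core measurement reduces to timing a CPU-bound kernel. The sketch below is a minimal illustration of that idea under stated assumptions, not the published CBUDMicro code: the kernel (an integer-arithmetic loop) and all class and method names are hypothetical.

    // Minimal sketch of a computation-bound timing kernel in the spirit of
    // CBUDMicro; the kernel and all names here are illustrative assumptions.
    public class ComputeKernelSketch {

        // A purely CPU-bound workload: no I/O and no network traffic, so the
        // measured time reflects processing power rather than peripherals.
        static long kernel(int iterations) {
            long acc = 1L;
            for (int i = 0; i < iterations; i++) {
                // One linear-congruential step per iteration keeps the ALU busy.
                acc = acc * 6364136223846793005L + 1442695040888963407L;
            }
            return acc; // returned so the JIT cannot eliminate the loop
        }

        public static void main(String[] args) {
            int iterations = 50_000_000;
            long start = System.nanoTime();
            long sink = kernel(iterations);
            long elapsedNs = System.nanoTime() - start;
            System.out.println("sink=" + sink + " elapsed(ns)=" + elapsedNs);
        }
    }

A real harness would warm up the JIT, repeat the measurement, and aggregate the runs (CBUDMicro reports geometric means) before deriving metrics such as MIPS.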

Table 1 shows the qualitative and quantitative cloud resource characteristics provided by different cloud hosts, as reported in [12]. This comparison includes neither Ubicomp devices nor the collective use of their processing power; moreover, it does not give consumers an idea of which cloud host will suit their requirements.

Table 1 The Resource Characteristics For The Instance Types Offered By The Four Selected Clouds [12]

Name          Cores (ECUs)   RAM [GB]   Archi. [bit]   Disk [GB]   Cost [$/h]
Amazon EC2
m1.small      1 (1)          1.7        32             160         0.1
m1.large      2 (4)          7.5        64             850         0.4
m1.xlarge     4 (8)          15.0       64             1,690       0.8
c1.medium     2 (5)          1.7        32             350         0.2
c1.xlarge     8 (20)         7.0        64             1,690       0.8
GoGrid (GG)
GG.small      1              1.0        32             60          0.19
GG.large      1              1.0        64             60          0.19
GG.xlarge     3              4.0        64             240         0.76
Elastic Hosts (EH)
EH.small      1              1          32             30          0.042
EH.large      1              4          64             30          0.09
Mosso
Mosso.small   4              1          64             40          0.06
Mosso.large   4              4          64             160         0.24

Figure 1 Consumer Serviced By Cloud Of Ubiquitous Computing Devices And Data Centre.

CBUDMicro is designed so that it has interfaces to additional modules, which can be integrated separately with the existing code. The benchmark thus provides the ability to benchmark a user's own applications by adding them separately. CBUDMicro can be used at both the cloud host side and the consumer side for providing and hiring computing services, as shown in Figure 2.

Section 2 describes related work, in which various cloud performance measurement projects are discussed. Section 3 explicates issues with existing benchmarks and how they are addressed in this paper. Section 4 explores the design and implementation of the CBUDMicro benchmark; the overall architecture, the classes, and the CBUDMicro client-server are also detailed there. Finally, Section 5 concludes the paper with findings and future scope.

II. RELATED WORK

The performance of cloud computing services for scientific computing workloads was analyzed in [12]. The authors quantified the presence of Many-Task Computing (MTC) users in real scientific computing workloads; MTC users employ loosely coupled applications comprising many tasks to achieve their scientific goals. The authors also performed an empirical evaluation of the performance of four commercial cloud computing services.

The need for valuation of cloud computing is addressed in [13]. The authors identify the key components that affect valuation and structure them into a framework, which assists decision makers in estimating cloud computing costs and comparing them with conventional IT solutions.

End-to-end response time in a cloud computing environment can be measured for various cloud providers and locations with the help of a benchmark Java e-commerce application developed by Gomez [14].

The performance of a web application across multiple cloud providers and services (servers, storage, CDN, PaaS) can be measured with CloudHarmony [15], which offers a service called Cloud SpeedTest for this purpose. Realistic performance can be tested with Cloudstone, an academic open-source project from UC Berkeley [16]. Straight performance benchmarking and a cost-performance analysis can be done with CloudCMP [6], a benchmark from Duke University and Microsoft Research whose objective is to enable comparison shopping. Four categories of performance were measured by BitCurrent in [17]: raw response time and caching, network throughput and congestion, computational performance (CPU-intensive tasks), and I/O performance.

III. EXISTING BENCHMARKS FOR UBICOMP DEVICES

Existing benchmarks for checking processing power may not be suitable for Ubicomp devices, since such devices vary widely; some Ubicomp devices may not even have visual output. This section presents the issues with existing benchmarks and how they are handled.

A. Issues with Existing Benchmarks

Lack of a standard Ubicomp benchmark: There is no single standard benchmark for Ubicomp devices that emphasizes their processing power in a cloud computing environment.
Coverage of Ubicomp devices: Existing benchmarks do not consider all types of Ubicomp devices.
Collective use of Ubicomp devices' processing power: Collective processing power is not considered in existing benchmarks.
Privacy maintenance: Privacy issues are not addressed by present benchmarks.
I/O-bound problems: I/O-bound problems distort benchmarking results.

B. Solutions to Issues

The lack of a standard Ubicomp benchmark is filled by the proposed CBUDMicro benchmark, which can be applied to all types of Ubicomp devices. This work suggests using the collective processing power of the Ubicomp devices present in the environment, as CBUDMicro ranks them accordingly. Every device that is to be used by the environment provides its consent to the benchmark server, so the privacy of the other devices is maintained. In CBUDMicro, only computation-bound problems are used.

IV. DESIGN AND IMPLEMENTATION OF CBUDMICRO

This section gives the design and implementation of the proposed benchmark. A number of devices may be present in a Ubicomp environment, but for simplicity a single Ubicomp device acting as a client is assumed.

A. The Overall Architecture

CBUDMicro consists of a CBUDMicro server and a CBUDMicro client. When the server is up, any Ubicomp device running the CBUDMicro client can connect to it; the client provides its frequency and its consent for checking its processing capacity. The CBUDMicro server then assigns a program module to the client and, on the basis of the measured execution times, produces a result. This result, along with other metrics, is saved in a database for future evaluation and use. The CBUDMicro client-server, the interfaces, and the classes developed are shown in the following subsections.

B. CBUDMicro Server and Client

The CBUD server is written in Java, so it inherits all the advantages of Java: it is architecture-neutral, distributed, and dynamic [18]. Since different Ubicomp devices may have different architectures, Java was a suitable choice for implementing CBUDMicro.
Server load is shared by the client in CBUDMicro, because Java naturally supports distributed systems and the focus of this work is evaluating the processing power of Ubicomp devices. Java programs carry with them considerable run-time type information, which is used to verify and resolve accesses to objects at run time; this makes it possible to link code dynamically in a safe and convenient manner. Remote Method Invocation (RMI), which allows a Java object executing on one machine to invoke a method of a Java object executing on another machine, is used in developing CBUDMicro [19]. Since Ubicomp devices may have very little main memory, only a total of 8 KB of memory is required on the client side for stubs.

C. The Interface and Classes of CBUDMicro

Two interfaces, AddServerIntf and Notifiable, and four classes, AddServer, AddClient, AddServerImpl, and Result, were developed for the implementation; a hedged sketch of the two interfaces is given below.
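The paper names these interfaces but does not list their code. The following is a minimal sketch of how such RMI interfaces could look; the method names, parameters, and return types are assumptions made for illustration, not the published API.

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Sketch of the server-side remote interface; register() is a
    // hypothetical method name, not taken from the paper.
    interface AddServerIntf extends Remote {
        // The client reports its clock frequency (Hz) and its consent, and
        // passes a callback reference through which the server assigns work.
        void register(long frequencyHz, boolean consent, Notifiable client)
                throws RemoteException;
    }

    // Sketch of the client-side callback interface used by the server to push
    // a benchmark module to the client (the RMI callback mentioned in the text).
    interface Notifiable extends Remote {
        // Run a computation-bound task of the given size and return the
        // client-measured execution time in nanoseconds.
        long runKernel(int iterations) throws RemoteException;
    }

In such a design, AddServerImpl would export itself (for example via UnicastRemoteObject), the client would likewise export its Notifiable object to receive callbacks, and the server would persist each measurement as a Result record, mirroring the flow described in Section IV-A.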

The aforementioned interfaces extend the Remote interface. RMI callback methods are used for transferring load to the client end [20]. Methods in the classes are kept small in number and size to support low-memory devices.

D. Prototype Implementation

A prototype of CBUDMicro has been implemented for testing purposes. The server runs under J2SE. Client devices are provided with a jar file, so every client has to support a JRE. For message transfer between the Ubicomp device and the server, the benchmark RMI server is run on the server side. Various devices were also tested to compare the results.

E. Performance Evaluation Done by CBUDMicro

Geometric means of the results given by CBUDMicro on different machines are given in Table 2.

Table 2 Results Obtained By CBUD

MID   F          CPI        IPC   MIPS   ExTime
1     2.20E+09   67842.9    1.4   2.4    40089259
2     2.20E+09   67842.9    1.4   3.2    40089258
3     2.20E+09   88881.7    1.1   2.4    52521125
4     2.20E+09   188636.5   5.3   1.1    11146725
5     2.20E+09   66661.7    1.5   3.3    39391432
6     2.20E+09   65006.6    1.5   3.3    38413187
7     2.20E+09   128002.8   7.8   1.7    75638189
8     2.20E+09   66543.2    1.5   3.3    39321178
9     2.20E+09   66543.2    1.5   3.3    39111254
10    2.20E+09   6666.2     1.5   3.3    39391256
11    2.60E+09   24294.0    4.1   1.0    11873025
12    2.60E+09   26637.6    3.7   9.9    13018412
13    2.60E+09   4230.0     2.3   6.2    20673156
14    2.60E+09   4172.9     2.3   6.3    20394154
15    2.60E+09   4115.6     2.4   6.4    20114147

Here MID is the machine identification number, F is the frequency of the tested machine in hertz, CPI is cycles per instruction, IPC is instructions per cycle, MIPS is millions of instructions per second, and ExTime is the execution time in milliseconds.

Results generated by the J2ME midlet (the mobile version) on different mobile phones are given in Table 3; these results were not taken on freshly installed mobile phones.

Table 3 Results Obtained On Mobile Phones (execution time in nanoseconds)

Mobile name & model        Test 1      Test 2      Test 3
Nokia Asha 202             172000000   169000000   173000000
Nokia SuperNova 7210       163000000   167000000   169000000
Nokia E 63                 116000000   117000000   123000000
Samsung Chat 322 Duos      277000000   240000000   287000000
Samsung Cham Duos E 2652   227000000   226000000   225000000
Samsung Chat 527           120000000   129000000   120000000
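For reference, the metrics reported in Table 2 are related by the standard processor-performance identities below, with IC denoting the instruction count of the benchmark kernel (a symbol not used in the paper) and ExTime taken in seconds. These are textbook definitions rather than formulas stated by the authors, and since Table 2 reports geometric means, the identities need not hold exactly row by row.

\[
\mathrm{IPC} = \frac{1}{\mathrm{CPI}}, \qquad
\mathrm{ExTime} = \frac{IC \cdot \mathrm{CPI}}{F}, \qquad
\mathrm{MIPS} = \frac{IC}{\mathrm{ExTime} \cdot 10^{6}} = \frac{F}{\mathrm{CPI} \cdot 10^{6}}
\]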

F. Comparison with Other Benchmarks

The presented benchmark, CBUDMicro, is not directly comparable with other benchmarks, since benchmarks for Ubicomp devices that directly check processing power do not exist. Other benchmarks measure performance only for the main intended functionality of a given Ubicomp device.

V. CONCLUSION

The processing power of Ubicomp devices may be small or large. This processing power may be used in cycle-scavenging mode, and these devices can also obtain processing power on demand via the cloud computing IaaS model. For both purposes, a benchmark suite such as the proposed CBUDMicro is required for workload scheduling and resource management. Owing to the wide range of processing performance across devices, the proposed benchmark uses a small number of computational tasks to evaluate their processing power. The benchmark is extendable, i.e., new applications can easily be added to measure device performance.

Future work may address measuring performance under varied network congestion, network bandwidth, and web caching conditions when measuring web performance for cloud computing and grid computing tasks. This work also suggests that a middleware for a grid of Ubicomp devices can be developed; such a grid could utilize computing resources according to the computing capacity of the devices, for use in cloud computing.

Figure 2 CBUD Is Useful At Both Cloud Host End And Cloud Consumer End.

REFERENCES

[1] Lukowicz, P., & Intille, S. (2011). Experimental methodology in pervasive computing. IEEE Pervasive Computing, 10(2), 94-96.
[2] Egami, K., Matsumoto, S., & Nakamura, M. (2011, March). Ubiquitous cloud: Managing service resources for adaptive ubiquitous computing. In Pervasive Computing and Communications Workshops (PERCOM Workshops), 2011 IEEE International Conference on (pp. 123-128). IEEE.
[3] Chang, C. C., & Lee, C. Y. (2012). A secure single sign-on mechanism for distributed computer networks. IEEE Transactions on Industrial Electronics, 59(1), 629-637.
[4] Hay, B., Nance, K., & Bishop, M. (2011, January). Storm clouds rising: Security challenges for IaaS cloud computing. In Proceedings of the 44th Hawaii International Conference on System Sciences (HICSS), 2011 (pp. 1-7). IEEE.
[5] Younge, A. J., Henschel, R., Brown, J. T., von Laszewski, G., Qiu, J., & Fox, G. C. (2011, July). Analysis of virtualization technologies for high performance computing environments. In Proceedings of the International Conference on Cloud Computing (CLOUD), 2011 (pp. 9-16). IEEE.
[6] Calheiros, R. N., Ranjan, R., Beloglazov, A., De Rose, C. A., & Buyya, R. (2011). CloudSim: A toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Software: Practice and Experience, 41(1), 23-50.
[7] Shrivastava, V., & Bhilare, D. S. (2012). Algorithms to improve resource utilization and request acceptance rate in IaaS cloud scheduling. International Journal of Advanced Networking and Applications, 3(05), 1367-1374.
[8] Saavedra, R. H., & Smith, A. J. (1996). Analysis of benchmark characteristics and benchmark performance prediction. ACM Transactions on Computer Systems (TOCS), 14(4), 344-384.
[9] Agarwala, S., Jadav, D., & Bathen, L. A. (2011, July). iCostale: Adaptive cost optimization for storage clouds. In Proceedings of the International Conference on Cloud Computing (CLOUD), 2011 (pp. 436-443). IEEE.
[10] Kovalick, A. (2011). Cloud computing for the media facility: Concepts and applications. SMPTE Motion Imaging Journal, 120(2), 20-29.
[11] Ranganathan, A., Al-Muhtadi, J., Biehl, J., Ziebart, B., Campbell, R. H., & Bailey, B. (2005). Evaluating Gaia using a pervasive computing benchmark. University of Illinois at Urbana-Champaign, IL, Tech. Rep.
[12] Iosup, A., Ostermann, S., Yigitbasi, M. N., Prodan, R., Fahringer, T., & Epema, D. H. (2011). Performance analysis of cloud computing services for many-tasks scientific computing. IEEE Transactions on Parallel and Distributed Systems, 22(6), 931-945.
[13] Klems, M., Nimis, J., & Tai, S. (2009). Do clouds compute? A framework for estimating the value of cloud computing. In Designing E-Business Systems: Markets, Services, and Networks (pp. 110-123). Springer Berlin Heidelberg.
[14] Application Performance Management, available at http://www.compuware.com/application-performance-management.