An SLA-based Broker for Cloud Infrastructures

Journal of Grid Computing manuscript No. (will be inserted by the editor)

An SLA-based Broker for Cloud Infrastructures

Antonio Cuomo, Giuseppe Di Modica, Salvatore Distefano, Antonio Puliafito, Massimiliano Rak, Orazio Tomarchio, Salvatore Venticinque, Umberto Villano

Abstract The breakthrough of the Cloud comes from its service-oriented perspective, where everything, including the infrastructure, is provided as a service. This model is attractive and convenient for both providers and consumers; as a consequence, the Cloud paradigm is growing quickly and spreading widely, also in non-commercial contexts. In such a scenario, we propose to incorporate some elements of volunteer computing into the Cloud paradigm through the Cloud@Home solution, bringing into the mix nodes and devices provided by potentially any owner or administrator, making large computational resources available to contributors and also maximizing their utilization. This paper presents and discusses the first step towards Cloud@Home: providing quality of service and service level agreement facilities on top of unreliable, intermittent Cloud providers. Some of the main issues and challenges of Cloud@Home, such as the monitoring, management and brokering of resources according to service level requirements, are addressed through the design of a framework core architecture. All the tasks committed to the architecture's modules and components, as well as the most relevant component interactions, are identified and discussed from both the structural and the behavioural viewpoints. Some encouraging experiments on an early implementation prototype deployed in a real testing environment are also documented in the paper.

Keywords Cloud Computing, Cloud@Home, SLA, QoS, Resource Brokering.

Antonio Cuomo, Umberto Villano - Dipartimento di Ingegneria, Università degli Studi del Sannio, Italy. antonio.cuomo,villano@unisannio.it
Giuseppe Di Modica, Orazio Tomarchio - Dipartimento di Ingegneria Elettrica, Elettronica ed Informatica, Università di Catania, Italy. giuseppe.dimodica,orazio.tomarchio@dieei.unict.it
Salvatore Distefano - Dipartimento di Elettronica e Informazione, Politecnico di Milano, Italy. distefano@elet.polimi.it
Antonio Puliafito - Dipartimento di Matematica, Università di Messina, Italy. apuliafito@unime.it
Massimiliano Rak, Salvatore Venticinque - Dipartimento di Ingegneria dell'Informazione, Seconda Università di Napoli, Italy. massimiliano.rak,salvatore.venticinque@unina2.it

1 Introduction

Among the several definitions of Cloud computing available in the literature, one of the most authoritative is that provided by NIST [32]: "Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This Cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models." It is important to remark that this definition identifies availability as a key concept of the Cloud paradigm. This concept belongs to the broader class of quality of service (QoS), service level agreement (SLA) and related issues, topics of primary and strategic importance in the Cloud.

In this context, the focus of this paper is on the Infrastructure as a Service (IaaS) provisioning model. IaaS Clouds are built up to provide infrastructures such as computing, storage and communication systems, for free or for a fee, with or without QoS/SLA guarantees. There are multiple frameworks able to provide computing IaaS services: Eucalyptus [28], OpenNebula [20], Nimbus [29], PerfCloud [12], Clever [42], OpenStack [41] to name a few. All of them, as well as the existing proprietary solutions (e.g., Amazon EC2, Rackspace), aggregate and manage powerful and reliable underlying computing resources (usually single or multiple interconnected datacenters) to build up the Cloud IaaS infrastructure.

A different approach is instead proposed by Cloud@Home, a project funded by the Italian Ministry for Education and Research [15]. Cloud@Home (briefly, C@H) aims at building an IaaS Cloud Provider using computing, storage and sensing resources also acquired from volunteer contributors. The basic assumption on which C@H relies is that the resources offered on a volunteer basis are not reliable and cannot provide levels of QoS comparable to those offered by commercial public Clouds. We believe the C@H volunteer approach can provide benefits in both business and open contexts.

In business environments, one of the main sources of cost and complexity for companies is expanding, maintaining, tuning and optimizing the hardware resources in order to effectively satisfy highly demanding, domain-specific software and to ensure adequate productivity levels. The C@H technology will enable companies to organize their computing resources, which are sometimes distributed over several sites, in order to meet the demands of such software. Indeed, C@H allows a company to aggregate its sites into a federation, to which each site can contribute with its available and underexploited hardware resources to provide added-value, guaranteed services.

In open contexts, a possible scenario that magnifies the C@H features and approach is the academic one. Let us imagine that several universities or, in general, research institutions worldwide need to collaborate on a scientific project that requires a huge amount of hardware resources. Moreover, let us assume that each institution owns a private datacenter, made up of heterogeneous computing resources, each having a different level of utilization (depending on their geographic coordinates and time zones, some datacenters may turn out to be underexploited with respect to others).
C@H will provide the institutions with tools to build up a federation of datacenters acting as a Cloud broker, to which each partner can contribute with its own (i.e., not utilized or underexploited) resources according to its scheduled availability. Pushing the approach to its limits, one can also imagine a scenario where private users aggregate into a federation and share their resources for each other's needs. In order to implement such an ambitious idea, a three-phase roadmap was scheduled in the C@H project: i) development of quality of service (QoS) and service level agreement (SLA) brokering/federation mechanisms for private Cloud providers; ii) development of

billing and reward mechanisms for merging both private and public Clouds; iii) development of tools for involving single-resource (desktop, laptop, cluster) volunteer contributors. The focus here is on the first step of the roadmap. The paper discusses the implementation of a framework for federating and brokering private Clouds, able to provide QoS guarantees on top of best-effort providers. The strength of the C@H proposal is that it offers a solution that is efficient from a cost perspective (when needed, resources can be borrowed from the system, thus avoiding purchasing them outside) and sufficiently reliable at the same time, as mechanisms for guaranteeing QoS are also provided. If we take a look at the state of the art in this research field, the management of the service quality of the leased resources is sometimes partially covered by commercial Clouds (availability, reliability and high-level performance are natively supported by very powerful and high-quality datacenters) and often neglected in current Cloud frameworks.

The objectives of the C@H project and its rationale, limits and application contexts have been critically discussed in [5], which is a preliminary version of this work mainly focusing on the concepts and ideas behind C@H. The present work tries to further develop and implement such ideas, more specifically by: i) identifying and characterizing the system actors; ii) detailing the architectural design and its modules and components; iii) providing details on the interactions among the components implementing the C@H functionalities; iv) reporting on the current prototype implementation and the testbed.

The remainder of the paper is organized as follows. A brief overview of the state of the art is first reported in Section 2. Then, Section 3 describes the C@H core architecture. Sections 4, 5, 6 and 7 delve into the details of the C@H architectural modules and components. Section 8 presents the system from a dynamic perspective, while Section 9 deals with a real testbed on which tests have been carried out. Conclusions and future developments are discussed in Section 10.

2 Related work

C@H aims at implementing a brokering-based Cloud Provider starting from resources shared by different providers, addressing QoS and SLA related issues, as well as resource management, federation and brokering problems. Since such issues and problems involve different topics, in the following we identify some of them, providing an overview of the current state of the art.

Volunteer and Cloud Computing. The idea of volunteer Clouds has recently emerged as one of the most interesting topics in Cloud computing. Some work is available in the literature, also inspired by C@H, which is one of the first attempts in this direction [16]. In [14] the authors present the idea of leveraging volunteer resources to build a form of dispersed Clouds, or nebulas, as they call them. Those nebulas are not intended to be general purpose, but to complement the offering of traditional homogeneous Clouds in areas where a more flexible, less guaranteed approach can be beneficial, like testing environments or applications where data are intrinsically dispersed and centralising them would be costly. Some requirements and possible solutions are presented. BoincVM [38] is an integrated Cloud computing platform that can be used to harness volunteer computing resources such as laptops, desktops and server farms, for running CPU-intensive scientific applications.
It leverages existing technologies (the BOINC platform and VirtualBox) along with some projects currently under development: VMWrapper, VMController and CernVM. Thus, it is a kind

of volunteer-on-Cloud approach, whereas C@H can be classified as a Cloud-on-volunteer model. In [3] the authors investigate how a mixture of dedicated (and so highly available) and non-dedicated (and so highly volatile) hosts can be used to provision a processing tier of a large-scale Web service. They propose an operational model that guarantees long-term availability despite host churn, by ranking non-dedicated hosts according to their availability behavior. Through experimental simulation results they demonstrate that the technique is effective in finding a suitable balance between costs and service quality. Although the technique is interesting and the results are encouraging, the paper gives no evidence of either a possible implementation or an architectural design of the overall infrastructure framework that should implement the idea.

An approach that can be categorized into the volunteer Cloud is the P2P Cloud. It has been proposed in several papers, such as the ones cited above, and particularly in storage Cloud contexts. An interesting implementation of this idea is proposed in [22]. This work specifically focuses on peer reliability, proposing a distributed mechanism to enable churn-resistant reliable services, which makes it possible to reserve, monitor and use resources provided by the unreliable P2P system and maintains long-term resource reservations through controlled redundant resource provision. The evaluation results obtained through simulation show that using KAD measurements for the prediction of the lifetime of peers allows for 100% successful reservations under churn with very low traffic overhead. As in the above case, there is no real implementation of the proposed solution. However, the monitoring and prediction tools developed can be of interest for C@H.

Federation, InterCloud and resource provisioning from multiple Clouds. Management of resources in the Cloud is a complex topic. In [4,37] examples of resource management techniques are discussed, while in [26] a policy-based technique addressing resource management in Cloud environments is proposed. Even if Cloud computing is an emerging field, the need to move beyond the limitations of provisioning from a single provider is gaining interest in both academic and commercial research. In [25], the authors move from a data center model (in which clusters of machines are dedicated to running Cloud infrastructure software) to an ad-hoc model for building Clouds. The proposed architecture aims at providing management components to harvest resources from non-dedicated machines already in existence within an enterprise. The need for intermediary components (Cloud coordinators, brokers, exchange) is explained in [11], where the authors outline an architecture for a federated network of Clouds (the InterCloud). The evaluation is conducted on a simulated environment modeled through the CloudSim framework, showing significant improvements in average turnaround time and makespan in some test scenarios. Federation issues in Cloud environments have also been considered by research projects that actively investigate possible solutions on this specific topic, such as RESERVOIR [36] and, more recently, mosaic [31] and OPTIMIS [21]. With specific regard to [31,21], the approach they propose is to implement a brokering system that acquires resources from different Cloud Providers and offers them in a custom way to their users.
As regards brokering solutions for federation, C@H pioneered this modality back in 2009 [16].

SLA Management in Clouds. In service-oriented environments several proposals addressing the negotiation of dynamic and flexible SLAs have appeared [19]. However, to the best of our knowledge, none of the main commercial IaaS providers (Amazon, Rackspace, GoGRID,...) offers negotiable SLAs. What they usually propose is an SLA contract that specifies simple guarantees on uptime percentage or network availability. Moreover, most of the

providers offer additional services (for example Amazon CloudWatch) which monitor the state of target resources (e.g., CPU utilization and bandwidth). Open Cloud engine software like Eucalyptus, Nimbus and OpenNebula also implements monitoring services for the private Cloud Provider, but does not provide solutions for SLA negotiation and enforcement. A survey of the SLAs offered by commercial Cloud Providers can be found in [44]. In [24] the authors describe a system able to combine SLA-based resource negotiations with virtualized resources, pointing out how in the current literature there is no approach taking into account both these aspects. A global infrastructure aiming at offering SLAs on any kind of Service Oriented Infrastructure (SOI) is the objective of the SLA@SOI project [39], which proposes a general architecture that can be integrated in many existing solutions. However, this interesting solution is hard to fully adopt in an infrastructure composed of unreliable resources such as those targeted by the C@H project. Recently, in the context of the mosaic project, some offerings have appeared that provide user-oriented SLA services to final users [35,1].

A critical analysis. Due to the large number of computational resources often available in different environments (like scientific labs, office terminals, academic clusters), there is a clear need for solutions able to reuse such resources in a Cloud computing fashion. The state of the art described above shows that a few attempts have been made to apply volunteer computing approaches in this direction. The main limits of such solutions relate to different aspects: (1) the definition of the real use cases where the volunteer computing approach can be applied, (2) the integration with resources that come from commercial providers and (3) a clear evaluation of the quality of service obtained with such resources. The first open issue requires a clear identification of the type of resources that can be shared with the volunteer approach and their possible usage. To the best of the authors' knowledge such an analysis is not yet available, apart from [3], a work-in-progress paper which partly anticipates some of the ideas presented here. The second problem, instead, is strictly related to what is called Cloud federation, i.e., the idea of integrating resources from different Cloud providers using different techniques (like brokering or Cloud bursting). As outlined above, there is a lot of interest in federation, but few results take into consideration the effect of integrating commercial-based and volunteer-based resources, and the different issues that arise in such a context. In any case, even though brokering solutions are now available, few of them are stable; furthermore, the techniques to be adopted are open and the way in which the brokering functionalities should be offered is, at the state of the art, not clearly defined. The third problem is a well-known one in the literature: how to guarantee service level agreements over Cloud providers? Despite considerable effort in this direction, no stable proposal has emerged. Commercial Cloud Providers use natural language to describe the functionalities, the terms of use and the service levels of their offers. Research projects (like SLA@SOI, Contrail and Optimis, described above) try to offer frameworks that can be integrated in Cloud Providers, but these are usually heavy to maintain and hard to customize.
The only available standard (WS-Agreement) has only one stable framework implementing its features (WSAG4J). Moreover, all such proposals focus on the integration of SLA management systems from the Cloud Provider perspective. Implementing SLA mechanisms on top of federated resources is still an open question. Existing solutions (e.g., SLA@SOI) focus on how to integrate an SLA management framework in complex datacenters; the problem is translated into a resource optimization problem. Some results focus on how to optimize the resources at the brokering level, i.e., after the resources are obtained and without control over the physical infrastructure. More recently, solutions aiming at offering SLA functionalities on top of

brokering systems have been proposed [10, 8, 33, 7]. To the best of the authors' knowledge, none of them takes into account volunteer and time-bounded availability scenarios.

3 System Overview

From a wider perspective, C@H aims at merging the Cloud and the volunteer computing paradigms. C@H collects infrastructure resources from different providers and offers them to the end users through a uniform interface, in an IaaS fashion. As depicted in Fig. 1, C@H resources are gathered from heterogeneous providers, potentially ranging from commercial Cloud providers, offering highly reliable services, to single PCs, voluntarily shared by their owners, who, by their nature, are not able to provide guarantees on the QoS.

Fig. 1: Resource aggregation from different Cloud providers.

The main goal of C@H is to provide a set of tools for building up a new, enhanced provider of resources (namely, a C@H provider) that is not yet another classic Cloud provider, but instead acts as an aggregator of resources offered by third-party providers. A C@H provider collects heterogeneous resources from different Cloud providers adopting diverse resource management policies, and offers such resources to the users in a uniform way. This results in an added-value infrastructure that also provides mechanisms and tools for implementing, managing and achieving QoS requirements defined through, and managed by, a specific SLA process. Indeed, in order to deal with the heterogeneity and churn of resources, a C@H provider can use a set of tools and services dedicated to SLA management, monitoring and enforcement.

A goal of C@H is to release the tools discussed above to the Cloud community. Any interested organization may use such tools to build a C@H provider. Nothing prevents the instantiation of multiple C@H providers, each one collecting and aggregating resources from different resource providers. Furthermore, a resource provider is allowed to join any C@H system it wishes. As explained in Section 5, the availability of resources for a given request

is assessed by the C@H provider at run-time, and the management of the resource status (free, busy, etc.) is up to the resource provider itself.

To implement the wide and ambitious Cloud@Home vision, it was necessary to adequately design and organize the work into phases according to the project aims and goals. In this paper we focus on the first phase towards Cloud@Home, which aims at identifying, specifying and implementing the C@H building blocks and its core architecture, restricting the scope to private Cloud providers.

3.1 Actors

On the backend side, the C@H provider interfaces with Cloud providers and performs the brokering of their services. It has to deal with the different levels of service quality they are natively able to deliver. On the frontend side, the C@H provider has to allow the final users to access the resources in a uniform way, providing them with the required, sustainable QoS specified through the SLA process. In such a context, it is possible to identify three main actors involved in the C@H management process: Users, Admins and Resource Owners.

A C@H User interacts with the C@H provider in order to request resources along with the desired quality of service. C@H Users are also provided with tools to negotiate the desired QoS and, at service provision time, to check that the promised QoS is actually being delivered.

A C@H Admin builds up and manages the C@H provider. The C@H Admin is the manager of the C@H infrastructure and, in particular, is in charge of the infrastructure activation, configuration and management. The C@H Admin decides which services provided by the infrastructure must be activated/deactivated. Furthermore, in the case of QoS/SLA-enabled infrastructures and services, the C@H Admin specifies the policies that have to be adopted to carry out the SLA negotiation process and the QoS enforcement.

A Resource Owner shares its resources with the C@H system. Besides private sharers, the category of Resource Owners also encompasses commercial offerers (e.g., mainstream IaaS Cloud providers). In other words, a Resource Owner is a potential Cloud provider for C@H, even if some Resource Owners are not able to provide any standalone Cloud service. Resource Owners can be classified as public contributors (i.e., well-known public Clouds automatically enrolled by the system) and volunteer contributors. Volunteer contributors are Resource Owners that voluntarily share their resources to build up a C@H provider. We can further categorize volunteer contributors as:

- Private Clouds: standalone administrative domains, which may have their own QoS/SLA management and other resource facilities and services. They can be voluntarily involved in C@H according to their administrators' needs and will, thus becoming contributors.
- Individuals: anyone who wants to voluntarily share their own desktop, laptop, cluster or generic resource/device with a C@H community.

In this paper we specifically focus on private Clouds, narrowing the issues and related solutions to this class of volunteer contributors. In other words, here we restrict the concept of volunteer to just private Cloud contributors.

Fig. 2: The system architecture.

3.2 A Modular Architecture

According to the scenario discussed above, in the following we identify the main blocks and modules of the C@H architecture, considering just private Clouds as Resource Owners. This includes the basic functionalities and mechanisms for implementing a C@H provider on top of private Cloud contributors. It can also be considered as the core architecture, the starting point to extend and generalize when public Clouds and/or individuals are involved in C@H as Resource Owners.

The main goal of the architecture is to address some of the issues raised above. The architecture offers a set of components that an organization can easily use to build up its own Cloud brokering solution. In Fig. 2 the C@H core architecture is depicted: it is organized into modules, each composed of units providing specific functionalities, named C@H components. C@H components are themselves delivered as resources hosted on Cloud providers. Following and applying the separation of concerns principle, four main modules have been logically identified, grouping the main C@H functionalities: the Resource Abstraction module, the SLA Management module, the Resource Management module and the Frontend module. As shown above, some of the main components are designed to deal with SLA issues.

The Resource Abstraction module hides the heterogeneity of resources (computing, storage and sensor elements) collected from Resource Owners and offers the C@H User a uniform way to access them. It provides a layer of abstraction adopting a standard, implementation-agnostic representation. It also implements drivers in order to convert requests expressed in the intermediate representation into actual invocations on the interface of the Resource Owner.

On top of the Resource Abstraction module, C@H provides tools for the management of the SLAs that have to be negotiated with the C@H Users. The definition of formal guarantees on the performance that the resources must deliver is achieved through the SLA Management module. C@H Users can negotiate the quality level of the requested resources. The negotiation process relies on performance prediction and on statistics of providers' historical

availability in order to assess the sustainability of C@H Users' requests. Statistics are built from information collected on the actual performance and QoS recorded for the supplied resources. A mobile agent-based monitoring service is responsible for gathering those data.

The Resource Management module is the core of the system. It is in charge of the provision of resources and of the SLA enforcement. The most important functionalities provided by this module are related to resource management. In particular, the module is responsible for resource enrolment, discovery, allocation/re-allocation and activation/deactivation. These activities are carried out in accordance with the SLA goals and applying the procedures defined in the SLA modules.

Finally, the Frontend module acts as an interface to the C@H IaaS for both the C@H Admin and the C@H Users. It just collects C@H Admin and User requests and dispatches them to the appropriate system module.

In terms of implementation, C@H components are able to interact with each other through standardized, service-oriented interfaces. Such a choice enables flexible component deployment schemes. Pushing the virtualization paradigm to its limits, a single C@H component can even be offered as a customized virtual machine hosted by any Cloud provider.

4 Resource Abstraction

C@H mainly acts as an intermediary, acquiring different types of infrastructural resources from different Resource Owners and delivering them to C@H Users. To cope with resource and provider heterogeneity, a resource abstraction scheme has been devised. The Resource Abstraction module encompasses the components providing the abstraction and the necessary logic to map resources onto real implementations. Heterogeneous resources lack uniformity in some specific characteristics, properties and aspects, as they consist of dissimilar or diverse elements. Such differences can be broadly categorized according to three resource aspects:

- Type - a possible classification could be based on the resource's intended function (i.e., the resource type), distinguishing among computing, sensor and storage resources. This specific aspect of resource heterogeneity has been investigated in [17]. In the present work, we specifically focus on computing resources, since we are mainly interested in describing the high-level mechanisms and solutions to deal with QoS, SLA and resource management in C@H. However, the proposed solutions do not depend on the type of resources and can be easily adapted to sensor and storage resources.
- Hardware - resources can physically differ in their characteristics, ranging from the internal architecture (CPU, symmetric multiprocessor, shared memory, etc.) to the devices (number of cores, controllers, buses, disks, network interfaces, memories, etc.) and so on. Hardware heterogeneity issues of computing resources can be mainly addressed by virtualization.
- Software - software environments can differ in operating systems, compilers, libraries, protocols, applications, etc. In the Cloud context, as above, computing resource heterogeneity is overcome through virtualization.

To cope with this complexity, we decided to adopt an implementation-agnostic representation and access interface for every class of resources, embracing current standardization efforts whenever possible. Another issue to adequately take into account is the delivery of resources from providers, for which the specific acquisition modality must be defined and reflected in the interface.

Based on the abstractions discussed above, a computing resource interface has been designed to enable access to the provisioned infrastructure. First of all, computing resources are defined in terms of basic blocks like Virtual Machines and Virtual Clusters. A reference standard has been chosen as the native interface to be supported for computing resource management: the OGF Open Cloud Computing Interface (OCCI, [30]). As for the delivery of resources, C@H currently defines two acquisition modalities for computing resources:

- Charged - resources obtained from public Cloud providers at a certain cost or fee, providing guaranteed QoS levels;
- Volunteer - resources obtained from Resource Owners that voluntarily support C@H. These might range from Cloud-enabled academic clusters, which deliver their resources for free (possibly subject to internal administration policies and restrictions), to laboratories providing single machines outside their working hours, to single desktops willing to contribute to the infrastructure. Such resources are intermittent but, in the case of private Cloud volunteer contributors, the availability is slotted, i.e., the private Cloud provider defines a time window, an interval, or more specifically a series of intervals, in which it is available as a C@H Resource Owner. Otherwise, in the case of individuals, no guarantees are provided on the shared resources. In this paper, restricting the scope to just private Cloud volunteer contributors, we assume resources are provided specifying the availability time windows.

Besides the computing resource interface just described, the Resource Abstraction module contains components to support the practical implementation of such an interface. These are the Provider Drivers, which implement the tools for the acquisition of resources by enabling the interaction with the Resource Owners. They receive OCCI-compliant resource requests from other components, convert them into the target provider's interface, perform the actual invocation on the Resource Owner and return the results, which are converted back into OCCI. In this way, it is possible to interact with several open and commercial Cloud platforms, like Amazon EC2 and Rackspace. A generic OCCI driver is provided too, so that Resource Owners whose infrastructure implements this interface are automatically able to interact with the higher-level services by directly exchanging OCCI-compliant messages and requests.

In the current implementation, mainly focusing on private Cloud contributors, we have implemented drivers for the PerfCloud [12] and Clever [42] providers, two Cloud frameworks adopted in the context of the C@H project. PerfCloud is a solution for integrating the Cloud and Grid paradigms, an open problem that is attracting growing research interest [27, 43]. The PerfCloud approach aims at building an IaaS (Infrastructure as a Service) Cloud environment upon a Grid infrastructure, leasing virtual computing resources that usually (but not necessarily) are organized in virtual clusters. Clever builds a Cloud provider out of independent hosts through a set of peer-to-peer based services. The choice of a decentralized P2P infrastructure allows the framework to provide fault tolerance with respect to issues like host volatility. In case a new Cloud framework wants to join C@H, it ought to provide an OCCI-compliant interface or implement a driver for its own infrastructure.
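To make the driver abstraction more concrete, the following is a minimal Java sketch of how a Provider Driver might translate an implementation-agnostic, OCCI-style request into a provider-specific invocation. All names (ProviderDriver, ComputeRequest, ComputeResource, PerfCloudDriver) and the invoked operations are illustrative assumptions, not the actual C@H code.

// Minimal sketch of the Provider Driver abstraction (all names are hypothetical).
// A driver converts an implementation-agnostic, OCCI-style request into calls on
// the native interface of one specific Resource Owner and wraps the answer back.

import java.util.Map;

/** OCCI-style description of a computing resource to be acquired. */
final class ComputeRequest {
    final String architecture;              // e.g. "x86"
    final int cores;                         // e.g. 4
    final Map<String, String> attributes;    // further OCCI attributes

    ComputeRequest(String architecture, int cores, Map<String, String> attributes) {
        this.architecture = architecture;
        this.cores = cores;
        this.attributes = attributes;
    }
}

/** Engine-neutral handle returned for an acquired virtual machine or cluster. */
final class ComputeResource {
    final String providerId;    // provider-local identifier of the resource
    final String accessPoint;   // how the C@H User reaches the resource

    ComputeResource(String providerId, String accessPoint) {
        this.providerId = providerId;
        this.accessPoint = accessPoint;
    }
}

/** Common contract implemented by the PerfCloud, Clever and generic OCCI drivers. */
interface ProviderDriver {
    ComputeResource create(ComputeRequest request) throws Exception;
    void destroy(String providerId) throws Exception;
}

/** Skeleton of a driver for one concrete engine; the native calls are stubbed out. */
class PerfCloudDriver implements ProviderDriver {
    @Override
    public ComputeResource create(ComputeRequest request) throws Exception {
        // 1. Map the OCCI attributes onto the engine's own resource description.
        // 2. Invoke the provider's native service (omitted in this sketch).
        // 3. Convert the native answer back into the engine-neutral representation.
        String nativeId = invokeNativeCreate(request);
        return new ComputeResource(nativeId, "perfcloud://" + nativeId);
    }

    @Override
    public void destroy(String providerId) {
        // Native tear-down call omitted in this sketch.
    }

    private String invokeNativeCreate(ComputeRequest request) {
        return "vc-" + request.cores + "x-" + request.architecture;  // placeholder only
    }
}

Under a scheme of this kind, supporting a new engine amounts to providing one more implementation of the same interface, which is essentially what the text above asks of frameworks that want to join C@H.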
5 Resource management

From a high-level point of view, a C@H provider is an intermediary (or broker) for the acquisition of resources from different Resource Owners. In this way, it delegates the

low-level management of the infrastructure to the interfaced Resource Owners. Two important tasks of a C@H provider are therefore the search for Resource Owners (providers) and the acquisition of resources that will eventually be delivered to the final C@H Users. This section describes the C@H components implementing such tasks: the Registry and the Resource & QoS Manager.

5.1 Registry

The Registry component collects information on resource providers and on the way their offered resources can be assessed. It provides a simple interface through which resource providers can subscribe to C@H (i.e., decide to share their resources with C@H). At subscription time, resource providers must supply a Resource Provider Descriptor (RPD) file. As briefly shown in Listing 1, the file contains the following sections:

- CloudEngine - identifies the Cloud solution adopted by the provider. This information is needed by C@H to set up the correct drivers. As discussed in Section 4, drivers for PerfCloud and Clever have been fully developed. Other engines must use the generic OCCI driver in order to interact with C@H.
- WindowShare - contains the schedule of the time windows during which the resource provider is willing to share resources with C@H. This information is used by C@H when new user requests arrive, to filter out providers that are not willing to share their resources at the time requested by the C@H User.
- Security - contains the subsection(s) dedicated to the kind of credentials accepted by the provider. In the example proposed, the provider adopts the PerfCloud engine, which makes use of GSI credentials from the Globus Security Infrastructure [40]. In this case, a subset of the information contained in the PKI certificate is reported. In order to gain access, the user needs the credentials for the target Virtual Organization.
- AccessPoint - contains the information needed to access the target provider (IP address and qualified names, in the example).

Listing 1: Resource Provider Descriptor

[..]
<Provider>
  <CloudEngine>PerfCloud</CloudEngine>
  <WindowShare>
    <Window>
      <Day>WorkingDay</Day>
      <From>6:00 PM CEST</From>
      <To>6:00 AM CEST</To>
    </Window>
    <Window>
      <Day>HolyDay</Day>
      <From>12:00 AM CEST</From>
      <To>12:00 PM CEST</To>
    </Window>
  </WindowShare>
  <Security>
    <GRIDAuthentication>
      <VirtualOrganization>Cloud@Home</VirtualOrganization>
      <Issuer>O=Grid, OU=GlobusTest, OU=simpleCA-CloudAtHome, CN=Globus Simple CA</Issuer>
      <Subject>O=Grid, OU=GlobusTest, OU=simpleCA-CloudAtHome, CN=Globus Simple CA</Subject>
    </GRIDAuthentication>
  </Security>
  <AccessPoint>
    <IP> </IP>
    <Name>Antares.ing.unina2.it</Name>
    <Name>Antares</Name>
  </AccessPoint>
</Provider>
[..]

At discovery time, i.e., when resources must be enrolled to satisfy a C@H User's request, the Registry can be queried to retrieve the list of providers that are eligible to serve the request according to their WindowShare and Security settings.

5.2 Resource & QoS Manager

The Resource & QoS Manager (RQM) is a crucial component in the C@H architecture, as it is responsible for acquiring the virtual resources from the providers and ensuring that the negotiated QoS is being delivered. As shown in Fig. 3, the crosscutting tasks of the RQM require it to be able to interface with all other subsystems. To this end, the RQM has been designed as an asynchronous event-based system, denoted as RQMCore, which reacts to requests coming from other components.

Fig. 3: The Resource & QoS Manager

The RQM core tasks are Request Management and SLA Enforcement. Request Management consists in the translation of C@H User requests into actual resource selection and allocation. Resource requests that do not involve SLA negotiations are directly forwarded to the RQM by the Frontend. SLA-based resource requests are handed to the RQM by the SLA Manager (described in Subsection 6.1): the associated policies and procedures that must be used in SLA Enforcement are defined through the SLA Manager and stored in the RQM Core databases.
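Before detailing the RQM activities, the discovery-time filtering described in Section 5.1 can be illustrated with a minimal Java sketch. The class and method names (ShareWindow, ProviderEntry, Registry, eligible) are illustrative assumptions, not the actual Registry code; for brevity the sketch only checks the WindowShare section of the RPD, ignoring the Security settings and the requested duration.

// Minimal sketch of discovery-time provider filtering based on the WindowShare
// section of the Resource Provider Descriptor (all names are hypothetical).

import java.time.LocalTime;
import java.util.List;
import java.util.stream.Collectors;

/** One <Window> entry of an RPD: a day type plus a from/to time interval. */
final class ShareWindow {
    final String dayType;    // e.g. "WorkingDay" or "HolyDay", as in Listing 1
    final LocalTime from;
    final LocalTime to;

    ShareWindow(String dayType, LocalTime from, LocalTime to) {
        this.dayType = dayType;
        this.from = from;
        this.to = to;
    }

    /** True if instant t on the given day type falls inside the window
     *  (windows such as 6:00 PM - 6:00 AM wrap past midnight). */
    boolean covers(String day, LocalTime t) {
        if (!dayType.equals(day)) return false;
        return from.isBefore(to)
                ? !t.isBefore(from) && t.isBefore(to)
                : !t.isBefore(from) || t.isBefore(to);
    }
}

/** Registry entry built from a subscribed provider's RPD. */
final class ProviderEntry {
    final String name;
    final List<ShareWindow> windows;

    ProviderEntry(String name, List<ShareWindow> windows) {
        this.name = name;
        this.windows = windows;
    }

    boolean availableAt(String day, LocalTime t) {
        return windows.stream().anyMatch(w -> w.covers(day, t));
    }
}

final class Registry {
    private final List<ProviderEntry> subscribed;

    Registry(List<ProviderEntry> subscribed) {
        this.subscribed = subscribed;
    }

    /** Providers whose declared share windows cover the time requested by the C@H User. */
    List<ProviderEntry> eligible(String day, LocalTime requestedAt) {
        return subscribed.stream()
                .filter(p -> p.availableAt(day, requestedAt))
                .collect(Collectors.toList());
    }
}

In the real system such a check is only a pre-filter: the selected providers must still be queried at run-time, since the management of the resource status (free, busy, etc.) is up to the resource provider itself.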

To fulfill these tasks, the RQM performs many activities. A workflow view of the activities and the interactions they entail is described in Section 8.2.2, while an overview from the RQM perspective is provided here. Activities related to Request Management include:

- Provider Selection. The RQM can query the Registry to obtain the list of subscribed providers. The query can include filtering criteria on parameters specified in the Resource Provider Descriptor of Listing 1.
- Resource acquisition. Once a suitable provider has been found, the Provider Drivers are set up to carry out resource acquisition.
- Logging. The RQM logs all its operations and their status for further inspection and bookkeeping.

More complex activities involve cooperation between multiple modules and are oriented to the management of SLA-based requests and SLA Enforcement:

- Availability guarantees. To provide availability guarantees, the RQM needs a forecast of the availability level of the resources shared by a provider. The availability of a provider resource can be described with the well-known equation:

  ProviderAvailability = MTBF / (MTBF + MTTR)    (1)

  where MTBF is the mean time between failures and MTTR is the mean time to repair of a single resource provided by the corresponding Cloud provider. To estimate the MTBF of a provider resource, the RQM first retrieves historical heartbeat provider data available from the monitoring subsystem, which is based on the Mobile Agent Based Grid Architecture (MAGDA) described in Section 6.3. The historical data can be used to obtain a forecast of the provider MTBF by invoking the forecast service provided by the Cloud@Home Autonomic Service Engine (CHASE), a component described in Section 6.2. As regards the MTTR, it can be defined as:

  MTTR = T_fd + T_boot    (2)

  where T_fd is the time required to detect a resource failure, which is related to the rate at which the monitoring subsystem performs checks, bounded by the MAGDA timeout, a parameter that specifies how often MAGDA agents have to report on the resource status; and T_boot is the time required for the system to boot up another virtual machine in substitution of the failed one. This time depends both on the computing power of the virtual machine and on the complexity of the VM image. Again, the RQM can obtain a forecast for this value by feeding the forecast service in CHASE with historical boot time data (logs of virtual machine boot times) obtained from MAGDA. Once the MTBF and the MTTR are evaluated, the RQM is able to compute the provider availability through Equation 1.
- Alert reaction. The RQM uses alerts generated by the monitoring subsystem to activate the SLA Enforcement process. The policies to be activated are expressed through simple triples [<parameter>, <condition>, <procedure>], which formalize the procedures that have to be triggered when a given parameter satisfies a certain condition. The policies offered to the C@H administrator are represented in terms of a simple template, which enables the administrator to configure them: the responsibility for the correctness of the policies is up to the administrator. The current implementation uses a format akin to JSON.
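A minimal Java sketch of how such [parameter, condition, procedure] triples might be represented and applied is given below. The class names (PolicyRule, PolicyEngine) and method names are illustrative assumptions, not the actual RQM code; the heartbeat example discussed next maps directly onto this structure.

// Minimal sketch of how [parameter, condition, procedure] triples might be
// represented and applied by the RQM (all names are hypothetical).

import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

/** One policy triple: a monitored parameter, a condition on the monitored state,
 *  and the procedure to trigger when the condition holds. */
final class PolicyRule {
    final String parameter;                            // e.g. "Heartbeat"
    final Predicate<Map<String, Integer>> condition;   // e.g. state -> state.get("HBfail") > X
    final Runnable procedure;                          // e.g. restart, resetHBfail

    PolicyRule(String parameter, Predicate<Map<String, Integer>> condition, Runnable procedure) {
        this.parameter = parameter;
        this.condition = condition;
        this.procedure = procedure;
    }
}

final class PolicyEngine {
    private final List<PolicyRule> rules;   // loaded from the RQM policy templates

    PolicyEngine(List<PolicyRule> rules) {
        this.rules = rules;
    }

    /** Called when the monitoring subsystem raises an alert for a parameter.
     *  Every rule whose condition holds is applied, in no particular order:
     *  avoiding conflicting policies is left to the C@H administrator. */
    void onAlert(String parameter, Map<String, Integer> monitoredState) {
        for (PolicyRule rule : rules) {
            if (rule.parameter.equals(parameter) && rule.condition.test(monitoredState)) {
                rule.procedure.run();
            }
        }
    }
}

The heartbeat policy described in the following example corresponds to one rule binding HBfail > X to the restart procedure and two rules binding HBsuccess > Y to the counter-reset procedures.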

Let us describe an example use of policies. Through the MAGDA monitoring component, the system uses heartbeat messages to verify whether a node is alive. It stores the heartbeat information using two variables: HBfail, representing the number of failed heartbeats, and HBsuccess, representing the number of consecutive successful heartbeats. We wish to specify a policy that uses the heartbeat results to detect a machine crash and perform a restart. The policy states that if the number of failed heartbeats hits a certain threshold X, the machine must be restarted. However, random failed heartbeats, which are not symptoms of a crashed machine, may happen, for example due to a very short network unavailability. To avoid the accumulation of randomly failed heartbeats, we reset the HBfail counter if there are at least Y consecutive successful heartbeats. The described policy can be specified as follows:

{Policy: [
  [Heartbeat, HBfail>X, restart],
  [Heartbeat, HBsuccess>Y, resetHBfail],
  [Heartbeat, HBsuccess>Y, resetHBsuccess]
]}

where restart, resetHBfail and resetHBsuccess are identifiers of procedures that respectively restart the virtual machine (the first) and reset the heartbeat counters (the other two). The procedure definitions are collected in a database local to the RQM. When multiple policies are applicable to the current situation (i.e., all their conditions evaluate to true), all the policies are applied in no particular order. Again, the responsibility for verifying that this does not lead to the application of conflicting policies is up to the C@H administrator.

- Performance guarantees. When the SLA involves application-level performance parameters, the RQM can provide guarantees through predictions on the performance of a resource configuration, provided by the CHASE simulation-based performance prediction service. To enable the predictions, the RQM must provide CHASE with a description of the application (included in the user request), the user QoS requirements (part of the SLA) and benchmark data of the provider machines (obtained through the monitoring subsystem).

6 Service Level Agreement management

One of the most critical Cloud-related issues is the management of the Service Level Agreement. This is particularly true for C@H, since it tries to mix the Cloud and the volunteer paradigms. The volunteer contribution in C@H dramatically complicates the SLA management task: the volatility of resources (resource providers can asynchronously join or leave the system without any notice) has to be taken into account when enforcing QoS requirements. In C@H, the SLA Management module is in charge of the negotiation and the monitoring of SLAs, and collaborates with the RQM component for the enforcement of the QoS. The SLA Management module is composed of three components: the SLA Manager, the Cloud@Home Autonomic Service Engine (CHASE) [34] and the Mobile Agent Based Grid Architecture (MAGDA) [6]. The features of these components are briefly discussed in the following.

6.1 SLA Manager

The SLA Manager is in charge of managing the SLA templates to be offered to C@H Users. A C@H User can refer to these templates to start the negotiation procedure with the SLA

Manager, which will eventually produce an SLA. In this case, resources (which are the negotiation object of the SLA) are virtualized and, in general, can be provided by several different Cloud providers enforcing different management policies: SLAs are then crucial to guarantee the quality of virtualized resources and services in such a heterogeneous environment.

We recall that the resource context we are addressing is heterogeneous from different points of view. As discussed in Section 4, C@H aggregates resource providers that are heterogeneous by nature: on the one hand the commercial providers, seeking to maximize their profit, and on the other the volunteer providers, which just share their underutilized resources. Secondly, the provided resources themselves are heterogeneous in terms of the computational power they are able to supply. The SLA management must take this heterogeneity into account, and enforce the appropriate strategy according to the nature of the providers and the resources involved. Given that resource availability is the only QoS parameter currently addressed, when commercial providers are involved in the provision of resources the C@H SLA strategy will guarantee no more than what the providers' proprietary SLAs claim to guarantee, and will apply the very same penalties for unattained QoS. When, instead, providers voluntarily share resources, the SLA produced for a specific provision aims at just forecasting the minimum service level (again, in terms of resource availability) that C@H will likely be able to sustain. The client of volunteered resources is aware that the resources are volatile, and that C@H strives to guarantee the provision; should the service perform worse than the agreed minimum level, no penalty would be applied. For volunteer scenarios, we are planning to develop an incentive mechanism (supported by the SLA framework itself) that rewards those providers that best guarantee the promised QoS. The basic principle is that the better the SLA is honoured, the more credits the provider gains. Credits can then be used by providers to acquire new resources within the federation.

The SLA Manager adopts the WS-Agreement protocol [2] for the User-C@H interactions. WS-Agreement compliant templates are used by the C@H Users to specify the required quality level. An example of a template filled with the C@H User's required functional and non-functional parameters is reported in the following:

Listing 2: Resource and Availability request in WS-Agreement

<ws:ServiceDescriptionTerm ws:Name="CLUSTER REQUEST" ws:ServiceName="SET VARIABLE">
  <mod:Cluster xmlns:mod="[...]">
    <Compute>
      <architecture>x86</architecture>
      <cpucores>4</cpucores>
      [...]
      <title>Compute1</title>
    </Compute>
    <Compute>
      <architecture>x86</architecture>
      [...]
      <title>Compute2</title>
    </Compute>
    [...]
  </mod:Cluster>
</ws:ServiceDescriptionTerm>
[...]
<wsag:GuaranteeTerm wsag:Name="Availability" wsag:ServiceScope="C@H Cluster">
  <wsag:Variables>
    <wsag:Variable wsag:Name="NodeAvailability" wsag:Metric="ch:availability" />
    <wsag:ServiceLevelObjective> 97.0 </wsag:ServiceLevelObjective>
    <wsag:Variable wsag:Name="Duration" wsag:Metric="ch:hours" />
    <wsag:ServiceLevelObjective> 8 </wsag:ServiceLevelObjective>
  </wsag:Variables>
  [..]
</wsag:GuaranteeTerm>

In the ServiceDescriptionTerm section the features of the needed resource are expressed (in this case the C@H User is asking for a cluster of two nodes). It is important to remark that the resource features request complies with the OCCI specification format. At acquisition time, this information will be extracted from the SLA and used to make explicit requests to OCCI-compliant providers. As for the Duration parameter in the GuaranteeTerm section, it will be used at discovery time to filter out providers that do not share resources at the time, and for the duration, requested by the C@H User. Finally, through the NodeAvailability parameter the C@H User specifies the required QoS (non-functional parameters), which in this specific case is targeted at 97.0%.

6.2 CHASE

CHASE (Cloud@Home Autonomic Service Engine) is a framework that adds self-optimization capabilities to Grid and Cloud systems. It evolves from an existing framework for the autonomic performance management of service-oriented architectures [13]. The engine makes it possible to identify the best set of resources to be acquired and the best way to use them from the application point of view. CHASE is a modular framework that follows the autonomic computing paradigm, providing components to fully manage a Grid/Cloud computing element. For the operation of CHASE as a stand-alone autonomic manager, the interested reader is invited to consult the work in [34]. The focus here is on the design of the services provided by the framework to support the operation of C@H, namely the forecast service and the performance prediction service.

The forecast service provides a forecast of future values from historical data. It takes as input a time series of values and produces forecasts based on autoregressive methods. The forecast service is used when the RQM needs to evaluate the provider availability, for which estimates of MTBF and MTTR are required. For the MTBF, historical data of heartbeat failures are used to produce the forecast. For the MTTR, historical boot times of the specific virtual machine image on a specific provider are used. The collection of historical data is carried out through the MAGDA platform, described in the next subsection.

The performance prediction service is a simulation-based estimator of application performance parameters, like execution time and resource usage. In particular, the CHASE simulator, fed with a) an application description, b) information regarding the current state of the system in terms of resource availability and load, and c) the user's requested QoS, builds a parameterized objective function to be optimized. The optimization engine drives the simulator to explore the space of possible configurations in order to find a configuration

that meets the demands. The performance prediction service is used during the negotiation to evaluate the sustainability of performance guarantees. It can also be invoked when the QoS agreed in the SLA is at risk of violation. New simulations are run with up-to-date settings in order to search for alternative scheduling decisions (like migrating or adding more VMs) that can solve the QoS problem.

6.3 Mobile Agent based Application Monitoring

The MAGDA (Mobile Agent Based Grid Architecture) component constantly carries out the monitoring of the QoS level provided by the leased resources. MAGDA [6] is a mobile agent platform implemented as an extension of JADE, a FIPA standard [23] compliant agent platform developed by TILAB [9]. The MAGDA toolset makes it possible to create an agent-enabled Cloud in which mobile agents are deployed on different virtual machines connected by a real or a virtual network. The details of how the MAGDA platform can interact with Cloud environments are discussed in past work [18]. The emphasis here is on the description of the MAGDA-based monitoring service. This has been designed as a multi-agent system that distributes tasks among specialized agents. It comprises both static and mobile agents: the former are responsible for performing complex reasoning on the knowledge base, so they are statically executed where the data reside; the latter usually need to move to the target resources in order to perform local measurements or to get system information.

The Archiver is a static agent that configures the monitoring infrastructure, collects and stores measurements, and computes statistics. According to the parameters to be monitored, the kind of measurement and the provider technology, the Archiver starts different Meters, which are implemented as mobile agents that the Archiver can dispatch where needed. Observers periodically check a set of rules to detect critical situations. They query the Archiver for the statistics and notify applications if some checks have failed. Applications can use an Agent-bus service to subscribe to be alerted about each detected event. They can also invoke MAGDA services to start, stop or reconfigure the monitoring infrastructure. Finally, applications can access the complete knowledge base to retrieve information about the Cloud configuration, the monitoring configuration, statistics and the history of past failed checks.

In the current prototype, the Meters are used to collect three kinds of metrics:

- Image boot times: a Meter is configured to start up as soon as the MAGDA platform is loaded. This provides an estimate of the time required to boot the virtual machine.
- Heartbeats: heartbeats are sent by the Meters to verify the liveness of the resource on which they are residing.
- Benchmark figures: Meters are able to execute different kinds of benchmarks, which vary from simple local data sampling (actual CPU utilization, available memory, etc.) to distributed benchmarking (evaluating distributed data collection, or evaluating the global state with snapshot algorithms).

The MAGDA component poses a number of issues in terms of deployment. The agent platform, the Archiver and the bus service can be deployed as a dedicated virtual machine which coordinates all the available agents. MAGDA Meters, instead, must be installed inside the virtual machines of the user.
A dedicated Java-based agent execution environment is installed and configured as a start-up service in the user VM. From the portal hosted in the C@H Frontend, it is possible to manage (start, stop, migrate, etc.) the mobile agents in order to control monitoring or other functionalities on top of all the computational resources

hosting a MAGDA container. In the current implementation, if the user does not accept the installation of the agent platform and the roaming of mobile agents in its virtual machine, the system cannot provide the required monitoring and the associated services, like health checking and performance prediction.

7 Frontend

The main role of the C@H Frontend within the C@H architecture is to provide an access point to the system functionalities. The Frontend provides the reference links to all the C@H components, thus operating as a glue entity for the C@H architecture. It serves the incoming requests by triggering appropriate processes on the specific C@H components in charge of serving such requests. In order to provide a user-friendly and comprehensive interface to the system, the Frontend has been implemented as an extensible and customizable Web portal. Furthermore, in accordance with the everything-as-a-service philosophy, the developed Frontend component, like the other C@H components, can be deployed as a virtual machine image, enabling the setup of an infrastructure access point with a modest amount of work (as shown in Section 8). More specifically, the Frontend component has to manage both the C@H User and the C@H Admin incoming requests. For this reason it has been split into two parts, as discussed in the following.

7.1 User Frontend

The main goal of the C@H User Frontend is to provide access to the tools and services implemented by the system on behalf of the end users. Such tools rely on and involve lower-level functionalities and operations that are hidden from the users, masking the C@H internal organization and resource provisioning model in a Cloud fashion. Moreover, ubiquity and fault tolerance have to be guaranteed, following a service-oriented provisioning model. For this reason the C@H User Frontend is implemented as a Web service, providing suitable SOAP and/or REST interfaces. The User Frontend exposes the main services allowing end users to access C@H resources. In case the C@H provider implements guarantees on the resource provisioning, adequate services for negotiating the SLA and for monitoring the agreed QoS level on the resources are required. In such cases the Frontend pre-processes, splits and forwards the incoming requests to the corresponding SLA and QoS management components according to the specified requirements. More specifically, the C@H User Frontend exposes the following functionalities (a minimal sketch of a possible REST interface is given after the list):

- resource negotiation - through which the C@H User can trigger the resource negotiation process with the SLA Management module (in particular, downloading SLA templates, issuing SLA proposals, proposing SLA counter-offers);
- resource management - allowing the C@H User to manage (activate/deactivate) the leased resources obtained from the C@H IaaS through the Resource Management module;
- SLA monitoring - providing the C@H User with tools for monitoring the status of the SLA regulating the resource provision, through a specific interface to the SLA Manager;
- settings and preference management - allowing the C@H User to customize the user interface with specific user settings and configurations.
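The following minimal JAX-RS sketch shows one way the REST side of the User Frontend could be organized around the functionalities listed above. The resource paths, method names and payload formats are illustrative assumptions, not the actual C@H interface; each method would simply dispatch the request to the SLA Manager or to the Resource Management module.

// Minimal JAX-RS sketch of a possible C@H User Frontend REST interface
// (paths, names and formats are hypothetical).

import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/cah/user")
@Produces(MediaType.APPLICATION_XML)
public class UserFrontendResource {

    /** Resource negotiation: download the SLA templates offered by this C@H provider. */
    @GET
    @Path("/sla/templates")
    public String getSlaTemplates() {
        throw new UnsupportedOperationException("dispatch to the SLA Manager");
    }

    /** Resource negotiation: submit a WS-Agreement offer (or a counter-offer). */
    @POST
    @Path("/sla/proposals")
    @Consumes(MediaType.APPLICATION_XML)
    public String submitProposal(String wsAgreementOffer) {
        throw new UnsupportedOperationException("dispatch to the SLA Manager");
    }

    /** SLA monitoring: current status of an agreed SLA. */
    @GET
    @Path("/sla/{agreementId}/status")
    public String getSlaStatus(@PathParam("agreementId") String agreementId) {
        throw new UnsupportedOperationException("dispatch to the SLA Manager");
    }

    /** Resource management: activate or deactivate a leased resource. */
    @POST
    @Path("/resources/{resourceId}/{action}")
    public void manageResource(@PathParam("resourceId") String resourceId,
                               @PathParam("action") String action) {
        // "action" is expected to be "activate" or "deactivate".
        throw new UnsupportedOperationException("dispatch to the Resource Management module");
    }
}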

7.2 Admin Frontend

From the provider perspective, one of the most interesting characteristics of the system is the service for building up a C@H provider and the corresponding Infrastructure-as-a-Service by selecting the basic components and services it has to provide. In this way a C@H Admin can customize the service (i.e., the IaaS Cloud provisioning), can decide to either include SLA/QoS management or provide the infrastructure without any guarantee (best effort), and finally can specify where the selected components have to be deployed. The Admin Frontend implements the services to set up, customize and implement such choices, driving the Admin through all the steps required for a correct C@H service establishment. More specifically, such services are:

system management - allows the administrator to set up, configure and deploy a C@H infrastructure and to customize all the services it provides, and therefore also the Frontend interface;
negotiation management - through which negotiation policies can be defined, fine-tuned and deployed;
QoS management - allows the administrator to specify the QoS policies that must be put in force to sustain the SLAs.

The basic components that need to be included in any C@H provider configuration are those of the Resource Abstraction module. A minimal C@H system configuration, building an IaaS with no QoS guarantees, must provide such tools and services, which just implement the basic system management mechanisms. If the Admin needs to set up a C@H infrastructure providing QoS on top of resources, the minimal configuration must be enhanced with the Resource and SLA management modules, thus deploying the whole configuration shown in Fig. 2. Different configurations can be identified to best fit the Admin requirements, not necessarily involving all the C@H components identified above. It is important to remark that the component selection is not directly performed by the C@H Admin, who just customizes the C@H provisioning model they want to implement. The component selection is automatically performed by the Frontend tools according to the requirements specified by the Admin. The deployment, instead, is mainly in charge of the Admin.
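To make the automatic component selection more concrete, the sketch below shows one way the Frontend tools might map the Admin's high-level choices onto the set of modules to deploy. The class and module names are assumptions drawn from the architecture described in this paper, not an actual C@H API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mapping from the Admin's provisioning choices to the
// C@H modules that the Frontend would schedule for deployment.
public class ProviderConfigurator {

    public static List<String> selectComponents(boolean withQosSla) {
        List<String> components = new ArrayList<>();
        // The Resource Abstraction module is mandatory in every configuration.
        components.add("ResourceAbstraction");
        if (withQosSla) {
            // QoS/SLA-enabled providers also need the resource and SLA
            // management modules (and, in this sketch, the monitoring service).
            components.add("ResourceManagement");
            components.add("SLAManagement");
            components.add("Monitoring");
        }
        return components;
    }

    public static void main(String[] args) {
        System.out.println("Best-effort IaaS: " + selectComponents(false));
        System.out.println("QoS-enabled IaaS: " + selectComponents(true));
    }
}
```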

8 C@H in action

In the previous sections the C@H system has been described from a static perspective: the emphasis has been on the C@H architectural modules and components, and on the functionality that each of them offers. This section, instead, gives the reader a dynamic view of the C@H system, focusing on the activities that are triggered within the system and on the interactions taking place among the components of the architecture.

8.1 Process View

Some activities must be carried out in order to create a new C@H system from scratch and to get it ready to work, as shown in Fig. 4. In the first stage (image retrieval) the overall C@H system image, made up of specific software components, must be retrieved. Several versions of the system are available, each providing a different flavor of services, ranging from the very basic to the full-fledged. As discussed in Section 7, the basic version is equipped with just the Resource Abstraction module. The C@H User is aware that they will not get any guarantee on the performance of the resources provided, and cannot claim support in case of QoS degradation; the service will thus be provided on a best-effort basis. The full-fledged version of the C@H system, instead, features the SLA/QoS management service. In the following we discuss the activities to be carried out when a full-fledged C@H system must be set up.

Fig. 4: Infrastructure set-up process

Once the image is retrieved, the system profiling activity is performed, during which the C@H Admin specifies the policies and the strategies that will have to be adopted for SLA negotiation and enforcement, respectively. After that, the components can be deployed (system deployment), configured in order to properly work with each other (system configuration), and run (system boot-up). The C@H system is then up and ready to accept C@H User requests.

Fig. 5: Request management process

The overall request management process (shown in Fig. 5) involves many activities, some of which are optional, in the sense that they may or may not be triggered depending on the C@H User's specific request. The process starts with the SLA negotiation of the resource functional parameters and, optionally, of the QoS level (non-functional parameters) that the C@H system must sustain at provision time. In the latter case, upon a successful termination of the negotiation activity, the C@H User receives a formal guarantee (in the form of an SLA) that the requested resources will be provisioned and, if required, the QoS levels will be sustained. Should the negotiation fail, the request is simply discarded and the C@H User has to issue a new request. Otherwise, upon a successful negotiation, the requested resources are activated and assigned to the C@H User (resource delivery). From this point onwards, the C@H User can use the resources. If the SLA also includes QoS, a monitoring infrastructure is set up to keep the performance of the delivered resources under constant control (resource monitoring): in this activity, the non-functional parameters contributing to the overall QoS (namely, the availability) are monitored to ensure that none of the SLA terms is being violated. Whenever any QoS parameter is about to be violated, a recovery action is triggered (QoS recovery): in this step countermeasures are taken in order to bring the QoS back to safe levels. Upon a successful recovery, the originally requested QoS is restored and the monitoring process, which had been temporarily put on stand-by, is resumed in order to detect new possible faults. Should the recovery fail instead, the resource provision is stopped and the SLA is terminated. Finally, unless the C@H User or the C@H system decides (for any reason) to force an early termination, the resource provision ends at the termination time specified in the SLA, and the resources are released (termination).
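The resource monitoring and QoS recovery steps can be read as a simple control loop: compare the measured availability against the SLA threshold and trigger countermeasures when a violation is imminent. The sketch below illustrates that loop under assumed names; the safety margin and the recovery call are illustrative choices, not the paper's actual implementation.

```java
// Illustrative monitoring loop for the availability guarantee of a single node.
// The interfaces (MonitoringClient, RecoveryService) are assumptions for the sketch.
public class AvailabilityWatchdog {

    interface MonitoringClient { double measuredAvailability(String nodeId); }
    interface RecoveryService  { void migrateOrReplace(String nodeId); }

    private final MonitoringClient monitor;
    private final RecoveryService recovery;
    private final double slaThreshold;   // e.g., 0.95 for a 95% availability guarantee
    private final double safetyMargin;   // start recovering before the SLA is actually breached

    AvailabilityWatchdog(MonitoringClient m, RecoveryService r,
                         double slaThreshold, double safetyMargin) {
        this.monitor = m;
        this.recovery = r;
        this.slaThreshold = slaThreshold;
        this.safetyMargin = safetyMargin;
    }

    /** Returns true if a recovery action was triggered for the node. */
    boolean check(String nodeId) {
        double availability = monitor.measuredAvailability(nodeId);
        if (availability < slaThreshold + safetyMargin) {
            // The agreed QoS is at risk of violation: take countermeasures.
            recovery.migrateOrReplace(nodeId);
            return true;
        }
        return false;
    }
}
```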

8.2 Interaction view

This subsection takes a practical approach and describes in detail how to set up a C@H provider, and how the latter negotiates and enforces QoS on top of resources voluntarily shared by Cloud providers. Subsection 8.2.1 first shows how a C@H Admin can build and set up the C@H components that implement the C@H infrastructure according to the IaaS service they want to provide (with or without fault tolerance, QoS/SLA management, etc.). The QoS negotiation and enforcement dynamics are described in subsections 8.2.2 and 8.2.3, respectively.

In the described scenarios, the target resources are academic clusters hosting a Cloud middleware (e.g., Eucalyptus [28], Nimbus [29], PerfCloud [12], Clever [42], OpenStack [41]) able to provide Virtual Clusters (VCs). We also assume that such clusters have a reliable frontend during the provider availability time windows, whereas computing nodes are unreliable since, for example, they might crash due to power outages or they might be periodically unavailable to perform reconfiguration or maintenance. The scenario we consider implements a C@H infrastructure able to satisfy C@H User requests for VCs that include specific QoS requirements. In particular, the QoS is expressed in terms of availability, i.e., the fraction of time (percentage) the node must be up and reachable with respect to a 24-hour base time, as specified in Section 5.2. Although C@H is able to pick resources from different providers, in this specific case we assume that all the nodes belong to the same Cloud provider. The VC images host the MAGDA agent-based monitoring service.

8.2.1 System Setup

The C@H Admin is the actor in charge of the deployment and setup of the C@H infrastructure. As already pointed out in Section 3, C@H components are provided with service-oriented interfaces. Such a choice enables flexible deployment schemes: inspired by the as-a-service paradigm, individual components (or groups of components) can be packaged together and run within virtual machines offered by any (even commercial) Cloud provider.
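Given the availability definition above (fraction of a 24-hour window during which the node is up and reachable), the monitoring data collected by the Meters can be turned into an availability figure in a straightforward way. The following sketch estimates availability from heartbeat timestamps; the heartbeat period and the notion of "down" used here are assumptions made only for illustration.

```java
import java.util.List;

// Illustrative availability estimate over a 24-hour window from heartbeat
// timestamps (milliseconds). A node is considered down whenever the gap
// between consecutive heartbeats exceeds a tolerance of a few periods.
public class AvailabilityEstimator {
    private static final long WINDOW_MS = 24L * 60 * 60 * 1000;
    private static final long HEARTBEAT_PERIOD_MS = 30_000;            // assumed Meter period
    private static final long DOWN_THRESHOLD_MS = 3 * HEARTBEAT_PERIOD_MS;

    // heartbeats: timestamps within the window, sorted in ascending order;
    // windowEnd: end of the 24-hour observation window.
    public static double availability(List<Long> heartbeats, long windowEnd) {
        long windowStart = windowEnd - WINDOW_MS;
        long downtime = 0;
        long previous = windowStart;
        for (long t : heartbeats) {
            long gap = t - previous;
            if (gap > DOWN_THRESHOLD_MS) {
                downtime += gap - DOWN_THRESHOLD_MS;   // count the silent part as downtime
            }
            previous = t;
        }
        // Account for silence between the last heartbeat and the end of the window.
        long tailGap = windowEnd - previous;
        if (tailGap > DOWN_THRESHOLD_MS) {
            downtime += tailGap - DOWN_THRESHOLD_MS;
        }
        return 1.0 - (double) downtime / WINDOW_MS;    // fraction of time up
    }
}
```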
