University Halle-Wittenberg Institute of Computer Science

University Halle-Wittenberg, Institute of Computer Science

Proceedings of the PhD Symposium at the 2nd European Conference on Service-Oriented and Cloud Computing

Wolf Zimmermann (Editor)

September 2013

Technical Report 2013/01

Institute of Computer Science
Faculty of Natural Sciences III
Martin-Luther-University Halle-Wittenberg
D Halle, Germany
WWW:

All rights reserved.
LaTeX style designed by Winfried Geis, Thomas Merkle, University of Stuttgart; adapted by Paul Molitor, Halle. Permit granted by University of Stuttgart [25/05/2007].

Contents

Application of Stable Marriage Algorithms and the Cooperative Game Theory to the Building of Cloud Collaborations. Olga Wenge; Supervisor: Ralf Steinmetz

Selecting Cloud Data Centers for QoS-Aware Multimedia Applications. Ronny Hans; Supervisor: Ralf Steinmetz

A Model for Energy-Awareness in Federated Cloud Computing Systems with Service-Level Agreements. Alessandro Ferreira Leite; Supervisors: Alba Cristina Magalhaes Alves de Melo, Christine Eisenbeis, Claude Tadonki

A Quality Model for Cloud Services in Use. Yvonne Thoß; Supervisor: Alexander Schill

Information Supply Chains: Optimized Multi-Tier Service Selection based on Consumer Context and Utility. Jens Kirchner; Supervisors: Andreas Heberle, Welf Löwe

Leveraging Privacy in Identity Management as a Service through Proxy Re-Encryption. David Nunez; Supervisors: Isaac Agudo, Javier Lopez

Towards Leveraging Semantic Web Service Technology for Personalized, Adaptive Automatic Ubiquitous Sensors Discovery in Context of the Internet of Things. Kobkaew Opasjumruskit; Supervisor: Birgitta König-Ries

An Agent-Based Architecture for Resource Allocation in Cloud Computing. Nassima Bouchareb; Supervisors: Nacer Eddine Zarour, Samir Aknine


Preface

The goal of the PhD Symposium is to provide a forum for PhD students to present and discuss their work with senior scientists and other PhD students working on related topics. As for the main conference, the topics cover all aspects of Web Services, Service-Oriented Architectures, and related fields. In contrast to the main conference, the presented work is usually unfinished or has only just been started within the PhD projects. The programme committee carefully selected eight contributions; each submission was reviewed by at least two PC members. In addition to a precise description of the problem to be solved, preliminary results, and first ideas for solving the main problem, the contributions also include a work plan. All these issues are discussed at the symposium with selected senior scientists and the PhD students.

We are grateful to the conference organizers Ernesto Pimentel and Carlos Canal for their organizational support. We thank the international programme committee for their work, consisting of: Antonio Brogi (University of Pisa, Italy), Rik Eshuis (Eindhoven University of Technology, The Netherlands), Thomas Gschwind (IBM Zurich Research Lab, Switzerland), Birgitta König-Ries (University of Jena, Germany), Welf Löwe (Linnaeus University, Sweden), Claus Pahl (Dublin City University, Ireland), George Angelos Papadopoulos (University of Cyprus, Cyprus), Ulf Schreier (University of Applied Sciences Furtwangen, Germany), and Massimo Villari (University of Messina, Italy). Finally, we would like to thank all authors, no matter whether their contribution was accepted.

Wolf Zimmermann, PhD Symposium Chair
Halle, September 9, 2013


Application of Stable Marriage Algorithms and Cooperative Game Theory to the Building of Cloud Collaborations

Olga Wenge, supervised by Ralf Steinmetz
Multimedia Communication Lab (KOM), Technische Universität Darmstadt, Germany

Abstract. Today's cloud environments are very heterogeneous. This cloud heterogeneity, a consequence of lacking cloud standards, builds technical and security barriers between cloud providers and blocks them from intended cloud collaborations within cloud marketplaces. A cloud broker, who acts on behalf of cloud providers, matches compatible collaborative partners according to their requirements and attempts to support an optimal exchange of cloud resources between them. The cloud brokerage matchmaking process must also consider security aspects. The fulfillment of security requirements in cloud collaborations usually involves risk assessments, which are still very time-consuming. In our research we aim at developing appropriate optimal mechanisms for cloud provider selection for building cloud collaborations with respect to security requirements. In this paper, we present our initial ideas on the application of cooperative game theory and stable marriage algorithms in order to provide a solution for proper cloud provider selection.

Keywords: cloud collaborations, cloud provider selection, stable marriage problem, cooperative game theory, cloud market supervision

1 Introduction

Today's cloud environments are built up of heterogeneous landscapes of independent clouds. The heterogeneity of clouds, a consequence of still nonexistent technology, security, and audit standards, presents a hurdle for proper collaboration between clouds, which is necessary for building the cloud ecosystem and cloud marketplaces [1].
The reasons for cloud collaborations can be very different: enterprise acquisitions, storage and compute power extensions, disaster recovery plans, sub-contracting and service outsourcing, the need for a wider spectrum of services, etc. Such cloud collaborations bring cloud providers further advantages. Besides eco-efficiency due to the shared usage of data centers and technologies [2], better scalability and cost reduction can be achieved through the ad hoc selling of free resources and the buying of additional external resources. This exchange of cloud resources forms the basis of the cloud brokerage service model [3]. Cloud brokerage enables cloud providers to find

an optimally suitable match for each other, i.e., to find a collaborative partner that meets all requirements of the intended cloud collaboration. These requirements may include business aspects (pricing, timelines), technical aspects (compatibility, interoperability, availability), and of course legal and security aspects (level of data protection, security measures, compliance with different industrial regulations, etc.) [3, 4, 5]. The cloud broker is the main actor in the cloud brokerage service model and acts as a mediator between cloud service providers and cloud service consumers, providing matchmaking, monitoring, and governance of cloud collaborations [6]. The matchmaking of security and legal requirements, and especially the monitoring of their fulfillment during the cloud collaboration, is not trivial. Security risks tend to increase when entering into cloud collaborations within cloud marketplaces, because collaborative partners may have implemented different security policies and standards [7]. Therefore, two main requirements must be met to provide secure and compliant cloud collaboration: the cloud broker must perform an optimally reliable security risk assessment prior to the collaboration, or on demand, and the cloud broker must provide security governance during the collaboration. The security risk assessments of cloud providers are widely discussed in recent research but, to the best of our knowledge, these assessments are still very time-consuming and cannot be applied to ad hoc cloud collaborations [8]. In this context, the following research question needs to be answered: What are appropriate optimal mechanisms for cloud provider selection for building cloud collaborations with respect to security requirements? The remainder of the paper is organized as follows. In Section 2, we discuss the current cloud market environments and their supervision.
In Section 3, we present the principles of the stable matching problem and the cooperative game theory. Section 4 provides our initial idea of applying the stable matching and cooperative game principles to the building of cloud collaborations and the cloud provider selection process. Finally, in Section 5, we outline the steps of our future work.

2 Supervision of Cloud Collaborations within Cloud Marketplaces

The current cloud market environments consist of heterogeneous clouds, cloud providers who sell services, customers who buy services, and cloud brokers who help to find the perfect match for their clients. In other words, cloud markets represent the aggregate of possible buyers and sellers of cloud services and cloud resources and the transactions between them [9]. But the current cloud markets are still not organized and supervised when compared to financial or energy markets [10]. The financial and energy markets are supervised by exchanges or other organizations that facilitate and oversee the trade, using physical locations (e.g., the New York Stock Exchange (NYSE), Deutsche Börse (German Stock Exchange in Frankfurt), or the European Energy Exchange (EEX) in Leipzig) or electronic systems (e.g., NASDAQ

- National Association of Securities Dealers Automated Quotations, or XETRA - Xchange Electronic Trading). These are also regulated by different national and international authorities, e.g., the U.S. Securities and Exchange Commission, the Monetary Authority of Singapore, the Energy Market Authority (EMA) in Singapore, the Energy Community (EC) in Europe, etc. [11]. Lack of control or supervision is one of the main concerns about cloud collaborations within cloud marketplaces. The development of market supervision techniques and approaches for the current cloud marketplaces, to provide a fair and orderly cloud market, is still at an embryonic stage. The recently rolled-out Deutsche Börse Cloud Exchange is the latest attempt to bring more transparency and safety to cloud market participants and to narrow the gap in cloud market supervision [12]. The trading of cloud resources within predefined cloud collaborations can be seen as an interim solution to provide the desired supervision and information security governance in cloud markets [9]. Two main principles in market design theory for the establishment of any fair and orderly market are stability and incentive compatibility [13]. Both principles are derived from the cooperative game theory [13, 14] and the stable marriage problem [13, 15] and have found very wide application in economics. The building of coalitions between players in the cooperative game theory with the purpose of increasing their benefit (achievable only if they play together rather than individually) is very similar to the idea of building cloud collaborations. Stability is one of the important drivers for attracting new market participants and building collaborations. Incentive compatibility is necessary to prevent manipulations in the market and within collaborations.
In our research we aim at clarifying in which way and to what extent the principles of the cooperative game theory and the stable marriage problem can be applied to provide a solution for proper cloud provider selection when building cloud collaborations.

3 Principles of the Stable Marriage Problem and the Cooperative Game Theory

Stable Marriage Problem. The stable marriage problem (in its classical form) is a matching problem over two finite disjoint sets M = {m1, ..., mn} of men and W = {w1, ..., wn} of women. Each man and each woman has an ordered preference list with the names of preferred partners. The solution is a set of n monogamous marriages between M and W, i.e., a bijection of M onto W that takes these preferences into account. A matching is called stable if all partners are married and there are no two people who would both prefer each other to their current partners [15]. Existing algorithms for this problem (e.g., the Gale-Shapley algorithm [15] and extended algorithms by Donald E. Knuth [16]) solve it in polynomial time. These algorithms have found application in very different industries to solve real-world situations, such as the assignment of medical graduates to hospitals, of children to schools, and other National Resident Matching Programs [17]. More complex problems, such as the simultaneous assignment of married couples of medical graduates to hospitals or student assignment to universities with predefined student quotas, are NP-complete [16]. The New England Kidney Exchange Program, introduced by Alvin E. Roth, is another real-world application of stable matching and game theory [17].

Cooperative Game Theory. The cooperative game theory (mostly developed by Lloyd S. Shapley) is based on coalition building and the usage of transferable utility [15]. Coalitions consist of players who wish to play (or work) together to increase their benefits, as the cooperative game (or work) should bring more benefit than playing alone. It is the principle of stability that makes coalitions attractive. Consider a set of players P = {1, 2, 3, ..., n}. A coalition C ⊆ P has a transferable utility (any sum of money or other resources) with value v(C). Let x_i denote the profit of each individual player i in the coalition. The coalition is called stable if v(C) ≥ Σ_{i∈C} x_i, i.e., the coalition's value covers what its members could claim individually. The transferable utility can be strategically divided between the players or, if necessary, transferred to any of them. To support the principle of incentive compatibility, all players (as well as the men and women in the stable marriage problem) must, ideally simultaneously, provide truthful information about their payoffs (benefits, preferences) as input to a revelation mechanism: a mechanism run by a third party (or mediator) who gathers this information, analyzes it, and provides advice and decisions (e.g., matching, partner selection). Last but not least in this section: in 2012, the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel was awarded to Lloyd S. Shapley and Alvin E. Roth "for the theory of stable allocations and the practice of market design" [18].
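The deferred-acceptance idea behind the Gale-Shapley algorithm can be sketched in a few lines of Python. This is a textbook illustration, not code from the paper; the participant names in the usage example are invented.

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance: men propose, women tentatively accept.

    men_prefs / women_prefs map each participant to an ordered list of
    preferred partners (most preferred first). Returns a stable matching
    as a dict from man to woman.
    """
    # Rank tables let a woman compare two suitors in O(1).
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                      # men not yet engaged
    next_proposal = {m: 0 for m in men_prefs}   # next woman each man tries
    engaged = {}                                # woman -> man

    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m                      # w was free: accept
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers the new suitor
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                      # w rejects m; he tries again

    return {m: w for w, m in engaged.items()}


men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
matching = gale_shapley(men, women)  # {"a": "x", "b": "y"}
```

Because no man is ever accepted by a woman he ranks below his current tentative partner's rank of him, the resulting matching contains no blocking pair, which is exactly the stability property defined above.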
4 Initial Ideas on the Application of Stable Marriage Algorithms and the Cooperative Game Theory to the Building of Cloud Collaborations

As mentioned above, the building of cloud collaborations is similar to the building of coalitions in the cooperative game theory, and the selection of a compatible collaborative partner is similar to the stable matching problems. In our research we aim at the elaboration of a solution for a stable allocation of cloud providers within cloud collaborations in cloud markets in accordance with their security requirements. We consider a set of cloud providers P = {1, 2, 3, ..., n} in some cloud market, and C ⊆ P as a stable cloud collaboration (coalition) with a transferable utility (compute capacity, storage, etc.) with value v(C). Let x_i denote the cloud resources that each individual cloud provider i possesses. Then v(C) ≥ Σ_{i∈C} x_i, and this stability motivates cloud providers to enter such cloud collaborations. We consider the incentives of individual cloud providers to form this collaboration so as to avoid any conflicts of interest within the collaboration, which can be resolved by binding agreements, policies, and contracts. The idea of transferable utility enables the free transfer and sharing of cloud resources among collaboration partners. Any collaboration has its preference list for new participants, in order to benefit from their entry. In the context of cloud markets this preference list can include technical aspects (number of virtual machines, storage sizes, and capacity parameters), financial aspects (prices, calculated budget for further investments, risk calculations), and security and legal aspects (implemented security level, necessary compliance with governmental, local, and industrial requirements, etc.). The preference lists can also be seen as a set of constraints that new partners must meet to provide safety within collaborations (e.g., security rating = very high, data center tier level = 3, etc.). The market participants must be appropriately matched in order to trade with each other; a matching is unacceptable if it is worse than remaining unmatched. To provide incentive compatibility, a centralized mechanism for running a revelation mechanism is necessary. In the cloud market context, such a mechanism is supposed to be provided by a cloud broker or a cloud exchange. The participants of the market submit their ordered lists of preferences, ideally simultaneously, and a cloud broker allocates them. The truthfulness and completeness of the submitted information are very important here, as misrepresenting the preferences can lead to a loss of benefits.
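The stability condition from Section 3, applied to providers, can be illustrated with a short sketch. The characteristic function and provider names below are invented for illustration only; the paper does not prescribe a concrete value function.

```python
def is_stable_coalition(value, providers):
    """A coalition of cloud providers is worth entering if its joint
    value v(C) is at least the sum of what the members earn alone."""
    standalone = sum(value(frozenset([p])) for p in providers)
    return value(frozenset(providers)) >= standalone


# Toy characteristic function: pooling capacity yields a 20% synergy
# bonus (e.g., from reselling free resources ad hoc).
capacity = {"p1": 100, "p2": 50, "p3": 10}

def v(coalition):
    total = sum(capacity[p] for p in coalition)
    return total * 1.2 if len(coalition) > 1 else total

is_stable_coalition(v, {"p1", "p2"})  # True: v(C) = 180 >= 150 standalone
```

Under a different value function, e.g., one where collaboration carries a coordination overhead larger than the synergy gain, the same check would signal that providers are better off alone and the coalition should not form.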
Our first evaluation of these ideas, using an extended (weighted) Gale-Shapley algorithm based on the deferred-acceptance tactic [16] (i.e., the final matching of participants occurs only after the preferences of all participants have been considered), terminated in polynomial time. The parameters we included for matching were: the type of collaboration services (only IaaS cloud services), a predefined overall minimum for the security rating (without particular values for individual security controls), a maximum budget for the collaboration, and an unlimited number of collaborative partners.

5 Future Work

In our future work, we are going to extend our model and evaluate it for cases in which the information in the preference lists is incomplete. One possible solution here is the use of cloud providers' historical data. The next extension to be considered is a possible multiple collaboration of cloud providers: cloud providers may (simultaneously) enter different collaborations with different preference lists, and we will study what impact this has, especially on their security rating. The security rating approach itself is intended to be developed as well. Our next challenge is the definition of quotas (a limited number of partners) in collaborations and the integration of waiting lists for the case that a possible matching is unacceptable. Furthermore, we plan to implement more granular security requirements for different cloud services and for proper cloud provider selection.

Acknowledgment. This work is supported in part by E-Finance Lab e. V., Frankfurt am Main, Germany.

References
1. Kretzschmar, M., Golling, M.: Security Management Spectrum in Future Multi-Provider Inter-Cloud Environments - Method to Highlight Necessary Further Development. In: 5th International DMTF Academic Alliance Workshop on Systems and Virtualization Management (SVM), pp. 1-8 (2011)
2. Guitart, J., Torres, J.: Characterizing Cloud Federation for Enhancing Providers' Profit. In: IEEE 3rd International Conference on Cloud Computing (CLOUD) (2010)
3. Uttam Kumar, T., Wache, H.: Cloud Broker: Bringing Intelligence into the Cloud. In: IEEE 3rd International Conference on Cloud Computing (CLOUD) (2010)
4. Lampe, U., Wenge, O., Müller, A., Schaarschmidt, R.: Cloud Computing in the Financial Industry - A Road Paved with Security Pitfalls? In: 18th Americas Conference on Information Systems (AMCIS), Association for Information Systems (AIS) (2012)
5. Siebenhaar, M., Wenge, O., Hans, R., Tercan, H., Steinmetz, R.: Verifying the Availability of Cloud Applications. In: 3rd International Conference on Cloud Computing and Services Science (CLOSER 2013), accepted for publication
6. Gomes, E., Vo, Q.B., Kowalczyk, R.: Pure Exchange Markets for Resource Sharing in Federated Clouds. In: Concurrency and Computation: Practice and Experience, Vol. 24, Issue 9 (2012)
7. Wenge, O., Siebenhaar, M., Lampe, U., Schuller, D., Steinmetz, R.: Much Ado about Security Appeal: Cloud Provider Collaborations and their Risks. In: 1st European Conference on Service-Oriented and Cloud Computing (ESOCC), Springer (2012)
8. Schnjakin, M., Alnemr, R., Meinel, C.: Contract-Based Cloud Architecture. In: International Workshop on Cloud Data Management (CloudDB) (2010)
9. Garg, S.K., Vecchiola, C., Buyya, R.: Mandi: A Market Exchange for Trading Utility and Cloud Computing Services. In: Springer Science+Business Media (2011)
10. Garg, S.K., Versteeg, S., Buyya, R.: A Framework for Ranking of Cloud Computing Services. In: Future Generation Computer Systems, Vol. 29, Issue 4 (2013)
11. Rahimi, A.F., Sheffrin, A.Y.: Effective Market Monitoring in Deregulated Electricity Markets. In: IEEE Transactions on Power Systems, Vol. 18, Issue 2 (2003)
13. Roth, A.E.: The Economist as Engineer: Game Theory, Experimentation, and Computation as Tools for Design Economics. In: Econometrica, Vol. 70, Issue 4 (2002)
14. Shapley, L.S.: Markets as Cooperative Games. RAND Corporation (1955)
15. Roth, A.E.: The Shapley Value: Essays in Honor of Lloyd S. Shapley. Cambridge University Press (2008)
16. Knuth, D.E.: Stable Marriage and Its Relation to Other Combinatorial Problems: An Introduction to the Mathematical Analysis of Algorithms. American Mathematical Society (1996)
17. Roth, A.E., Sönmez, T., Ünver, M.U.: A Kidney Exchange Clearinghouse in New England. In: American Economic Review (2005)
18.

Selecting Cloud Data Centers for QoS-Aware Multimedia Applications

Ronny Hans (Advisor: Prof. Dr.-Ing. Ralf Steinmetz)
Multimedia Communications Lab (KOM), Technische Universität Darmstadt, Rundeturmstr. 10, Darmstadt, Germany

Abstract. Cloud computing infrastructures are increasingly used to deliver sophisticated multimedia services. Since these services commonly pose stringent Quality of Service (QoS) requirements, the appropriate selection of data centers arises as a new research challenge. The corresponding Cloud Data Center Selection Problem is addressed in my work. In this paper, I provide a state-of-the-art overview, initial research results, and an extensive outlook on future extensions.

Keywords: cloud computing; multimedia; quality of service; cost; data center; selection

1 Introduction

For many years, cloud computing has been used to deliver Information Technology (IT) services in countless application scenarios. Today, the delivery of sophisticated multimedia services is increasingly gaining in importance. A popular example of such multimedia services is cloud gaming, where video games are executed in the data centers of cloud providers and the content is delivered as an audio/video stream via the Internet [1]. In this context, the fulfillment of stringent Quality of Service (QoS) requirements plays an outstanding role. For example, latency, which determines the quality of experience in video gaming, highly depends on the selection of appropriate data centers located geographically close to the users [2]. Nowadays, most cloud services are provisioned by a few centralized data centers around the globe, whose locations are selected with the aim of minimizing costs [2]. As a result, the current cloud infrastructure is hardly able to provide multimedia services with stringent QoS requirements [2].
Hence, to ensure and improve the provisioning of multimedia software services, two basic questions arise: (1) How to design future cloud infrastructures, i.e., where to place new data centers? (2) How to distribute resources of existing data centers to offer QoS-sensitive software services to a maximum number of users? Thus, the selection process may either refer to choosing among potential data centers for construction at design time, or choosing among existing data centers for service delivery at run time. Both problems are closely related and essentially map onto a similar research problem, which we have previously introduced as

the Cloud Data Center Selection Problem (CDCSP) [3]. The aim of my work consists in the development of corresponding optimization approaches, which permit addressing the CDCSP and hence allow for a cost-efficient, QoS-aware selection of data centers both at design time and at run time. The remainder of this paper is structured as follows: In Section 2, the current state of research is presented. Section 3 outlines initial optimization approaches and gives preliminary evaluation results. Future research directions are discussed in Section 4. Section 5 concludes the paper with a brief summary.

2 Current State of Research

In my work, I address the cost-efficient selection of data centers for multimedia applications during design time and run time. Related issues have been addressed by other researchers in the past. To start with, Goiri et al. [4] present an approach for efficient data center placement, where a location is determined by several factors, e.g., network backbones, cost of electric energy, and proximity to potential customers. The authors use a combination of optimal approaches and heuristics to find a solution. While Goiri et al. focus on the placement of new data centers at design time, my work additionally aims to provide solutions for appropriate data center resource distribution at run time. Larumbe and Sans [5] present an optimization approach which addresses three distinct, yet interlinked problems: First, the authors address the geographical location of data centers. Second, they address the location of software components that are hosted in network nodes. Finally, they investigate the issue of routing. Because the authors see a close connection between these problems, they integrate them into one mathematical framework using an optimal approach. Due to the chosen approach, which uses Integer Programming, the algorithm tends to be more suitable for design time, whereas my work covers both design time and run time. Choy et al.
[2] study the network delay of the existing Amazon EC2 cloud infrastructure. The authors show that the existing small number of large-scale data centers is only able to meet the latency requirements of multimedia applications for fewer than 70% of the US population. The authors propose to augment existing data centers with specialized servers, so-called edge servers, located near end users. They claim that their proposal would allow 90% coverage in the U.S. However, they do not propose an optimization approach to decide on the placement or selection of these data centers and servers. Wang et al. [6] identify several cloud gaming challenges: low round-trip latency, high bandwidth for video streaming, and high computation needs for servers, noting that these challenges can result in high costs. The authors propose an approach to schedule computing and network resources simultaneously, assuming a dynamically changing resource demand. However, the proposed algorithm makes decisions for new requests only and does not try to find a solution for all users at the same time. Its exclusive focus on run time is also a major difference to my work.

3 Initial Approach and Results

In the following, I describe my preliminary research results. These include an exact (optimal) and a heuristic (non-optimal) solution approach, as well as initial evaluation results for both. As a basis, I first introduce a set of formal notations.

3.1 Formal Notations

I assume that the cloud provider considers a set of (potential or existing) geographically distributed data centers, D = {1, 2, ..., |D|} ⊂ N, for selection. These data centers should serve a set of user clusters U = {1, 2, ..., |U|} ⊂ N. Such user clusters represent a number of clients located in a certain geographical area. Further, the provider defines a set of relevant QoS attributes Q = {1, 2, ..., |Q|} ⊂ N. Each user cluster u ∈ U has a specific service demand S_u ∈ N, expressed in, e.g., server units. Additionally, for the provided services, each cluster expects certain QoS requirements QR_{u,q} ∈ R for each QoS attribute q ∈ Q. Without loss of generality, the QoS requirements are expressed as upper bounds, e.g., maximum latency; lower bounds can be expressed through negation. Each data center d ∈ D may provide server units within a minimal and maximal boundary, K_d^min ∈ N and K_d^max ∈ N. Regarding user cluster u and QoS attribute q, a data center gives a QoS guarantee QG_{d,u,q} ∈ R+, depending, e.g., on the network topology and distance. Each selected data center is assumed to result in a fixed cost CF_d ∈ R+ and a variable cost CV_d ∈ R+ per provisioned server unit. The challenge for the provider is the cost-minimal selection of data centers such that the service demand of each user cluster is satisfied under the given QoS constraints.

3.2 Exact Optimization Approach CDCSP-EXA.KOM

An exact solution for the CDCSP, based on Integer Programming (IP), is given by Model 1. This approach, which has been proposed in past research [3], is referred to as CDCSP-EXA.KOM in the following. In the model, Eq.
7 defines the decision variables: x_d are binary variables indicating whether data center d will be constructed (respectively, used) or not; y_{d,u} are integer variables denoting how many resource units data center d provides to user cluster u in order to satisfy its service demand. Depending on the decision variables, the total cost C is determined by the objective function in Eq. 1. Eq. 2 represents the constraint that the service demand of each user cluster must be satisfied by corresponding data center capacities. Eqs. 3 and 4 functionally link the decision variables x and y and also ensure that the capacity of each data center is chosen from the specified interval, i.e., between K_d^min and K_d^max. Eq. 5 constrains the assignment between data centers and user clusters, depending on the variables p_{d,u} from Eq. 6, which indicate whether the QoS requirements of user cluster u are met by data center d or not.
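To make the objective and constraints concrete, the following sketch evaluates them on a toy instance and finds the exact optimum by brute-force enumeration. All numbers here are invented for illustration; the paper itself solves the IP with an off-the-shelf solver, not by enumeration, which is only viable for tiny instances.

```python
from itertools import product

# Toy CDCSP instance (all values hypothetical):
# two candidate data centers, two user clusters, latency as sole QoS attribute.
CF = {1: 100.0, 2: 80.0}       # fixed cost per opened data center
CV = {1: 1.0, 2: 2.0}          # variable cost per provisioned server unit
K_min = {1: 0, 2: 0}           # minimal capacity K_d^min
K_max = {1: 5, 2: 5}           # maximal capacity K_d^max
S = {1: 3, 2: 2}               # demand S_u per user cluster (server units)
QG = {(1, 1): 20, (1, 2): 90, (2, 1): 30, (2, 2): 40}  # latency guarantees [ms]
QR = {1: 50, 2: 50}            # latency requirement per cluster [ms]

# Eq. 6: p[d,u] = 1 iff data center d meets all QoS requirements of cluster u.
p = {(d, u): int(QG[d, u] <= QR[u]) for d in CF for u in S}

def feasible(x, y):
    """Check Eqs. 2-5."""
    for u in S:                                   # Eq. 2: demand satisfied
        if sum(y[d, u] for d in CF) < S[u]:
            return False
    for d in CF:                                  # Eqs. 3-4: capacity window
        load = sum(y[d, u] for u in S)
        if not (x[d] * K_min[d] <= load <= x[d] * K_max[d]):
            return False
    # Eq. 5: no assignment to a center violating the cluster's QoS needs.
    return all(y[d, u] <= p[d, u] * K_max[d] for d in CF for u in S)

def cost(x, y):                                   # Eq. 1: objective
    return (sum(x[d] * CF[d] for d in CF)
            + sum(y[d, u] * CV[d] for d in CF for u in S))

# Brute force over all x in {0,1}^|D| and all integer y (tiny instance only).
best = None
for xs in product([0, 1], repeat=len(CF)):
    x = dict(zip(CF, xs))
    for ys in product(range(max(K_max.values()) + 1), repeat=len(CF) * len(S)):
        y = dict(zip(product(CF, S), ys))
        if feasible(x, y) and (best is None or cost(x, y) < best[0]):
            best = (cost(x, y), x, y)
```

On this instance the optimum opens only data center 2 (total cost 90.0): it is the only center meeting cluster 2's latency requirement, and its capacity suffices for both clusters, so paying the fixed cost of data center 1 never pays off.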

Model 1: Cloud Data Center Selection Problem

Min. C(x, y) = Σ_{d∈D} x_d CF_d + Σ_{d∈D, u∈U} y_{d,u} CV_d              (1)

subject to:
Σ_{d∈D} y_{d,u} ≥ S_u                          ∀ u ∈ U                   (2)
Σ_{u∈U} y_{d,u} ≤ x_d K_d^max                  ∀ d ∈ D                   (3)
Σ_{u∈U} y_{d,u} ≥ x_d K_d^min                  ∀ d ∈ D                   (4)
y_{d,u} ≤ p_{d,u} K_d^max                      ∀ d ∈ D, u ∈ U            (5)
p_{d,u} = 1 if QG_{d,u,q} ≤ QR_{u,q} ∀ q ∈ Q, else 0                     (6)
x_d ∈ {0, 1} ∀ d ∈ D;  y_{d,u} ∈ N ∀ d ∈ D, u ∈ U                        (7)
x_d ∈ R, 0 ≤ x_d ≤ 1 ∀ d ∈ D;  y_{d,u} ∈ R, y_{d,u} ≥ 0 ∀ d ∈ D, u ∈ U   (8)

3.3 Heuristic Optimization Approach CDCSP-REL.KOM

As explained before, Model 1 constitutes an IP. It can be solved using off-the-shelf solver frameworks, most notably via the branch-and-bound algorithm. Unfortunately, this algorithm is based on the principle of (intelligently) enumerating the solution space and thus exhibits worst-case exponential time complexity. Accordingly, past research has shown that solving larger problem instances may result in computation times on the order of hours, which renders the approach unsuitable for application at run time and highlights the need for a heuristic approach. Hence, as an initial measure, I propose the application of Linear Programming (LP) relaxation [7] to the IP formulation. The corresponding approach is referred to as CDCSP-REL.KOM. Through the relaxation, the decision variables are defined as real rather than binary and integer numbers, i.e., Eq. 7 is replaced by Eq. 8. The resulting problem can be solved using commonly much more efficient methods, e.g., the Simplex algorithm or interior point approaches [8]. However, the relaxed formulation may yield suboptimal solutions, thus trading reduced computation time for a higher-cost solution.

3.4 Implementation and Preliminary Evaluation

In order to assess the performance of the proposed optimization approaches CDCSP-EXA.KOM and CDCSP-REL.KOM, I have prototypically implemented

them in Java. As solver, I employ the commercial IBM ILOG CPLEX framework. The focus of the evaluation is on the required computation time and the solution quality, i.e., total cost, and on the tradeoff between these two factors. For the evaluation, I created six test cases with a predefined number of data centers (|D|) and user clusters (|U|), respectively. Latency was considered as the sole QoS attribute. Each test case involved 100 randomly generated problems, based on actual data from the 2010 United States census. The evaluation was conducted on a dedicated laptop computer, equipped with an Intel Core i5-450M processor and 2 GB of memory, operating under Windows 7.

Table 1 provides the results of my evaluation. As can be seen, the heuristic approach CDCSP-REL.KOM has substantial benefits over the exact approach CDCSP-EXA.KOM with respect to absolute computation time, specifically for larger problem instances. This is also confirmed by the macro-averaged ratio, which indicates reductions of up to 99.3%. This reduction is traded against a moderate increase in cost, which reaches about 11% for the smaller problem classes but shrinks with increasing problem size. Hence, CDCSP-REL.KOM appears to be a promising non-exact solution approach to the CDCSP, specifically for application at run time under stringent time constraints.

Table 1: Evaluation results, with 95% confidence intervals in parentheses. Ratios were computed using both macro- and micro-average and use CDCSP-EXA.KOM as baseline.

Test case (|D|, |U|)  Rel. comp. time Macro (CI)  Micro   Rel. cost Macro (CI)  Micro
10, -                 - (2.7%)                     8.2%   110.8% (9.8%)         108.3%
10, -                 - (2.0%)                    13.8%   108.7% (5.2%)         107.3%
20, -                 - (3.5%)                     5.3%   103.9% (1.5%)         103.7%
20, -                 - (4.7%)                     5.2%   103.2% (0.7%)         103.1%
30, -                 - (3.5%)                     1.0%   102.0% (0.5%)         101.8%
30, -                 - (4.3%)                     0.7%   102.3% (0.3%)         102.3%

4 Future Research Directions

The approaches and corresponding results assume a scenario that involves deterministic data and a set of predefined QoS requirements. Furthermore, resources are represented in a coarse-granular form using server units, rather than individual resource types such as CPU power or bandwidth. This scenario and the corresponding optimization approaches could be extended in the future through the consideration of the following aspects:

Individual resource types: The current approaches are based on the assumption that resource allocation takes place in terms of server units. While this model may constitute a good approximation for

many application scenarios, I additionally plan to consider individual resource types such as CPU, GPU, or memory in the future.

Stochastic parameters: In future approaches, I plan to drop the assumption that service demands and QoS properties are precisely known in advance. For that purpose, I will adapt the model to permit stochastic parameters as input.

Functional/non-functional correlation: The current approaches treat service demands and QoS properties as independent. In the future, I plan to further investigate the correlation and potential tradeoff between these factors, e. g., potential reductions in latency through changes in data center resource usage.

5 Conclusions

The cost-efficient selection of cloud data centers for the provision of multimedia services is an important challenge. For the resulting Cloud Data Center Selection Problem (CDCSP), I proposed an exact optimization approach, CDCSP-EXA.KOM, based on Integer Programming, and a heuristic optimization approach, CDCSP-REL.KOM, that uses Linear Programming relaxation. A preliminary evaluation indicated high computational requirements for solving larger problem instances using the exact approach. Using the heuristic, computation time can be reduced by up to 99.3% with moderate cost increases of about 11%, hence providing a first viable approach for scenarios where a selection at run time is required. My future work will focus on the extensions that were outlined in Section 4, the development of additional heuristic approaches, and a more extensive evaluation.

Acknowledgments

This work was partially supported by the Commission of the European Union within the ADVENTURE FP7-ICT project (Grant agreement no ) and by the E-Finance Lab e. V., Frankfurt a. M., Germany.

References

1. U. Lampe, Q. Wu, R. Hans, A. Miede, and R. Steinmetz, "To Frag Or To Be Fragged," in CLOSER.
2. S. Choy, B. Wong, G. Simon, and C. Rosenberg, "The Brewing Storm in Cloud Gaming: A Measurement Study on Cloud to End-User Latency," in NetGames.
3. R. Hans, U. Lampe, and R. Steinmetz, "QoS-Aware, Cost-Efficient Selection of Cloud Data Centers," in CLOUD.
4. I. Goiri, K. Le, J. Guitart, J. Torres, and R. Bianchini, "Intelligent Placement of Datacenters for Internet Services," in ICDCS.
5. F. Larumbe and B. Sansò, "Optimal Location of Data Centers and Software Components in Cloud Computing Network Design," in CCGrid.
6. S. Wang, Y. Liu, and S. Dey, "Wireless Network Aware Cloud Scheduler for Scalable Cloud Mobile Gaming," in ICC.
7. W. Domschke and A. Drexl, Einführung in Operations Research. Springer.
8. F. Hillier and G. Lieberman, Introduction to Operations Research. McGraw-Hill.
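To make the structure of Model 1 and the enumerative nature of the exact approach concrete, the following pure-Python sketch solves an invented toy instance. It enumerates the binary opening variables x_d (the space a branch-and-bound solver searches intelligently) and fills demand with a simple greedy assignment. This is an illustration only, not the CPLEX-based CDCSP-EXA.KOM implementation, and the greedy assignment step may miss feasible solutions on harder instances.

```python
from itertools import product

# Toy CDCSP instance (all numbers are invented for illustration).
D = [0, 1]            # data center indices
U = [0, 1]            # user cluster indices
CF = [100, 80]        # fixed cost CF_d of opening data center d (Eq. 1)
CV = [2, 3]           # variable cost CV_d per assigned server unit (Eq. 1)
K_max = [60, 60]      # maximum capacity K_d^max (Eq. 3)
K_min = [0, 0]        # minimum usage K_d^min (Eq. 4); 0 keeps it trivial here
S = [20, 30]          # server-unit demand S_u of user cluster u (Eq. 2)
p = [[1, 1], [1, 0]]  # p[d][u] = 1 iff d meets cluster u's QoS needs (Eq. 6)

def assign(x):
    """Greedy assignment y for fixed openings x: fill the cheapest
    QoS-permissible open center first. Returns None if infeasible."""
    cap = [x[d] * K_max[d] for d in D]
    y = [[0] * len(U) for _ in D]
    for u in U:
        need = S[u]
        for d in sorted(D, key=lambda i: CV[i]):
            if p[d][u] and cap[d]:
                take = min(need, cap[d])
                y[d][u] += take
                cap[d] -= take
                need -= take
        if need:
            return None  # demand of cluster u not coverable (Eq. 2)
    if any(sum(y[d]) < x[d] * K_min[d] for d in D):
        return None      # minimum-usage constraint violated (Eq. 4)
    return y

def solve():
    """Enumerate all x in {0,1}^|D| and keep the cheapest feasible plan."""
    best = (float("inf"), None, None)
    for x in product([0, 1], repeat=len(D)):
        y = assign(x)
        if y is not None:
            cost = (sum(x[d] * CF[d] for d in D)
                    + sum(y[d][u] * CV[d] for d in D for u in U))  # Eq. 1
            best = min(best, (cost, x, y))
    return best
```

Relaxing Eq. 7 to Eq. 8 (CDCSP-REL.KOM) replaces this enumeration with an LP solvable by Simplex or interior-point methods; in a real implementation the assignment step would of course also be part of the (I)LP.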

A Model for Energy-Awareness in Federated Cloud Computing Systems with Service-Level Agreements

Alessandro Ferreira Leite 1,2
Supervisors: Alba Cristina Magalhães Alves de Melo 1, Christine Eisenbeis 1,3, and Claude Tadonki 4,5
1 Université Paris-Sud 11
2 University of Brasilia
3 INRIA Saclay
4 MINES ParisTech / CRI

Abstract. As data centers increase in size and computational capacity, numerous infrastructure issues become critical. Energy efficiency is one of these issues, because of the constantly increasing power consumption of CPUs, memory, and storage devices. A study shows that the energy consumed by data centers as a whole will be extremely high and that they are likely to overtake airlines in terms of carbon emissions. In that scenario, Cloud computing is gaining popularity since it can help companies to reduce costs and carbon footprint, usually by distributing the execution of services across distributed data centers. The research aims of this work are to propose and evaluate a model for Federated Clouds that takes into account power consumption and Quality of Service (QoS) requirements. In our model, the energy reduction shall not result in negative impacts on the agreements between Cloud users and Cloud providers. Therefore, the model should ensure both energy efficiency and QoS parameters, which sets up possibly conflicting objectives.

1 Introduction

Nowadays, the energy cost can be seen as one of the major concerns of data centers, since it sometimes grows nonlinearly with the capacity of those data centers, and it is also associated with a high amount of carbon (CO2) emissions. Some projections concerning data center energy efficiency [1] show that the total amount of electricity consumed by data centers in the next years will be extremely high and that the associated carbon emissions would reach unprecedented levels.
Depending on the efficiency of the data center infrastructure, the number of watts that it requires can be from three to thirty times higher than the number of watts needed for computations [2]. This has a high impact on the total operation costs [3], which can be over 60% at peak load.
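The "watts required vs. watts needed for computations" ratio cited from [2] is essentially what the Power Usage Effectiveness (PUE) metric captures; a trivial sketch (the example figures below are invented):

```python
def pue(total_facility_watts, it_equipment_watts):
    """Power Usage Effectiveness: total facility power drawn divided by
    the power delivered to IT equipment; an ideal data center approaches 1.0."""
    return total_facility_watts / it_equipment_watts

# Invented example: a facility drawing 6 MW to run 2 MW of IT equipment
# sits at PUE 3.0, i.e. two thirds of the power is infrastructure overhead.
ratio = pue(6e6, 2e6)
```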

The question is how to increase the energy efficiency of whole data centers without sacrificing Quality of Service (QoS) requirements, both for economic reasons and for making the IT environment sustainable [4]. Answering that question is difficult, since there are many variables that contribute to the power consumption of a resource. For instance, the power consumption of a resource depends not only on its architecture or on the application it is running, but also on its position in the data center and on the temperature of the data center [5]. Most research on energy efficiency has focused on hardware, and hardware energy efficiency has indeed improved significantly. However, whereas hardware is physically responsible for power consumption, hardware operations are guided by software, which is thus indirectly responsible for energy consumption [6]. Moreover, many energy-efficient computing approaches focus on single-objective optimizations, without considering QoS parameters. Energy-saving schemes that result in too much degradation of system performance or in violations of Service-Level Agreement (SLA) parameters would eventually cause users to move to another provider. Therefore, there is a need to reach a balance between the energy savings and the costs these savings incur in the execution of the applications. In that context, Cloud computing is gaining popularity since it can help companies to reduce costs and carbon footprint, usually by distributing the execution of their services across distributed data centers.
Cloud computing data centers usually employ virtualization techniques to provision computational resources on demand, and auto-scaling techniques to dynamically allocate resources to applications according to the load, removing resources that would otherwise remain idle and waste power. In order to support a large number of consumers or to decentralize management, Clouds can be combined, forming a Federated Cloud environment. A Federated Cloud can move services and tasks among Clouds in order to achieve its goals. These goals are usually described as QoS metrics, such as minimum execution time, minimum price, availability, minimum power consumption and minimum network latency, among others. Federated Clouds are an elegant solution to avoid overprovisioning, thus reducing the operational costs in an average load situation, while still being able to give QoS guarantees to the users. In that context, our research aims are to propose and evaluate a model for Federated Clouds that takes into account power consumption and SLA constraints. In this proposal, the energy reduction shall not result in negative impacts on the Service-Level Agreements (SLAs) between Cloud users and Cloud providers. Therefore, the model should ensure both energy efficiency and QoS parameters, which sets up possibly conflicting objectives.
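Balancing the two possibly conflicting objectives (energy reduction vs. SLA compliance) can be sketched, for instance, as a weighted-sum scalarization; the function, weights, and candidate values below are all invented for illustration and are not part of the proposed model:

```python
def placement_score(norm_energy, sla_violation_risk, alpha=0.7):
    """Weighted sum of the two conflicting objectives; both inputs are
    assumed normalized to [0, 1], and alpha trades energy against SLA risk."""
    return alpha * norm_energy + (1 - alpha) * sla_violation_risk

# Hypothetical candidate Clouds in a federation:
# (normalized energy consumption, SLA violation risk).
clouds = {
    "cloud_a": (0.2, 0.60),   # frugal but risky
    "cloud_b": (0.5, 0.10),   # balanced
    "cloud_c": (0.9, 0.05),   # safe but power-hungry
}
best = min(clouds, key=lambda name: placement_score(*clouds[name]))
```

With alpha = 0.7 the scalarization favors energy savings; lowering alpha toward 0 would instead select the SLA-safest Cloud, which is exactly the tension the model has to resolve.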

2 Related Works

Basically, there are two approaches to reduce power consumption in a data center. The first one is energy-aware hardware design, which can be carried out at various levels, such as device-level power reduction, circuit- and logic-level intelligent power management, and architecture-level power reduction. The second approach is power-aware software design, e.g., Dynamic Voltage and Frequency Scaling (DVFS) [7], involving the operating system, the applications, and resource allocation in general.

In [8], a DVFS and temperature-aware load balancing technique is presented to constrain core temperatures. The approach lets each core work at the maximum available frequency until a temperature threshold is reached. Experiments in a cluster with a dedicated air conditioning unit show that cooling savings of 57% can be achieved with a 20% timing penalty.

In [9], Mitrani proposes a dynamic operating policy, using Queueing Theory, that considers power consumption and user defection, i.e., users leaving if they have to wait too long before the service starts. The servers are switched on/off in blocks and put into a reserve state according to the workload of the system. Numerical experiments show that the cost of losing a request increases with the workload, and that the benefit of using a dynamic policy, compared to leaving all servers powered on, is larger for lighter loads than for heavier ones. If the number of servers is fixed to the maximum in a lighter-load scenario, the powered-on reserves are not necessary; for heavier scenarios, the servers should be powered on as soon as a queue appears.

In [10], Khazaei, Misic and Misic model a Cloud data center as an M/G/m/m+r queueing system, proposing an analytical technique based on an approximate Markov chain model to evaluate performance indicators without imposing restrictions on the number of servers and assuming a general service time for requests.
The aim of the authors was to evaluate the probability distribution of the response time, the number of tasks in the system, and the buffer size needed for a given blocking probability, using a combination of a transform-based analytical model and a homogeneous, ergodic Markov chain. The results show that the impact of the buffer size becomes imperceptible as the number of servers increases, that the probability of blocking a task decreases when the buffer size increases, and that, considering SLA parameters, it is better to have distinct homogeneous Clouds instead of one heterogeneous Cloud.

In [11], a strategy based on Game Theory is proposed for resource allocation in Horizontal and Dynamic Federated Clouds. Two asynchronous utility games are proposed. The first one, called the UtilMaxpCP game, aims to maximize the total profit for the buyer Cloud Provider (CP), and the second one (the UtilMaxcCP game) tries to maximize the social welfare for the seller CPs. In that federation environment, the CPs make decisions based on local knowledge and preferences, and global decisions are achieved through interactions among them. The main goal is to seek an equilibrium for the whole system.

The results obtained with mathematical simulation show that the UtilMaxpCP game achieved the highest welfare, while the UtilMaxcCP game obtained the highest return on investment and was more cost-effective.

3 Discussion and Research Approach

Recently, Cloud computing has been receiving a lot of attention since it is able to provide utility computing in an elastic environment. Some advantages of Cloud computing can even be obtained at zero cost, since many Public Clouds provide free usage slots, allowing users to run their applications for free in Cloud environments. Also, many Clouds can be put together and seen as a unique environment. Public Clouds can be suitable for executing scientific applications, as they provide a virtually unlimited amount of computing resources on demand and in near real time, especially for users whose peak computing resource needs exceed the capacity of the resources available to them. However, even though Public Clouds have numerous advantages, they also have several disadvantages for scientific applications. The first disadvantage is that there is significant evidence that they cannot produce repeatable and reproducible scientific results [12]. Also, Cloud Providers can limit the number of resources that can be acquired in a period of time. Moreover, due to the number of providers, the complexity increases, as users have to deal with different Cloud interfaces, pricing schemes and Virtual Machine types.

A Federated Cloud can be used to avoid the unavailability of resources in case of Cloud failures and to achieve better QoS, reliability and flexibility. A Cloud Federation can be defined as a set of Cloud computing providers, public and private, that voluntarily interconnect their infrastructures through the Internet in order to share resources among each other [13, 14]. In a federated environment, Clouds interact and negotiate the most appropriate resources to execute a particular application/service.
This choice can involve the coordination and orchestration of resources that belong to more than one Cloud, used, for instance, to execute huge applications. On the other hand, Multi-Cloud denotes the usage of multiple, independent Clouds by a client or service [15]. In this work, we use the term Federated Cloud even when the Clouds are not voluntarily interconnected, because we are considering the user's viewpoint.

In a Federated Cloud context, an efficient Cloud Broker is essential to abstract the complexity of the Clouds. The role of a Cloud Broker is twofold. First, it provides the scheduling mechanisms required to optimize the placement of VMs or applications among Clouds. Second, it offers a uniform interface with operations to manage the resources independently of any particular Cloud Provider. To optimize the placement of VMs or applications, the scheduling mechanism must take into account requirements such as the characteristics of the resources, service performance, and data locality in order to avoid performance

degradation of the services, as well as the cost, users' QoS constraints and even power consumption. A Cloud scheduler finds an allocation of resources among Cloud Providers that optimizes the user criteria and adheres to placement constraints. The Cloud Broker management interface must implement a software layer that translates generic management operations into specific Cloud APIs, providing a uniform view of the Clouds.

Figure 1 shows the architecture of the proposed Cloud Broker. We assume a hierarchical and hybrid Federated Cloud model. In our model, we have a Provisioning module, which is responsible for discovering the resources required to execute an application/service, and a Cloud Coordinator, which interacts with the user and the Clouds. The Cloud Coordinator works in cooperation with the Provisioning module to control the execution of the applications/services. There is a module to monitor Cloud resources that uses QoS parameters or user constraints to take actions such as migrating an application/service to another Cloud, consolidating VMs to reduce power consumption, notifying the users through the Provisioning module about possible application power leaks, or notifying the Cloud Coordinator about the Green Performance Indicators. A Power Data Logging module is responsible for connecting to a power meter and reading the power measurements; it is activated only if a power meter is connected to the infrastructure. The Power Analyzer is responsible for predicting the power consumption of the Cloud, using the data of the power meter and data about the Cloud resource usage, such as CPU, memory and I/O activity. The Communication layer is responsible for implementing transparent communication among the Clouds. In that layer, generic requests are translated into the specific requests required by each Cloud provider.
This layer is necessary because each Cloud environment may have different limitations, such as a maximum request size, distinct request timeouts, distinct commands to interact with the environment and, in some cases, a different execution platform.

Fig. 1. A Federated Cloud Broker Architecture

4 Conclusion and Future Work

This work presented a hybrid Federated Cloud model that takes into account power consumption and QoS parameters. For us, a Federated Cloud is an elegant

solution to avoid overprovisioning while maintaining good performance and operational cost. It is also a solution to help data centers reduce power consumption and the associated carbon footprint. As future work, we intend to implement the architecture and evaluate it by running different applications, considering user constraints such as cost, execution time, and Green Performance Indicators (GPIs) related to energy consumption and carbon footprint, in such a way that the user pays according to the efficiency of his/her applications in terms of resource utilization and power consumption.

References

1. Greenpeace: Make it green: Cloud computing and its contribution to climate change. Technical report, Greenpeace International (March 2010)
2. Stanford, E.: Environmental trends and opportunities for computer system power delivery. In: 20th ISPSD (2008)
3. Hoelzle, U., Barroso, L.A.: The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines. M. C. Pub. (2009)
4. Berl, A., Gelenbe, E., Di Girolamo, M., Giuliani, G., De Meer, H., Dang, M.Q., Pentikousis, K.: Energy-efficient cloud computing. Comput. J. 53 (2010)
5. Orgerie, A.C., Lefevre, L., Gelas, J.P.: Demystifying energy consumption in grids and clouds. In: GREENCOMP (2010)
6. Agosta, G., Bessi, M., Capra, E., Francalanci, C.: Dynamic memoization for energy efficiency in financial applications. In: IGCC (July 2011)
7. Chandrakasan, A.P., Brodersen, R.W.: Minimizing power consumption in digital CMOS circuits. Proceedings of the IEEE 83(4) (1995)
8. Sarood, O., Kale, L.V.: A cool load balancer for parallel applications. In: SC (2011) 21:1-21:11
9. Mitrani, I.: Service center trade-offs between customer impatience and power consumption. Performance Evaluation 68(11) (2011)
10. Khazaei, H., Misic, J., Misic, V.B.: Performance analysis of cloud computing centers using M/G/m/m+r queuing systems. TPDS 23(5) (2012)
11. Hassan, M.M., Hossain, M., Sarkar, A., Huh, E.N.: Cooperative game-based distributed resource allocation in horizontal dynamic cloud federation platform. Information Systems Frontiers (2012)
12. Schad, J., Dittrich, J., Quiané-Ruiz, J.A.: Runtime measurements in the cloud: observing, analyzing, and reducing variance. VLDB Endowment 3(1-2) (2010)
13. Buyya, R., Ranjan, R., Calheiros, R.N.: InterCloud: utility-oriented federation of cloud computing environments for scaling of application services. In: ICA3PP (2010)
14. Celesti, A., Tusa, F., Villari, M., Puliafito, A.: How to enhance cloud architectures to enable cross-federation. In: CLOUD (2010)
15. Ferrer, A.J., Hernández, F., Tordsson, J., Elmroth, E., Ali-Eldin, A., Zsigri, C., Sirvent, R., Guitart, J., Badia, R.M., Djemame, K., Ziegler, W., Dimitrakos, T., Nair, S.K., Kousiouris, G., Konstanteli, K., Varvarigou, T., Hudzia, B., Kipp, A., Wesner, S., Corrales, M., Forgó, N., Sharif, T., Sheridan, C.: Optimis: A holistic approach to cloud service provisioning. Future Generation Computer Systems 28(1) (Jan 2012)
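The Communication layer described in Section 3, which maps generic broker operations onto provider-specific requests, is essentially the adapter pattern. A minimal sketch follows; all class names, methods, and command strings are invented for illustration, not taken from the proposed architecture's implementation:

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Uniform interface offered by the broker's Communication layer."""
    @abstractmethod
    def start_vm(self, image: str, size: str) -> str:
        ...

class ProviderAAdapter(CloudAdapter):
    def start_vm(self, image, size):
        # A real adapter would issue provider A's API call here.
        return f"providerA:run-instances --image {image} --type {size}"

class ProviderBAdapter(CloudAdapter):
    def start_vm(self, image, size):
        # Same generic operation, different provider-specific request shape.
        return f"providerB:vm create --img {image} --flavor {size}"

class Broker:
    """Dispatches a generic operation to the adapter of the chosen Cloud."""
    def __init__(self, adapters):
        self.adapters = adapters  # cloud name -> CloudAdapter

    def start_vm(self, cloud, image, size):
        return self.adapters[cloud].start_vm(image, size)
```

New Clouds join the federation by registering one more adapter; the scheduler and the rest of the broker never see provider-specific request formats, timeouts, or size limits.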

A Quality Model for Cloud Services in Use

Yvonne Thoß
Faculty of Computer Science, Technische Universität Dresden, Dresden, Germany
Supervisor: Prof. Dr. rer. nat. habil. Dr. h. c. Alexander Schill, Technische Universität Dresden

Abstract. The number of available cloud services is constantly increasing. It is up to the cloud user to decide which cloud service or cloud provider fits his needs best. In order to compare different cloud services, individual requirements such as security, reliability or performance issues and economic efficiency have to be considered. For cloud users, it is complicated to evaluate the quality of their cloud services appropriately, since support is missing. My work introduces a quality model that provides the essential quality dimensions in order to define the quality of cloud services from the cloud user's view.

Keywords: Cloud Computing, Cloud User, Cloud Service Quality Model

1 Introduction

Cloud Computing enables demand-oriented access to distributed computing resources over a broadband network. From the cloud user's view, the capabilities available for provisioning seem to be unlimited at any time. The use of cloud services is constantly increasing, and the satisfaction of the users regarding the fulfilment of their wishes and desires is essential. Hence, both functional and non-functional properties (FPs, NFPs) exercise a controlling influence, whereby the focus here is on the latter. The user's requirements with regard to the NFPs refer to both measurable Quality of Service (QoS) metrics (e. g. availability, response time) and subjectively perceived Quality of Experience (QoE) criteria (e. g. usability, cost effectiveness) [1]. Users need support to evaluate the quality of the NFPs during both service discovery and usage. For instance, there is a lot of uncertainty with regard to the security of personal data, the cost effectiveness or the cloud provider's sense of responsibility.
Therefore, users perceive their cloud as a black box. The goal is to increase transparency by means of a Cloud Service Quality Information System (CSQInfS) in order to establish a quality consciousness. The research focus is on giving design recommendations for visualizing the so-called health status of the user's personal cloud. The basis is provided by a cloud quality model. Further, it is assumed that all metrics and criteria are available. As a first step, a Cloud Service Quality in Use Model (CSQiUseM) is proposed in chapter 2. To demonstrate the relevance of the quality dimensions and the CSQInfS's acceptance, the aim and results of a user survey are presented in chapter 3. Finally, the model is compared with several related models and differences are illustrated (chapter 4).
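One conceivable way for a CSQInfS to condense the quality dimensions of the model (chapter 2) into a single health-status value is a weighted mean of per-dimension ratings. The ratings, weights, and six-point scale below are assumptions made for illustration only, not part of the model definition:

```python
# Hypothetical per-dimension ratings on a six-point scale (1 = very poor,
# 6 = excellent); the dimension names follow the CSQiUseM of chapter 2.
ratings = {
    "performance": 5, "economic_efficiency": 4, "reliability": 6,
    "data_security": 6, "responsibility": 3, "customer_service": 4,
    "usability": 5, "flexibility": 3,
}

# Invented weights, e.g. emphasizing the dimensions the survey in
# chapter 3 found most important (Data Security, then Reliability).
weights = {name: 1.0 for name in ratings}
weights["data_security"] = 2.0
weights["reliability"] = 1.5

def health_status(ratings, weights):
    """Weighted mean of the dimension ratings as one overall quality value."""
    total = sum(weights[d] * ratings[d] for d in ratings)
    return total / sum(weights.values())
```

Because the survey revealed a very user-dependent rank order of the dimensions, the weights would in practice have to be configurable per user rather than fixed.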

2 Proposal for a Cloud Quality Model

The CSQiUseM proposed in this study is specified below and shown in Fig. 1. Eight quality dimensions have been defined with the help of their significant quality factors and examples of associated metrics for measurement.

Fig. 1. Quality dimensions and associated quality factors of the CSQiUseM

1. Performance: The degree to which the service satisfies performance-related quality attributes regarding time efficiency (e. g. response time), consumption efficiency (e. g. capacity, utilization) and computing efficiency (e. g. million instructions per second, floating point operations per second) [10].
2. Economic Efficiency: The degree of how financially viable the usage is, depending on the usage fees and their calculation base [11]. The latter depends on the usage behaviour and the usage context.
3. Reliability: The degree of how reliably the service functionalities are executed. It is evaluated, first, by the service's availability and thus the ability of the user to use it or access data at a particular time [12]. Second, the service's stability and robustness (e. g. loss rate, mean time between failures or to recover) as well as the actuality of the hardware and software components have to be considered. Finally, the fulfilment of guaranteed service levels has to be monitored and analysed.
4. Data Security: The degree of how secure the users' data are, and therefore how many security measures are implemented to prevent the intrusion, wiretapping, manipulation or erasure of the users' personal data. Hence, confidentiality, integrity and availability are the three main security objectives [15]. Further important security objectives (e. g. authenticity) are assignable to these core objectives.
5. Provider's Sense of Responsibility: The degree of how responsibly the provider acts. This includes the effort to certify the services [16] and the usage of environmentally friendly hardware and software, e. g.
determined by the Power Usage Effectiveness value [17]. General credibility aspects are interesting as well, such as the financial stability, the size and longevity of the provider, or his competence.
6. Customer Service: The degree of how much the provider supports the user to ensure the most effective usage of the service. This becomes possible through sufficient information provisioning (e. g. instructions for use, online help systems) and available individual support, both synchronous (e. g. hotline, chat) and asynchronous (e. g. forum). In this connection, the quality of the advice plays an important role due to its impact on how expediently and quickly user requests are answered. Next to the provider's expertise, support costs have to be considered as well.
7. Usability: The degree of how effectively, efficiently and satisfyingly the usage of the service is experienced by the user to achieve specified goals in the given context of use [13]. The effectiveness can be equated with the usefulness of the service from the user's perspective and measured via, e. g., the level of achieved objectives. The time to perform specific tasks defines the perceived efficiency and is measured via, e. g., the variety of tasks completed during a defined time period. Both are affected by the understandability of the concept and application of the service as well as the learning time, the amount of effort, and its accessibility, collectively referred to as Ease of Use [2-3, 14]. Very subjective and difficult to determine is the user's satisfaction, which describes the service's emotional quality. The user is satisfied if his requirements are fulfilled and the usage was enjoyable ("Joy of Use").
8. Flexibility: The degree of the user's flexibility and independence, determined by the scalability, portability and interoperability of the service. Moreover, a variety of contract terms regarding, e. g., the contract period, the payment model or payment mechanisms, or negotiable service level objectives have to be considered as well.

3 Evaluation

By means of a user survey, the proposed CSQiUseM was evaluated at the computer expo CeBIT. The first aim was to find out whether the eight quality dimensions are relevant from the user's view. The second aim was to determine the existence of a clear rank order with regard to the importance of the information.
Moreover, the perceived need for a CSQInfS was identified. A total of 30 participants with knowledge of cloud computing characteristics took part. The survey showed that, on average, information on all described quality dimensions was evaluated as moderately, very or extremely important on a six-point scale (Fig. 2). Information about the dimension Data Security was rated as extremely important (83 % stated so). Information about the dimension Reliability was rated as second most important (57 % voted for extremely important). Information on the remaining six dimensions was considered, on average, between moderately and very important (Fig. 2). Only a maximum of two participants evaluated some of these six dimensions as of low or no importance, partly because they did not see any need for receiving this information thanks to their existing knowledge. When asked about further quality information, only one participant requested information about the added value. In the second part, the participants had to sort the five most important dimensions in decreasing order. The analysis revealed a very user-dependent rank order; therefore, no explicit order could be derived. However, Data Security information was mostly rated as most important, followed by Reliability information. Information about the quality of the Customer Service, the Usability and the provider's Sense of Responsibility was mostly not mentioned. The

rank position of information about the service's Performance or Flexibility differed significantly among the participants. Contrary to the first results, less attention was paid to Economic Efficiency information. Finally, the third part of the survey showed that users consciously care about NFPs (83 %). Due to their lack of knowledge (67 %) or quality information not being provided (83 %), they wish for support in order to assess the quality of their services in use (87 %), for instance by using a CSQInfS.

Fig. 2. Evaluated relevance of the eight quality dimensions of the CSQiUseM (ratings from "not at all important" to "extremely important")

4 Related Quality Models

To concretise relevant quality requirements for cloud services, models that define the quality of software, e-services (interactive, content-centered, and Internet-based customer services), IT services (combinations of persons, processes, and information technologies) and cloud services were analysed [5, 8]. Our analysis showed that no quality model covers all quality characteristics relevant for cloud services (Table 1). Software quality models such as ISO 9126 [2], the Volere Requirements Specification Template [3] or the FURPS(+) model [4] focus on software requirements, both FPs and NFPs, and ignore economic aspects, the provider's sense of responsibility (e. g. certification, energy efficiency) or the provided customer service. E-services, including online retailing, banking or shopping sites, have a long history, and there is also a wide range of quality models available for measuring both their quality level and the user's satisfaction. Li and Suomi have evaluated different models and describe that several are based on the multiple-item scale SERVQUAL [5].
SERVQUAL is used to measure the user's subjective perceptions of the quality of non-Internet-based services, whereby the term quality stands for the difference between the perceived and the expected provision of service [6]. Later on, Parasuraman et al. developed the multiple-item scale E-S-QUAL for measuring the service quality delivered by Web sites [7]. Li and Suomi adopted the E-S-QUAL dimension scale as well and subdivided the online transaction process into three stages [5]. However, their analysis has shown that the usability and design of the e-service as well as the dealer's reliability are the most important quality criteria, since they were mentioned most frequently, followed by the dealer's sense of responsibility, the available customer service, implemented security mechanisms and, finally, the quality of the purchased services or products. For cloud services, these aspects are relevant as well. The usage of e-services is often free, thus economic aspects such as

the price may be irrelevant. Similarly, flexibility concerns are not an issue. The Information Technology Service Management Forum (itsmf) developed an assessment model to evaluate the quality of the process results of IT services from the user's view. In contrast, standards (e. g. ISO 9001, 27001) or reference models (e. g. ITIL) focus on the provider's view during the process design, requirements and improvement phases [8]. The evaluation criteria focus on the contract, the employees, the sustainability, the service provisioning, the communication possibilities, and the emergency management. All criteria are adaptable to cloud services; however, economic or flexibility aspects are not considered. Finally, Li et al. developed a quality model based on 46 academic studies to evaluate commercial Platform- or Infrastructure-as-a-Service offerings, but unfortunately no Software-as-a-Service [9]. The metrics are assigned to the three aspects performance, economics, and security. Moreover, quality aspects relevant for cloud services related to usability and reliability, as well as the provider's sense of responsibility or the provided customer service, are not considered. In summary, all related quality models can be used as a basis to evaluate the quality of cloud services from the user's view. However, each model ignores, for sometimes unclear reasons, some of the essential quality requirements with regard to cloud services. Therefore, the definition of a cloud quality model as described was necessary.

Table 1. Characteristics supported by related quality models/literature
(columns: Software [2, 3, 4] | E-Services [6, 7, 5] | IT Services [8] | Cloud Services [9] | Own Model; rows: Performance, Reliability, Data Security, Usability, Flexibility, Economy, Responsibility, Customer Service; only the own model covers all eight dimensions)

5 Conclusion and Future Work

The satisfaction of a user is influenced by the quality of both FPs and NFPs, whereby normally information about the former is provided in detail.
Conversely, users are not sufficiently aware of the quality of the NFP. As shown, users want and need support to appropriately evaluate the quality of their services, taking individual interests into account. Thus, the question of which quality attributes have to be considered is an interesting research topic. Since no suitable quality model was available, the CSQiUseM with its eight core quality dimensions was provided. A user survey demonstrated the relevance of the dimensions and the demand for a CSQInfS. Based on these results, the next step is to determine how this quality information can be

visualized in an optimal way (retrieving the quality information is another interesting topic, but not in focus here). Techniques for user interface evaluation (e. g. guidelines, cognitive walkthrough) are going to be considered [18]. The user's individual values and needs as well as a commonly understandable transfer of knowledge are important aspects to consider. Finally, a visualization guideline to support the development of a CSQInfS is going to be proposed.

Acknowledgements

This work has received funding under project number by means of the European Regional Development Fund (ERDF), the European Social Fund (ESF) and the German Free State of Saxony.

References

1. ITU-T: Definitions of terms related to quality of service. Recommendation E.800 (2008)
2. ISO/IEC 9126: Software Engineering - Product Quality. ISO (2001)
3. Robertson, S., Robertson, J. C.: Mastering the Requirements Process (2nd Edition). Addison-Wesley Professional (2006)
4. IBM developerWorks: The FURPS+ System for Classifying Requirements. (2005)
5. Li, H., Suomi, R.: Evaluating Electronic Service Quality: A Transaction Process Based Evaluation Model. (2007)
6. Parasuraman, A., Zeithaml, V., Berry, L.: SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality. Journal of Retailing (1988)
7. Parasuraman, A., Zeithaml, V., Malhotra, A.: E-S-QUAL: A Multiple-Item Scale for Assessing Electronic Service Quality. (2005)
8. Dohle, H., Lasch, C., Wiedermann, I., Wild, P.: IT-Servicequalität messbar machen - Das itSMF-Bewertungsmodell. Symposion Publishing GmbH, Düsseldorf (2013)
9. Li, Z., O'Brian, L., Zhang, H., Cai, R.: On a Catalogue of Metrics for Evaluating Commercial Cloud Services. (2012)
10. Terplan, K., Voigt, C.: Cloud Computing. Mitp, Heidelberg (2011)
11. NIST: US Government Cloud Computing Technology Roadmap Volume II Release 1.0 (Draft) - Useful Information for Cloud Adopters. (2011)
12. DIN: Zuverlässigkeit - Begriffe. Beuth Verlag, Berlin (1990)
13.
ISO: Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability. ISO (1998)
14. Hartson, R., Pyla, P.: The UX Book. Elsevier, Waltham (2012)
15. Fraunhofer AISEC: Cloud Computing Sicherheit - Schutzziele, Taxonomie, Marktübersicht. Garching b. München (2009)
16. Sunyaev, A., Schneider, S.: Cloud Services Certification: How to address the lack of transparency, trust, and acceptance in cloud services. Communications of the ACM, Vol. 58, No. 2 (2013)
17. The Green Grid: Green Grid Data Center Power Efficiency Metrics. (2008)
18. Jeffries, R., Miller, J., Wharton, C., Uyeda, K.: User interface evaluation in the real world: a comparison of four techniques. CHI '91 Proceedings (1991)

Information Supply Chains: Optimized Multi-Tier Service Selection based on Consumer Context and Utility

Jens Kirchner
Supervisors: Andreas Heberle, Welf Löwe
Karlsruhe University of Applied Sciences
Linnaeus University

Abstract. In a service-oriented market such as the Internet, service functionality can be provided by more than one service provider. Service consumers are therefore able to select among different services, which distinguish themselves in their non-functional characteristics. For an optimized service selection, the actually experienced non-functional characteristics of a service matter. However, services can have further sub-dependencies in order to provide their functionality, like in a supply chain. Unlike real-life supply chains, Information Supply Chains are more flexible due to service-oriented computing (SOC) technology. Within such a supply chain, a service consumer's optimization goals can be passed through, which influences the structure of the supply chain, because sub-services can also be selected according to end consumers' goals in order to increase their utility. However, end consumers' optimization goals can vary. Therefore, services are affected in their non-functional characteristics by the variety in their supply chains. In this paper, we present a novel approach of a multi-tier optimized selection based on machine learning, which considers the structural variations of Information Supply Chains, consumer context, and consumer utility.

1 Introduction

Web services distinguish themselves in functional and non-functional characteristics. Obviously, when choosing a Web service, the functional characteristics are most important. However, if several services provide the same functionality, services are selected according to their non-functional characteristics with respect to their importance towards a consumer's utility and quality goals.
In [1], we described how Service-Oriented Computing (SOC), Software as a Service (SaaS), and Cloud Computing indicate that the Internet is developing into a market of services where functionalities can be dynamically and ubiquitously consumed. The main distinctive characteristic of such a market is that the service consumer does not know anything about the implementation or the

system environment of the provided service. These factors, however, have an influence on the non-functional characteristics, which in turn are important for service selection. We also explained why Service Level Agreements (SLAs) provided by service providers are not sufficient for service selection in an open market. We therefore introduced a framework where service selection is based on the actual consumer experience of the non-functional characteristics of services, which we call Service Level Achievements. From the consumer side, the experienced non-functional characteristics of a service are not only affected by the performance of the particular service itself, but also by the performance of its sub-services. The experienced service quality, performance, and non-functional characteristics are the result of each service component behind a service interface. As in a supply chain scenario, the provision of service functionality is often the outcome of a multi-stage call of services depending on each other. In an SOC setting such as the Internet, sub-dependencies are not evident to consumers. Because of this similarity, we call such multi-stage calls Information Supply Chains. An Information Supply Chain is a multi-stage, dependent interconnection of consuming and providing services in order to provide information or certain functionality for an (end) consumer. In this paper, we introduce a multi-tier approach for optimized service selection based on the context and utility of a service consumer. The multi-tier approach focuses on machine learning and the propagation of optimization goals in the form of utility functions for the selection of sub-services within an Information Supply Chain, in order to achieve a higher individual utility for end consumers.

2 Information Supply Chains

As previously described, service functionality is the product of a service and its sub-services.
Such Information Supply Chains accumulatively affect the performance and thus the non-functional characteristics of a called service, which is illustrated in Figure 1.

Service L1 (t: 34, Δ: 14; c: 115, Δ: 30) → Service L2 (t: 20, Δ: 10; c: 85, Δ: 80) → Service L3 (t: 10; c: 5)

Fig. 1: Accumulation of non-functional characteristics at each level within an Information Supply Chain

The illustration shows a simple Information Supply Chain in which an end consumer calls a service (Service L1) which, in turn, has a two-layer sub-dependency (Service L2 and Service L3) in order to provide its service functionality. As we can see, each service has two cumulative non-functional characteristics: t may stand for response time and c may stand for monetary costs, which have to be

expended in total by the end consumer. As a result of the sub-dependency, the characteristics accumulate on each level.¹ In an open SOC world, services conceal their actual provision (i. e. implementation, infrastructure, sub-dependencies, etc.) from their consumers. Therefore, services are selected according to their own non-functional characteristics together with those of their sub-chain underneath. Since these sub-services are, in turn, selected according to their non-functional characteristics (cf. [1]), an efficient provision of their sub-providers is necessary. However, consumers within a supply chain can have different quality/optimization goals, which we express in utility functions. Consumers in a supply chain can be the actual end consumers, but also all intermediary services² which consume one or more sub-services in order to provide service functionality. Information Supply Chains have similar characteristics to other supply chains. However, there is one big difference: Information Supply Chains are more dynamic due to their technical flexibility in the SOC context.

3 Optimized service selection along an Information Supply Chain from a consumer's perspective

In an open market SOC scenario, utility and context play an important role in the determination of an optimal service candidate:

Utility. When service consumers are able to select among several services for a specific functionality, they select the one whose non-functional characteristics contribute most to their benefit in terms of quality goals, economic goals, or Service Level Requirements. Within our framework, a consumer's utility is expressed in a function mapping measured quality metrics and numeric representations of stated service characteristics to a numerical overall utility value. Such functions are called utility functions. [1]

Consumer context. The second important aspect during service selection is context, since Service Level Achievements are based on measurements from a consumer's perspective.
Because of different environments and conditions during service calls, context differentiation in the form of information about details such as location (e. g. city, state, country), network connection (e. g. maximal bandwidth, connection classification), but also call time has to be taken into account when generating Service Level Achievements as well as when recommending/selecting an appropriate service candidate.

¹ The delta contribution of each service itself is written in brackets. This delta is obviously also dependent on the implementation and composition of a service and can therefore not be seen as unrelated to the sub-chain.
² Service consumers and service providers are not disjoint at all times; service intermediaries are service providers, but also service consumers when they call sub-services. [1]
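The accumulation illustrated in Figure 1 can be sketched in a few lines of Python. This is a minimal model for illustration only; the `Service` class and `accumulated` helper are our names, not part of the framework from [1]:

```python
# Sketch of how non-functional characteristics accumulate along an
# Information Supply Chain (values taken from Fig. 1).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Service:
    name: str
    delta_t: int          # response-time contribution of this service alone
    delta_c: int          # monetary-cost contribution of this service alone
    sub: Optional["Service"] = None   # statically bound sub-service, if any

def accumulated(service: Service) -> tuple[int, int]:
    """Total (t, c) experienced at this service's interface."""
    t, c = service.delta_t, service.delta_c
    if service.sub is not None:
        sub_t, sub_c = accumulated(service.sub)
        t, c = t + sub_t, c + sub_c
    return t, c

# Chain from Fig. 1: L1 -> L2 -> L3
l3 = Service("L3", delta_t=10, delta_c=5)
l2 = Service("L2", delta_t=10, delta_c=80, sub=l3)
l1 = Service("L1", delta_t=14, delta_c=30, sub=l2)

print(accumulated(l1))  # (34, 115), as shown for Service L1 in Fig. 1
```

With the delta values from Fig. 1, the interface of Service L1 exposes the totals t: 34 and c: 115 that the end consumer actually experiences.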

3.1 Single-tier optimization

The most obvious form of an optimized service selection is the local or single-tier optimization. It reduces the selection scenario to two roles: one service consumer is able to select among more than one service in order to consume a certain functionality. Within our framework, service consumers send requests to the broker containing the requested service functionality, their context, and their utility function. Based on the context, the broker takes the Service Level Achievements and calculates the utility value of each service candidate with the consumer's utility function. The service candidate with the highest numerical utility value is recommended to the service consumer. As previously said, the non-functional characteristics of a service are the accumulation within its supply chain. Therefore, service candidates are not selected according to their own performance but according to the performance of their entire Information Supply Chain.

Chain A: Service L1A (t: 48, Δ: 11; c: 98, Δ: 40; U: -290, Δ: -84) → Service L2A (t: 37, Δ: 25; c: 58, Δ: 50; U: -206, Δ: -150) → Service L3A (t: 12; c: 8; U: -56)
Chain B: Service L1B (t: 34, Δ: 14; c: 115, Δ: 30; U: -251, Δ: -86) → Service L2B (t: 20, Δ: 10; c: 85, Δ: 80; U: -165, Δ: -120) → Service L3B (t: 10; c: 5; U: -45)
Chain C: Service L1C (t: 35, Δ: 25; c: 145, Δ: 45; U: -285, Δ: -145) → Service L2C (t: 10, Δ: 5; c: 100, Δ: 90; U: -140, Δ: -110) → Service L3C (t: 5; c: 10; U: -30)
End consumer's utility function: U: ((-t) * 4 - c) → MAX

Fig. 2: Single-tier optimized service selection

Figure 2 shows a simplified example of an end consumer with the opportunity to select between the service interfaces of Information Supply Chains for a specific functionality. In this example, the provision of a fictive functionality is a supply chain of three levels (A = {L1A, L1B, L1C}, B = {L2A, L2B, L2C}, C = {L3A, L3B, L3C}).³ Within each level, services can be substituted for each other. Therefore, we theoretically have the choice among 27 possible Information Supply Chains (|A| · |B| · |C|).
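The two selection decisions in Figure 2 — the end consumer choosing among the three interfaces, and an intermediary such as Service L1A choosing its own sub-chain — can be sketched as follows. The values are taken from Fig. 2; the function and variable names are illustrative:

```python
# Both the end consumer and each intermediary pick the candidate that
# maximizes their own utility function.

def u_consumer(t, c):           # end consumer: U = (-t)*4 - c -> MAX
    return (-t) * 4 - c

def u_l1a(t, c):                # Service L1A's own function: U = (-t) - c -> MAX
    return -t - c

# Accumulated (t, c) at the three selectable top-level interfaces
interfaces = {"L1A": (48, 98), "L1B": (34, 115), "L1C": (35, 145)}
# Accumulated (t, c) of the Level 2-3 sub-chains selectable by L1A
subchains = {"L2A": (37, 58), "L2B": (20, 85), "L2C": (10, 100)}

consumer_pick = max(interfaces, key=lambda s: u_consumer(*interfaces[s]))
l1a_pick = max(subchains, key=lambda s: u_l1a(*subchains[s]))

print(consumer_pick)  # L1B (U = -251): the middle chain in Fig. 2
print(l1a_pick)       # L2A (U = -95)
```

Note that L1A's locally optimal pick (L2A) need not be optimal for the end consumer, which is exactly the drawback the multi-tier approach below addresses.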
The selection options are indicated with dotted arrows. The non-functional characteristics are the input for the end consumer's utility function (U), which is used to determine the selection option that creates the highest utility. In the figure, the calculated, individual utility value is added to each service. Keep in mind that this utility value is specific to this end consumer. Utility functions can vary, so the selection of the sub-services of the intermediary services can be based on different utility functions. Marked with gray arrows, we see the actual statically bound sub-services of each intermediary service according to their own utility functions. For instance, the utility function of Service L1A is U: ((-t) - c) → MAX. So Service L2A was picked, since it created the highest utility value (L2A: U = -95; L2B: U = -105; L2C: U = -110). It can also be the case that services statically bind themselves to a service without any optimization (Service L2A → L3A). As described in our SOC scenario, the consumer is only aware of its direct sub-services. All other sub-dependencies are concealed by the directly called interface. In this example, the consumer can therefore only choose between the three predefined Information Supply Chains as a whole. As a result, the selection is based on the overall utility, which is the output of the utility function (U) with the respective accumulated non-functional characteristics as its input. We can see that, based on the given information as well as the utility function, the consumer selects the supply chain in the middle, since it has the best overall utility (marked with black arrows).

³ For illustration purposes, the lengths/depths of the chains in the example are equal. In reality, however, the depth and structure of Information Supply Chains with similar functionality can vary due to differences in their implementation.

3.2 Multi-tier optimization

Supply chains in general have the problem that they focus on consumers' needs, but they have to set themselves up and need to select the appropriate sub-providers beforehand. Hence, they have to first select a specific kind of consumer and focus on their quality goals. For instance, a deli shop sets a higher focus on the quality of its products than on costs, while a discounter has a stronger focus on costs. Based on their focus, they are selected by consumers with similar goals. So, although the focus of supply chains is consumer-oriented, their set-up comes from the opposite direction. Because of their technical flexibility, Information Supply Chains are not limited to that drawback.
With our multi-tier approach of optimized service selection in an open market scenario, we show how a higher utility can be reached at the consumer's side, which also increases the providers' benefit:

Selected chain: Service L1A (t: 21, Δ: 11; c: 140, Δ: 40) → Service L2C (t: 10, Δ: 5; c: 100, Δ: 90) → Service L3C (t: 5; c: 10)
Alternatives per level: Service L1B (Δt: 14; Δc: 30), Service L1C (Δt: 25; Δc: 45); Service L2A (Δt: 25; Δc: 50), Service L2B (Δt: 10; Δc: 80); Service L3A (t: 12; c: 8), Service L3B (t: 10; c: 5)
End consumer's utility function: U: ((-t) * 4 - c) → MAX

Fig. 3: Multi-tier optimized service selection

Figure 3 picks up the previous example. While within the single-tier optimization the consumer was only able to select at his own level, we now focus on a truly consumer-oriented approach to increase his overall utility.

As previously described, service interfaces conceal implementation and sub-dependencies, and thus the sub-chains on each level. However, in order to convey the idea of this approach, this theoretical illustration shows each component. Services along a (sub-)chain contribute to the overall non-functional characteristics towards the end of the chain. The actual delta of each service cannot easily be determined, since it is also related to the process sequence on each level. For simplification reasons, we assume within this example that all deltas of the services simply sum up towards the end of the chain (cf. Figure 3). From a global perspective within this example, we can calculate the non-functional characteristics of each possible (sub-)chain. With the provided utility function (U), each utility value can be determined and also the global optimum. Table 1 lists the dynamically calculated top 5 sub-chains on each level for utility function U.

Ranking  Chain combination   t    c    U
1        L1A → L2C → L3C     21   140  -224
2        L1B → L2C → L3C     24   130  -226
3        L1A → L2B → L3C     26   130  -234
4        L1B → L2B → L3C     29   120  -236
5        L1A → L2C → L3B     26   135  -239

1        L2C → L3C           10   100  -140
2        L2B → L3C           15   90   -150
3        L2C → L3B           15   95   -155
4        L2B → L3B           20   85   -165
5        L2C → L3A           17   98   -166

1        L3C                 5    10   -30
2        L3B                 10   5    -45
3        L3A                 12   8    -56

Table 1: Top 5 chain paths within a perfect combinatorial selection scenario

We now leave the perfect world with all the simplifications. In reality, we cannot see beyond the service interfaces, and we cannot see the sub-dependencies or what the supply chains look like, either. However, they are still there, and the way they look affects the non-functional characteristics. In our framework, we pass the utility function along the supply chain, and on each level the sub-chain is selected according to the end consumer's utility function and the context of the direct consumer on that level. Obviously, in a changing world such as the Internet, results are often sub-optimal, since the overall optimal paths have not been discovered yet. But learning on each level leads to the continuous improvement of the whole chain.
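Under the stated simplification (deltas simply sum up), Table 1 can be reproduced by exhaustively enumerating all 27 combinations. This brute-force sketch uses the delta values from the figures and is for illustration only:

```python
# Exhaustive "perfect world" enumeration behind Table 1: sum the per-service
# deltas of every L1 x L2 x L3 combination and rank by the end consumer's
# utility U = (-t)*4 - c.

from itertools import product

deltas = {
    "L1A": (11, 40), "L1B": (14, 30), "L1C": (25, 45),
    "L2A": (25, 50), "L2B": (10, 80), "L2C": (5, 90),
    "L3A": (12, 8),  "L3B": (10, 5),  "L3C": (5, 10),
}

def utility(t, c):
    return (-t) * 4 - c

chains = []
for combo in product(["L1A", "L1B", "L1C"], ["L2A", "L2B", "L2C"],
                     ["L3A", "L3B", "L3C"]):
    t = sum(deltas[s][0] for s in combo)
    c = sum(deltas[s][1] for s in combo)
    chains.append((utility(t, c), combo, t, c))

chains.sort(reverse=True)                 # best utility first
for u, combo, t, c in chains[:5]:
    print(" -> ".join(combo), f"t: {t}  c: {c}  U: {u}")
# The global optimum is L1A -> L2C -> L3C with t: 21, c: 140, U: -224
```

Of course, only an omniscient observer could run this enumeration; the framework instead discovers good paths incrementally, as described next.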
Only chain paths that have already been learned can be considered during selection. New chain paths can be discovered, e. g., through different utilities: when a new consumer comes up with a new utility function, a new chain path

can be selected in a multi-tier selection, when this new utility function is passed through. So the broad variety of chain combinations arises through the diversity of utility functions and, hence, the diversity of consumer requirements. In the example, let us assume the sub-chain combination L2C → L3C has not been discovered yet. The known optimal chain of all three levels behind a service interface is then L1A → L2B → L3C. Now, a service consumer looks for a functional service which comprises a functional sub-chain of Level 2 and Level 3 services. In a multi-tier selection, L2C in combination with L3C has never been selected, because it has so far not been a combination with a high utility for any consumer. For the new consumer's utility, however, this combination turns out to be optimal, so after consumption, the actual overall non-functional characteristics for the service interface with that sub-chain combination have been learned. The next time the service interface of L1A is selected with the initial utility function, a better service interface of a Level 2–3 sub-chain is available (L2C → L3C), which is now picked. After the consumption of this new combination of all three levels, the final optimal combination L1A → L2C → L3C with its overall non-functional characteristics has been learned. The Internet as an open market of services is continuously changing: services join and leave the market, and the non-functional characteristics of services change for diverse reasons. So this approach has to deal with change in general. So far, we have talked about the structural information of Information Supply Chains. However, one of the major ideas of SOC is the separation of concerns; therefore, a service consumer does not need to know about the actual provision of a service, nor about its sub-dependencies.
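The discovery step described above can be sketched as a simple knowledge base keyed by measured chains. The structure below is our interpretation for illustration, not the framework's actual data model:

```python
# Once a (sub-)chain has been consumed under some utility function, its
# measured characteristics are learned and become available to consumers
# with other utility functions.

learned = {}   # (functionality, interface) -> measured (t, c) of its chain

def record_measurement(functionality, interface, t, c):
    learned[(functionality, interface)] = (t, c)

def recommend(functionality, utility):
    candidates = {i: tc for (f, i), tc in learned.items() if f == functionality}
    return max(candidates, key=lambda i: utility(*candidates[i]))

u = lambda t, c: (-t) * 4 - c
record_measurement("lookup", "L1A-L2B-L3C", 26, 130)   # known optimum so far
record_measurement("lookup", "L1A-L2C-L3C", 21, 140)   # newly discovered chain
print(recommend("lookup", u))  # L1A-L2C-L3C (U: -224 beats -234)
```

After the new chain path has been recorded, all consumers whose utility functions favor it benefit from the measurement made by the first consumer.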
As for our framework, instead of concerning ourselves too much with the concrete sub-chain structure details behind a service interface, we focus on the information from the consumer side. We assume that, based on the experienced non-functional characteristics within a certain consumer context, consumers with similar utility functions are recommended the same chain path. The actual optimization within Information Supply Chains is therefore path-related, although knowledge about precise structure details is not necessary for the actual optimization, since it only focuses on the actual consumer experience of the non-functional characteristics of a service and its supply chain. The optimization for consumers is based on their utility function and their context. Within a single-tier optimized selection of services (with no sub-dependency), the learning about the experienced non-functional characteristics is based on a consumer's context. Within a multi-tier approach, however, consumer context and a consumer's utility are both important for learning, because of the structural chain differences and the resulting different experiences of the non-functional characteristics. Therefore, within the example, the service consumer of the Level 2–3 sub-chain contributed with a different utility function to the learning process, which helped service consumers with other utility functions in the selection of this particular sub-chain. The optimization and its learning process are, hence, not limited to a specific service functionality for consumers with a specific context and utility function. Although service consumers themselves are only interested in their own optimization, through the separation of the selection levels and the passing-through of an end consumer's utility function in the multi-tier approach, the diversity in the optimization process also contributes to the benefit of others.

4 Technical implementation

In [1], we introduced the basic principles of our SOC framework, whose main purpose is the optimized service selection in an open market scenario. We extended the broker of the traditional SOC model and added new functionalities, which are mainly a) the collection of service call measurements by service consumers in order to b) learn from them and to build up a data base for the c) recommendation of optimal service candidates to new/other consumers. All these functionalities take consumer context as well as utility functions of consumers into account. The framework provides in general two recommendation approaches:

- Dynamic recommendations
- Semi-static recommendations based on dynamic bindings

With dynamic recommendations, optimal service candidates are requested with each service call. In contrast, dynamic bindings work in a semi-static way: instead of a lookup with each service call, the consumer can register a specific functional binding with the logically central broker. With this binding, the service consumer gets an initial recommendation which he statically uses. The broker, however, knows about the binding, and in case there is a better service candidate, it will notify the consumer (publish-subscribe pattern). Besides the central broker component, there is also an extension for local integration environments which pursues the following tasks: a) administration of framework participants with the central component, b) management of dynamic bindings, c) conducting measurements during service calls, which are submitted to the central broker for the learning procedure.
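A minimal sketch of the semi-static dynamic-binding mechanism (publish-subscribe) might look as follows. The `Broker` class and its method names are hypothetical, not the framework's actual API:

```python
# A consumer registers a functional binding with the broker, receives an
# initial recommendation, and is notified when a better candidate appears.

from typing import Callable

class Broker:
    def __init__(self):
        self.best: dict[str, str] = {}            # functionality -> candidate
        self.subscribers: dict[str, list[Callable[[str], None]]] = {}

    def register_binding(self, functionality: str,
                         notify: Callable[[str], None]) -> str:
        self.subscribers.setdefault(functionality, []).append(notify)
        return self.best[functionality]           # initial recommendation

    def publish_better_candidate(self, functionality: str, candidate: str):
        self.best[functionality] = candidate
        for notify in self.subscribers.get(functionality, []):
            notify(candidate)                     # push update to consumers

broker = Broker()
broker.best["translation"] = "ServiceA"

bound = []
initial = broker.register_binding("translation", bound.append)
broker.publish_better_candidate("translation", "ServiceB")
print(initial, bound)
```

The consumer keeps calling its statically bound candidate until the broker pushes a better one, avoiding a broker round-trip on every call.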
The described multi-tier approach of optimized service selection is implemented in the outlined functional extensions to the existing SOA model. An individually optimal Information Supply Chain arises through the passing-through of an end consumer's utility function, as on each selection level a sub-provider candidate is selected on the basis of the consumer context of the direct requesting consumer⁴ and this end consumer's utility function. Due to measurements during a service call on every level and due to different utility functions, new sub-chains arise.⁵ Each measurement will then be learned with its consumer context and its utility function. The consumer context is relevant for a later recommendation, since it differentiates between possible consumer classifications. Although

⁴ Consumers within an Information Supply Chain are intermediary services.
⁵ For a selected service with a sub-dependency of n levels, at least n + 1 (sub-)chain measurements are created.

utility functions mainly matter for the actual selection of an appropriate service candidate for a consumer, they are important within a multi-tier approach because they differentiate the actual classification of a chain's structure (e. g. premium vs. discount version of a supply chain). This is also important for the actual learning process, since measurements for different chain classifications must not be mixed up with each other. So instead of saving structural details of a supply chain, which might change frequently, we argue that based on a certain consumer context and a certain utility function, all consumers with a similar specification will be recommended the same chain path for a specific functional Information Supply Chain. This approach has two major benefits:

- Preservation of the encapsulation idea in SOC
- A more easily manageable approach towards the perpetual-change characteristic of an open market

When selecting an appropriate service candidate for a consumer, the request will be processed with the respective provided details:

1. Service functionality (e. g. translation service: German → Swedish)
2. Consumer context (e. g. location is Germany; LTE Internet connection)
3. Utility function (e. g. a fast response time is more important (weight: 0.8) than the monetary costs (weight: 0.2))

In contrast to our former approach, which we presented in [1], we developed a machine learning component which learns Service Level Achievements of service candidates based on consumer context and utility. For the recommendation of an appropriate service candidate for a specific service consumer, this component looks up the expected non-functional characteristics of each service candidate (together with its sub-chain) based on the consumer's context and utility. These expected non-functional characteristics of each candidate are then the input for the utility function in order to determine the best-fit service, which is expected to create the highest individual utility.
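The recommendation step with the example weighting above (response time 0.8, monetary costs 0.2) might be sketched as follows. The candidate names, the expected values, and the linear weighting are illustrative assumptions:

```python
# The broker looks up the expected non-functional characteristics of each
# candidate (hard-coded here) and feeds them into the consumer's weighted
# utility function; smaller response time and cost yield a higher utility.

expected = {                     # candidate -> (response time, monetary cost)
    "fast-premium": (10, 100),
    "slow-cheap":   (40, 20),
}

def utility(t: float, c: float, w_t: float = 0.8, w_c: float = 0.2) -> float:
    return -(w_t * t + w_c * c)

best = max(expected, key=lambda s: utility(*expected[s]))
print(best)  # fast-premium: response time dominates with weight 0.8
```

With the weights reversed, the same candidates would yield the opposite recommendation, which is exactly how differing utility functions drive different chain classifications.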
In contrast to the former approach, the consumer (context and utility) classification at design time is no longer relevant, since classification happens automatically as part of learning. This approach is more flexible and avoids the problem of finding appropriate consumer classifications. In order to learn new Information Supply Chain paths, the framework has a few mechanisms: The participating consumers within the framework have the opportunity to propose and initially use a new service candidate (dynamic binding). When there is a better service candidate, the framework updates the binding. In any case, the new service candidate is learned and can be recommended in the future. Through different utility functions on all levels across a chain, new (sub-)chain structures arise, as in the outlined example in the previous section. Occasionally, the framework also recommends sub-optimal service candidates in order to ensure additional variation and also to ensure that formerly sub-optimally ranked service candidates have the chance to prove themselves.
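The occasional recommendation of sub-optimal candidates resembles an epsilon-greedy exploration strategy from reinforcement learning; the following sketch is our interpretation, not the framework's documented mechanism:

```python
# With a small probability epsilon, recommend a random candidate so that
# formerly sub-optimal services get a chance to prove themselves; otherwise
# exploit the candidate with the best known utility.

import random
from typing import Optional

def recommend(candidates: dict[str, float], epsilon: float = 0.05,
              rng: Optional[random.Random] = None) -> str:
    rng = rng or random.Random()
    if rng.random() < epsilon:                    # explore
        return rng.choice(list(candidates))
    return max(candidates, key=candidates.get)    # exploit
```

The exploration rate trades off recommendation quality against the speed with which changed non-functional characteristics are rediscovered.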

With the new machine learning approach, deployment-time training is not needed, since the machine learning approach focuses on continuous learning. The outlined methods also ensure a growing knowledge basis of possible optimal Information Supply Chains.

5 Related work

So far, we have not found any optimized selection approaches on a multi-tier level. Although there are many selection and negotiation approaches for services, which are mainly based on Service Level Agreements (SLAs) (cf. [2-8]), all of them only focus on a local (single-tier) optimization. We have described, however, that the selection of a service also depends on its sub-services, because the non-functional characteristics of sub-services influence those of the directly called service. In [9], the authors describe a Quality of Service (QoS) prediction approach for services. They also outline that for a specific functional service there can be several service candidates to choose from, and that the selection of an appropriate candidate is based on the non-functional aspects. Although they do not speak about consumer context regarding network aspects (mainly response time), they focus on differences in the consumer experience. In their approach, they classify services into a poor service category and an ordinary service category, and use different prediction methods based on the category to which the target service belongs. Although they also focus on the consumers' experience of the network-related non-functional characteristics, they only focus on the prediction of expected QoS attributes, not on the actual selection, and they do not take into account that consumers can have different optimization goals, either. There are several recommendation approaches using collaborative filtering (CF) (cf. [10-12]).
CF-based approaches also use the exploitation of shared knowledge about services in order to recommend services to similar consumers on an automated basis before the actual consumption. In order to find consumer-related similarities, these approaches only work with experienced consumers, as similarities in their preferences have to be found out beforehand. With our approach, by the definition of consumer context and utility functions, new consumers can already benefit from existing knowledge. CF approaches also do not take into account that consumers can have different optimization goals⁶ or preferences (even among different service bindings), and only some approaches (cf. [11, 12]) consider differences between consumers regarding their context. In [13], the authors tackle the lack of consideration of a consumer's preferences and interests; however, they do not take consumer context into account. The authors of [14] describe an approach to tackle the mentioned cold-start problem within CF. In [15], the authors also highlight that the non-functional characteristics of services differ depending on the call context (in their case, they focused on time and

⁶ Optimization goals within our framework are expressed in utility functions.

input). We already described this relevant aspect in [1]. With the approach described in this paper, by means of machine learning, we take the context-dependency of Service Level Achievements and the calculation of a consumer's individual utility into account in the recommendation of a service.

6 Conclusion

In this paper, we presented a novel multi-tier approach of optimized service selection, which tackles the drawbacks of a single-tier (local) optimization. In the SOC world, consumers select service functionality in the form of service interfaces, which conceal technical details such as infrastructure, implementation, but also sub-dependencies. We have outlined that the selection of services is also a selection of Information Supply Chains. Such a supply chain affects the non-functional characteristics of the directly called service. Furthermore, with the technical flexibility in the SOC context, these supply chains can be built dynamically in order to increase an end consumer's utility. This is advantageous not only for the end consumer, but also for providers. With our machine learning approach, Service Level Achievements are learned based on consumer context and utility in order to increase the quality of the expected non-functional characteristics, which are the input for the utility function, which in turn is used for the determination of the service candidate that creates the highest utility for a specific end consumer. For our multi-tier approach, the actual utility function is also important, since Information Supply Chains behind a service interface are built up in order to achieve a high end consumer's utility. With the technical flexibility of this approach, a specific service interface and its supply chain underneath can perform⁷, e. g., as a premium service (with high monetary costs, but fast response times) or as a low-cost service (with low monetary costs, but slower response times) at the same time.
Therefore, Service Level Achievements have to be learned according to the calling conditions (context) and according to how the services are supposed to perform (utility). This approach offers several benefits without losing the abstraction and concealment idea of SOC technology. In general, supply chains position themselves with their own quality goals. Consumers can only choose the best-fitting supply chain from existing, predefined supply chains. According to how supply chains position themselves in terms of quality goals, they are consumed by consumers with similar goals. However, although these goals may be similar, they might not be identical, and so the overall utility for the actual end consumer is sub-optimal. Through flexible adjustment towards the optimization goals of end consumers, a broader variety of possible Information Supply Chains can be achieved, from which prospective new consumers can also benefit. Hence, by passing through an end consumer's utility function, supply chains can align themselves better to their customers' needs. If they appeal to potential consumers, they are more likely to be consumed; hence, from an economic point of view, they can achieve a higher turnover.

(7) In terms of non-functional characteristics, not functionality.
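The utility-based selection described above can be sketched in a few lines. The following is a hypothetical illustration only, not the paper's implementation: attribute names, weights and values are invented, and the "premium" vs. "low-cost" positioning mirrors the example in the conclusion.

```python
# Illustrative sketch: each consumer weights non-functional characteristics
# in a utility function; the candidate with the highest utility is selected.
# All names and numbers below are hypothetical.

def utility(characteristics, weights):
    """Weighted utility over non-functional characteristics.
    Cost-type attributes (lower is better) are negated before weighting."""
    score = 0.0
    for name, (value, lower_is_better) in characteristics.items():
        w = weights.get(name, 0.0)
        score += w * (-value if lower_is_better else value)
    return score

candidates = {
    "premium":  {"response_time_s": (0.2, True), "price_cents": (5.0, True)},
    "low_cost": {"response_time_s": (1.5, True), "price_cents": (0.5, True)},
}

# A latency-sensitive consumer weights response time heavily.
weights = {"response_time_s": 10.0, "price_cents": 0.5}

best = max(candidates, key=lambda s: utility(candidates[s], weights))
print(best)  # premium: fast response outweighs the higher price here
```

With a price-sensitive weighting, the same candidates would yield the low-cost service, which is the point of propagating the end consumer's utility function instead of a fixed quality goal.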

The outlined conceptual work has been implemented. For the verification of the approach, we are going to set up a realistic simulation scenario; because of the concealment within SOC technology, a scenario of real-world services cannot be used for the verification.

7 Research plan

The research is planned as follows:

1. Conceptual solution
   Question: Which non-functional characteristics are relevant during service selection from a consumer's perspective?
   Assumptions: More than one non-functional characteristic is relevant for service selection; however, their relevance is weighted. Selection is based not only on a consumer's preferences and the measured (and condensed) characteristics of service candidates, but also on a consumer's call context (which also affects non-functional characteristics). Multi-tier service selection is achieved through the propagation of the (end) consumer's preferences (in the form of a utility function).
2. Analysis and determination of relevant non-functional characteristics during service selection (consumer perspective)
   Analysis based on SOC-related conference papers of recent years.
3. Framework implementation
   The solution requires no adaptation of existing service implementations (black-box treatment of services). Therefore, the architectural solution targets middleware products which are used as an environment for services: a plug-in for integration environments (JBoss Enterprise Service Bus) and a central broker comprising measurement data collection, condensation and learning components.
4. Setting up a realistic distributed scenario with the presumed service market characteristics
   The presumed market does not yet exist with its presumed characteristics. Also, for validation and benchmarking, the scenario needs to be reproducible: a laboratory set-up with mock-up services whose non-functional characteristics can be manipulated.
   These services are connected to several ESBs with our plug-in and hosted in different cities (and even countries).
5. Validation of the conceptual solution by the developed framework
   Within the laboratory set-up, several reproducible(8) multi-tier service call scenarios are used for the validation, with statistical evaluation and benchmarking of the measured results.

(8) Some of the measured non-functional characteristics of a service are also influenced by the Internet as the transport medium for (Web) services. In general, due to the characteristics of the Internet, the network-related part of service calls over the Internet is not fully reproducible.

References

1. J. Andersson, A. Heberle, J. Kirchner, and W. Löwe, Service Level Achievements - Distributed knowledge for optimal service selection, in Ninth IEEE European Conference on Web Services (ECOWS), 2011.
2. B. L. Duc et al., Non-functional data collection for adaptive business processes and decision making, in MWSOC '09: Proceedings of the 4th International Workshop on Middleware for Service Oriented Computing. New York, NY, USA: ACM, 2009.
3. L. Zeng et al., Monitoring the QoS for Web Services, in Service-Oriented Computing - ICSOC 2007, 2008.
4. L. Zeng et al., QoS-aware middleware for Web services composition, IEEE Trans. Softw. Eng., vol. 30, no. 5.
5. L. Zeng, B. Benatallah, M. Dumas, J. Kalagnanam, and Q. Z. Sheng, Quality driven Web services composition, in Proceedings of the 12th International Conference on World Wide Web. ACM, 2003.
6. L. Li, J. Wei, and T. Huang, High performance approach for multi-QoS constrained Web service selection, in Service-Oriented Computing - ICSOC 2007, 2008.
7. S. Reiff-Marganiec, H. Yu, and M. Tilly, Service selection based on non-functional properties, in Service-Oriented Computing - ICSOC 2007 Workshops, 2009.
8. D. Mukherjee, P. Jalote, and M. G. Nanda, Determining QoS of WS-BPEL compositions, in Service-Oriented Computing - ICSOC 2008, 2008.
9. D. Yu, M. Wu, and Y. Yin, A combination approach to QoS prediction of Web services, in Service-Oriented Computing - ICSOC 2012 Workshops, 2012.
10. Z. Zheng, H. Ma, M. Lyu, and I. King, QoS-aware Web service recommendation by collaborative filtering, IEEE Transactions on Services Computing, vol. 4, no. 2.
11. M. Tang, Y. Jiang, J. Liu, and X. Liu, Location-aware collaborative filtering for QoS-based service recommendation, in 2012 IEEE 19th International Conference on Web Services (ICWS), 2012.
12. L. Kuang, Y. Xia, and Y. Mao, Personalized services recommendation based on context-aware QoS prediction, in 2012 IEEE 19th International Conference on Web Services (ICWS), 2012.
13. G. Kang, J. Liu, M. Tang, X. Liu, B. Cao, and Y. Xu, AWSR: Active Web service recommendation based on usage history, in 2012 IEEE 19th International Conference on Web Services (ICWS), 2012.
14. Q. Yu, Decision tree learning from incomplete QoS to bootstrap service recommendation, in 2012 IEEE 19th International Conference on Web Services (ICWS), 2012.
15. F. Wagner, A. Klein, B. Klopper, F. Ishikawa, and S. Honiden, Multi-objective service composition with time- and input-dependent QoS, in 2012 IEEE 19th International Conference on Web Services (ICWS), 2012.

Leveraging Privacy in Identity Management as a Service through Proxy Re-Encryption

David Nuñez, Isaac Agudo, and Javier Lopez
Network, Information and Computer Security Laboratory, Universidad de Málaga, Málaga, Spain

Abstract. The advent of cloud computing has provided the opportunity to externalize the identity management processes, shaping what has been called Identity Management as a Service (IDaaS). However, as in the case of other cloud-based services, IDaaS brings with it great concerns regarding security and privacy, such as the loss of control over the outsourced data. As part of this PhD thesis, we analyze these concerns and propose BlindIdM, a model for privacy-preserving IDaaS with a focus on data privacy protection through the use of proxy re-encryption.

Keywords: Identity Management as a Service, Cloud computing, Privacy

1 Introduction

Within the internal processes of most organizations, identity management stands out for its ubiquitous nature, as it plays a key role in authentication and access control. However, it also introduces an overhead in cost and time, and in most cases specialized applications and personnel are required for setting up and integrating identity management systems. As has already happened for other services, the cloud paradigm represents an innovative opportunity to externalize the identity management processes. Identity Management as a Service (IDaaS) is the cloud industry's response to the problem of identity management within organizations, allowing them to move these services out of their internal infrastructures (on-premise model) and deploy them in the cloud (on-demand model). Although cloud computing has raised great expectations regarding efficiency, cost reduction and the simplification of business processes, it has also increased security and privacy risks.
This very same conflict also applies to the IDaaS case: although it offers organizations a great opportunity to cut capital costs, it also introduces a variant of one of the classic problems of cloud computing, namely the loss of control over outsourced data, which in this case is information about users' identity. Users entrust their personal information to identity providers, which then have a privileged position from which to read the users' data in their custody. Although there are several regulatory, ethical and economic reasons for discouraging this possibility, the fact is that nothing actually prevents

(Isaac Agudo and Javier Lopez are the supervisors of this PhD thesis.)

identity providers from accessing users' information at will. Even if we assume that the identity provider is not dishonest and that its internal policy is respectful regarding identity information, it is still possible that a privacy disclosure occurs, for example through security breaches, insider attacks, or legal requests [1]. Traditionally, cloud providers have tackled these problems by defining Service Level Agreements (SLAs) and internal policies; however, these measures simply reduce the issue to a trust problem. It is therefore desirable to have more advanced security mechanisms that enable users to benefit from cloud computing and still preserve their privacy and the control over their information, ideally through cryptographic means [2]. Hence, the principal motivation behind this research line is to put the identity provider into the cloud landscape, where data storage and processing could be offered by possibly untrusted cloud providers, but still offer an identity management service that guarantees users' privacy and control. To this end, we define BlindIdM, a privacy-preserving IDaaS system where identity information is stored and processed in a blind manner, removing the necessity of trusting that the cloud identity provider will not read the data.

2 Identity Management as a Service

The federated identity management model enables information portability between different domains, which permits both a dynamic distribution of identity information and the delegation of associated tasks, such as authentication or user provisioning. One of the key aspects of this model is the establishment of trust relationships between the members of the federation, which enables them to believe the statements made within the federation. The federated model is widely used in organizations, deployed as an on-premise service.
The main actors that participate in the identity interactions are [3]: (i) users, the subjects of the identity information and generally the actors that request resources and services through their interaction with applications and online services; (ii) service providers (SPs), the entities that provide services and resources to users or other entities; and (iii) identity providers (IdPs), specialized entities that are able to authenticate users and to provide the result of this authentication to service providers. Figure 1a shows a high-level view of a federated identity setting, where a host organization acts as a federated identity provider. In this setting, an employee of the host organization requests a service from the service provider, which in turn asks the organization for identity information about its employee. Although federated identity management has led to great advantages with respect to the interoperability of identities, it has also introduced cost and time overheads, since it usually requires specialized applications and personnel for setting up, integrating and managing this process. IDaaS can be seen as a refinement of the federated model, which takes advantage of the efficiency of the cloud to offer specialized outsourcing of identity management. Among the benefits of Identity Management as a Service we find: (i) more flexibility, scalability and

stability for high-demand environments, with a growing number of users and thousands of identities; (ii) reduction of costs, since IDaaS providers can focus on providing more efficient and specialized identity services to organizations; (iii) better security measures, implemented in dedicated systems and facilities; and (iv) improved compliance and business process audits, due to the high specialization and security standards that an IDaaS provider can achieve. However, there are also risks associated with Identity Management as a Service:

- Identity providers are appealing targets to attackers, as they centralize users' personal information and thus represent a single point of failure.
- Cloud providers are susceptible to being subpoenaed for users' data in the case of a legal, administrative or criminal investigation.
- In the absence of cryptographic means, it is not possible to actually limit the access of cloud providers to the data they steward. That is, there is almost no risk of their being discovered accessing users' information without consent.
- Cloud providers may be located in foreign countries with different, and possibly conflicting, laws and regulations regarding privacy and data protection.

Hence, it is obvious that externalizing the management of identity information to the cloud implies a loss of control for users and organizations. This in turn signifies an empowerment of cloud identity providers and exposes users to damages, losses or risks in the case of a disclosure of private data.

3 BlindIdM: Privacy-Preserving IDaaS

The aforementioned concerns led us to conceive the concept of Blind Identity Management (BlindIdM), a model whereby the cloud identity provider is able to offer an identity information service without knowing the actual information of the users; that is, it provides this service in a blind manner.
This is a great innovation with respect to current identity management systems, where users' identity information is managed by the identity provider and the user is obliged to trust that the provider will make proper use of his data and guarantee its protection. Our intention is that this model will enable organizations to choose a cloud identity provider without necessarily establishing a strong bond of trust with it. The novel aspect of our proposal lies in the protection of data: the host organization encrypts users' identity information prior to outsourcing it to the cloud, in such a way that it is still manageable by the cloud identity provider. It is interesting to think about what kind of incentives may motivate a cloud identity provider to offer its services in a blind manner. Among them we find: (i) compliance with data privacy laws and regulations, since a privacy-preserving approach like ours, which achieves data confidentiality through encryption mechanisms, could be very useful in helping cloud identity providers comply with data protection regulations; (ii) minimization of liability, since outsourced data is encrypted prior to arriving at the cloud and the cloud provider does not hold the decryption keys; and (iii) data confidentiality as an added value, as offering secure data processing and confidentiality could be considered a competitive

advantage over the rest of identity services, and in the future could lead to a business model based on respect for users' privacy and data confidentiality. In our model, we assume a federated identity setting similar to that shown in Figure 1a, but where the host organization partially outsources the identity management processes to a cloud identity provider, while retaining the authentication service on-premises. The cloud identity provider now acts as an intermediary in the identity interactions, and is also in charge of storing and supplying identity information; Figure 1b shows this setting.

Fig. 1: Relation between entities in different models: (a) Federated Identity Management, (b) BlindIdM model

With regard to trust assumptions, we consider the cloud identity provider as an adversary and, in particular, we assume it to be data-curious, a type of honest-but-curious adversary which behaves correctly with respect to protocol fulfilment but has no hindrance against trying to access users' data. In [4], we describe a particular instantiation of BlindIdM that uses SAML 2.0 as the underlying identity management protocol and proxy re-encryption techniques to achieve end-to-end confidentiality of the identity information, while allowing the cloud to provide an identity service. From a high-level viewpoint, a proxy re-encryption scheme [5] is an asymmetric encryption scheme that permits a proxy to transform ciphertexts under Alice's public key, p_A, into ciphertexts under Bob's public key, p_B. In order to do this, the proxy is given a re-encryption key, r_{A→B}, which makes this process possible.
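The transformation idea can be illustrated with a toy, ElGamal-style proxy re-encryption sketch in the spirit of [5]. This is not the scheme BlindIdM uses; the group parameters are deliberately tiny and insecure, the re-encryption key is derived here directly from both secret keys for brevity, and every name is illustrative only.

```python
import random

# Toy proxy re-encryption sketch (illustration only, not BlindIdM's scheme).
# Tiny, insecure example parameters:
p = 1019          # safe prime: p = 2*509 + 1
q = 509           # order of the subgroup of quadratic residues mod p
g = 4             # generator of that subgroup (4 = 2^2)

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)              # (secret key, public key g^sk)

def encrypt(pk, m):
    r = random.randrange(1, q)
    return (m * pow(g, r, p) % p,         # c1 = m * g^r
            pow(pk, r, p))                # c2 = pk^r = g^(sk*r)

def rekey(sk_from, sk_to):
    # Re-encryption key r_{A->B} = sk_B / sk_A (mod q). Shown here as a
    # direct computation from both secrets purely for compactness.
    return sk_to * pow(sk_from, -1, q) % q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))           # turns g^(a*r) into g^(b*r)

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, q), p)      # recover g^r
    return c1 * pow(g_r, -1, p) % p       # m = c1 / g^r

a_sk, a_pk = keygen()                      # e.g., the host organization
b_sk, b_pk = keygen()                      # e.g., a service provider
ct = encrypt(a_pk, 123)                    # attribute encrypted under p_A
ct_b = reencrypt(rekey(a_sk, b_sk), ct)    # blind transformation by the proxy
print(decrypt(b_sk, ct_b))                 # prints 123
```

The key property mirrored here is that the proxy only ever exponentiates ciphertext components; it never obtains the plaintext or either decryption key.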
In the scenario proposed by this model, the host organization (including all its employees) acts as the user, and the identity management of the organization is outsourced to a cloud identity provider. Identity information flows from the user (in our case, from the host organization), acting as a source of information, to the service provider, acting as a consumer of information. Specifically, the host organization encrypts the identity information under its public key p_H prior to sending it to the cloud identity provider. The use of proxy re-encryption enables the identity provider to transform these ciphertexts into encrypted attributes under the public key of the service provider, p_SP; in order to do so, the identity provider needs a re-encryption key r_{H→SP} generated by the host organization

and provided beforehand. More details are given in [4] on how this process is framed within the SAML protocol using standard extension mechanisms. The first ideas towards our proposal were presented in [6], where we describe a user-centric IDaaS system based on OpenID and proxy re-encryption. Although conceived as a proof of concept, this is the first work that achieves blind processing of identity information; however, trust issues arise, as OpenID does not provide proper mechanisms for establishing trust. BlindIdM solves these problems and provides more solid mechanisms of integration with the identity management protocol. From a practical point of view, it is also crucial to determine whether our proposal is economically feasible. Most cryptography-based proposals only provide theoretical analyses of security and complexity, but do not tackle economic viability. In [4], we provide an economic assessment of our proposal and estimate the cost of proxy re-encryption operations in USD cents; these expenses are a consequence of the cost incurred by the cryptographic computations in a cloud environment. For instance, it can be seen that the re-encryption operation, which is the one executed by the cloud provider, has an estimated cost of 4.79E-04 USD cents; in other words, the cloud identity provider can perform approximately 2087 re-encryptions for one USD cent. From these figures we can conclude that the cryptographic overhead is reasonable, as it permits an IDaaS system to serve thousands of encrypted attributes for a few cents, especially considering the costs that an organization could incur in the case of a security breach.

4 Research Plan

This PhD thesis aims to tackle the following research challenges. Leveraging user-centricity in identity management: most current identity management systems are provider-centric, and identity providers are in a privileged position to learn information about users.
We want to create means for empowering users with respect to identity providers. Enhancing users' privacy in digital transactions that involve their identity: privacy and confidentiality of identity information are threatened on a daily basis; ideally, strong safeguards for protecting this information should be in place, and we believe that cryptographic tools are needed to solve this issue. Interoperability of the solutions: any new solution to these problems should take open standards into consideration in order to facilitate and enhance interoperability. Solutions that reduce the trade-off between anonymity and accountability: it is a big challenge to design solutions that support both aspects; we need to enhance accountability in digital transactions, but at the same time it is necessary to respect users' privacy.

5 Conclusions and Future Work

As part of this PhD thesis, we propose a solution to the problem of privacy in Identity Management as a Service. IDaaS is a recent trend, powered by cloud computing technologies, that allows companies and organizations to benefit from outsourcing identity management processes. The reduction of costs and of time-consuming tasks associated with managing identity services are the main reasons behind this externalization. However, as is the case for other cloud-based services, there is much concern regarding the loss of control over data, since users relinquish almost all control over their data. We propose BlindIdM, a privacy-preserving model for IDaaS that guarantees users' privacy and control even when data storage and processing are performed by untrusted clouds. In this model, the cloud identity provider is able to offer an identity information service without knowing the actual personal information of the users. We believe that the approach presented in this paper opens up new possibilities regarding privacy in the fields of identity management and cloud computing. With regard to forthcoming work, we plan to deploy a prototype of our system in a real cloud setting, such as Amazon EC2 or Google App Engine; in addition, more recent proxy re-encryption schemes could be used in order to provide more efficiency and security. As for future research, we are exploring other cryptographic techniques and investigating how to extend the protection of privacy to users' access behaviour.

Acknowledgements

This work was partly supported by the Junta de Andalucía through the project FISICCO (P11-TIC-07223). The first author has been funded by a FPI fellowship from the Junta de Andalucía through the project PISCIS (P10-TIC-06334).

References

1. Cloud Security Alliance. Top threats to cloud computing, version 1.0.
2. Isaac Agudo, David Nuñez, Gabriele Giammatteo, Panagiotis Rizomiliotis, and Costas Lambrinoudakis. Cryptography goes to the cloud. In Secure and Trust Computing, Data Management, and Applications. Springer.
3. E. Maler and D. Reed. The venn of identity: Options and issues in federated identity management. IEEE Security & Privacy, 6(2):16–23.
4. D. Nuñez and I. Agudo. BlindIdM: A privacy-preserving approach for identity management as a service. International Journal of Information Security. In press.
5. G. Ateniese, K. Fu, M. Green, and S. Hohenberger. Improved proxy re-encryption schemes with applications to secure distributed storage. ACM Transactions on Information and System Security (TISSEC), 9(1):1–30.
6. D. Nunez, I. Agudo, and J. Lopez. Integrating OpenID with proxy re-encryption to enhance privacy in cloud-based identity services. In 4th IEEE Intl. Conf. on Cloud Computing Technology and Science (CloudCom). IEEE.

Towards Leveraging Semantic Web Service Technology for Personalized, Adaptive Automatic Ubiquitous Sensors Discovery in Context of the Internet of Things

Kobkaew Opasjumruskit
Advisor: Birgitta König-Ries
Institute for Computer Science, Friedrich-Schiller-University Jena

Abstract. Semantic web service technology equipped with user-context and environment awareness can enhance the automatic discovery of services. This thesis focuses on the service discovery aspect of MERCURY, a platform for straightforward, user-centric integration and management of heterogeneous devices and services via a web-based interface. Service discovery is used to find appropriate sensors, services, or actuators to perform certain functionalities required within each user-defined scenario. In contrast to existing works, the proposed service discovery approach is geared towards non-IT-savvy end users and supports various service-description formalisms. Moreover, the matchmaking algorithm should be user-aware and environmentally adaptive rather than depending entirely on text-based searches. Hence, the goal of this thesis is to develop a service discovery module on top of existing techniques, which shall serve user requests according to their personal interests, expertise and circumstances.

Keywords: Service discovery, Adaptive computing, Semantic service description, User-centric devices

1 Motivation & Problem Statement

The idea of the Internet of Things [1], where physical things become smart devices, has been around for a decade. The technology allows us to combine sensors, actuators and services seamlessly. Especially in today's era of pervasive mobile devices, we are ubiquitously equipped with heterogeneous sensors and Internet access. To control and monitor devices remotely, a prominent approach is to expose them as web services and allow users to create new applications from services through development tools.
Consequently, non-IT-savvy users are prevented from utilizing this technology. Though various works are devoted to the Internet of Things by enabling users to control smart devices through web-based applications, none of them can disentangle non-technical users from the integration of sensors; e.g., [2] requires

programming skills in order to formulate the required tasks. Even though [3] solves the aforementioned problem, it provides a limited set of web services, most of which are social media applications. [4] offers a broader range of sensors and services, but requires programming and hardware skills. The key aspect of this thesis is to find the best-fitting sensor among numerous candidates by leveraging semantic web service technology. This distinguishes MERCURY from its competitors through a simple integration of sensors. The remainder of this paper is organized as follows: we begin with a brief introduction to MERCURY, describing how it assists users with the IoT; we then explain the situations in which service discovery is used and discuss existing service discovery techniques; next, the details of the service discovery module are elaborated; lastly, the status of this work and the future plan are summarized.

2 Brief Introduction to MERCURY

MERCURY is a platform for straightforward, user-centric integration and management of heterogeneous devices and services via a web-based interface. It allows any user to connect sensors, actuators and services together from any site via any accessible device. First, these devices and services must be registered with the system. The sensor/service discovery module aids the registration process by utilizing a service description repository. Afterwards, the user can search for devices and services and link them together using the scenario modelling module. When the user saves a scenario, an executable script is created from the scenario model and sent to the execution engine module. During runtime, execution results can be monitored via the runtime UI module. In [5], I demonstrated how to aid a user in accomplishing a desired task by utilizing environment-adaptive capabilities. Consider the following example: imagine a user, Ann, who decides to be woken up at 6 A.M.
and go jogging if it does not rain in the morning. Otherwise, she prefers to receive an alarm message at 7 A.M. and postpone her jogging schedule to the evening. To accomplish this scenario, a GPS locator, rain sensors close to Ann's current location and her calendar application need to be registered. Afterwards, Ann can wire sensors or services together to meet the desired functionality.

3 The Thesis

3.1 The Topic

From the previous scenario, the user may experience some difficulties with sensor discovery. During the registration process, if sensors with the same name appear in the same network at the same time, the user might be unable to specify the preferred sensor. Although this problem can be resolved by using technical information to identify the preferred one, this approach is obviously inconvenient for non-technical users.
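Ann's scenario from Section 2 can be pictured, schematically, as a small wiring rule over registered sensors and services. This is a hypothetical illustration of what a user-defined scenario expresses, not MERCURY's actual scenario model or scripting format; all names are invented.

```python
# Hypothetical sketch of Ann's scenario as a wiring rule: a rain sensor's
# morning reading decides between two alarm/jogging configurations.
def plan_morning(raining_at_6am: bool) -> dict:
    if not raining_at_6am:
        return {"alarm": "06:00", "jogging": "morning"}
    return {"alarm": "07:00", "jogging": "evening",
            "notify": "Rain expected, jogging moved to the evening"}

print(plan_morning(False))  # {'alarm': '06:00', 'jogging': 'morning'}
```

The thesis's point is precisely that a non-technical user should be able to assemble such a rule graphically, with the discovery module supplying the right rain sensor (and a replacement if it disconnects) behind the scenes.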

When the user starts to add a rain sensor to a defined scenario, the relevant devices or services should be suggested to aid modelling the scenario. Afterwards, the scenario can be saved for later execution. The user can use the dynamic service discovery function to enhance the execution process. For example, when the rain sensor in Ann's backyard is disconnected, the service discovery module should automatically search for a nearby sensor and substitute it into the defined scenario. Automatic sensor and service discovery can be deployed in three use cases:

Service Registration: In order to register the right service, the user searches for it through the automatic service discovery, which uses semantic web service technology.

Scenario Modelling: During the scenario creation process, a service request is created implicitly, and the matched services are put on a recommendation list. The recommendation is generated based on collaborative filtering (such as user ratings) or on descriptions of services. The user can then save and define a description for each scenario in order to discover it with the same technique later.

Scenario Execution: If the user needs to change devices or services in a scenario according to his or her current context during runtime, it is possible to define a generic scenario that is automatically updated with a proper device or service. Furthermore, the service discovery can be enhanced by utilizing personalized information, such as preferences or locations.

3.2 State of the Art

The basic idea behind automatic service discovery is to annotate services with machine-interpretable descriptions, as explained in [6]. When a user or an application looks for a service, a service request with the capabilities of the desired service must be specified in a predefined format.
The matchmaking component then compares the available service descriptions with the request and returns the service(s) that best match the user request. This work assumes that the descriptions of all web services are in WSDL (Web Services Description Language). Because a WSDL description alone is insufficient for automatic service discovery, ontology-based techniques such as OWL-S (Web Ontology Language for Web Services) and WSMO (Web Service Modelling Ontology) are used to solve this problem. Nonetheless, these approaches are heavyweight and require significant effort to provide their descriptions. SAWSDL (Semantic Annotations for WSDL) [7] was created to cope with semantic annotations in WSDL using arbitrary ontologies. The tag-based description technique proposed in [8] is another lightweight option. For these service description approaches, there exist several initiatives to evaluate different description formalisms and matchmaking algorithms, e.g., the

S3 (Semantic Service Selection) Contest [9]. Since this work aims to integrate arbitrary services, no assumptions regarding the description framework can be made. Therefore, the solution is to support several different description formalisms, including OWL-S, SAWSDL, and tag-based descriptions. The major challenges are to answer the following questions: how to support various service description formalisms; how to make these formalisms configurable, so that new description languages can be supported in the future without tampering with the code; how to improve the matchmaking time for real-time processing; how to guarantee the reliability of dynamic service discovery at runtime; and, finally, how to choose suitable information to add to the service request.

3.3 Initial Assumption

In MERCURY, all sensors and actuators are assumed to be available as web services with WSDL descriptions. Optionally, the services may have other descriptions, e.g., OWL-S, SAWSDL or social tagging. MERCURY itself does not support semantic annotation of services, though it does provide an interface for tagging. This thesis does not propose new service matchmakers; instead, it employs existing service matchmaking approaches. It is compulsory to specify the domain of discoverable services by defining the ontology repository, so that MERCURY can automatically assign the correct ontology to a user's request. Otherwise, a request with ambiguous descriptions can lead to wrong analysis results; for example, "apple" can be defined in either the domain of fruit or that of brands.

3.4 Solution Outline

Figure 1 shows the architecture of the MERCURY service discovery module. First, a user explicitly creates a request via the Request UI. Imagine a user looking up a weather forecast service: she or he needs to specify a city name as a service input and a weather forecast as a service output.
From these user inputs and, where possible, the user context, the request converter module creates a service request. Afterwards, existing service matchmakers, such as [10] and [11], are used. Since each service may have more than one type of description, the user does not have to specify the description type of the service she or he is looking for. The matchmaker(s) compare the request message with the description files already known to them. If the results from the matchmakers contradict each other, the result integrator assigns weight values based on the ranking results from the S3 Contest to determine the overall ranking. Optionally, the user can adjust these weight values manually. The result integrator module then merges and rearranges the results returned by all matchmakers. The same methodology is applied to implicitly created requests, which are generated automatically during the modelling and execution process. Currently, the request converter supports SAWSDL-based and OWL-S-based matchmakers. The module is designed to be externally re-configurable, so that a new semantic service description language can be introduced to the system without recompiling the code.

Fig. 1. MERCURY Service Discovery Architecture

Before producing the final result, the whole system has to wait for the slowest matchmaker. Therefore, users are advised not to activate all available matchmakers concurrently. The S3 Contest results show that there is always a trade-off between result accuracy and computing time; this weakens the reliability of the automatic discovery process significantly, and the challenge will be studied further. User preferences can be gained explicitly, by questioning each user, or implicitly, from the activities of an individual, e.g., frequently used sensors and services recommended by friends. Moreover, users' activities in social media applications can be analyzed and utilized in the sensor discovery module.

4 Research Plan

In the first year of the project, which started in September 2011, I presented the core architecture and the prototype in [5]. During the second year, I focused my research on sensor and service discovery. In April 2013, the automatic service discovery was deployed in the registration process, and the idea of how to utilize it in the context of MERCURY was submitted in [12].
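The result-integrator behaviour described in Section 3.4 can be sketched as follows: each matchmaker returns its own ranked list, and the integrator merges them using per-matchmaker weights. The matchmaker names, weights, and service identifiers below are illustrative assumptions, not MERCURY's actual values, and the reciprocal-rank scoring is one possible merging rule among many:

```python
# Sketch of a weighted result integrator (hypothetical data and scoring rule).
from collections import defaultdict

def merge_rankings(rankings, weights):
    """Merge ranked lists from several matchmakers into one overall ranking.

    rankings: {matchmaker_name: [service, ...]}  best-first lists
    weights:  {matchmaker_name: float}           trust weight per matchmaker
    A service's score is the weighted sum of its reciprocal ranks.
    """
    scores = defaultdict(float)
    for mm, ranked in rankings.items():
        w = weights.get(mm, 0.0)
        for pos, service in enumerate(ranked):
            scores[service] += w / (pos + 1)  # reciprocal-rank credit
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical output of an OWL-S-based and a SAWSDL-based matchmaker:
rankings = {"owls": ["S1", "S2", "S3"], "sawsdl": ["S2", "S1"]}
weights = {"owls": 0.4, "sawsdl": 0.6}  # e.g. derived from S3-style quality scores

print(merge_rankings(rankings, weights))
```

With these sample weights, the SAWSDL matchmaker's preference wins, so S2 is ranked first; adjusting the weights, as the user may do manually, changes the outcome.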
To evaluate the service matchmaking process, I set up the semantic service description repository from two sources: one is the test collection from the S3 Contest, which is used to measure the quality of service matchmaking; the other consists of real service descriptions. The first prototype of automatic service discovery in the scenario modelling module was demonstrated in June. Since the scenario modelling module is still under development, the service discovery in this part is expected to be completed by October. The runtime service discovery will be integrated into the execution process and is expected to be completed around April. The user evaluation should be established by May 2014, and finally the service discovery should be improved according to user feedback, which is expected to be finished by October.

5 Conclusion & Future Directions

The focus of this thesis is to leverage semantic web service technology to improve automatic service discovery in the Internet of Things. The service discovery module should be aware of user context and personalized information in order to choose the most appropriate device or service for each user in any circumstance. It is assumed that each service description contains at least a WSDL file. On top of existing work and this initial assumption, I presented the architecture of the service discovery module. The current challenge is to find the right balance between service matchmaking computing time and the accuracy of the results. In the near future, I will implement dynamic service discovery at runtime. The evaluation of the service recommendation can follow the guidelines of the S3 Contest.

References

1. Mattern, F., Floerkemeier, C.: From the Internet of Computers to the Internet of Things. LNCS, Vol. 6462. Springer, Berlin, Heidelberg (2010)
2. Kovatsch, M., Lanter, M., Duquennoy, S.: Actinium: A RESTful Runtime Container for Scriptable Internet of Things Applications. In: Proceedings of the 3rd International Conference on the Internet of Things (2012)
3. IFTTT - If This Then That
4. CloudWork - Cloud Business Apps Integration
5. Opasjumruskit, K., Exposito, J., Koenig-Ries, B., Nauerz, A., Welsch, M.: MERCURY: User Centric Device and Service Processing (demo paper). In: 19th Intl. Workshop on Personalization and Recommendation on the Web and Beyond, Mensch & Computer, Konstanz, Germany (2012)
6. Maleshkova, M., Kopecky, J., Pedrinaci, C.: Adapting SAWSDL for Semantic Annotations of RESTful Services. In: Proceedings of the Confederated International Workshops and Posters on On the Move to Meaningful Internet Systems. Springer, Berlin, Heidelberg (2009)
7. Semantic Annotations for WSDL
8. Gawinecki, M., Cabri, G., Paprzycki, M., Ganzha, M.: Evaluation of Structured Collaborative Tagging for Web Service Matchmaking. In: Semantic Web Services - Advancement through Evaluation. Springer, Berlin, Heidelberg (2012)
9. S3 Contest, klusch/s3/index.html
10. Wei, D., Wang, T., Wang, J., Bernstein, A.: SAWSDL-iMatcher: A Customizable and Effective Semantic Web Service Matchmaker. Web Semantics: Science, Services and Agents on the WWW, 9(4) (2011)
11. Klusch, M., Kapahnke, P., Zinnikus, I.: SAWSDL-MX2: A Machine-Learning Approach for Integrating Semantic Web Service Matchmaking Variants. In: IEEE International Conference on Web Services (2009)
12. Opasjumruskit, K., Exposito, J., Koenig-Ries, B., Nauerz, A., Welsch, M.: Service Discovery with Personal Awareness in Smart Environments. In: Creating Personal, Social, and Urban Awareness through Pervasive Computing. IGI Global (2013)

An agent-based architecture for resource allocation in Cloud Computing

Nassima Bouchareb 1, supervised by Nacer Eddine Zarour 1 and Samir Aknine 2
1 LIRE Laboratory, Department of Computer Science, University Constantine 2, Algeria
2 GAMA Laboratory, Department of Computer Science, University Lyon 1, France

Abstract. Resource allocation in dynamic environments presents numerous challenges, since consumer requests are substantial. Unfortunately, minimizing Quality of Service (QoS) violations and, at the same time, reducing energy consumption are conflicting goals. In this paper, we propose an agent-based architecture to manage Cloud resources automatically, in order to achieve suitable QoS levels while reducing as much as possible the amount of energy used by providers. We also propose a new strategy that supports the decision-making process by calculating a coalition's utility. Finally, the paper presents an implemented example that illustrates our mechanism, together with some experiments.

Keywords: Cloud Computing, Resource Management, Multi-Agent Systems, Coalition, Green Computing, Virtualization.

1 Introduction

Given the ever-growing complexity of organizations, computing technology seeks more power at lower cost. One solution that has taken a place of honor in recent years is Cloud Computing (CC). CC is "... a dynamic allocation of Computing resources (hardware, software, etc.) of tiers over a network" [1]. One of its essential characteristics is therefore the availability and sharing of resources, which allows users to access distributed resources as easily as local resources. However, using CC encounters some difficulties, especially when the number of requests is large; among these problems are the selection of resources and the provider's cost. It has been argued that energy costs are among the most important factors in a provider's total cost [2, 3].

Some studies have shown that CC can use certain technologies to minimize energy consumption. The key technology is "virtualization" [4]. This concept allows an optimized management of physical resources, because one physical machine may host many virtual machines (VMs) used by many users [2]. Virtualization reduces the number of physical servers and increases utilization rates [3], thereby minimizing energy consumption.

To handle these problems, we use the agent paradigm, which has proven itself in resource management, even at large scale. We propose an agent-based architecture to maximize provider gains and to minimize Service Level Agreement (SLA) violations and energy consumption. In addition, we give a new decision strategy to calculate a coalition's utility. To present our resource management mechanism properly, we first discuss similar works in Section 2. In Section 3, we present our proposed Cloud architecture and the suggested resource management protocol. We illustrate our solution with an implemented case study and some experiments in Section 4, before concluding and giving some future directions in Section 5.

2 Related Work

The availability and sharing of resources in CC is a big problem, and few studies have examined it. In [5], the authors propose a non-agent-based architecture for SLA-based resource virtualization and service provision, without any coalition or Green Computing (GC) aspect. Since CC is a distributed system, agent technology is well suited to its design, because a multi-agent system not only allows the sharing and distribution of knowledge but also the fulfillment of a common goal; this is why we use an agent-based architecture. The authors in [6] propose an agent approach, but they require that agents communicate only with their neighbors when forming coalitions, whereas a non-neighbor Cloud can be more beneficial than a neighbor. In our mechanism, we merely give priority to Clouds according to their neighborhood when calculating the confidence term of the utility equation. M. Macias and J. Guitart [7] propose an interesting utility function with four parameters: revenue maximization, client classification, service execution in off-peak hours, and provider reputation. However, there is no GC aspect in this utility equation, whereas GC is the main parameter in our proposed utility function.

In [8], the authors study the cooperative behavior of multiple Cloud providers to obtain stable coalition structures, and the paper in [9] extends [8]: the cooperative game is also used to analyze resource and revenue sharing in Cloud Computing, but without taking Green Computing into account. Among the works that treat GC in CC, the authors of [10] propose efficient green enhancements in CC using power-aware scheduling techniques and live migration of VMs, but they propose neither a mechanism for resource management nor details of the different cases a Cloud may encounter during resource management. Our contribution is based on the results of this framework [10], but it also takes into consideration what has been shown in [11, 12]. In [11], the authors performed experiments to evaluate the cost of live migration of VMs: the SLA can be violated in some situations, especially when two migrations are performed within a short period of time. In [12], the authors show experimentally that live migration has an energy overhead, which depends on factors such as the size of the VM and the network bandwidth. In this paper, we therefore propose a new agent-based architecture for resource management in CC, which maximizes provider gains and minimizes energy consumption by using virtualization, VM migration, and coalitions.

3 Cloud Computing Architecture and Resource Management Protocol

The proposed Cloud architecture contains three cognitive agents, which reason before making decisions (see Fig. 1). We explain our protocol by detailing the activities of each agent.

Fig. 1. Agent-based Cloud architecture

3.1 Cloud Agent (CA) Level

The CA represents the whole Cloud. When it receives a client request, it determines the quantity of VMs (Q) needed for the request, the duration of use (D), the price (P), and the customer's country. In memory, a table summarizes the minimum prices depending on Q, D, and country. If the customer's price is higher than the minimum price, Q and D are sent to the adequate AlA. Otherwise, a rejection message is sent to the client (there may be a negotiation to maximize provider gains). We suppose that resources are grouped according to their geographical locations, so, depending on the origin of the request, the CA selects the corresponding AlA in order to minimize time, transfer cost, and energy consumption.

3.2 Allocator Agent (AlA) Level

The AlA represents a set of nodes (a cluster). When the AlA receives Q and D, it selects resources according to the GC principle that "a fully charged resource consumes less energy than many resources with low load" [10]; it must also decide carefully when to migrate VMs, because migration consumes energy [12]. We detail some cases of resource allocation in CC (see Fig. 2), using the symbols presented in Table 1:

1st case: There is at least one resource with enough free VMs to fulfill the request; choose the one with the minimum number of free VMs greater than or equal to X. Example: X = 3, nbr(R1) = 10, nbr(R2) = 3, nbr(R3) = 7, nbr(R4) = 2, min(nbr) = 3 => allocate R2.

2nd case: No single resource has enough free VMs to fulfill the request; share the request across resources with unoccupied VMs, starting with the one with the maximum number of free VMs. Example: X = 9, nbr(R1) = 7, nbr(R2) = 2 => allocate R1 then R2.

3rd case: If the number of unoccupied VMs is the same in several resources, select first the resource that will be released soonest, so that it does not stay powered up consuming energy for a negligible number of VMs. Example: X = 3, nbr(R1) = nbr(R2) = 2, D(R1) = 2 months, D(R2) = 1 month => allocate R2 then R1.
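The first three allocation cases can be sketched as a simple selection routine. The code below is one illustrative reading of these cases, not the authors' implementation; the resource tuples and tie-breaking details are assumptions:

```python
# Sketch of allocation cases 1-3 (illustrative only, not the paper's code).

def allocate(x, resources):
    """Pick resources for X requested VMs.

    resources: list of (name, free_vms, months_until_release) tuples.
    Returns a list of (name, vms_taken) pairs, or [] if free VMs do not
    suffice (the later cases - activation, migration, coalition - would
    then apply).
    """
    # Case 1: one resource can host the whole request; take the tightest fit.
    # Case 3 tie-break: among equal fits, prefer the one released soonest.
    fitting = [r for r in resources if r[1] >= x]
    if fitting:
        name, _, _ = min(fitting, key=lambda r: (r[1], r[2]))
        return [(name, x)]

    # Not enough free VMs overall: signal that another case applies.
    if sum(r[1] for r in resources) < x:
        return []

    # Case 2: split the request, largest free pool first
    # (equal pools ordered by soonest release, as in case 3).
    plan, remaining = [], x
    for name, free, _ in sorted(resources, key=lambda r: (-r[1], r[2])):
        take = min(free, remaining)
        if take:
            plan.append((name, take))
            remaining -= take
        if remaining == 0:
            break
    return plan

# Example from the 1st case: X = 3, free VMs 10/3/7/2 => allocate R2.
print(allocate(3, [("R1", 10, 5), ("R2", 3, 2), ("R3", 7, 4), ("R4", 2, 1)]))
```

The same routine reproduces the 2nd- and 3rd-case examples when called with their respective inputs.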

4th case: Migrate the occupied part to another machine to avoid splitting the request; this is preferred when the unoccupied part is much greater than the occupied part. Example: X = 9, nbr(R1) = 8, nbr(R2) = 2 => migrate the two occupied VMs from R1 to R2 and allocate R1 (R1 has 10 VMs: 8 free and 2 occupied, so 8 >> 2).

5th case: The sum of the free VMs is less than X; activate a resource (Rx) according to the number of required VMs. If the selected resource is still insufficient, look for a resource with state = 0, always choosing the one with the minimum number of free VMs greater than or equal to the rest of the request (Y = X - nbr(Rx)), and allocating Y by following the same steps as for X, possibly activating other completely free resources if necessary. Example: X = 12, nbr(R1) = 10, nbr(R2) = 2, nbr(R3) = 3, nbr(R4) = 4, state(R1) = -1, state(R2, R3, R4) = 0; 2 + 3 + 4 = 9 < 12 => allocate R1 (state(R1) = 1), then R2 because Y = 2.

6th case: There is no unoccupied resource, or the request cannot be fulfilled even by activating resources. The AlA sends a message to the CA to contact the CoA, which launches an offer to form a coalition. Example: X = 12, nbr(R1) = 0, nbr(R2) = 2, nbr(R3) = 3, nbr(R4) = 4 => 2 + 3 + 4 = 9 < 12; the Cloud looks for 3 more VMs.

Fig. 2. The 6 cases of resource allocation

Table 1. Denotations 1.
  X        - number of VMs required by the request
  nbr(Ri)  - number of unoccupied VMs in resource Ri
  min(nbr) - the minimum number of unoccupied VMs
  state(R) - (-1) unoccupied resource, (0) partially occupied resource, (1) occupied resource
  D(R)     - duration of use of unoccupied VMs

3.3 Coalition Agent (CoA) Level

When a CA receives a coalition offer, it contacts the CoA to calculate the utility of this coalition and decide whether to accept or reject it.
The utility is calculated as follows: Utility = w24 * U_rev + w25 * U_all + w26 * U_conf (see Table 2). The weights are fixed by the Cloud, in the CoA memory. Their values are chosen according to the importance of the corresponding parameters, with w24 + w25 + w26 = 1.
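Assuming, for illustration only, that U_rev, U_all, and U_conf are already normalized to [0, 1], the utility above is a plain weighted sum. The weights and sample offer values below are hypothetical, not taken from the paper's experiments:

```python
# Sketch of the coalition utility (hypothetical weights and offer values).

def coalition_utility(u_rev, u_all, u_conf, w_rev=0.4, w_all=0.4, w_conf=0.2):
    """Utility = w24*U_rev + w25*U_all + w26*U_conf, weights summing to 1."""
    if abs((w_rev + w_all + w_conf) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return w_rev * u_rev + w_all * u_all + w_conf * u_conf

# Compare two hypothetical coalition offers and accept the better one:
co3 = coalition_utility(0.8, 0.6, 0.9)
co4 = coalition_utility(0.9, 0.3, 0.5)
print("CO3" if co3 > co4 else "CO4")
```

Here the greener, more trusted offer (CO3) wins despite its lower revenue term, which is the intended effect of weighting U_all and U_conf alongside U_rev.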

Table 2. Denotations 2.
  U_rev  (Offered revenues)     - gives priority to offers with the highest prices, to maximize revenue.
  U_all  (Resources allocation) - gives priority to GC: it prefers requests that use the maximum of free VMs of already activated resources; then requests that activate resources; then requests that use resources with low load (energy loss) [5]; and finally requests that oblige the provider to form coalitions.
  U_conf (Cloud's confidence)   - gives priority to providers with the highest confidence (honesty in sharing gains, respect of the required quality, neighborhood, etc.).

4 Case Study and Implementation Overview

There are two requests: AL07 from Germany and ES04 from Spain. The first requires one VM, and the second thirteen VMs. Suppose that their prices are acceptable, but that the Cloud has only three resources with free VMs (see Table 4). We therefore attribute A02 to the first request; but to satisfy the second one, AlA1 has to send a message to the CA to contact the CoA and form a coalition CO3 with C1. Suppose that C1 has received another coalition offer, CO4, and cannot accept both at the same time; it then calculates their utilities and accepts the one with the highest utility (see Table 3).

Table 3. Coalition's utilities (CoA data base). Table 4. Resources with free VMs (AlA data base).

We realized our simulation using the JADE platform, the JDBC interface, and the EasyPHP tool. The following graph shows the energy consumed for executing AL07 when: "Series 1" activating a new resource (296 Watts); "Series 2" using the free VM in A02 (275 Watts); "Series 3" looking for a coalition (273 Watts) (based on [10]). We note that the cost of Series 3 is the lowest, but the cost of the coalition itself is more expensive than in the other two series, so the real choice is between Series 1 and 2. We note that using A02 and leaving A05 free consumes less energy than activating A05, so our mechanism gives the better solution (see Fig. 3).

Fig. 3. Energy consumption in different cases of the execution of the request AL07

5 Conclusion (research plan)

In this work we are interested in resource allocation in CC, which is a very important area because it is the basis of the provider's gain. Our contribution was to maximize provider gains and to minimize SLA violations and energy consumption, which is still
