1 Software & Services FP7 Project Portfolio Cloud Computing, Internet of Services & Advanced Software Engineering Objective ICT Call 8 of FP7-ICT June 2012 Digital Agenda for Europe

2 LEGAL NOTICE

Published by the European Commission, Directorate-General for Communications Networks, Content & Technology, Software & Services, Cloud Computing Unit. Neither the European Commission nor any person acting on its behalf is responsible for the use which might be made of the information contained in the present publication. The European Commission is not responsible for the external web sites referred to in the present publication. The views expressed in this publication are those of the authors and do not necessarily reflect the official European Commission view on the subject.

ISBN doi: /85125

European Union, 2012. Reproduction is authorised provided the source is acknowledged. Printed in Belgium.

3 Table of contents

Objective 1.2 Cloud Computing, Internet of Services and Advanced Software Engineering ... 4
Introduction ... 6
ARTIST ... 10
BETaaS ... 12
BigFoot ... 14
CELAR ... 18
CloudScale ... 20
CloudSpaces ... 22
COMPOSE ... 24
HARNESS ... 26
LEADS ... 28
MARKOS ... 30
MIDAS ... 32
MODACLOUDS ... 34
OCEAN ... 36
OpenI ... 38
OSSMETER ... 40
PaaSage ... 42
PROSE ... 44
PROWESS ... 46
RISCOSS ... 48
SUCRE ... 50
U-QASAR ... 52

4 ICT Work programme

Objective ICT Cloud Computing, Internet of Services and Advanced Software Engineering

Target outcomes

a) Cloud Computing
- Intelligent and autonomic management of cloud resources, ensuring agile elastic scalability. Scalable data management strategies, addressing the issues of heterogeneity, consistency, availability, privacy and supporting security.
- Technologies for infrastructure virtualisation, cross-platform execution as needed for service composition across multiple, heterogeneous environments, and autonomous management of hardware and software resources.
- Interoperability amongst different clouds, portability, protection of data in cloud environments, and control of data distribution and latency.
- Seamless support of mobile, context-aware applications.
- Energy efficiency and sustainability for software and services on the cloud.
- Architectures and technologies supporting integration of computing and networking environments; implications of the Cloud Computing paradigm for networks.
- Open Source implementations of a software stack for Clouds.

b) Internet of Services
- Service engineering principles, methods and tools supporting development for the Internet of Services, including languages and tools to model parallelism.
- Services enabled by technologies for seamless integration of real and virtual worlds, through convergence with the Internet of Things and the Internet of Contents.
- Massive scalability, self-management, verification, validation and fault localisation for software-based services.
- Methods and tools to manage the life cycle of secure and resilient Internet-scale applications from requirements to run-time, and their adaptive evolution over time.

c) Advanced software engineering
- Advanced engineering for software, architectures and front ends spanning all abstraction levels.

5 - Quality measurement and assurance techniques which adapt to changing requirements and contexts, to flexibly deal with the complexity and openness of the Future Internet.
- Management of non-functional requirements typical of Internet-scale applications, such as concurrency levels orders of magnitude larger than in today's applications, huge data stores, and guaranteed performance over time.
- Tools and methods for community-based and open source software development, composition and life cycle management.

d) Coordination and support actions
- Support for standardisation and collaboration in software and services technologies.
- Support for the uptake of open source development models in Europe and beyond.
- Collaboration with Japanese entities on: cloud computing, particularly common standards for data portability and interoperability; and services with more efficient energy usage.

Expected impact

Emergence of European interoperable clouds contributing to an internal market of services in the EU, while providing very significant business opportunities to SMEs; improved trust in cloud-based applications and storage for citizens and businesses.

Availability of platforms for easy and controlled development and deployment of value-added services through innovative service front-ends.

Lower barriers for service providers and users to develop, select, combine and use value-added services, through significant advances in cloud computing technologies and standardised, open interfaces.

Efficient implementation of mainstream software applications on massively parallel architectures.

Easier evolution of legacy software over time, thanks to innovative methods and tools managing the complete lifecycle of software from requirements to run-time.

Fast innovation cycles in the service industry, e.g. through use of the Open Source development model.

A strengthened industry in Europe for software-based services, offering a large choice of services satisfying key societal and economic needs, with reinforced capabilities to engineer and produce software solutions and on-line services.

6 Introduction

Objective ICT "Cloud Computing, Internet of Services and Advanced Software Engineering"

The objective focuses on technologies specific to the networked, distributed dimension of software and of access to services and data. It supports long-term research on new principles, methods, tools and techniques enabling software developers in the EU to easily create interoperable services based on open standards, with sufficient flexibility and at a reasonable cost. The call was structured around three research-oriented target outcomes: a) Cloud Computing, b) Internet of Services, and c) Advanced Software Engineering. An additional target outcome focussed on coordination and support activities.

Cloud Computing

Within the target outcome Cloud Computing, a number of topics were called for.

Intelligent and autonomic management of cloud resources, ensuring agile elastic scalability, is addressed by PaaSage. Its open and integrated platform will allow users to manage Cloud resources and to have autonomic support at execution time to optimise application deployment. CELAR will dynamically allocate and de-allocate the resources required by applications executed over cloud platforms, thanks to an intelligent decision-making module which decides in real time on the optimal expansion or contraction of allocated resources and their type. LEADS will provide automatic management of CPU, network and data resources across multiple clouds, and elastic scalability across multiple levels of abstraction, i.e. agile vertical scaling at the VM level and agile horizontal scaling within a cloud and across multiple clouds.
Scalable data management strategies, addressing the issues of heterogeneity, consistency, availability, privacy and security, are addressed by BigFoot, which will develop a novel, scalable system for processing and interacting with large volumes of data, tackling issues related to scalability and to data and storage heterogeneity. CloudSpaces deals with consistency and replication over heterogeneous repositories in Personal Clouds.

Technologies for infrastructure virtualisation, cross-platform execution as needed for service composition across multiple, heterogeneous environments, and autonomous management of hardware and software resources are addressed by HARNESS and, to a lesser extent, CELAR. HARNESS will study new virtualisation techniques tailored to heterogeneous hardware and network resources for flexible allocation and reallocation, and advanced techniques for optimising cost/performance allocation trade-offs. CELAR offers an innovative approach for autonomous management of hardware and software resources.

Interoperability amongst different clouds, portability, protection of data in cloud environments, and control of data distribution and latency are addressed by PaaSage, LEADS and, to a lesser extent, CloudSpaces. PaaSage will allow dynamic interoperability among different Clouds, and transparent simultaneous use of both private and commercial Clouds, thanks to its Cloud modelling language and the PaaSage platform. LEADS will ensure portability across different clouds via wrapper libraries, and will provide data protection via encryption and access control mechanisms. CloudSpaces will study semantic interoperability of Personal Cloud data, as well as privacy-aware data sharing among different Personal Clouds.

7 Seamless support of mobile, context-aware applications is addressed by OpenI, which will develop technical infrastructures that enable the sharing of data and content across a user's applications, devices and platforms.

Energy efficiency and sustainability for software and services on the cloud is addressed by LEADS, which will use a novel approach of energy-aware scheduling in the cloud.

The last Cloud Computing topic is Open Source implementations of a software stack for Clouds, which is a secondary objective of several projects. BigFoot aims to deploy a novel, scalable system for processing and interacting with large volumes of data as a full-fledged software stack for private cloud environments. OpenI will deliver a Cloud platform under an open source governance model, entirely built on open cloud stacks and compatible from the start with existing cloud hosting providers that support open cloud technologies. Building on past and current projects that have contributed to a software stack for a Cloud infrastructure, PaaSage will offer a multi-cloud development platform that will work on top of existing infrastructure software, complementing what is currently available and completing the open source software stack for Clouds.

Internet of Services

Within the target outcome Internet of Services, service engineering principles, methods and tools supporting development for the Internet of Services, including languages and tools to model parallelism, are addressed by and, to a lesser extent, by CloudScale, COMPOSE and ARTIST. considers the model of Cloud Service Brokerage emerging as a central component of the future Internet of Services, and therefore places the implications of this model, with respect to principles, methods and tools for service management, as its central topic of investigation. CloudScale provides engineering principles, methods and tools that support the development of scalable services and service compositions. COMPOSE provides the methods and tools necessary to discover, integrate, combine, execute and manage highly distributed services, objects and content.
ARTIST will provide a complete solution with methods, principles and tools supporting the adaptation of legacy applications to the Future Internet, preparing them to be offered as SaaS over the Internet.

Services enabled by technologies for seamless integration of real and virtual worlds, through convergence with the Internet of Things and the Internet of Contents, are well addressed by COMPOSE and BETaaS. COMPOSE will provide support for integrating the Internet of Things, the Internet of Contents and the Internet of Services, ensuring smooth convergence of the real towards virtual worlds. To this end, COMPOSE will support seamless virtualisation of existing content provisioning systems and objects into semantic services that can better be discovered, integrated, combined, executed and managed. BETaaS will lay the foundations for fast and cost-effective development of Machine-to-Machine applications, also providing an environment for their efficient execution. The BETaaS platform will provide a set of services linking smart things in the Internet of Things with their representation in the Internet of Contents.

Massive scalability, self-management, verification, validation and fault localisation for software-based services are addressed by CloudScale and, to a lesser extent, BETaaS. CloudScale provides methods and tools to better achieve massive scalability of new and legacy software services. Because of the envisaged growth of the Internet of Things, massive scalability is a key requirement for future adoption of BETaaS, which will use a fully distributed architecture hosted on the Machine-to-Machine gateways. Furthermore, BETaaS addresses self-management through the use of semantic technologies to model the behaviour of things.

8 Methods and tools to manage the life cycle of secure and resilient Internet-scale applications from requirements to run-time, and their adaptive evolution over time, are addressed by COMPOSE and . COMPOSE will provide a marketplace and supporting infrastructure for the discovery, creation, combination and exploitation of highly adaptive and distributed added-value Internet services built on top of existing content, objects and services. will deliver methods and mechanisms helping brokers to address the issue of cloud service lifecycle management, including keeping track of the evolution of services over time and understanding the impact that changes can have on different consumers.

Advanced Software Engineering

Within the target outcome Advanced Software Engineering, a number of topics were called for.

Advanced engineering for software, architectures and front ends spanning all abstraction levels is well addressed by MODAClouds and ARTIST. MODAClouds will deliver an advanced model-based approach and an Integrated Development Environment to support system developers in building and deploying applications, with related data, to multi-clouds spanning the full Cloud stack (IaaS/PaaS/SaaS). ARTIST will provide tools for a Reverse Engineering process that will extract all necessary information from an original legacy application and produce models of it. Furthermore, through Forward Engineering techniques and tools, ARTIST will produce final models of a service-oriented application, at different levels of abstraction, complexity and concern, ready for deployment in a target cloud environment.

Quality measurement and assurance techniques which adapt to changing requirements and contexts, to flexibly deal with the complexity and openness of the Future Internet, are very well addressed by several prioritised proposals.
PROWESS aims to automate quality assurance, reducing its cost and improving its effectiveness, based on properties of the system that should hold. It will provide a development process and tools that ensure dependable quality of service by directly verifying high-level properties of a system. U-QASAR will create a flexible Quality Assurance, Control and Measurement Methodology to quantify the quality of Internet-related software development projects and their resulting products. In addition to design-time and run-time quality measures and assurance techniques, MODAClouds will offer prediction mechanisms and adaptive policies for automatically addressing specific critical cases. MIDAS will provide an integrated framework for SOA testing automation that spans all testing activities: test generation, execution, evaluation, planning and scheduling, covering the functional, interaction, fault-tolerance, security and usage-based testing aspects.

As for Management of non-functional requirements typical of Internet-scale applications, such as concurrency levels orders of magnitude larger than in today's applications, huge data stores and guaranteed performance over time, MIDAS will offer security testing as well as fault-tolerance testing, i.e. testing of a system's capability to cope with the unavailability of resources necessary to deliver or use services.

Tools and methods for community-based and open source software development, composition and life cycle management are well addressed by several prioritised proposals. MARKOS will identify the relationships between components located at different forges or sites, and simplify the discovery of Open Source components distributed on the Net through software metadata published as Linked Data with uniform, central access.
RISCOSS will offer novel risk identification, management and mitigation tools and methods for community-based and industry-supported Open Source Software (OSS) development, composition and lifecycle management, to individually, collectively and/or collaboratively manage OSS adoption risks. OSSMETER targets automated analysis and measurement of open-source software, and will develop a platform that will support decision makers in discovering, comparing, assessing and monitoring the health, quality, impact and activity of OSS. It will compute trustworthy quality indicators through advanced analysis and integration of information from diverse sources, including project metadata, source code repositories, communication channels and bug tracking systems of OSS projects.

Coordination and Support Actions

As for Coordination and Support Actions, three topics were called for.

Support for standardisation and collaboration in software and services technologies is addressed by OCEAN, which will analyse the current status and the standardisation needs for the emergence of an Open Cloud in Europe. OCEAN's goal is to foster the emergence of a sustainable open source cloud offering and to boost market innovation in Europe, by generating greater efficiency and economies of scale among European FP7 collaborative research projects on open source cloud computing.

Support for the uptake of open source development models in Europe and beyond is addressed by PROSE, OCEAN and SUCRE. PROSE is principally concerned with providing a platform to coordinate FLOSS from FP7 projects. OCEAN will help FP7 research projects on Cloud Computing to spot open source projects with which they could collaborate, and to identify software components they could use to achieve their objectives. SUCRE will explore the adoption of Cloud Computing, coupled with the use of Open Source development models, in the Public Sector and the Health Care Industry. It will also operate an expert group with industrial representatives from Europe and Japan to discuss, amongst other topics, the uptake of open-source development models.

Objective ICT "Cloud Computing, Internet of Services and Advanced Software Engineering"

Note: some projects may appear in several areas. More detailed factsheets of each project are provided in the remainder of this brochure. The factsheets are listed in alphabetical order of the projects' acronyms.

10 ARTIST

ARTIST proposes a software modernization approach based on Model-Driven Engineering techniques to automate the reverse and forward engineering of legacy applications to and from platform-independent models. It reduces the risk, time and cost of migrating legacy software, and lowers the barriers for service companies wanting to take advantage of the latest Cloud Computing and Software-as-a-Service based technologies and business models.

AT A GLANCE
Project title: Advanced software-based service provisioning and migration of legacy SofTware
Project reference:
Project coordinator: Clara Pezuela, ATOS Spain SA, SPAIN
Partners: Fundacion Tecnalia Research & Innovation (ES); Inria - French National Institute for Research in Computer Science and Control (FR); Fraunhofer-Gesellschaft zur Foerderung der Angewandten Forschung E.V. (DE); Technische Universität Wien (AT); Engineering (IT); Institute of Communication and Computer Systems (GR); Sparx Systems Software GMBH (AT); Athens Technology Center SA (GR); Spikes NV (BE)
Duration: 36 months
Total cost: 9.69M
Website:

Concept

The Cloud-based service delivery model has the potential to create tremendous new business opportunities for software companies. Ongoing improvements in Internet connections, in speed, reliability and reach, have made Internet-native solutions an attractive alternative, and the rate of innovation driving software and service evolution is still accelerating. Innovations in the technological space affect the systems that the software has to support or adapt to. Innovations in the business space affect not only the licensing and usage model, but also the core value proposition to the customer. To remain viable, legacy software solutions have to be improved with regard to these new circumstances, but without disrupting the business continuity of existing customers.
Software service companies need to transition to a new opportunity model without abandoning their client portfolios. The complete lifecycle of software, from requirements to delivery and operations, has to be re-adapted to the new technological and business conditions, requirements and challenges. There is a need for tools and methods to support software evolution and adaptation as a key value for next-generation service-based software modernization. Following this approach, companies face the following challenges:
The decision whether to migrate their existing products or to start from scratch;

11 The estimation of the impact and effort required to implement the modernization of a system is difficult and uncertain;
Time-to-market is critical, so the software development cycles need to change;
High requirements for specialized skills, due to a low degree of process automation.

A complete approach is needed that helps companies bring their applications and services into the Internet of Services, taking into account the implications of current architectures and forecasting the implications of future ones. This requires the development of a new vendor- and platform-independent methodology and a new automation-oriented toolset for reengineering, migration, maintenance and evolution. This is the mission of ARTIST.

Objective

To prepare, support and increase the competitiveness of the European Software and Services Industry in a global Cloud and Software-as-a-Service (SaaS) business environment, ARTIST develops a set of methods, tools and techniques that facilitate the transformation and modernization of legacy software assets and businesses. The project creates tools to assess, plan, design, implement and validate the automated evolution of legacy software to SaaS and the Cloud Computing delivery model. By focusing on reusability during this transition, the methods and tools are generic enough to cover future shifting efforts, e.g. deployment to future platform delivery paradigms.

Impact

Easier evolution of legacy software over time, thanks to innovative methods and tools managing the complete lifecycle of software from requirements to run-time. ARTIST is expected to produce this major impact by allowing software vendors and users of open source software to migrate their legacy software to a new software paradigm in an automated, easy and cost-effective way. This means that legacy software can be transformed so that it gains the benefits of the new paradigm, such as performance enhancement, cost effectiveness and better interoperability.

Approach

In order to reach its objective, ARTIST:
Develops an innovative, combined technical and business analysis of the maturity and prospects of the legacy application;
Provides a large-scale model-based approach for representing the source and target applications as well as infrastructures/platforms;
Creates a unified performance modelling framework;
Identifies dynamic deployment methodologies;
Fosters reusability of the (modelling) artefacts produced during the migration process through the use of a repository;
Implements an innovative and thorough testing and continuous validation process that spans all layers of the multilevel ecosystem;
Enhances, enforces and promotes the usage of an integrated certification model for Cloud application providers.

12 BETaaS

In BETaaS a platform will be developed for the execution of machine-to-machine (M2M) applications built on top of services deployed in a local cloud of gateways. Scalability, security, dependability, context and resource awareness, and quality of service (QoS) will be embedded by design into the platform. Validation will be done through experiments targeting the Smart City and Home Automation use cases.

AT A GLANCE
Project title: Building the Environment for the Things as a Service
Project reference:
Project coordinator: Elena Cordiviola, INTECS, ITALY
Partners: ATOS (ES); Hewlett-Packard Italy (IT); Converge ICT Solutions and Services (GR); Tecnalia (ES); Aalborg University CTIF (DK); Università di Pisa (IT)
Duration: 30 months
Total cost: 3.79M
Website:

Motivation

Today there are countless devices at work to improve the productivity and quality of life of human beings, in all technological domains. In most cases they operate in isolation, or with very little cooperation with their likes, and serve the well-defined single purpose for which they were engineered. This situation is sub-optimal because: (i) fine-grained (raw) data have to be conveyed in a centralized manner over the Internet, from sensors up to the remote center; thus the things and gateways are effectively separated from the back-end, both physically and logically; (ii) the current approach is vertical, i.e. each M2M application has its own remote center for data storage and processing.

Concept

To overcome the limitations of current systems for M2M applications, we propose a novel approach based on the following principles:
1. Storage and processing of data are as close as possible, in space and time, to where they are generated and consumed.
2. Important non-functional requirements, namely security, reliability and QoS, are supported by means of a tight integration of the high-level services provided with the physical resources of the peripheral devices, i.e. things and gateways.
3.
Energy efficiency and scalability of the systems are achieved through the distribution of on-the-spot inferred content, rather than raw data.

These principles will be realized by defining a content-centric platform distributed over a local cloud, hosted by the gateways, providing an environment for applications accessing M2M services and devices through a set of services. Its deployment will be dynamic, so as to follow the time-varying M2M services and the changing characteristics of the applications.

Project goal

To design and realize a runtime platform for the deployment and execution of content-centric M2M applications, which relies on a local cloud of gateways. The proposed platform will provide a uniform interface and services to map content (information) to things (resources) in a context-aware fashion. Deployment of services for the execution of applications will be dynamic and will take into account the computational resources of the low-end physical devices used. To this end, the platform will need to be based on a suitably defined Internet of Things model, which will allow the integration of the BETaaS components within the future Internet environment.

Impact

The barriers to entering the M2M segment today are high, since the market is very fragmented and user requirements and expectations are very heterogeneous. This scenario is especially harsh for small and medium players, who lack a sufficiently vast commercial influence or the strength for massive marketing campaigns. By defining open interfaces, which will also be pushed for standardization in the relevant technical committees, M2M service providers face reduced investment risks and lower training/setup costs, since they can reuse skills and experience over different projects and across various domains.
The BETaaS platform will be released as open source to achieve the following goals: (i) to benefit from the contributions of the open source community of developers (in terms of customization, testing and further development); (ii) to allow M2M service providers to focus on the application-specific aspects, without the need to work on common features, which reduces development costs and time-to-market.
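BETaaS's first design principle, keeping storage and processing close to where data are generated and distributing on-the-spot inferred content rather than raw data, can be illustrated with a toy gateway that summarises sensor samples locally, so that only a small inferred payload would ever leave the device. The function, field names and threshold below are invented for illustration and are not BETaaS code.

```python
def summarise(readings, alarm_threshold=30.0):
    """Reduce raw sensor samples to the 'on-the-spot inferred content'
    a gateway would distribute instead of forwarding every sample.

    `readings` is a list of temperature samples held locally; only the
    returned summary would travel over the network.
    """
    avg = sum(readings) / len(readings)
    return {
        "samples": len(readings),              # how many raw samples stayed local
        "avg_temp": round(avg, 2),             # inferred content
        "alarm": max(readings) > alarm_threshold,  # inferred event
    }


raw = [21.5, 22.0, 21.8, 22.4, 31.2]   # five raw samples stay on the gateway
payload = summarise(raw)               # one small message leaves the gateway
assert payload["samples"] == 5
assert payload["alarm"] is True
```

The point of the sketch is the ratio: five raw samples reduced to one compact, semantically richer message, which is where the energy and scalability gains of edge-side inference come from.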

14 BigFoot

The aim of BigFoot is to design, implement and deploy a Platform-as-a-Service solution for processing and interacting with large volumes of data coming from ICT Security, Smart Grid and other application areas. The BigFoot stack builds upon, and contributes to, the Apache Hadoop ecosystem and the OpenStack project.

AT A GLANCE
Project title: Big Data Analytics of Digital Footprints
Project reference:
Project coordinator: Pietro Michiardi, EURECOM, FRANCE
Partners: Ecole Polytechnique Federale de Lausanne (CH); Technische Universität Berlin (DE); Symantec Lab (IE); GridPocket SAS (FR)
Duration: 36 months
Total cost: 3.54M
Website:

The Context

The amount of digital information in our world has been exploding, and new technologies and services will continue to fuel exponential growth of the large pools of data that can be captured, stored and analyzed. Today, however, tools and services to store, process and interact with data are still in their infancy: scattered solutions that lack a unified vision and common interfaces, and that offer only best-effort service. The aim of BigFoot is to overcome these drawbacks by designing, implementing and evaluating a Platform-as-a-Service. The BigFoot stack features automatic, self-tuned deployment of storage and processing services for private clouds, going beyond the best-effort services currently available in the state of the art. BigFoot takes a novel, cross-layer approach to system optimization, which is evaluated with a thorough experimental methodology using realistic workloads and datasets from two representative applications, namely ICT Security and Smart Grid data analytics. In addition, BigFoot aims to make data interaction easy by supporting high-level languages and by taking a service-oriented approach to support and optimize latency-sensitive queries.

15 BigFoot Approach

BigFoot merges several research domains, including the design of scalable algorithms for data mining, large-scale fault-tolerant distributed systems, and virtualization technologies. These areas blend together into a full-fledged software stack, illustrated by the figure below and described as follows:

The Virtualization Layer includes mechanisms for the virtualization of the BigFoot infrastructure (including server machines and network), the deployment of BigFoot instances, and their optimization.

The Data Stores Layer takes care of the optimization of data partitioning and placement, in conjunction with the virtualization layer, and offers high-availability support to overcome single points of failure (SPOFs).

The Batch and Interactive Engines Layer implements several data-flow optimizations and provides a new approach to expressing batch data analysis in a high-level language. The service-oriented query engine is a novel component to optimize and facilitate interaction with data.

[Figure: the BigFoot software stack - data mining algorithms and applications on top of the batch analytics and interactive query engines (high-level query to low-level program translation, Hadoop MapReduce, service-oriented query engine), distributed data stores (data partitioning and placement, Hadoop Distributed Filesystem), and infrastructure virtualization (service deployment and optimization)]

The BigFoot Impact

Several sources suggest that we are on the cusp of a tremendous wave of innovation, productivity and growth, as consumers, companies and economic sectors exploit the potential of the large amounts of data directly or indirectly produced by their interactions. The aim of BigFoot is to produce:

High-quality research, by focusing on outstanding problems in the domain of large-scale data management, with an approach that blends systems and theory research. The cross-layer approach to system optimization brings additional challenges that have not yet been addressed in the literature. Finally, measurement-based analyses of typical workloads (as obtained from real-world applications and data) represent an invaluable contribution to the research community.

Open-source software, by contributing the BigFoot software stack and a selection of its inner components to relevant communities. In particular, BigFoot builds upon the Apache Hadoop ecosystem and the OpenStack project; as such, BigFoot uses the Apache Software Foundation v2.0 License. BigFoot takes a two-pronged approach to software development: it maintains public Git repositories for experimental-level code, which are incrementally populated, and it uses the appropriate ticketing (JIRA) systems to contribute to existing projects.

Experimental platforms, to run thorough, reproducible comparative analyses of system performance as achieved by the BigFoot stack and other existing (even commercial) solutions for Big Data analysis. BigFoot will contribute to Symantec's WINE initiative, where it is going to be deployed as a PaaS, and to GridPocket's energy services platform.
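The "high-level query to low-level program translation" component of the stack can be illustrated, in miniature, by the same aggregation written twice: once as the one-pass summary an analyst would express in a high-level language, and once as the explicit map/shuffle/reduce phases an engine such as Hadoop MapReduce would run on its behalf. The smart-meter data and function names are invented for illustration; this is a sketch of the general idea, not BigFoot code.

```python
from collections import defaultdict

# "High-level" view: what the analyst writes -- total consumption per meter.
def consumption_per_meter(events):
    totals = defaultdict(float)
    for meter, kwh in events:
        totals[meter] += kwh
    return dict(totals)

# "Low-level" view: the same job as the three MapReduce phases an engine
# would derive from the query and run in parallel across a cluster.
def map_phase(events):
    # emit (key, value) pairs; here the records are already in that shape
    return [(meter, kwh) for meter, kwh in events]

def shuffle(pairs):
    # group all values by key (done by the framework between map and reduce)
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # aggregate each key's values independently
    return {key: sum(values) for key, values in groups.items()}

events = [("m1", 1.5), ("m2", 2.0), ("m1", 0.5)]
# Both formulations compute the same answer; the translation step is what
# lets the analyst stay at the high level while the engine handles scale.
assert reduce_phase(shuffle(map_phase(events))) == consumption_per_meter(events)
```

In a real deployment the map and reduce phases run on many machines over partitioned data, which is exactly why expressing the job in a high-level language and translating it automatically matters.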

The goal of the project is to develop a framework that will equip cloud service intermediaries with advanced methods and mechanisms for continuous quality assurance and optimization of software-based cloud services. The framework will allow enterprise cloud service brokers to monitor the obligations of providers towards consumers, as well as to detect opportunities for optimising service consumption.

AT A GLANCE
Project title: Enabling Continuous Quality Assurance and Optimization in Future Enterprise Cloud Service Brokers
Project reference:
Project coordinator: Geir Horn, SINTEF, NORWAY
Partners: CAS Software, (DE) Institute of Communication and Computer Systems, (GR) SAP, (DE) South-East European Research Centre, (GR) SingularLogic, (GR) The University of Sheffield, (UK)
Duration: 36 months
Total cost: 4.96 M
Website:

Motivation
As enterprises increasingly adopt the model of cloud computing, their IT environments are transformed into a matrix of interwoven infrastructure, platform and application services delivered by multiple providers. In order to deal with the complexity of consuming large numbers of cloud services from diverse sources, enterprises will need assistance from specialised cloud service delivery intermediaries. These will need to offer an array of sophisticated brokerage services going far beyond the intermediation capabilities available today.

The challenge
The challenge taken up by the project is to research and develop solutions for some of the most valuable and technically demanding types of brokerage capabilities foreseen for future enterprise cloud service brokers. We envisage developing a brokerage framework which will allow cloud intermediaries to equip their platforms with advanced methods and mechanisms for continuous quality assurance and optimization of software-based cloud services.
Those software-based services can range from simple programmatically-accessible web APIs to complex software applications delivered as cloud services, i.e. on-demand Software-as-a-Service offerings.

Employing the capabilities provided by the framework will assist future enterprise cloud service brokers in providing assurances towards consumers with respect to how reliable and how optimal the delivered services are. The goal is to provide future cloud intermediaries with advanced means of monitoring both the obligations of each cloud service provider towards consumers (as well as towards the intermediary itself) and the opportunities for optimising the services each consumer receives, as soon as these surface.

The framework
The brokerage framework, most of which will be released as Open Source software, will comprise the following core building blocks:
1. Capabilities for cloud service governance and quality control (lifecycle management, dependency tracking, policy compliance, SLA monitoring, certification testing)
2. Capabilities for cloud service failure prevention and recovery (event monitoring, reactive and proactive failure detection, adaptation analysis and recommendation)
3. Capabilities for continuous optimization of cloud services (optimisation opportunity detection and analysis based on cost, quality, or functionality preferences)
4. Interfaces and methods for platform-neutral description of enterprise cloud services (technical, operational and business aspects, static and dynamic views)
The validation of project results will be done through two pilot case studies, during which we will integrate the framework into two different enterprise cloud service delivery platforms (CAS Open and SingularLogic Galaxy).

Impact
The results of the project are expected to be of significant value to the enterprise software industry, which is presently hard-pressed to understand the implications of the ongoing paradigm shift towards cloud computing, and to redefine roles and opportunities in the emerging setting. Dissemination and an open source development model are the strategy through which the consortium will seek to achieve this impact.
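Among the framework's building blocks, SLA monitoring is the most concrete to sketch. The toy monitor below is a hypothetical illustration, not code from the project; the metric names, thresholds, and the `(comparator, threshold)` obligation format are all invented for the example. It compares observed service metrics against provider obligations and reports violations:

```python
def check_sla(obligations, observed):
    """Compare observed service metrics against provider obligations.

    obligations: dict mapping metric name -> (comparator, threshold),
    where comparator is "max" (must not exceed) or "min" (must not fall below).
    observed: dict mapping metric name -> measured value.
    Returns the list of violated metric names.
    """
    violations = []
    for metric, (comparator, threshold) in obligations.items():
        value = observed.get(metric)
        if value is None:
            violations.append(metric)  # a missing measurement counts as a breach
        elif comparator == "max" and value > threshold:
            violations.append(metric)
        elif comparator == "min" and value < threshold:
            violations.append(metric)
    return violations

sla = {"response_time_ms": ("max", 200), "availability_pct": ("min", 99.9)}
print(check_sla(sla, {"response_time_ms": 250, "availability_pct": 99.95}))
# ['response_time_ms']
```

A broker would run such checks continuously against monitoring data, feeding breaches into the failure-recovery and optimisation capabilities.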

CELAR

The CELAR project provides automatic, fine-grained resource allocation for Cloud applications. This enables the commitment of just the right amount of resources based on application demand, performance and requirements, resulting in optimal use of infrastructure and significant reductions in costs.

AT A GLANCE
Project title: CELAR: Cloud ELAsticity Provisioning
Project reference:
Project coordinator: Nectarios Koziris, ATHENA Research and Innovation Centre in Information, Communication and Knowledge Technologies, GREECE
Partners: University of Cyprus, (CY) Vienna University of Technology, (AT) GRNET S.A., (GR) Playgen, (UK) Institute of Cancer Research, (UK) Sixsq Sarl, (CH) Flexiant Limited, (UK)
Duration: 36 months
Total cost: 3.46M
Website:

Project Rationale - the idea behind CELAR
Auto-scaling resources is one of the top obstacles and opportunities for Cloud Computing: consumers can minimize the execution time of their tasks without exceeding a given budget, while cloud providers maximize their financial gain while keeping their customers satisfied and minimizing administrative costs. Many systems claim to offer adaptive elasticity, yet the throttling is usually performed manually, requiring the user to figure out the proper scaling conditions. In order to harvest the benefits of elastic provisioning, it is imperative that it be performed in an automated, fully customizable manner. CELAR delivers a fully automated and highly customizable system for elastic provisioning of resources in cloud computing platforms.

CELAR Technical Realization
The main outcome of the project is a complete set of open-source tools that will allow the enhancement of a Cloud platform towards automatic, intelligent, multi-grained resource provisioning according to the needs of user applications.
Specifically, the CELAR contribution comprises four main parts: (i) the elasticity provisioning subsystem, which is a middleware that adaptively and automatically manages platform resources; (ii) the c-eclipse framework to provide plug-ins for accessing and managing Cloud resources on the envisioned platform; (iii) a Cloud information subsystem, which contains a Cloud resource description framework and search capabilities for Cloud

resources; and (iv) a scalable, multi-layer Cloud monitoring tool that gathers a rich set of platform, infrastructure and application-side metrics and evaluates them in a composite fashion. These modules are both generic in nature and open-source, in order to allow maximum utilization and easy integration with existing commercial, academic and community systems. To provide added value and simplify application deployment, the project also develops a framework for the cloudification of any elasticity-demanding application with the CELAR system, offering this integration as a single installable software package. The main outcome of the project (featuring a well-defined 3-layer architecture as described above) is depicted in the figure.

CELAR Impact and means of achievement
CELAR actively contributes to two major goals defined by the Digital Agenda: interoperability and open access for ICT products and services. Its technology brings manifold advantages. The optimal use of cloud resources results in significant cost reductions and increased application performance, benefiting infrastructure providers and users alike. Equally important, CELAR provides open, standardized access over the complete stack of the cloud ecosystem. This avoids vendor lock-in, increases applicability, and eases service development and deployment, giving the project the potential to become an invaluable European technological and economic asset. Achieving the expected impact relies on three pillars: 1) visibility for the developed technology; 2) meeting the needs of potential users and offering new types of services to the respective customers;
3) easing adoption of the technology and clearly showcasing its potential benefits. Two exemplary applications, in the areas of online gaming and scientific computing, showcase and validate the developed technology, providing a clear path towards the adoption of the CELAR rationale and increasing the visibility and impact of the project.
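The automated elasticity decision at the heart of such a system can be illustrated with a deliberately simple threshold-based scaler. This is a sketch only: the thresholds and the single-metric model are invented assumptions, whereas CELAR evaluates composite, multi-layer metrics.

```python
def elasticity_decision(load_pct, instances, low=30.0, high=75.0,
                        min_instances=1, max_instances=10):
    """Decide how many instances to run next, given current average load.

    load_pct: average utilisation of the current instances, in percent.
    The 30/75 thresholds are illustrative defaults, not CELAR values.
    """
    if load_pct > high and instances < max_instances:
        return instances + 1  # scale out before performance degrades
    if load_pct < low and instances > min_instances:
        return instances - 1  # scale in to cut cost
    return instances          # within the comfort band: do nothing

print(elasticity_decision(90.0, 3))  # 4: overloaded, add an instance
print(elasticity_decision(10.0, 3))  # 2: underused, release an instance
```

The point of CELAR is precisely to replace such hand-tuned conditions with automatic, customizable policies driven by monitored application metrics.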

CLOUDSCALE

The goal of CloudScale is to aid service providers in analysing, predicting and resolving scalability issues, i.e. to support scalable service engineering. The project extends existing solutions and develops new ones that support the handling of scalability problems of software-based services.

AT A GLANCE
Project title: Scalability Management for Cloud Computing
Project reference:
Project coordinator: Richard Sanders, SINTEF, NORWAY
Partners: SAP, (DE) Ericsson Nikola Tesla, (HR) XLAB, (SI) University of Paderborn, (DE)
Duration: 36 months
Total cost: 4.70M
Website:

The Problem
Cloud providers theoretically offer their customers unlimited resources for their applications on an on-demand basis. However, scalability is not only determined by the available resources, but also by how the control and data flow of the application or service is designed and implemented. Implementations that do not take these effects into account can lead either to low performance (under-provisioning, resulting in high response times or low throughput) or to high costs (over-provisioning, caused by low utilisation of resources).

Objectives
CloudScale provides an engineering approach for building scalable cloud applications and services. Our objectives:
1. Make cloud systems scalable by design so that they can exploit the elasticity of the cloud, maintaining and even improving scalability during system evolution, while using a minimum amount of computational resources.
2. Enable analysis of the scalability of basic and composed services in the cloud.
3. Ensure industrial relevance and uptake of the CloudScale results so that scalability becomes less of a problem for cloud systems.

Results
CloudScale enables the modelling of design alternatives and the analysis of their effect on scalability and cost. Best practices for scalability further guide the design process. CloudScale provides tools and methods that detect scalability problems by analysing code. Based on the detected problems, CloudScale offers guidance on resolving bad practices. The basis for all of this is a language (ScaleDL) that service providers use to specify the scalability properties of basic and composed cloud services.

Value Chain
CloudScale provides tools and methods supporting inherently massively scalable service architectures, enabling European industry, including SMEs, to gain an advantage when developing services for the cloud. The results of CloudScale are aimed at different types of people, organisations and roles, offering benefits to each:
- End users: improved scalability of systems deployed in clouds means satisfied users even during peak load.
- For developers of software services, improved scalability management becomes a selling point. CloudScale tools help developers make sensible decisions about which parts of the system most require gold plating.
- System architects, the composers of software services, are able to understand and predict the scalability of services resulting from compositions.
- Service providers are able to make timely decisions about the purchase or deployment of more hardware in order to prevent scalability bottlenecks before they show up. They are also able to plan the reduction of non-essential features to retain core functionality during periods of extreme demand.
- IaaS (Infrastructure as a Service) providers may lose some business due to more effective use of resources by their customers, thanks to improved scalability. However, they are able to serve more customers with the same hardware through better management of the scalability of their own systems, and can thereby operate with a smaller safety margin and greater profit.
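The under-/over-provisioning trade-off described under "The Problem" can be sketched with a toy utilisation model. This is illustrative only: the 50% waste threshold and the simple demand/capacity arithmetic are invented assumptions, not rules from the CloudScale tools.

```python
def provisioning_outcome(demand, capacity_per_node, nodes):
    """Classify a deployment as under-, over-, or well-provisioned.

    demand: offered load (e.g. requests/s); capacity_per_node: load one
    node can absorb. Utilisation above 100% means queueing and high
    response times; below 50% (an invented cut-off) means wasted spend.
    """
    utilisation = demand / (capacity_per_node * nodes)
    if utilisation > 1.0:
        return "under-provisioned"  # requests queue up: high response times
    if utilisation < 0.5:
        return "over-provisioned"   # paying for idle resources
    return "well-provisioned"

print(provisioning_outcome(demand=900, capacity_per_node=100, nodes=5))
# under-provisioned: 900 requests/s against a capacity of 500
```

CloudScale's contribution is to predict such outcomes from models and code analysis before deployment, rather than discovering them in production.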

CLOUDSPACES

The CloudSpaces project advocates a paradigm shift from application-centric to user-centric models, where users retake control of their information. To this end, CloudSpaces will devise an open service platform providing privacy-aware data sharing as well as interoperability mechanisms among heterogeneous Personal Clouds.

AT A GLANCE
Project title: CloudSpaces: Open Service Platform for the Next Generation of Personal Clouds
Project reference:
Project coordinator: Pedro García López, Universitat Rovira i Virgili, SPAIN
Partners: École Polytechnique Fédérale de Lausanne, (CH) Institut Eurecom, (FR) Canonical Limited, (UK) eyeos, (ES) TISSAT, (ES)
Duration: 36 months
Total cost: 4.01M
Website:

Towards Personal Cloud 2.0
In the next few years, users will require ubiquitous and massive network storage to handle their ever-growing digital lives. Along these lines, the Personal Cloud model defines a ubiquitous storage facility enabling unified, location-agnostic access to information flows from any device and application. Popular providers like Dropbox or Ubuntu One already provide unified synchronization and sharing services to millions of users. But Personal Clouds are in their infancy, and two major problems must be solved. First, there is a big privacy problem that precludes the adoption of this model by many users, companies and public institutions. Most Personal Clouds follow a simple centralized synchronization model that stores all information in the Cloud as a remote file system. The entire data management process is in the hands of the Cloud providers, so users really lose control of where their information is stored and who can access it. The second important problem is the lack of interoperability between Personal Cloud services, which impedes information sharing and also precludes information portability among them.
This generates what is known as vendor lock-in: the best decision now may leave a customer trapped with an obsolete provider later, simply because the cost of switching from one provider to another is prohibitive.

CloudSpaces aims to create the next generation of Personal Clouds, namely Personal Cloud 2.0, offering advanced features such as interoperability, advanced privacy and access control, and scalable data management of heterogeneous storage resources. Furthermore, it will offer an open service platform for third-party applications leveraging the capabilities of the Open Personal Cloud.

CloudSpaces Platform
CloudSpaces aims to create the next generation of open Personal Clouds using three main building blocks: CloudSpaces Share, CloudSpaces Storage and CloudSpaces Services. CloudSpaces Share will deal with interoperability and privacy issues. The infrastructure must ensure privacy-aware data sharing and trustworthy assessment from other Personal Clouds. It will also overcome existing vendor lock-in risks thanks to open APIs, metadata standards, personal data ontologies, and portability guarantees. CloudSpaces Storage takes care of scalable data management of heterogeneous storage resources. In particular, users retaking control of their information implies control over data management. This new scenario clearly requires novel adaptive replication and synchronization schemes dealing with aspects like load, failures, network heterogeneity and desired consistency levels. Finally, CloudSpaces Services provides a high-level service infrastructure for third-party applications that can benefit from the Personal Cloud model. It will offer data management (3S: Store, Sync, Share), data-application interfaces, and a persistence service for heterogeneous applications with different degrees of consistency and synchronization.

Impact
We are now at a decisive turning point that will strongly influence how we interact with information in the coming years. If the major players dominate this market with vertical walled-garden solutions, there will be little space left for European Cloud providers, software solution providers and SMEs.
On the contrary, CloudSpaces foresees a short time-to-market and important impacts. In particular, the project will reach a global impact thanks to our contributions to three open source projects with huge communities: the Ubuntu One Personal Cloud, the eyeos Personal Desktop, and the OpenStack Cloud middleware. Cloud providers will benefit from novel Personal Cloud services for OpenStack Swift, facilitating the emergence of European interoperable clouds. End users and companies will increase their trust in cloud-based applications and storage. This can ease the massive adoption of online services such as Ubuntu One and others. Software solution providers and SMEs will be able to build innovative services on top of open Personal Clouds. The eyeos Personal Desktop is our key proof of concept demonstrating the capabilities of the platform.
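The adaptive synchronization idea behind CloudSpaces Storage, picking a replication scheme from the desired consistency level and current network conditions, can be sketched in a few lines. The strategy names and the 256 kbps cut-off below are invented for illustration and do not come from the project:

```python
def sync_strategy(consistency, bandwidth_kbps):
    """Pick a replication/sync scheme for a Personal Cloud device.

    consistency: "strong" or "eventual" (the desired consistency level).
    bandwidth_kbps: measured uplink quality of the device's network.
    Strategy names and the bandwidth threshold are illustrative only.
    """
    if consistency == "strong":
        return "synchronous-replication"  # every write waits for all replicas
    if bandwidth_kbps < 256:
        return "lazy-delta-sync"          # ship compressed deltas when idle
    return "eventual-push"                # push updates as they happen

print(sync_strategy("eventual", 128))  # lazy-delta-sync
```

The real scheme would weigh more factors (load, failures, heterogeneity), but the shape of the decision is the same: consistency requirements first, network conditions second.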

COMPOSE

COMPOSE will create an ecosystem for unleashing the power lying within the vast amount of Internet-connected smart objects by enabling the easy construction of services based on these objects. COMPOSE technology will enable standardized access to such objects, the creation of base services, their combination into composite services, and finally the building of applications.

AT A GLANCE
Project title: Collaborative Open Market to Place Objects at your SErvice
Project reference:
Project coordinator: Benny Mandler, IBM Research, ISRAEL
Partners: CREATE-NET, (IT) Fraunhofer Institute FOKUS, (DE) The Open University, (UK) Barcelona Supercomputing Center, (ES) INNOVA S.p.A, (IT) University of Passau, (DE) U-Hopper, (IT) GEIE ERCIM (W3C), (FR) Fundació Privada Barcelona Digital Centre Tecnològic (Bdigital), (ES) Retevision, (ES) EVRYTHNG, (UK)
Duration: 36 months
Total cost: 7.40M
Website: (under construction)

COMPOSE overview
COMPOSE aims at enabling new services that can seamlessly integrate real and virtual worlds through the convergence of the Internet of Services (IoS) with the Internet of Things (IoT). COMPOSE will achieve this through the provisioning of an open and scalable marketplace infrastructure, in which smart objects are associated with services that can be combined, managed, and integrated in a standardised way to easily and quickly build innovative applications. The project will develop novel approaches for virtualising smart objects into services and for managing their interactions. This will include solutions for managing knowledge derivation, secure and privacy-preserving data aggregation and distribution, dynamic service composition, advertising, discovering, provisioning, and monitoring. COMPOSE is expected to give birth to a new business ecosystem, building on the convergence of the IoS with the IoT and the Internet of Content (IoC).
The COMPOSE marketplace will allow SMEs and innovators to introduce new IoT-enabled services and applications to the market in a short time and with limited upfront investment. At the same time, major ICT players, particularly cloud service providers and telecommunications companies, will be able to reposition themselves within a new IoTenabled value chain.

Technical Approach
The vision of the COMPOSE project is to advance the state of the art by integrating the IoT and the IoC with the IoS through an open marketplace, in which data from Internet-connected objects can be easily published, shared, and integrated into services and applications. The marketplace will provide all the necessary technological enablers, organized into a coherent and robust framework covering both delivery and management aspects of objects, services, and their integration:
- Object virtualization: enabling the creation of standardized service objects
- Interaction virtualization: abstracting heterogeneity while offering several interaction paradigms
- Knowledge aggregation: creating information from data
- Discovery and advertisement: of semantically enriched objects and services
- Data management: handling massive amounts and diversity of data/metadata
- Ad hoc creation, composition, and maintenance: of service objects and services
- Security, heterogeneity, scalability, and resiliency: incorporated throughout the layers

Expected Impact
COMPOSE strives for a strong impact on a developing market by lowering the barriers to developing, selecting, combining, and using IoT-based standardized value-added services. This will be achieved by providing a complete ecosystem and having it adopted by enterprises, SMEs, government-related bodies, individual developers and end users. Opening the door to this realm for smaller entities will lead to more innovation. COMPOSE expects to aid this by fostering a developers' community and advocating open source software and open interfaces.

Use-Case Driven
COMPOSE design, development, and validation will be based on innovative use cases highlighting different aspects of the platform. Among the use cases: Smart City (Barcelona): a large and diverse set of sensors is deployed in a Barcelona district under the supervision of a COMPOSE partner.
Along with Barcelona's OpenData, COMPOSE intends to showcase life in a smart city by creating a group of city services for the citizens. Smart Territory (Trentino): With the collaboration of regional network providers, the tourism board, and meteorological data providers, COMPOSE will explore innovative services for tourists. This pilot aims to enhance the tourist experience by exploiting COMPOSE technologies for the creation of personalized, social and environmentally aware (web and mobile) tourism services and territory monitoring services that leverage the regional networking and environmental infrastructures.
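The COMPOSE idea of wrapping smart objects as base services and combining them into composite services can be sketched in a few lines. The sensors, their readings, and the combination rule below are invented for illustration; they stand in for the standardized service objects the platform would provide:

```python
def make_base_service(name, read_fn):
    """Wrap a smart object's reading as a callable base service."""
    return {"name": name, "call": read_fn}

def compose(name, services, combine):
    """Build a composite service that invokes its base services and
    merges their results with a user-supplied combination function."""
    return {"name": name,
            "call": lambda: combine({s["name"]: s["call"]() for s in services})}

# Hypothetical city sensors (names and values invented for illustration).
temp = make_base_service("temperature", lambda: 21.5)
noise = make_base_service("noise_db", lambda: 48)

comfort = compose(
    "street-comfort", [temp, noise],
    lambda r: "comfortable" if r["temperature"] < 26 and r["noise_db"] < 55
    else "uncomfortable")

print(comfort["call"]())  # comfortable
```

In the marketplace, such composites would additionally carry metadata for discovery, advertisement, and security; the sketch only shows the composition step itself.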

HARNESS

HARNESS integrates heterogeneous hardware and network technologies into data centre platforms, vastly increasing performance, reducing energy consumption, and lowering cost profiles for important and high-value cloud applications such as real-time business analytics and the geosciences.

AT A GLANCE
Project title: Hardware- and Network-Enhanced Software Systems for Cloud Computing
Project reference:
Project coordinator: Alexander L. Wolf, Imperial College London, UNITED KINGDOM
Partners: École Poly. Fédérale de Lausanne, (CH) Université de Rennes I, (FR) Zuse Institute Berlin, (DE) Maxeler Technologies, (UK) SAP AG, (DE)
Duration: 36 months
Total cost: 4.23 M
Website:

Homogeneous, commodity cloud computing has reached its limits
The dominant approach to offering cloud services today is based on homogeneous commodity resources: large numbers of inexpensive machines, interconnected by off-the-shelf networking equipment, supported by stock disk drives. However, cloud service providers are unable to use this platform to satisfy the requirements of many important and high-value classes of applications. Today's cloud platforms are missing out on the revolution in new hardware and network technologies for realising vastly richer computational, communication, and storage resources. Technologies such as field-programmable gate arrays (FPGAs), general-purpose graphics processing units (GPGPUs), programmable network routers, and solid-state disk drives (SSDs) promise increased performance, reduced energy consumption, and lower cost profiles. However, their heterogeneity and complexity make integrating them into the standard Platform as a Service (PaaS) framework a fundamental challenge. The HARNESS project brings innovative and heterogeneous resources into cloud platforms through a rich programme of research, validated by commercial and open source case studies.

The HARNESS vision
The HARNESS project advances the state of the art in cloud data centre design so that: (1) cloud providers can profitably offer and manage the tenancy of specialised hardware and network technologies much as they do today's commodity resources, and (2) software engineers can seamlessly, flexibly, and cost-effectively integrate specialised hardware and network technologies into the design and execution of their cloud-hosted applications. HARNESS develops an enhanced PaaS software stack that brings new degrees of freedom to cloud resource allocation and optimisation. A cloud application is seen as consisting of components, some of which have multiple implementations. Applications express their computing needs to the cloud platform, as well as the price they are prepared to pay for various levels of service. This expression of needs and constraints builds upon what can be expressed through today's simple counts of virtual machines or amounts of storage, to encompass the specific and varied new factors characteristic of specialised hardware and network technologies. The cloud platform will have access to a variety of resources to which it can map the components. A flexible application may potentially be deployed in many different ways over these resources, each option having its own cost/performance/usage characteristics. Specialised technologies are virtualised into resources that can be managed and accessed at the platform level. The idea is to give the platform flexibility as to which, when, and how many resources are used, and to separate that concern from the low-level deployment and monitoring of the concrete technology elements. Associated with the virtualised resources are policies that govern how allocation and optimisation decisions are made. Also associated with these resources are facilities to track their capacity, usage, and general availability.
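The allocation problem sketched above, choosing among alternative component implementations with different cost/performance characteristics under application constraints, can be illustrated as a constrained selection. The option names and figures below are invented; a real platform would optimise over many resources and components at once:

```python
def pick_deployment(options, max_runtime_s):
    """Choose the cheapest deployment option that meets a runtime constraint.

    options: list of (name, runtime_s, cost) tuples describing alternative
    implementations of one component (e.g. CPU cluster vs. GPGPU vs. FPGA).
    Returns None if no option satisfies the constraint.
    """
    feasible = [o for o in options if o[1] <= max_runtime_s]
    if not feasible:
        return None  # the constraint cannot be met at any price
    return min(feasible, key=lambda o: o[2])

# Hypothetical cost/performance figures for one analytics component.
options = [("cpu-cluster", 120.0, 4.0),
           ("gpgpu", 30.0, 6.0),
           ("fpga-dataflow", 12.0, 9.0)]

print(pick_deployment(options, max_runtime_s=60.0))  # ('gpgpu', 30.0, 6.0)
```

The FPGA option is fastest but the GPGPU option wins here: it is the cheapest choice that still meets the deadline, which is exactly the kind of trade-off the platform's policies are meant to automate.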
Expected Impact
The public cloud services provider market is projected to reach nearly $22 billion. HARNESS will enable those providers to offer new levels of service to cloud applications, at the same time as it opens a new market to the purveyors of specialised hardware and network technologies.

LEADS

The LEADS project will investigate a novel approach, based on a data-as-a-service model, such that the real-time processing of private and public data becomes economically and technically feasible even for small and medium enterprises. It will develop means to gather and store publicly available data, and will provide a platform that facilitates the real-time processing of this data. Moreover, the public data will be enriched with private data maintained and queried in a privacy-preserving manner on behalf of client applications. LEADS will operate on a collection of independent, geographically distributed micro-clouds, and will be showcased by two industrial applications.

AT A GLANCE
Project title: Large-Scale Elastic Architecture for Data-as-a-Service
Project reference:
Project coordinator: Etienne Rivière, Université de Neuchâtel, SWITZERLAND
Partners: Technische Universität Dresden, (DE) Technical University of Crete, (GR) Fundació Barcelona Media Universitat Pompeu Fabra, (ES) Red Hat, (IE) AoTerra, (DE) Adidas, (DE)
Duration: 36 months
Total cost: 4.05M
Website:

Context of the project
The Web 2.0 revolution is transforming the Internet into a collaborative medium where users can meet, read and write. User-generated content constitutes a rapidly increasing proportion of the Web. Every day, 15 petabytes of new information are generated. Accessing and processing data in real time will become more and more crucial to our data-driven society in the near future. The business of many companies is driven by online information (e.g. Web analytics, social networks, aggregators, search engines, etc.). Data growth is becoming the biggest challenge for enterprises managing their own data centre hardware infrastructure. Clearly, the monetary investment required for crawling, storing, and processing even a small portion of the Internet is very high, making such a task intractable for small-scale and start-up companies.
Currently, only the biggest information technology players have access to the infrastructure for storing huge amounts of data and the computing facilities needed to process it. Small and medium companies often have no choice other than relying on larger companies with dedicated data centres to provide them with the data and processing resources. The monetary cost of the infrastructure is among the critical factors determining how to store big data. This problem is especially acute for small and medium companies that

have limited resources. Therefore, any new solution should offer pricing competitive with, or lower than, conventional data centres to be attractive.

Data-as-a-Service
LEADS will provide an economical approach to processing large amounts of data by sharing the collection, storage and querying of public and private data. LEADS will provide a DaaS framework that will permit users to gather and store public data, and enrich it with private data maintained on their behalf. Processing of real-time data can exploit the public and private data, including historical versions. The proposed DaaS framework will provide high efficiency and offer scalability as demand grows.

Federation of micro-clouds
LEADS will investigate the design of a decentralized platform composed of a collection of micro-clouds, each consisting of several servers. The micro-clouds will be based on technology provided by consortium member AoTerra and designed to reuse the waste heat of computing resources for other purposes, hence improving the energy efficiency of the whole architecture. Micro-clouds will be decentralized but will conceptually appear as a single cloud to clients. Geographic distribution will be exploited to provide faster access times, better utilization of network links, and improved availability and fault tolerance. By relying on a decentralized architecture, LEADS will address the challenges of network congestion and connectivity faced by cloud providers.

Impact of the project
LEADS results will have high potential for impact and integration. Industrial partners have a direct interest in exploiting the project outcome, as users or as service providers in a fast-changing information technology world. LEADS has the potential to significantly impact a variety of software and hardware technologies, standards and open source, and society as a whole, by redefining how information is managed, collected and queried.
All services using LEADS will benefit from improved privacy and security. In turn, this will lead to improved trust in these services. LEADS will facilitate the design of new data-driven services. In particular, LEADS will simplify their development by providing an open interface. It will reduce costs so that SMEs, too, can develop and operate data-driven services that would previously have been too expensive to develop and operate.
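The core DaaS idea, publicly crawled data shared across tenants and enriched per client with private data, can be sketched as a simple keyed join. The record fields below (web mentions, warehouse stock) are invented for illustration and are not from the project's applications:

```python
def enrich(public_records, private_records, key):
    """Join shared public records with a tenant's private records on a key.

    Public fields are common to all tenants; private fields are merged in
    only for the tenant that owns them. Fields are illustrative only.
    """
    private_by_key = {r[key]: r for r in private_records}
    enriched = []
    for rec in public_records:
        merged = dict(rec)                            # start from public data
        merged.update(private_by_key.get(rec[key], {}))  # overlay private data
        enriched.append(merged)
    return enriched

public = [{"product": "shoe-x", "web_mentions": 1200}]
private = [{"product": "shoe-x", "warehouse_stock": 40}]
print(enrich(public, private, "product"))
# [{'product': 'shoe-x', 'web_mentions': 1200, 'warehouse_stock': 40}]
```

In LEADS this join would run across geo-distributed micro-clouds with the private side kept privacy-preserving; the sketch only shows the data model.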

MARKOS

MARKOS realizes a prototype of an interactive application and a Linked Data API providing an integrated view of the Open Source projects available on the web, focusing on functional, structural and licence aspects of software code. The MARKOS system itself will be released as Open Source software and, thanks to the offered functionalities, is expected to facilitate software development based on the Open Source paradigm in a global context.

AT A GLANCE
Project title: The MARKet for Open Source - An Intelligent Virtual Open Source Marketplace
Project reference:
Project coordinator: Klaus-Peter Eckert, Fraunhofer-Gesellschaft e.V. Institute FOKUS, GERMANY
Partners: Engineering Ingegneria Informatica S.p.A., (IT) ATOS SPAIN SA, (ES) Poznan Supercomputing and Networking Center, (PL) Geeknet Media, (UK) Università degli Studi del Sannio, (IT) T6 Ecosystems, (IT)
Duration: 36 months
Total cost: 4.51M
Website:

Goals and objectives
Recent studies investigating the reuse of code in Open Source software projects show that developers commonly reuse available code and other knowledge that solves their technical problems. Moreover, developers spend non-negligible amounts of time studying scientific publications and standard specifications, or learning from the source code (and its documentation) of related projects in order to reuse algorithms and methods without simply copying the code. Developing a software system that reuses existing Open Source solutions therefore implies time-consuming activities that are not performed when software is developed from scratch or without third-party code. All this calls for freeing software analysts and developers from the technological barriers caused by the heterogeneity of approaches adopted by each Open Source project, by providing comparable information on the characteristics of the software.
MARKOS intends to realize the prototype of an automatic service providing an integrated view of the Open Source projects available on the web, focusing on the functional, structural and licensing aspects of the software code released by the projects.

MARKOS' innovative approach

While existing services already offer a central repository and search tools for Open Source projects on a worldwide scale, they give users limited support in understanding the produced Open Source code, because they mainly focus on people and activities. MARKOS, on the contrary, wants to offer developers and analysts a solution for choosing the Open Source components best suited to their needs, for learning how to integrate or extend them, and more generally for fostering the easy adoption of Open Source software. MARKOS will offer semantic querying and browsing tools to inspect the structure of software code, showing components, their interfaces, their dependencies, and their Open Source licence models. In this sense MARKOS will strongly innovate with respect to popular services that allow users to search and navigate only the text of the source code, offering no abstraction over the code, or that give a structured view of just a single file. MARKOS will show the relationships between components of the same project as well as between components of different projects, giving an integrated view of the available Open Source software at a global scale.

Expected impacts

MARKOS aims to make a considerable contribution to target outcome c) Advanced Software Engineering, specifically to the outcome "Tools and methods for community-based and Open Source software development", in particular by:
- Allowing faster adoption and integration of Open Source components and libraries, removing the issues related to licensing incompatibility.
- Strengthening the European community of Open Source developers, as it will increase the quality of Open Source software, reduce time to market/use and establish a proven path to integrating Open Source components without the risks linked to complex and incompatible licensing schemes.
- Enabling software developers to use an intuitive and advanced search platform with an advanced service front end, supporting the easy identification of the most suitable Open Source solutions and the analysis of code dependencies, software structures, and potential licence infringements.
- Facilitating the publication of descriptions of Open Source software as Linked Data, and the production of new tools for software analysis and development that leverage these semantic data.

MARKOS Architecture
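As an illustration of the kind of licence analysis described above, the sketch below checks a project's dependencies against a licence-compatibility table. The component names, licence pairs and compatibility rulings are invented for the example; they are not MARKOS's actual data model or legal advice.

```python
# Simplified one-way compatibility: can code under licence `used` be combined
# into a work released under licence `project`? (Invented, illustrative table.)
COMPATIBLE = {
    ("MIT", "MIT"): True,
    ("MIT", "Apache-2.0"): True,
    ("MIT", "GPL-3.0"): True,
    ("Apache-2.0", "Apache-2.0"): True,
    ("Apache-2.0", "GPL-3.0"): True,
    ("Apache-2.0", "MIT"): False,   # Apache's notice/patent terms exceed MIT's
    ("GPL-3.0", "GPL-3.0"): True,
    ("GPL-3.0", "MIT"): False,      # copyleft code cannot be relicensed permissively
    ("GPL-3.0", "Apache-2.0"): False,
}

def licence_conflicts(project_licence, dependencies):
    """Return the dependencies whose licence is incompatible with the project's."""
    return [name for name, lic in dependencies.items()
            if not COMPATIBLE.get((lic, project_licence), False)]

deps = {"libparse": "MIT", "netcore": "GPL-3.0", "httputil": "Apache-2.0"}
print(licence_conflicts("Apache-2.0", deps))  # → ['netcore']
```

A real system would of course work from machine-readable licence metadata harvested from the projects themselves, rather than a hand-written table.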

MIDAS

The MIDAS project aims to implement an integrated framework for the automation and intelligent management of Service Oriented Architecture (SOA) testing. The framework is available as a Platform as a Service (PaaS) solution on a cloud infrastructure and supports all the testing activities: generation, execution, result analysis, planning and scheduling. Moreover, the framework supports the main testing domains, such as functional, interactional, fault-tolerance, security and usage-based testing. The test execution environment is based on a distributed TTCN-3 runtime engine. The adopted testing methods and technologies go beyond the state of the art, particularly in model-based testing, fuzzing for security testing, usage-based testing, and probabilistic inference reasoning about test evaluation, planning and scheduling. Two pilot SOA testing experiences in different business domains (healthcare and supply chain management) are carried out.

AT A GLANCE
Project title: Model and Inference Driven - Automated testing of Services architectures
Project reference:
Project coordinator: Riccardo Fontanelli, DEDALUS S.p.A., ITALY
Partners: FOKUS (DE), Instituto Tecnológico de Aragón (ES), Simple Engineering France (FR), CNR - Istituto di Scienza e Tecnologie dell'Informazione (IT), T6 Ecosystems S.r.l. (IT), Sintesio Foundation (SI), Georg-August-Universität Göttingen Stiftung Öffentlichen Rechts (DE), Université Pierre et Marie Curie - Paris VI - Laboratoire d'Informatique de Paris VI (FR)
Duration: 36 months
Total cost: 4.30M
Website:

Rationale

Dependable and secure SOAs are mainly the result of good design and implementation practices, but the stakeholders' trust can be decisively strengthened only by rigorous, sound and open validation and verification processes. The contract-based, model-driven SOA engineering approach effectively supports the validation task.
The key characteristics of SOA (reduced control, observability and trust between participants) make black-box and grey-box testing effectively the only practicable verification methods. Nevertheless, SOA testing is a heavy, complex, challenging and expensive task.

Objectives and approach

The objective of the MIDAS project is to realize a comprehensive framework able to support the automation and intelligent management of SOA testing. The framework supports all the activities of the testing cycle: test case planning, development and execution, reporting and result analysis, test campaign management and scheduling. Moreover, the framework supports the main testing domains, such as functional, interactional, fault-tolerance, security and usage-based testing. In order to provide these features, the architecture of the MIDAS framework includes:
- an environment for design-time and run-time (on-the-fly) generation of test cases and oracles;

- an environment for automatic testing configuration, initialization and execution of the Services Architecture Under Test (SAUT), based upon the Testing and Test Control Notation (TTCN-3) standard;
- probabilistic and symbolic inference based methods and tools for test result analysis and test campaign planning and scheduling.

In order to support the elastic scalability of the testing environment (allocation of huge amounts of computation resources for relatively short test campaigns on very large services architectures), the MIDAS framework is made available as a cloud-based PaaS. In order to evaluate the effectiveness and usability of the MIDAS framework facilities, two pilot SOA testing experiences are carried out in different business domains: Healthcare (HC) and Supply Chain Management (SCM). In HC, the MIDAS framework will be used for building test campaigns upon the HSSP 1 services implementation provided by the Italian HealthSOAF 2 research project. The SCM pilot aims at building test campaigns, according to the MIDAS approach, upon an existing services architecture for mobile services supply chain management.

Impact

The research on the economic impact of the current inadequacy of SOA testing tools, and the evaluation of the testing needs of existing business solutions, allow the MIDAS project to:
- estimate the optimization of present maintenance and management costs made possible by the availability of advanced verification and testing methods, tools and infrastructures;
- define new business models for testable services and services architecture delivery, and for distributing advanced SOA testing facilities through new channels, such as PaaS on cloud infrastructures.

The potential impact of the MIDAS achievements involves the actual deployment and delivery of dependable and secure services and services architectures.
In particular, the MIDAS framework and platform:
- guarantee the general availability of rigorous, sound, powerful and cheap automated testing processes and tools;
- allow providers to deliver their SOA production environment with integrated service test facilities.

Fig. 1: MIDAS Framework Architecture.

1 Healthcare Services Specification Project
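The model-based generation step described above can be pictured as deriving concrete test inputs from a declarative model of a service operation, rather than writing them by hand. The sketch below is purely illustrative: the operation name, parameters and values are invented, and the real MIDAS tooling works on far richer models and targets TTCN-3 test components.

```python
from itertools import product

# Hypothetical model of one SAUT operation: each parameter lists boundary and
# representative values, including invalid ones for fault-tolerance testing.
model = {
    "operation": "getPatientRecord",
    "params": {
        "patient_id": ["P001", "", None],            # valid, empty, missing
        "detail_level": ["summary", "full", "bogus"], # two valid, one invalid
    },
}

def generate_test_cases(model):
    """Exhaustively combine the modelled parameter values into concrete test cases."""
    names = list(model["params"])
    return [{"operation": model["operation"], "inputs": dict(zip(names, values))}
            for values in product(*model["params"].values())]

cases = generate_test_cases(model)
print(len(cases))  # 3 x 3 = 9 combinations
```

Exhaustive combination is used here only for brevity; at scale, pairwise selection or the usage-based and probabilistic techniques the project describes would prune this space.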

MODAClouds

The main goal of MODAClouds is to provide methods, a decision support system, an open source Integrated Development Environment (IDE) and a run-time environment for the high-level design, early prototyping, semi-automatic code generation, and automatic deployment of applications on multiple Clouds with guaranteed Quality of Service (QoS).

AT A GLANCE
Project title: MOdel-Driven Approach for design and execution of applications on multiple Clouds
Project reference:
Project coordinator: Elisabetta Di Nitto, Politecnico di Milano, ITALY
Partners: SINTEF (NO), Institute e-Austria Timisoara (RO), Imperial College of Science, Technology and Medicine (UK), SoftTeam (FR), Siemens SRL (RO), BOC Information Systems GmbH (AT), Flexiant Limited (UK), ATOS Spain SA (ES), CA Technologies Development Spain SA (ES)
Duration: 36 months
Total cost: 8,55M
Website:

Context

Cloud computing now offers more cost-effective and scalable solutions to consumers than ever before. However, Cloud providers are still in their infancy with regard to technology and business models, a fact reflected by the critical issues regularly reported. Thus, to ensure optimum service delivery, to limit vendor lock-in and to enable careful risk analysis and management, advanced software engineering methodologies are required.

Proposed solution

Model-driven development combined with novel model-driven risk analysis and quality prediction will enable developers using MODAClouds technologies to (a) specify service-independent models enriched with quality parameters, (b) implement these models, (c) perform quality prediction, (d) monitor applications at run-time, and (e) optimize them based on the feedback (thus filling the gap between design and run-time). MODAClouds technologies will provide (i) quality assurance during the application life-cycle, (ii) support for migration from Cloud to Cloud when needed, and (iii) techniques for data mapping and synchronization among multiple Clouds.
Four industrial cases are foreseen to be developed during the project time-frame:
1. a project management server in the Cloud;
2. a business process modelling SaaS;

3. a health-care system for elderly people;
4. a city urban safety planner.

Expected impact

The vendor-neutral solution will enable the use of multiple Cloud solutions and the ability to migrate from one Cloud provider to another. This will increase the competitive advantage and agility of European Cloud providers and Cloud brokers. The proposed abstractions over Cloud providers will reduce the complexity of implementing over multiple Clouds, thus increasing the opportunities for SMEs to realise benefits. Moreover, the proposed solution will improve trust in Cloud-based applications by monitoring performance and behaviour and by providing an approach for moving applications and data from Cloud to Cloud according to requirements. MODAClouds will enable better control over the services of Cloud providers, and the possibility of combining services from different Cloud providers. It will define design-time and run-time quality measures, prediction models, and assurance techniques. The project will offer mechanisms and guidelines for the migration of legacy applications to the Cloud by supporting the measurement and identification of non-functional characteristics of these applications in their original environments, and by guiding developers in defining the right modelling abstractions for these applications. Key components of the MODAClouds solution are an approach for modelling functional and non-functional aspects of applications, an integrated development environment, a decision support system and a run-time environment. All these components will be developed as open source solutions.

Approach

The MODAClouds solution targets system developers and operators by providing them with tools that support the following phases of the software system life-cycle:
1. Feasibility study and analysis of alternatives: a dedicated tool will support developers in analysing and comparing various Cloud solutions.
2. Design, implementation and deployment: the IDE will support the Cloud-agnostic design of software systems, the semi-automatic translation of design artifacts into code, and their deployment on the selected target Clouds.
3. Run-time monitoring and adaptation: the run-time layer will (i) enable system operators to oversee the execution of the system on multiple Clouds; (ii) automatically trigger adaptation actions (e.g. migrating some system components from one IaaS to another offering better performance at that time); and (iii) provide run-time information to the design-time environment that can inform the software system evolution process.
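A flavour of the Cloud-agnostic design-to-deployment idea can be given with a toy example: a component model carrying provider-independent requirements is matched against per-provider catalogues to select a concrete instance type. The provider names, catalogues and instance types below are invented; this is a sketch of the principle, not the MODAClouds IDE's actual artifacts.

```python
# Cloud-agnostic component model: says what is needed, not where it runs.
AGNOSTIC_MODEL = {
    "component": "order-service",
    "requirements": {"vcpus": 2, "ram_gb": 4},
}

# Hypothetical provider catalogues, smallest instances first.
CATALOGUES = {
    "provider-a": [{"type": "a.small", "vcpus": 1, "ram_gb": 2},
                   {"type": "a.medium", "vcpus": 2, "ram_gb": 4}],
    "provider-b": [{"type": "b.tiny", "vcpus": 2, "ram_gb": 2},
                   {"type": "b.large", "vcpus": 4, "ram_gb": 8}],
}

def map_to_provider(model, provider):
    """Pick the first catalogue entry that satisfies the model's requirements."""
    req = model["requirements"]
    for inst in CATALOGUES[provider]:
        if inst["vcpus"] >= req["vcpus"] and inst["ram_gb"] >= req["ram_gb"]:
            return {"component": model["component"], "instance": inst["type"]}
    raise ValueError(f"no suitable instance on {provider}")

print(map_to_provider(AGNOSTIC_MODEL, "provider-a"))  # instance 'a.medium'
print(map_to_provider(AGNOSTIC_MODEL, "provider-b"))  # instance 'b.large'
```

Because the model never names a provider, switching target Clouds, or migrating at run-time, only changes which catalogue the mapper consults.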

OCEAN

OCEAN aims to play a pivotal role among collaborative Cloud research projects, especially those based on an open source approach. It will help to identify commonalities between projects, whether potential overlaps or prospects for cross-project collaboration and synergies. OCEAN will also foster collaboration on Cloud Computing between Japanese and European entities.

AT A GLANCE
Project title: Open Cloud for Europe, Japan and beyond
Project reference:
Project coordinator: Yuri Glickman, Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V., GERMANY
Partners: OW2 Consortium (FR), Engineering S.p.A. (IT), Information-Technology Promotion Agency (JP)
Duration: 24 months
Total cost: 0,68M
Website:

Motivation

As highlighted by the European Commission in its Digital Agenda, interoperability is a key challenge for developing a sustainable cloud computing ecosystem in Europe. The concept of the Open Cloud directly addresses this challenge. A number of publicly financed R&D projects in Europe aim at developing technologies that together may contribute to a complete Open Cloud solution. Cloud computing is complex and requires the development and combination of many different and complementary technologies. There is thus a need to improve the synergies and complementarities of Open Cloud projects and to enhance the overall economic efficiency of Open Cloud R&D investments. Only by closing gaps, eliminating overlaps and avoiding missed opportunities amongst current projects will Europe be able to create a unique, sustainable open cloud ecosystem.

Planned Activities

The key objective of OCEAN is to foster synergies and reduce overlaps between Open Cloud collaborative research projects. OCEAN will help to expose commonalities, whether potential overlaps or prospects for cross-project collaboration.
It will address European FP7 projects as well as other European, national and Japanese Open Cloud projects. This will be achieved through the following activities:

- Create and maintain the Open Cloud Innovation Directory, which will collect descriptions of current collaborative projects developing Open Cloud components, with special attention to software availability, licences and technical documentation.
- Develop the Open Cloud Interoperability Framework and Roadmap, which will provide a relative positioning, or functional mapping, of these projects in relation to key standards and reference models provided by leading standards-defining organizations such as NIST, ETSI, DMTF, OGF, etc.
- Provide online Build, Test and Certification tools as well as a beta-test service to independently validate the quality of open source software artefacts from Open Cloud projects and to enable these projects to build and test their software, thereby certifying their quality and compliance with cloud standards.
- Foster cooperation and integration between projects through the organization of two annual events called Plugfests, which will provide project teams with the appropriate environment to work on the integration and interoperability of their software.
- Foster collaboration between European and Japanese entities on Open Cloud computing, cloud interoperability and standardisation by involving Japanese cloud experts in the Plugfests and in discussions on the Open Cloud Interoperability Framework and Roadmap.

OCEAN Impact

OCEAN will pave the way for the development of a business ecosystem centred on interoperable Open Clouds in Europe. OCEAN will help Open Cloud collaborative projects to identify areas for cooperation and reuse opportunities, and will thus contribute to improving the industry positioning of their software deliverables. OCEAN will facilitate the mainstream adoption of Open Cloud by fostering an improvement in the overall quality and trustworthiness of Open Cloud project components.
OCEAN will, in particular, promote automated functional and compliance tests for the quality certification of Cloud software, and will foster the adoption of interoperable open source cloud solutions by SMEs, public administrations and other entities. OCEAN will also help develop collaboration between Japanese and European projects on cloud computing.

OPENi

OPENi aims to inspire innovation in the European mobile applications industry by radically improving the interoperability of cloud-based services and trust in personal cloud storage, through the development of a consumer-centric, open source mobile cloud applications platform.

AT A GLANCE
Project title: Open-Source, Web-Based, Framework for Integrating Applications with Cloud-based Services and Personal Cloudlets
Project reference:
Project coordinator: Eric Robson, Waterford Institute of Technology, IRELAND
Partners: National Technical University of Athens (GR), Fraunhofer Institute for Open Communication Systems (DE), Logica (ES), Ambisense Ltd (UK), Velti (GR), Betapond (IE)
Duration: 30 months
Total cost: 3,83M
Website:

OPENi Interoperable Mobile Cloud Services

OPENi will directly address the lack of interoperability between cloud-based services and enable applications to access and use a broad spectrum of existing cloud-based functionality and content, consistently across different devices and platforms, in a way that enables a user-centric application experience. Targeting the needs of application developers, service providers and consumers, OPENi will realize its vision for user-centric, cloud-connected mobile applications.

OPENi Objectives

OPENi will deliver a consumer-centric, open source mobile cloud applications solution that will be a catalyst for advances in mobile application innovation. The platform will incorporate an open framework capable of interoperating with any cloud-based service, abstracting the integration challenges to a single open standard without losing any service features.
The key features of the OPENi platform are:
- a common framework of web APIs to support easy integration of a broad spectrum of existing cloud-based services into applications in a platform-independent way;
- a variety of service enablers that will further enhance the richness of features available to application developers;
- a cloudlet platform that will enable consumers that access cloud-based

services through their applications to store and manage their personal data and content.

OPENi will base its specifications and delivered solutions on widely accepted open web and cloud technologies so that the platform can be readily adopted by application developers without requiring any additional skills. The combination of the open API and the cloudlet concept creates a single platform of user data and service connectivity, making OPENi a very powerful and beneficial platform for consumers, application developers and service providers. The Figure below depicts how OPENi intends to interact with the service providers and the mobile applications.

OPENi Use Cases

OPENi will deliver three use cases that will showcase specific features of the platform.
1. The MyLife use case will produce a time-line view of a person's various events and transactions that occur on a daily basis. It will include additional intelligence to categorise and cluster similar events, making the presentation more intuitive to the consumer's needs.
2. The Personalised Advertising use case will take advantage of the information stored in the personal cloudlet in order to provide personalized ad services via multiple channels on an opt-in and anonymised basis. Access to personalized and behavioural data across applications can also add value to in-app advertising, creating more relevant and targeted ads without violating the privacy of the end user.
3. The Personalised In-Store Shopping use case will use OPENi to enable the delivery of personalised shopping experiences on a mobile device through demonstrations of working prototype apps in a virtual in-store environment. This use case clearly shows how information from many different social networks, applications and services can be combined to power a service that could not exist without that information.
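The common-API idea behind the platform can be sketched as a set of adapters that expose heterogeneous services through one uniform interface, which a personal cloudlet then aggregates, roughly in the spirit of the MyLife timeline. All class names, service names and event data below are hypothetical, not OPENi's actual API.

```python
class ServiceAdapter:
    """Uniform interface that every service connector implements."""
    def fetch_events(self, user_id):
        raise NotImplementedError

class SocialAdapter(ServiceAdapter):
    def fetch_events(self, user_id):
        # A real adapter would call the social network's proprietary API here.
        return [{"source": "social", "event": "photo_posted"}]

class CalendarAdapter(ServiceAdapter):
    def fetch_events(self, user_id):
        return [{"source": "calendar", "event": "meeting_at_10"}]

class Cloudlet:
    """Aggregates a user's data from many services behind one call."""
    def __init__(self, adapters):
        self.adapters = adapters

    def timeline(self, user_id):
        events = []
        for adapter in self.adapters:
            events.extend(adapter.fetch_events(user_id))
        return events

cloudlet = Cloudlet([SocialAdapter(), CalendarAdapter()])
print(cloudlet.timeline("user-42"))
```

The application never sees the provider-specific APIs; adding a new service means adding one adapter, not changing every application.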
OPENi Impact

OPENi will be an enabler for the Future Internet in the domain of mobile applications across different devices and platforms by:
- developing a new breed of applications that can make use of a broad spectrum of cloud-based functionality and content across devices;
- establishing a user-centric application experience, enabling users to store their personal data and content in the cloud and make it available across applications;
- promoting a novel mechanism for security and privacy, safeguarding the access and use of cloud-based functionality as well as the security of user information stored in the cloud;
- acting as a catalyst for the creation and deployment of novel, innovative applications through the provision of a set of breakthrough technologies with a low take-up barrier, while protecting existing business models;
- developing a European centre of excellence around the project, in which European companies can convene and collectively drive innovation in the Future Internet.

OSSMETER

OSSMETER will extend the state of the art in automated analysis and measurement of open source software, and will develop a platform that supports decision makers in the process of discovering, comparing, assessing and monitoring the health, quality, impact and evolution of open source software.

AT A GLANCE
Project title: Automated Measurement and Analysis of Open Source Software
Project reference:
Project coordinator: Scott Hansen, The Open Group, UNITED KINGDOM
Partners: University of York (UK), Centrum Wiskunde & Informatica (NL), University of L'Aquila (IT), University of Manchester (UK), Tecnalia (ES), Softeam (FR), UNINOVA (PT), Unparallel Innovation (PT)
Duration: 30 months
Total cost: 3,40M
Website:

Key Challenges

Deciding whether open source software (OSS) meets the required standards for adoption in terms of quality, maturity, ongoing development activity and user support is not a straightforward process. It involves exploring various sources of information, including source code repositories, to identify how actively the code is developed, how well the code is commented and the level of testing, but also supporting elements such as communications through newsgroups, forums and mailing lists, to identify whether questions are answered in a timely manner, the number of experts and users of the software, how many bugs are open and how fast they are fixed, and much more. It becomes even more difficult to discover and compare several OSS projects that offer similar functionality, and to make an evidence-based decision on which should be selected. Even when a decision has been made for the adoption of a particular OSS product, decision makers need to be able to monitor whether the OSS project continues to be healthy, actively developed and adequately supported, in order to identify and mitigate in a timely manner any risks emerging from a decline in OSS quality indicators.
Technical Approach

Previous work in the field of OSS analysis and measurement has mainly concentrated on analysing the source code of OSS projects to calculate quality indicators and metrics. OSSMETER aims to extend the scope and effectiveness of OSS analysis with novel advances in language-agnostic and

language-specific methods for code analysis, while also introducing state-of-the-art Natural Language Processing (NLP) and text mining techniques, such as question/answer extraction, sentiment analysis and thread clustering, to analyse and integrate relevant information extracted from the surrounding communication channels (newsgroups, forums, mailing lists) and bug tracking systems supporting OSS projects. These additional elements will provide a more comprehensive assessment of the quality of OSS projects and facilitate better evidence-based decision making and monitoring. OSSMETER also aims at providing metamodels for capturing the meta-information relevant to OSS projects (e.g. types and details of source code repositories, communication channels and bug tracking systems, types of licences, number of downloads, etc.) and effective quality indicators, in a rigorous and consistent manner that enables direct comparison between OSS projects. These will be integrated into an extensible cloud-based platform enabling users to discover and compare OSS projects, which can also be extended to support quality analysis and monitoring of in-house software development projects.

Expected Impact

Identifying and reusing high-quality OSS instead of implementing in-house solutions with similar functionality enables industries to concentrate on delivering innovative features. OSSMETER advances will allow adopters of OSS to make more informed and confident decisions about the OSS software they build upon. By automatically classifying OSS projects, OSSMETER will enable adopters to discover new cutting-edge OSS projects related to their area of interest.
The project is targeting a 30% reduction in the effort needed to assess the quality of the source code of an OSS project through new analysis facilities, a 90% reduction in the effort needed to assess the quality of OSS support provided through the surrounding communication channels, and a 90% reduction in the effort needed to discover and compare multiple OSS projects. The OSSMETER platform can also be used to monitor in-house software development projects and to provide key indicators for source code development activity and quality, the quality of communications with users, and the performance of the development and testing teams in identifying and repairing software defects. These capabilities are expected to provide substantial benefits in cost savings as well as to facilitate greater innovation in European software development through increased assurance and exploitation of OSS products by industry.
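To make the notion of a comparable quality indicator concrete, the sketch below blends a few raw project measurements into a single score that can rank candidate projects. The metrics, weights and the sample data are invented for illustration; they are not OSSMETER's actual indicators or metamodels.

```python
def health_score(project):
    """Blend activity, support responsiveness and bug handling into [0, 1]."""
    activity = min(project["commits_last_month"] / 100, 1.0)
    support = project["questions_answered"] / max(project["questions_asked"], 1)
    bugs = min(project["bugs_fixed"] / max(project["bugs_opened"], 1), 1.0)
    # Equal weighting is an arbitrary choice made for this example.
    return round((activity + support + bugs) / 3, 2)

# Invented data for two hypothetical projects offering similar functionality.
projects = {
    "libalpha": {"commits_last_month": 80, "questions_asked": 50,
                 "questions_answered": 45, "bugs_opened": 20, "bugs_fixed": 18},
    "libbeta": {"commits_last_month": 5, "questions_asked": 40,
                "questions_answered": 10, "bugs_opened": 30, "bugs_fixed": 6},
}

ranked = sorted(projects, key=lambda p: health_score(projects[p]), reverse=True)
print(ranked)  # → ['libalpha', 'libbeta']
```

In the platform the raw numbers would come from automated mining of repositories, forums and bug trackers rather than being entered by hand, and the indicators would be defined against explicit metamodels so scores stay comparable across projects.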

PaaSage

PaaSage delivers an open and integrated platform to support the model-based lifecycle management of Cloud applications. The platform and the accompanying methodology allow model-based development, configuration, optimisation and deployment of existing and new applications, independently of the existing Cloud infrastructures.

AT A GLANCE
Project title: PaaSage: Model-based Cloud Platform Upperware
Project reference:
Project coordinator: Pierre Guisset, GEIE ERCIM, FRANCE
Partners: SINTEF (NO), Institut National des Sciences Appliquées de Rennes (FR), Univ. des Sciences et Technologies de Lille (FR), Science and Technology Facilities Council (UK), HLRS, University of Stuttgart (DE), INRIA (FR), CETIC (BE), FORTH (GR), be.wan (BE), EVRY (NO), RunMyProcess (FR), Sysfera (FR), Flexiant (UK), Lufthansa Systems AG (DE), GWDG (DE), Automotive Simulation Center Stuttgart e.V. (DE)
Duration: 48 months
Total cost: 8,40M
Website: (not yet activated)

Context

Cloud computing is revolutionising the Information Technology industry through its support for utility computing without the need for large capital outlays in hardware or human operation. Several open source and commercial offerings currently exist at the Infrastructure as a Service (IaaS) level. Software developers targeting the Cloud want an easy way to develop their software in a fashion that exploits the full potential of the Cloud while still being able to run on any of the available offerings. An impediment to this objective is that IaaS Cloud platforms are heterogeneous, and the services and Application Programmer Interfaces (APIs) that they provide are not standardised. These platforms even tend to impose a specific architecture on deployed applications. Accordingly, there is a significant dependency between client applications and the services provided by the platform, which is not well specified or appropriately communicated to the user.
Knowledge of the best-suited platform for a given application is therefore hard and costly to obtain, and the actual benefit of migration to these platforms is not directly known. It is generally up to developers to specify and exploit these characteristics to the best of their knowledge. This, however, is the crux of the problem: the typical developer knows neither how to use these characteristics, nor how they affect the overall behaviour, nor how they relate to a given Cloud infrastructure. Compounding this, most infrastructures do not even offer support for exploiting these characteristics, such as location control, specification of scale-out behaviour and so forth.

Approach

PaaSage addresses this complexity in an Integrated Development Environment (IDE) combining the abstraction power of a modelling language with mechanisms for profiling, reasoning, monitoring, metadata collection, and adaptation at run-time. This IDE will explicitly allow the developer to express all necessary characteristics and requirements for scalability, quality of service, etc. at design time. PaaSage will analyse these specifications and match them against platform characteristics, using captured knowledge to give cost and benefit feedback at development time and to make deployment recommendations. This extended model will be coupled with historical metadata to enable automated real-time intelligent reasoning, providing a feasible dynamic mapping of these models to the platform(s) selected for the application instantiation.

The detailed objectives are:
1. To contribute to the design and standardisation of an open, powerful and expressive modelling language for Cloud-independent modelling of enterprise systems with the desired preferences and constraints, focusing on architectural styles and characteristics of the Cloud computing paradigm.
2. To provide an intelligent Integrated Development Environment supporting the modelling language and supporting the developer in the task of optimising the application.
3. To provide mappers and engines that allow a modelled Cloud application to be deployed and executed in a distributed environment across multiple heterogeneous Cloud providers. The execution will thereby observe the specified execution characteristics and adjust itself accordingly at run-time.
4. To define metadata relevant for Cloud services, and to provide mechanisms to acquire the metadata and performance indicators from running applications and to reuse the historical metadata available on the services in application design and deployment.

Impact

The PaaSage development environment supports the developer throughout the full project lifecycle.
PaaSage will:
- reduce the cost of migration and implementation;
- enable full exploitation of the Cloud's potential, thus reducing expenditure;
- avoid user lock-in with one provider;
- allow transparent use of heterogeneous infrastructures;
- increase expertise regarding cloud use cases and the best approaches to them;
- simplify the management of services and infrastructures through dynamic, automated adaptation;
- enable integrated improvement of service compositions and deployment through the learning of behavioural characteristics.
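The matching of design-time requirements against platform characteristics described in the Approach section can be sketched as a simple feasibility filter followed by a cost ranking. The requirement fields, platform names and figures below are invented for illustration and are not PaaSage's actual modelling language.

```python
# Hypothetical design-time requirements expressed in a provider-independent way.
APP_REQUIREMENTS = {"min_nodes": 8, "needs_eu_location": True, "max_hourly_cost": 5.0}

# Invented characteristics of candidate Cloud platforms.
PLATFORMS = [
    {"name": "cloud-x", "max_nodes": 16, "eu_location": True, "hourly_cost": 4.0},
    {"name": "cloud-y", "max_nodes": 32, "eu_location": False, "hourly_cost": 2.5},
    {"name": "cloud-z", "max_nodes": 8, "eu_location": True, "hourly_cost": 6.0},
]

def feasible(req, platform):
    """Does the platform satisfy every hard requirement of the application?"""
    return (platform["max_nodes"] >= req["min_nodes"]
            and (platform["eu_location"] or not req["needs_eu_location"])
            and platform["hourly_cost"] <= req["max_hourly_cost"])

def recommend(req, platforms):
    """Return the names of feasible platforms, cheapest first."""
    ok = [p for p in platforms if feasible(req, p)]
    return [p["name"] for p in sorted(ok, key=lambda p: p["hourly_cost"])]

print(recommend(APP_REQUIREMENTS, PLATFORMS))  # → ['cloud-x']
```

In PaaSage this step would additionally draw on historical metadata from running applications, so the recommendation improves as the system learns behavioural characteristics.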

PROSE

PROSE will contribute to the adoption of open source software in ICT projects by increasing the lifetime of the software developed inside European projects, and thus maximising project impact, through the creation of a coordination platform for software projects as well as through dissemination and training events on Open Source.

AT A GLANCE
Project title: Promoting Open Source in European Projects
Project reference:
Project coordinator: Alfredo Matos, Caixa Mágica Software, PORTUGAL
Partners: Instituto de Telecomunicações (PT), Waterford Institute of Technology (IE), MFG Innovation Agency (DE), Edwards, Wildman and Palmer (UK), Geeknet Media (UK)
Duration: 24 months
Total cost: 0,53M
Website:

Open Source in ICT

Free, Libre and Open Source Software (FLOSS) is creating new R&D opportunities across Europe, particularly in an ICT context where projects require an adequate cooperation environment. Despite the numerous advantages of FLOSS as a cooperation model, most developed software faces a limited life cycle and applicability. The absence of a common platform where projects can publish their software, with reliable, trustworthy usage terms and conditions, creates adoption difficulties. As a result, software is reinvented within the ICT community, leading to duplicated effort and missed R&D opportunities. Adopting FLOSS paradigms will extend the lifetime and sustainability of ICT software, increasing contributions to and from the community.

Empowering FLOSS in ICT

To achieve this vision it is necessary to promote FLOSS for ICT projects by removing existing obstacles, especially legal and business barriers. This implies creating a feedback loop for FLOSS usage in ICT projects that encompasses the EC and enables it to assess the true benefits of a FLOSS-driven model, enabling the long-term exploitation of FLOSS through a sustainable model for ICT contributions. PROSE defines a threefold approach to FLOSS in European projects, aiming to:

a) Create and manage a platform for FLOSS software project management
b) Develop a training programme on the legal and business aspects pertaining to FLOSS adoption
c) Carry out a dissemination programme to promote the adoption of a FLOSS-driven model in ICT projects

Platform for FLOSS
PROSE will provide a platform for hosting and supporting (ICT) FLOSS software that allows creating and managing software repositories. Beyond supporting the development process, the infrastructure defines a common location for ICT software. The platform will also include community and project management tools, as well as access to the methodology, business and legal information required for FLOSS adoption.

Training and Support
To support the FLOSS adoption process, PROSE provides training documentation for the software platform, and also for the legal and business aspects that relate to FLOSS use. The platform information describes FLOSS development methodologies, allowing ICT projects to take advantage of the tools. Complementarily, it is necessary to provide business and legal information enabling successful FLOSS exploitation models, taking into account topics such as licensing and Intellectual Property Rights.

Promotion and Dissemination
The PROSE dissemination strategy aims to make the hosting platform reach a wider ICT audience through platform promotion, as well as to convey the platform, legal and business information through training and educational events. These efforts translate into generalised open source promotion as a means of raising awareness of the advantages of FLOSS in ICT, achieved through the organisation of several events, such as the European FLOSS Workshops, as well as participation in different scientific and ICT events.

Impact
PROSE expects to lower the barriers to FLOSS adoption in ICT projects.
This approach will increase the sustainability of FLOSS developed within ICT, allowing software to exist beyond the projects' lifetime, increasing the reach of ongoing efforts beyond their current boundaries and maximising the R&D opportunities in the European space.

Figure 1. Structure through Coordination: the PROSE approach to FLOSS in ICT.

PROWESS

This project will develop advanced software engineering approaches to improve quality assurance: the challenge is to reduce time spent on testing whilst increasing software quality, in order to quickly launch new web services and internet applications, or enhancements of existing ones. We will develop property-based testing for web services and internet applications in order to achieve a real improvement in testing efficiency.

AT A GLANCE
Project title: Property-based testing of web services
Project reference:
Project coordinator: John Derrick, Department of Computer Science, University of Sheffield, UNITED KINGDOM
Partners: University of Kent (UK); CHALMERS (SE); Universidad Politecnica de Madrid (ES); Universidade da Coruna (ES); QUVIQ (SE); Erlang Solutions Limited (UK); Interoud Innovation (ES); SP Sveriges Tekniska Forskningsinstitut AB (SE)
Duration: 36 months
Total cost: 4,42M
Website: (not yet activated)

Testing for web services
Traditional software development processes fit the development of web services and internet applications rather poorly. The speed with which new services are launched requires agile development strategies, small increments and continuous integration. Advanced software engineering techniques, new processes and development tools have emerged and are extensively used by a new generation of software developers, but software testing has lagged behind.

Property-based testing
Property-based testing (PBT) will deliver more effective tests, more efficiently, and thus deliver economic benefits to the European software industry. Testing with properties as objects improves the competitiveness of software developers, as they can deliver higher-quality software for a lower price. It allows collaborating companies to improve the definition of their software interfaces and so improve the compatibility between their services.
Project goals
PROWESS will advance property-based testing into the domain of web services and other open, evolving systems.

Scaling the PBT approach. Work on PBT has concentrated on testing individual components and a subset of the functionality of these systems. PROWESS will develop methods to compose models into a single system model, and techniques that use the same property models to mock components of the system.

PBT for web services. Creating properties and writing models is more abstract than writing traditional test cases; PROWESS will develop general techniques and methods that can be applied by a wide range of developers in web services and the user-interface layer. In addition, the project will develop techniques to extract properties from unit tests for web services. Reading properties can be a challenge for newcomers, so the project will develop graphical and natural-language renderings of properties to allow users to understand them more easily.

Dealing with multiplicity and evolution. In the open environment of the World Wide Web, there will be multiple service providers, and these service provisions - both in terms of specifications and implementations - will evolve. This multiplicity presents the integrators of these services with a set of challenges about how to choose between different providers, and how these choices work in an evolving environment. PROWESS will develop mechanisms and tools to support effective decision-making in situations where there are competing implementations of requirements. The project will provide users with models that describe the difference between implementations, or between an implementation and the requirements upon it, based on PROWESS's models and toolsets.

PBT of non-functional requirements. Internet services, like any other software component, need to fulfil a number of non-functional requirements on top of their functional requirements. In particular for evolving, scalable systems, it is important that performance requirements can still be met when the popularity of the service dramatically increases.
PROWESS will apply property-based testing tools and techniques to test non-functional aspects of systems such as performance and dependability, including fault tolerance, safety and availability. These measurements will not only be absolute, but also relative to competing implementations. The project will also develop a cloud testing framework to enable realistic testing of non-functional requirements, including stress testing and testing the scalability of web-service deployments.

Quality assurance for property-based testing. PROWESS determines mechanisms to estimate the quality of PBT. First, with properties that span a large number of requirements and tests randomly generated from these properties, requirement traceability needs to be addressed in a novel way: the project's results will answer the question of whether a certain requirement is covered by the generated test cases. Secondly, new coverage metrics will record the different states visited in a model during testing, thus providing a measure of the total coverage of the model state. Thirdly, the project will assess the quality of models and properties using the analogue of mutation testing.
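The core PBT loop that these techniques build on can be sketched as follows. This is a minimal, generic illustration in Python, not PROWESS's actual tooling (the consortium works with QuickCheck-style frameworks such as QUVIQ's QuickCheck); all function names here are invented for the example:

```python
import json
import random

def check_property(prop, generator, runs=100):
    """Run a property against many randomly generated inputs.

    Returns the first failing input (a counterexample), or None if
    every generated input satisfies the property.
    """
    for _ in range(runs):
        value = generator()
        if not prop(value):
            return value  # real PBT tools would also shrink this counterexample
    return None

def random_payload():
    """Generate a small random dictionary standing in for a service payload."""
    return {f"k{i}": random.randint(-1000, 1000) for i in range(random.randint(0, 5))}

def round_trip_holds(payload):
    """A typical round-trip property: serialising then parsing is the identity."""
    return json.loads(json.dumps(payload)) == payload

assert check_property(round_trip_holds, random_payload) is None
```

The property states a general law over all inputs rather than listing individual test cases, which is what lets a single property replace many hand-written unit tests.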

RISCOSS

RISCOSS will offer novel risk identification, management and mitigation tools and methods for community-based and industry-supported Open Source Software (OSS) development, composition and life-cycle management, to individually, collectively and collaboratively manage OSS adoption risks.

AT A GLANCE
Project title: Managing Risk and Costs in Open Source Software Adoption
Project reference:
Project coordinator: Xavier Franch, Universitat Politècnica de Catalunya, SPAIN
Partners: Ericsson Telecomunicazioni (IT); Fondazione Bruno Kessler (IT); Universiteit Maastricht (NL); Centro Nacional de Referencia de Aplicación de las Tecnologías de la Información y la Comunicación (ES); XWiki SAS (FR); OW2 Consortium (FR); KPA Ltd. (IL)
Duration: 36 months
Total cost: 4,43M
Website:

Software goes Open Source
Open Source Software has become a strategic asset for a number of reasons, such as its short time-to-market for software service and product delivery, reduced development and maintenance costs, and its customisation capabilities. OSS technologies are currently embedded in almost all commercial software. In spite of the increasing strategic importance of OSS technologies, IT companies and organisations face numerous difficulties and challenges when making the strategic move to integrate the open source way of working into their processes. This can lead to the perception of extra risk with respect to traditional approaches to software development and provisioning. Such risks (e.g. evaluation, integration, context, process, quality and evolution risks) are not to be neglected, since incorrect decisions may lead to expensive failures. Indeed, insufficient risk management has recently been reported as one of the five topmost mistakes to avoid when implementing OSS-based solutions. With proper risk management and mitigation, failures could be reduced and costs minimised.
To maximise the benefit from OSS adoption, the understanding and management of all risks becomes necessary, since they directly impact business, with strong effects on time-to-market and revenue, and therefore on customer satisfaction and brand image.

Decisional level: risk assessment, decision-making, strategic ecosystem. Technological level: code/architecture analysis, information mining, OSS ecosystem.

Strategic OSS ecosystems
Like any other information system, OSS ecosystems are not developed, and do not exist, in isolation. Instead, they exist in the wider context of an organisation and of various OSS communities, including groups of projects that are developed and co-evolve within the same environment, and further beyond, their context (the organisation itself, OSS communities, regulatory bodies, etc.), forming a wider and more strategic ecosystem. A typical OSS ecosystem may include several products in a product family, with several versions active in each. Moreover, these versions are typically adapted to build personalised releases that meet the needs of different customers. Each single product release version contains a long list of third-party products, many of them OSS components, potentially differing from each other in version, patch level, etc. Above this technological view, several strategic questions emerge, e.g.:
- How to view an ecosystem in order to collect relevant information for evolution management?
- How to ensure that specific features of OSS do not harm business strategies and their underlying business models?
- How to implement a systematic approach to understanding and representing dependencies that involve OSS components, for assessing all kinds of risk?
The answers to these questions require a clear understanding of OSS ecosystems from a strategic perspective, with clear identification of the relevant strategic dependencies, in order to control and mitigate all the risks arising from the adoption of OSS components throughout the lifetime of the different products and components that are part of the OSS ecosystems.
RISCOSS use cases
One of the key features of the RISCOSS project is the inclusion of very different use cases led by the project's partners:
- OSS risk management programme in a large IT department.
- Risk assessment in public administration OSS projects.
- Software Quality Assurance and Trustworthiness (SQuAT) programme in a large OSS community.
- Assessing the development practices of an OSS tool in an SME.
- Evolution of the platform undertaken in a small OSS community.

RISCOSS impact
- Organisational impact: clear definition of the roles, tasks, documents, etc. implied in business models and business processes around OSS-based development and distribution.
- Methodological impact: definition of guidelines, methods and strategies to manage the risks and costs in OSS adoption.
- Technological impact: deployment of a platform to enable the information flow from OSS communities to a company ecosystem, and then further to support the management of this ecosystem with the OSS components therein.
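To make the idea of mining OSS community information for risk assessment concrete, here is a purely hypothetical sketch, not RISCOSS's actual risk model: a few community health indicators are combined into a single adoption-risk score. The indicator choice, formulas and weighting are all invented for illustration:

```python
def adoption_risk(monthly_commits, active_committers, open_to_closed_bug_ratio):
    """Combine community health indicators into a risk score in [0, 1].

    Higher means riskier to adopt. Every formula below is illustrative only.
    """
    # Stagnant projects (few commits) are riskier to depend on.
    activity_risk = 1.0 / (1.0 + monthly_commits / 50.0)
    # Projects carried by a single maintainer are riskier (low "bus factor").
    bus_factor_risk = 1.0 / (1.0 + max(active_committers - 1, 0))
    # A growing backlog of unresolved bugs signals quality risk.
    quality_risk = min(1.0, open_to_closed_bug_ratio)
    return (activity_risk + bus_factor_risk + quality_risk) / 3.0

healthy = adoption_risk(monthly_commits=200, active_committers=12, open_to_closed_bug_ratio=0.1)
stagnant = adoption_risk(monthly_commits=2, active_committers=1, open_to_closed_bug_ratio=0.8)
assert healthy < stagnant
```

A real platform would feed such indicators from repository and bug-tracker mining and tie the score to the strategic dependencies described above, rather than hard-coding thresholds.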

SUCRE

The SUCRE project is driven by a key objective: the consolidation of the European Cloud Computing and Open Source communities by creating a critical mass of stakeholders who will work together on promoting the use of Open Source in Cloud Computing.

AT A GLANCE
Project title: Supporting Cloud Research Exploitation
Project reference:
Project coordinator: Alex Delis, University of Athens, GREECE
Partners: Instytut Chemii Bioorganicznej PAN (PL); SingularLogic S.A. (GR); The University of Manchester (UK); MFG Medien- und Filmgesellschaft Baden-Württemberg mbH (DE); Zephyr s.r.l. (IT); Nippon Telegraph and Telephone Corporation (JP)
Duration: 24 months
Total cost: 0,65M
Website:

Where do we stand?
The existing gap in addressing interoperability problems is partly the cause of the slow adoption of open-source cloud solutions by businesses and public institutions. This is despite the high quality of several open-source-based projects, their overall encouraging development and their potential. Furthermore, the lack of interoperability and standards among applications, services and devices results in fragmentation of technologies, services and markets, and thus in reduced competitiveness and growth for the European economy.

SUCRE approach and objectives
In order to have users provide input and voice their concerns and needs, SUCRE will implement two specific use cases: Open Clouds for Public Sector applications and Open Clouds for the Health Care Provisioning Industry. These application areas have a direct and significant impact in real life and, in general, present major challenges. Particular attention will be given to the interaction between academia and industry, as well as among industry players.
This is reflected in the formation of the SUCRE EU-Japan Experts Group, consisting of stakeholders from both regions, as well as in our effort to bring together the researchers of tomorrow with industry experts and researchers from the areas of the Internet of Services, Clouds and Open Source. Through its 5-step integrated approach, SUCRE will set up an all-embracing yet focused support mechanism available to all EC-funded projects in the areas of Cloud Computing and Open Source, consisting of targeted workshops, the operation of the EU-Japan Experts Group, a Young Researchers Forum, publications, videos, and a wide range of additional promotional activities.

SUCRE intends to provide support for standardisation and collaboration in software and services technologies in the following ways:
- By acting as a focal point for all parties pursuing these ideas and providing a forum for fostering collaboration and exposing ideas that can lead to standardisation and improved collaboration.
- Through the organisation of targeted SUCRE events as planned, as well as the exploitation of the resulting links.
- Through the drafting of recommendation reports and incorporated checklists.

The concrete SUCRE products will be:
- 4 issues of the Cloud and Open Source Magazine
- 2 recommendation reports
- 2 videos
- 1 Young Researchers Forum
- 2 targeted workshops
- 1 EU-Japan workshop and 1 final event

Impact
The above activities are expected to have a significant impact on various areas, through the following SUCRE high-level outcomes:
- Explore and highlight issues at the intersection of cloud computing and open-source development models, and the effect these two areas have on key segments of the European economies.
- Identify the reasons why open source cloud solutions have not yet been widely adopted.
- Support the uptake of open source cloud models.
- Compare and suggest best practices among European countries for the adoption of open cloud solutions.
- Reinforce the adoption of open source cloud solutions by key stakeholders.
- Initiate and nurture an international dialogue on the hot topics of interoperability and data portability, involving experts from both Europe and Japan.
- Explore opportunities for joint work with Japan on interoperable clouds and related solutions.
- Establish and maintain a highly innovative community consisting of players from both the commercial and scientific sectors.
- Stimulate the interest of young people across Europe in the importance and societal impact of Open Source Clouds, and encourage their professional involvement and commitment in the area.

U-QASAR

The main objective of U-QASAR is to create a flexible Quality Assurance, Control and Measurement Methodology to measure the quality of Internet-related software development projects and their resulting products. The methodology will be supported by an Internet solution composed of several knowledge services based on open standards that will be able to detect changes in the scope and requirements of an Internet application (or changes in its development process) and provide the adequate set of assessments to deliver an accurate measurement of the quality of the process and product at any time. The U-QASAR methodology and platform will be validated and assessed in real business cases.

AT A GLANCE
Project title: Universal Quality Assurance & Control Services for Internet Applications with Volatile Requirements and Contexts
Project reference:
Project coordinator: Fernando Ubieta, INNOPOLE SL, SPAIN
Partners: Métodos y Tecnología de Sistemas y Procesos S.L. (ES); Institut für angewandte Systemtechnik Bremen GmbH (DE); SINTEF (NO); Aalto University (FI); CONTACT Software GmbH (DE); INTRASOFT International S.A. (LU); Vaibmu Oy (FI)
Duration: 36 months
Total cost: 4,01M
Website:

Context of U-QASAR
The U-QASAR methodology and platform will provide objective quality measures of Internet-related software development projects and their resulting products. The methodology and platform will be used by software engineers, designers, developers, testers and managers alike, for different purposes. Software engineers, developers and testers will be able to rapidly correct inadequate trends in design, development and testing; project managers will be able to schedule deliveries with the agreed level of quality; and IT directors will be able to forecast the cost and quality of future projects.
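The kind of aggregated quality measurement such a platform reports could be sketched as a weighted index over normalised indicators. This is purely illustrative and not U-QASAR's actual metric model; the metric names and weights below are invented:

```python
def quality_index(indicators, weights):
    """Weighted average of quality indicators, each normalised to [0, 1].

    `indicators` and `weights` are dicts keyed by metric name; weights
    need not sum to 1, as they are normalised here.
    """
    total_weight = sum(weights[name] for name in indicators)
    weighted_sum = sum(indicators[name] * weights[name] for name in indicators)
    return weighted_sum / total_weight

# Illustrative indicators a project dashboard might collect automatically.
indicators = {"test_coverage": 0.78, "requirements_traced": 0.90, "build_success_rate": 0.95}
weights = {"test_coverage": 2.0, "requirements_traced": 1.0, "build_success_rate": 1.0}

score = quality_index(indicators, weights)
assert 0.0 <= score <= 1.0  # score is 0.8525 for these inputs
```

Recomputing such an index continuously as indicators change is what would let the different roles above (developers, project managers, IT directors) track quality trends rather than point-in-time snapshots.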
This will introduce a high level of automation in the Software Quality Management (SQM) process, avoiding the problems of data gathering and analysis found in traditional measurement and SQM processes, which are:
- Difficulty in demonstrating the completeness, accuracy and integrity of the data used.
- A presumed lack of objectivity.
- Lack of stability in the processes.

U-QASAR Solution
U-QASAR proposes the creation of a methodology and a collaborative framework
