icore Internet Connected Objects for Reconfigurable Ecosystem

Size: px
Start display at page:

Download "icore Internet Connected Objects for Reconfigurable Ecosystem"

Transcription

1 icore Internet Connected Objects for Reconfigurable Ecosystem Grant Agreement Nº D2.3 Architecture Reference Model WP2 Cognitive Management and Control Framework for IoT Version: Due Delivery Nature: Dissemination Level: 2.0 M16 30 June 2013 Report PU Lead partner: Authors: Internal reviewers: Telecom Italia Roberto Minerva (Editor) + see contributors table TCS, VTT, SIE, ALUB, CRF, ATOS, AMBIENT The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/ ] under grant agreement n Grant Agreement number: Page 1 of 136

2 Abstract The deliverable defines the icore architecture in terms of its principles, guidelines and identified components. The architectural definition fits business and technological requirements as put forward by Stakeholders and considers the requisites stemming from a wide set of use cases. The document is structured as follows: Section 1 describes some high level requirements that have provided the general motivation of the icore architecture definition. Section 2 describes the requirements of the architecture and provides initial general principles of it. Section 3 describes the functional architecture of icore, its building blocks and the relationships between them. Section 4 discusses the supporting technologies that can be integrated in a consistent way in order to provide a sound technology architecture that supports the execution and the functioning of icore systems. Finally section 5 emphasizes how the icore definition meets the requirements put forwards in Section 2. Grant Agreement number: Page 2 of 136

3 Executive Summary The icore Architecture defines a set of basic principles, building blocks, functional entities and guidelines for building icore compliant systems. The definition of these architectural principles (Section 2) have been carried out in several ways: a) requirements of different stakeholders have been considered in order to determine a set of business related guidelines that have led to several technical requirements [2][3]; b) technical requirements stemming from a top down approach have been considered; and c) requirements derived from a bottom up approach (i.e., by the definition and specification of a set of Use cases) have been taken on board. This has led to a functional architecture (described in Sections 2 and 3) capable of representing Real World Objects and their virtualization within the icore architecture as well as the support of application developers aiming at creating new compelling applications that exploit the icore Architecture capabilities. Central to the design of icore functional architecture definition there are a few basic principles: Identification and roles of stakeholders. Actually the icore architecture currently already distinguishes seven actors in the functional architecture and the respective business roles. They are used in order to fully describe the functioning of an icore system with respect to business, functional and operational needs. Virtualization and composition of Objects. Real World Objects can be represented as Virtual Objects, VOs, in icore. They allow a representation of the real world objects in a functionally enriching environment. Simple Virtual Objects can be aggregated and merged in order to create new Virtual Composite Objects, called CVOs, that extend and generalize the real world object functionalities and features, often in a service execution context, according to service requirements. Segmentation and Aggregation of functions. Objects are framed in a set of levels, each level is aggregating functionalities offered by a certain type of icore entities: the VO level collects all the functionalities provided by the Virtual Objects as well as the entities and the functionalities of the icore system needed to support and to use the VOs. On top of the VO level, icore frames the CVO related functionalities and how they can be related to each other and with entities of other levels (namely the VOs and the Service Enabling Functions). In the upper most level, the icore architecture considers all the service enabling functions and their relations to applications and CVOs. At each level, the icore architecture envisages an increasing number of functionalities and systematic entities used to support the architecture. Functional and Systematic view of objects. Objects and building blocks of icore are designed in order to offer specific functions to be used to support services. Any single object will provide interfaces and functions that support the integration of the object into the system (systematic view). Each icore component will support these two facets, functional and systematic, that will allow to support end user services and to properly manage the object within an icore context. Cognition. Cognition is another cornerstone of the icore architecture, within icore it is structured as a cognitive cycle in which knowledge is derived from observing the external environment (i.e. 
real world events/data) which is continually evolving and decisions/actions are inferring its future behavior based on three criteria, namely: (i) the knowledge created, (ii) other goals and also (iii) policies, so as to optimize the performance. Central to the icore architectural definition is the concept of situation awareness: an icore system will not be exclusively concerned with individual pieces of sensor data, but it has the ability to build on events (i.e. low level situations) that are detected and interpreted from real world sensor data, into a higher domain relevant representation. This representation should encapsulate and describe Grant Agreement number: Page 3 of 136

4 the current abstract state of affairs concerning the monitored environment, which is relevant to the application in question, i.e., the situational awareness. Another major concern of the icore Architecture is security. The icore security framework is based on the concept that access to the VO/CVOs must be regulated through a sticky policy management approach, i.e., the policy applicable to a piece of data (e.g. originating from a VO) travels with the data and is enforceable at every point it is used. In icore, the security framework combines the concept of sticky policies with the concept of VO/CVO, which are created and managed with their associated policies and access rights. Users will therefore be able to declare privacy statements that define when, how and to what extent their personal information (also accessed through a VO) can be disclosed. The application of sticky policies allows the VO/CVOs to be distributed across different domains in several operational scenarios while preserving the information accessible through them. The concept is that the VO/CVOs are encrypted and signed in one domain and not accessible by unauthorized users, even if they are distributed across untrusted networks and domains. In order to implement an icore system in with compliance to the functional architecture definition, there is the need to define a technology architecture, i.e., a combination of technology infrastructure products and components that support the functioning of an icore system. The technology architecture of icore is a collection of technology software components that provide the services used to support the specific set of functions defined according to the architectural guidelines. This includes IT infrastructure, middleware, networks, communications, processing, and standards. The technology architecture is derived by the needs and requirements exposed by several use cases (see Chapter 4). Eventually, the merits and the value of the icore architecture are evaluated with respect to different stakeholders (Chapter 5) that have actively participated to the elicitation of requirements. icore is an open architecture and as such will allow to Service Providers to offer a broad portfolio of services, to marketplace providers to offer applications within a sort of app store. Even Knowledge Providers a new role enabled by the architecture, the architecture can expose and leverage their capabilities within an icore system, in fact knowledge representation is a highly valuable feature that can be integrated in icore systems in order to support applications and services in (either broadly or narrowly) specialized problem domains. The role of Platform Provider is also envisioned, i.e. a provider able to instantiate a proper icore system (or several segmented systems) in order to support specific requirement of service providers. The merits of the icore architecture are also evaluated with respect to possible integration and contribution to other architectures such as IOT-A, OGC Sensor Web Enablement and the ETSI s M2M. Grant Agreement number: Page 4 of 136

5 Contributors First Name Last Name Affiliation Gianmarco Baldini JRC, EC Thomas Bartzsch Innotec 21 Panagiotis Demestichas UPRC Corentin Dupont Create-Net Matti Etelapera VTT Darminder Ghataoura UniS Raffaele Giaffreda Create-Net Pasi Hyttinen VTT Dimitris Kelaidonis UPRC Maarten Los ATOS Stephane Menoret Thales Roberto Minerva Telecom Italia Cosmin-Septimiu Nechifor Siemens Gerard Nguengang Thales Toon Norp TNO Venkatesha Prasad TUDelft Daniele Presta CRF (Fiat Group) Abdur Rahim Create-Net Marc Roelands Alcatel-Lucent B [email protected] Chayan Sarkar TUDelft [email protected] Lucian Sasu Siemens [email protected] Vera Stavroulaki UPRC [email protected] Paul Tilanus TNO [email protected] Filippo Visintainer CRF(Fiat Group) [email protected] Panagiotis Vlacheas UPRC [email protected] Grant Agreement number: Page 5 of 136

6 Table of Acronyms API Acronym Application Programming Interface Meaning CONOPS CONUSE CVO DoW Concept Of Operations Concept Of Use Composite Virtual Object Description of Work F, see also NF Functional FI FP7 GSM ICT IoT IoT-A NF, see also F RWO SA SL SLA Telco UMTS VO WP w.r.t API CVO RWK SK OGC SWE Future Internet Seventh Framework Programme Global System for Mobile Communications Information and Communications Technologies Internet of Things Internet of Things Architecture Non-Functional Real World Object Situational Awareness Service Level Service Level Agreement Tele-Conference Universal Mobile Telecommunications System Virtual Object Work Package With Respect To Application Programming Interface Composite Virtual Object Real World Knowledge System Knowledge Open Geospatial Consortium Sensor Web Enablement (by the Open Geospatial Consortium) Grant Agreement number: Page 6 of 136

7 Table of Contents 1. Introduction Virtualization Large scale systems Cognition Organization of this document Requirements Analysis and Design Principles Requirements engineering process Business Requirements General high Level Requirements from D2.1. and Business Use Cases High Level Architectural Principles The Architectural Vision The icore Functional Architecture General Overview Actors of the icore Architecture Functional Blocks and basic Interfaces among them Consistency of the Functional Architecture with the WP6 work plan High-Level Message Sequence Diagrams VO Naming and Addressing Templates, Data and Metadata Functional Interfaces Cognitive Aspects The icore Technology Architecture Smart Home - Ambient Assisted Living (AAL) Use Case Smart Meeting Use case Car Services Use case Smart Business Use Case Urban Security for smart cities Use Case Expected Benefits of icore Architecture High Level Value of icore Mapping of the icore Architecture to Business Requirements Mapping the Reference Architecture to Technical Requirements The Cognitive advantage Contributions to other Architectures Conclusions and steps forward References Grant Agreement number: Page 7 of 136

8 List of Figures Figure 1 icore specification ecosystem Figure 2 Example of requirement chain Figure 3 Traceability & justification table Figure 4. An initial View on some icore Principles and Entities Figure 5: icore Cube Figure 6: icore Cube Detailed Figure 7: Segmentation of icore Functionalities Figure 8. The Concept of Template Figure 9: A high Level Representation of the icore Architecture Figure 10: Semantic enrichments of VOs Figure 11 Functional Architecture of the Service Level Figure 12 Functional Architecture of the CVO Level Figure 13 Functional Architecture of the VO Level Figure 14: icore Functional Architecture Figure 15: Relationships with WP6 Use Cases Figure 16: VO Template provisioning in the VO Template Repository Figure 17: Service Template provisioning in the Service Template Repository Figure 18: VO Installation in the VO Registry Figure 19: Service Request handling part Figure 20: Service Request handling part Figure 21: Graph data model of the VO Model Figure 22: Virtual Object Graph Data Model part Figure 23: ICT Object & non-ict Object Graph Data Model part Figure 24: VO Function Graph Data Model part Figure 25: VO Template Figure 26: VO Template and VO Installation, Registration and Deployment process Figure 27: CVO Template Figure 28: External Interfaces Figure 29: Interfaces of components involved in VO design phase Figure 30: Interfaces of components involved in VO installation phase Figure 31: Interfaces of components involved in VO execution phase Figure 32: Interfaces of components involved in CVO design phase Figure 33: Interfaces of components involved in CVO instantiation phase Figure 34: Interfaces of components involved in CVO execution phase Figure 35: Interfaces of components involved in Service and Knowledge design phase Figure 36: Interfaces of components involved in Service Request phase Figure 37: Interfaces of components involved in Service execution phase Figure 38: A high-level overview of the cognitive cycle and relation to icore Figure 39: Position of Situation Awareness in icore Figure 40: Access levels in icore framework Figure 41: Description of the cognitive capability with the access model Figure 42: Distribution and access of VO in a distributed domain Figure 43: Functional and Technology Architectures Figure 44: Technologies areas comprised in the icore Technological Platform Figure 45: The Urban Security communication stack Figure 46: Possible Adaptation Mechanisms in icore Figure 47: Mapping of icore Architecture Functions and needed Technologies for Ambient Assisted Living Grant Agreement number: Page 8 of 136

9 Figure 48: icore Technology Platform and Technologies for Ambient Assisted Living Figure 49: Mapping the Smart Meeting Requirements to the icore Technology Architecture Figure 50: icore Architecture and Technologies in the Smart City use case Figure 51: icore Technology Platform and Technologies in the Smart City use case Figure 52: Technology use in icore Figure 53: Technology Architecture for the Logistic Use Case Figure 54: Urban security use case overview Figure 55: Urban security use case ICORE unique technical features overview Figure 56: Urban security use case system deployment overview Figure 57: Urban security use case and Sarah rescue Figure 58: Urban security use case workflow between C4I and Sarah s home Figure 59: Functional architecture model generalisation from standards Figure 60: Urban security use case overall functional architecture Figure 61: Urban security use case sensors network functional architecture Figure 62: Urban security use case sensors network gateway functional architecture Figure 63: Urban security use case domain applications (C2 / C4I) functional architecture Figure 64: Stakeholder roles and related benefits enabled by icore architecture Figure 65: Functional IoT-A alignment Figure 66: The ETSI TC M2M Service Enablement Framework Figure 67: Open Geospatial Consortium Sensor Web Enablement Figure 68: OGC SWE (layered) architecture Figure 70: OGC SWE 52 North implementation overview Grant Agreement number: Page 9 of 136

10 List of Tables Table 1 Requirements criteria Table 2: Components Interfaces Table 3: Mapping of Technical Requirements towards the icore Architecture Table 4: Conceptual mapping of IoT-A and icore Table 5: Two different mappings between icore and ETSI TC M2M service enablement framework Table 6: OGC SWE Encodings and Web Services Grant Agreement number: Page 10 of 136

11 1. Introduction The icore Architecture is defined in order to differentiate its contribution in the field of the Internet of Things with respect to other on-going activities. The major differentiator of icore is: Cognition, i.e., the capability of the icore systems to derive knowledge from the usage context in order to help applications to reach their goal and the ability to intelligently behave and organize the system s own resources. In addition icore combines into a unique architecture other two important features: Virtualization, i.e., the capability to virtualize real world objects and to represent them into a programmable framework Large scale IoT systems, the icore architecture is trying to cope with large systems made out of many thousands of sensors and actuators. The icore contribution is also based on an extensive set of requirements drawn by a constant interaction with stakeholders and experts in the field. This has led to a collection of requirements and desiderata that have driven the definition of the architecture (see Section 2). Consequently a top down approach has been adopted to define and frame some high level architectural principle inspiring the whole solution. On the other side, a set of Use Cases have been defined with a two-fold goal: to elicit new requirements adopting a bottom-up approach and to validate or challenge some of the basic icore architectural assumptions. This is an on-going activity and it will be consolidated in future releases of the architecture. However, some results, especially in the area of the definition and organization of management and control aspects as well as in the definition of data and templates of the icore architecture are visible and described in this deliverable. 1.1 Virtualization A first differentiating objective of icore is introducing Virtualization into the Internet of Things realm. In many ICT areas virtualization is already being widely used and it comes with a lot of disruption as well as new opportunities for creating services and application. In the Network Control area, for instance, the Network Function Virtualization Forum, sponsored by ETSI [1] is using virtualization for promoting higher levels of functional aggregation and organization of the network functions. In the Data Centre area, virtualization is used to support a whole new approach for the organization of software. This has enabled a new kind of offering: the XaaS (everything as a service) paradigm is heavily based on the virtualization techniques. Introducing Virtualization in the Internet of Things context has many merits. For instance it could help to overcome the heterogeneity of many proprietary architectures and systems enabling the possibility to run several proprietary IoT applications into a single platform. Virtualization of sensors and actuators is also important because it allows representing real world objects into the network. This enables the possibility of programming in relation to real world objects in the icore platform and to control, govern, or integrate the virtual instances in a programmable environment. Virtualization is the first step towards the virtual continuum, i.e., the possibility to create a link between any physical object and its representation in the cloud. The virtualized object can extend the functionalities, the features and the capabilities offered by the real one. For instance a real world object, a car, can be virtualized in the cloud and its functions can be controlled and managed in the internet. 
The car functions can be extended by applications and services running in the cloud. But there is more: each real world object can be transformed from a product into a service. The virtualized car can be enriched with control functionalities (e.g., for a trip in the mountains, the driver Grant Agreement number: Page 11 of 136

12 can enable the four wheel drive capability for a couple of days by interacting with the car and paying for that functionality for the required time. Additional power can be bought by means of the virtualized representation of the car for a specific travel, and so on). The car is not anymore a product sold to a client, but a service that can enable and disable premium functionalities depending on the needs of users. This is an example of how virtualization can support a promising business model called Servitization. But there is more: different objects could represent a single real object allowing for sharing of its functionalities in different virtual environments. The same physical sensor can be virtualized and confined into different contexts of usage. Valuable resources can be virtualized and shared in such a way to increase their usage and to help in limiting the wide deployment of similar sensors by different providers. On the other side valuable functions can be derived by virtualizing the real sensor capabilities: e.g., a camera can be used as a location device, a noise sensor can be used to determining the number of people present in a city street in the context of smart cities applications, and so forth. Virtualization allows deriving useful functions by many heterogeneous devices deployed in the real world. Composing and extracting these capabilities is supported by virtualization. 1.2 Large scale systems Large scale systems, as those in the scope of icore, have to be dealt with in a different manner than small scale and confined (wireless) sensor networks or small Internet of Things systems. They are complex by definition (comprising many objects that show a very unpredictable and dynamic behavior). Dealing with these systems in a traditional way (i.e., with humans involved in the management and configuration of the system) does not seem to be feasible. For instance, the business perspectives offered by machine to machine (M2M) platforms are quite intriguing for Telecom Operators. However, the development and exploitation of these possibilities implies new approaches to the normal way of operating networks and systems. M2M systems will need to dynamically activate/deactivate SIMs and possibly the operation has to be done by the customer directly. This means to decentralize a number of management functions. The impact on the management systems and processes is huge. Operators have to retain the overall control on SIMs, but those are to be managed by several other actors in order to provide a meaningful set of applications and services. Security and Monitoring applications in homes have similar issues. From a big provider point of view, there is the need to manage and control a large number of sensors determining their working status. The control has to be individual in the single home, but it has to be global from the provider perspective, i.e., the Provider should be able to activate/deactivate, and orchestrate several thousands (in perspective millions) of sensors, smart objects, and in general real world objects in quasi real time. This will require rethinking of the way resources are managed today. The icore architectural proposition is aiming at supporting solutions for this change of phase. Internet of Things large systems will stress even more the issues because plenty of sensors and actuators will be considered (e.g., some forecasts point to a situation in which for each square meter there will be thousands of smart objects). 
These objects will be owned and operated by several actors and environments. On one side this will enrich the capabilities and opportunities related to dealing with these sensors, but on the other end it introduces a lot of difficulties to overcome issues of controlling and orchestrating resources pertaining to different administrative domains. Virtualization will also increase the number of available objects. Consequently the features of selforganization, autonomics and the like are highly desirable and needed in order to run these large Grant Agreement number: Page 12 of 136

13 systems. The icore proposition is to put together virtualization (for smoothing the different properties of a heterogeneous environment and resources) and self-organization in order to create an intelligent environment of large size, capable to self-govern a great deal of issues (at the time of configuration in real-time as well as during operation). 1.3 Cognition Virtualization and autonomics are not sufficient in order to cope with the issues posed by future IoT applications. They are unavoidable steps towards systems that have to be intelligent and cope with varying situation in a smart way. An icore system needs to intelligently organize and orchestrate dynamic resources, but even more importantly, it should be able to extract knowledge from each situation being coped with and to learn how to improve the support to applications. This comprises the capabilities to recognize patterns and to derive knowledge from them. The icore system will be also capable of exploiting knowledge bases of specific vertical domains (e.g., logistics) introducing learning capabilities that can increase the comprehension of related phenomena. icore applications can be developed and built on top of these cognitive capabilities. Cognition can be extended at two different levels: Internally to the icore system: In this case the system will use cognition in order to govern and dynamically find optimal solutions for the usage of resources of the icore system (at the single application level as well at a global system level) At the level of functionalities offered to applications: icore systems should be able to encompass and use knowledge bases introduced by experts in specific fields and to enlarge the level of knowledge of a problem domain by means of (artificial) reasoning. In this case the icore system will provide rich and valuable functionalities that applications can use in order to focus on their business goal. For instance, an icore system could support the geopositioning functionality. In an icore system this functionality does not only mean that a person or an object will be pinpointed by using location based mechanisms (e.g., GPS positioning). The concept needs to be extended in such a way that cameras, or other information can substitute or integrate the location information. For instance a person entering in an underground parking lot could be located by means of the security cameras or by presence sensors as well as by Bluetooth probes or even by means of activities that s/he is carrying out (e.g., punching a park ticket). From the application point of view the function is a location based one, at the level of icore it is a much more complex one involving reasoning on and predicting where the person or the object is heading, organizing available resources in order to track it and to drive the location information. Prediction has a lot of value in this case and implies also that the icore system is able to determine patterns of behavior of people or objects within specific spaces and environment. Collection of data and reasoning about these data set is an integral part of the cognitive capabilities of the icore architecture. 1.4 Organization of this document This document aims at giving a general description of the icore reference architecture and to point to the relevant documentation produced by the project. The description is at a global level and more detailed discussions and explanations are to be found in other icore deliverables. 
The second chapter is focused on the analysis of business requirements as carried out in other documents and a set of architectural principles and guidelines that can be fruitfully applied in order to fulfill the desiderata. As said, the icore requirements stem from a deep analysis carried out contacting experts and stakeholders in the field of IoT applications. In some respect the chapter could be seen as a top down description of the icore architecture. Chapter three is about the use cases being defined by icore in order to challenge the architecture and to refine and consolidate it. It is the main part of the functional architecture definition. The Grant Agreement number: Page 13 of 136

14 chapter presents the use cases and then it describes the icore Functional Architecture, the data and the templates that are part of it and the interfaces that can be used in order to exploit the architecture. In this section two other important features of the icore architecture are presented: the cognitive support of icore and how icore deals with and offers security features. Chapter four is about the technological platforms underlying and supporting the icore functional architecture. It describes how current and future technologies can be integrated so that an icore compliant architecture can be developed and deployed. Chapter five summarizes how the icore Architecture can fulfill the requirements set forth by experts and stakeholder and the advantages that icore can offer. In addition there is a section dealing with possible contributions and relationships between icore and relevant standards in the area of IoT. Grant Agreement number: Page 14 of 136

15 2. Requirements Analysis and Design Principles In this section, we first detail the process followed for the requirement elicitation for the whole project in section 2.1. This includes the link between the use cases and the various requirement sets of the work packages. The general business requirements for the icore architecture are derived from a number of sources, from documents developed for the architecture definition [2][3], from a business analysis conducted by WP1 [6] and from the bottom up approach advocated by the use cases developed by WP6 [9]. Furthermore many external inputs, e.g., the Stakeholder group, have been considered especially by exchange of information with experts of this sector. 2.1 Requirements engineering process The engineering process used to elicit the various requirement sets of icore is described in Figure 1. Firstly, the WP6 is identifying Use Cases, which are stories on how to use an icore platform. These use cases are translated into User requirements, still within WP6. These user requirements are expressing the features that should be expected from an icore Platform. At this stage the icore platform is treated as a black box : there is no internals exposed. From this first level of analysis, the architecture is derived, distributing the features into big components (which are presented in this deliverable). The level of granularity is still coarse. The user requirements are further developed into Software requirements, within WP3, 4 and 5. Those software requirements are detailing each feature of the user requirements, and each feature belongs to a component, as described in the architecture of WP2. At a last stage, WP3, 4 and 5 performing the detailed design, in which the technology and detailed architecture allowing to fulfil the requirements is chosen. In this diagram, the green boxes are representing a set of requirements (a set of requirement is also called a specification ). The blue boxes are representing other kind of documentation (respectively use cases and detailed design items). UC1 UC2 UC3 UC4 WP6 Translates to WP6 User Requirements First level design & block view Architecture WP2 Covers Covers Covers Software Reqs WP3 Software Reqs WP4 Software Reqs WP5 Impl. details Impl. details Impl. details Legend: Detailed Design Detailed Design Detailed Design Req set Other doc Figure 1 icore specification ecosystem The process is explained in Figure 2 with an example: firstly a use case, «If Sara falls, then call the doctor», is given. This use case is derived into a user requirement: «The icore platform shall provide the way to retrieve the velocity of an object». At this stage, the subject of the requirement is always The icore platform since we don t detail the internals of the platform. The requirement is again Grant Agreement number: Page 15 of 136

16 developed into a software requirement: «The VO information model shall contain the velocity of a RWO in geocentric reference using ISO 6709 format». This is an implementable requirement, speaking about a component ( The VO information model ) and a format, but still no technology is given. The last step indicates the technology used for the implementation (a java library) : «Velocities will be formatted and computed using the "GeoTools" java library». WP6 AmbientLiving Use case: UC-1.1 «If Sara falls, thencall the doctor» WP6 User Requirements: UR-F-1.1 «The icore platformshallprovidethe way to retrievethe velocityof an object» WP3 Software Requirements: SR-VO-F-1.1 «The VO information model shall contain the velocity of a RWO in geocentric reference using ISO 6709 format» WP3 Detailed Design: «Velocitieswill be formatted and computed using the "GeoTools" java library» Figure 2 Example of requirement chain The objective of this requirement elicitation process is to obtain a set of specifications (3 software requirements sets and the architecture) which together form the reference specification for the icore platform. This set of documents shall be sufficient for the implementation of an icore platform. In Figure 2, the blue arrows are materialized by traceability tables, for which an example is given in Figure 3. This is a two-way table, in which the link between a use case and a user requirement should be made visible. The same table is produced to show the link between user requirements and software requirements. All use cases should be covered by at least one user requirement. Conversely, all user requirements should be covering at least one use case. Optionally, this table can be completed by a justification table (visible on the right), in which a rational is given for each link. Use Case User Requirements UR-F- 1.1a UR-F- 1.1b UR-F-1.2 UC-1.1 UC-1.2 UC-1.3 UC-1.4 UC-1.5 UC-1.6 X X UR-F-1.3 X X UR-F-1.4 UR-F- 1.5a UR-F- 1.5b UR-F-1.6 X X X X X X Justification Representing vertical velocity is necessary to track Sara s falling Etc. UR-F-1.7 UR-F-1.8 X X Figure 3 Traceability & justification table The phrasing to use in the requirements is the following: User requirements: «The icore platform shall provide <this feature>» Software requirements: «<This icore component> shall provide <this detailed feature>» Grant Agreement number: Page 16 of 136

17 The icore components are defined in the D2.3 architecture deliverable (this document). Each set of requirement (User requirement or Software requirement) may contain both functional and nonfunctional requirements. A functional requirement is feature oriented, for example «The icore platform shall provide the way to store the position of an object», whereas a non-functional requirement is quality and performance oriented, for example «The icore platform shall support at least 100 users at the same time». Finally, the requirements of icore should conform to the quality criteria detailed in the Table 1: Criteria Unitary (Cohesive) Complete Consistent Atomic (Non- Conjugated) Traceable Current Feasible Unambiguous Specify Importance Verifiable Explanation The requirement addresses one and only one thing. The requirement is fully stated in one place with no missing information. The requirement does not contradict any other requirement and is fully consistent with all authoritative external documentation. The requirement is atomic, i.e., it does not contain conjunctions (otherwise, make two requirements). The requirement meets all or part of a business need as stated by stakeholders and authoritatively documented. The requirement must not be made obsolete by the passage of time. The requirement can be implemented within the constraints of the project. The requirement expresses objective facts, not subjective opinions. It is subject to one and only one interpretation. The requirement must specify a level of importance (i.e. priority). The implementation of the requirement can be determined through basic possible methods: inspection, demonstration, test. 2.2 Business Requirements Table 1 Requirements criteria Some general business requirements are clearly put forward. There is a clear need to have solutions that do not depend from a particular technology or protocols, the concept of platform (and icore is an architecture that leads to the definition and implementation of platforms) is particularly appreciated. As a platform, icore needs to support some features: interoperability, openness to several actors, and support to large scale solutions (i.e., systems comprising a large number of sensors and actuators). Interoperability has to be supported at least from three perspectives: a device view, i.e., the icore architecture shall support a multitude of heterogeneous devices and protocols; a knowledge view, i.e., the icore architecture shall support the definition of knowledge models that describe several aspects of the real world. This knowledge can be declined to describe and determine the knowledge needed to represent and solve challenges in specific Grant Agreement number: Page 17 of 136

18 problem domains as well as the issues related to the icore system seen as a complex system itself (i.e., self-organization). Different Knowledge sets need to be interoperable and usable in order to solve complex and integrated problems; an application view, i.e., the icore architecture shall support the interoperability needs of different applications that can reside on different application domains, managed by different actors and developed in different contexts with several technologies. Interoperability is a requirement that can be fulfilled by virtualization. In fact Virtualization of entities and systems is a major architectural principle of the architecture itself. Openness to several actors has a broad meaning: the icore architecture should allow the implementation of icore platforms and solution by different actors (e.g., service providers, telecom operators, communities, or even government agencies). Each instance of the icore architecture should provide open interfaces (that the single provider can decide to close or block according to its own business model). These interfaces can be used in order to introduce new components or mechanisms in the icore platform, or for controlling and governing the architectural components, or used in order to develop applications that use icore mechanisms for the advantage of final users. In addition open interfaces have to be provided in order to allow the creation of new mechanisms and applications for orchestrating, improving, personalize the self-organization capabilities of icore systems. Finally another open interface has to be provided in order to allow the introduction of knowledge about specific problem domains. These interfaces can be made available to experts in order to populate the knowledge base or to applications themselves that will try to derive new knowledge at run time. Some of these capabilities can be seen under a general requirement of enrichment of the platform, i.e., the possibility to add value by means of new functions, new data or new cognitive capabilities to an icore system. The requirement about the capability to deal with large scale system derives from the fact that many stakeholder of the project (e.g., telecom operators, large service providers) have the need to focus not on a single class of applications, but on several ones. The focus is not even at the local level (e.g., managing a small set of local wireless sensors networks, but at a global one (national, international and even worldwide). Already now systems related to the management of Machine to Machine SIMs require the capability to deal with millions of systems, and in addition a clear requirement is also the ability to demand the activation/deactivation of functions to the customer according to his specific business related needs. Self-organization, cognitive capabilities and virtualization of resources come at hand for fulfilling this requirement. Dealing with plenty of information extracted by potentially millions of sensors exacerbates two requirements of IoT systems: security and privacy. Sensors can be used in very sensitive environments and the security and privacy of applications, data and knowledge models must be guaranteed to Platform Providers (responsible for the leaking of information) and to Service Providers that build applications and services on top of the platform. In addition the treatment of data that often will have a personal valence needs to be ensured according to the privacy framework established in Europe for this purpose. 
And this is a daunting effort. Finally the large scale dimension of the icore approach and the openness yield to the need to support the capability (also offered to final users) to introduce new resources in an icore system. For this reason, icore architecture needs to define discovery and registration functions for tracking available and allocated resources and for managing them accordingly to the specific policies established within icore systems. Grant Agreement number: Page 18 of 136

19 2.3 General high Level Requirements from D2.1. and 2.2. In this section, we provide the main requirements categories already defined in the icore deliverables D2.1 and D2.2. Beyond what already described in [2] and [3], in this section we also indicate the architecture implications. We distinguish between Functional requirement Categories and Non-Functional requirements categories. The following functional requirements categories are identified: 1. Data acquisition and categorization. Data acquisition relates to fulfilling all those actions needed to instantiate the icore system (i.e. need for formal icore entity descriptions to enable acquiring information about their status, ownership, purpose etc.). Data categorization must be possible in a very flexible way according to the structure envisaged for formal descriptions of icore entities. 2. Situation Awareness. The icore framework shall provide the capability to describe the contexts and to react to changes in the context. Situation awareness includes the concept of proximity among data and services and the capability to formalize proximity areas and assess membership levels of objects within a proximity area (i.e. geographically close, same owner, granted access, connected, same domain). 3. Semantic searching. The icore framework shall provide the capability for controlled searching and access to data and services. The controlled term is related to the access control function. The semantic searching capability shall be distributed and interoperable across different domains. 4. Autonomic and cognitive service lifecycle management. The icore framework should be able to adapt to changes in an autonomic way. This autonomic function is based on a continuously executed cognitive cycle (i.e. monitors, analyses, decide, actuate) continuously improving the outcome of service actuations and the weight of influence from sensing / participating objects (e.g. energy use optimization). 5. Directory Services. Maintain and log service usage history, useful compositions of objects, rating of icore objects. The following non-functional requirements categories are identified: 6. Performance and scalability. icore shall provide means and functions to support scalability and support the validation of performance requirements and their matching to systems constraints. 7. Interoperability. icore shall be interoperable for data and functions among different domains. 8. Security & Privacy: Availability. icore shall provide the capabilities to support service continuity. 9. Security & Privacy: Confidentiality/Data protection and Privacy. icore shall provide capabilities to regulate access to data and services. This category includes also Data protection and Privacy requirements. This also means anonymization or pseudonymization on the one hand and using the minimum set of data needed for the use case in the other hand. 10. Security & Privacy: Integrity. icore shall provide the capability to guarantee the logical correctness of the processes and the consistency of the data structures and occurrence of the stored data. 11. Security & Privacy: Authentication/Authorization. icore should provide the means to establish the validity of a transmission, message, or originator, or a means of verifying an individual's authorization to receive specific categories of information. 12. Security & Privacy: Non-repudiation. 
icore shall provide functions for the proof of delivery and the recipient is provided with proof of the sender's identity, so neither can later deny having processed the data. Grant Agreement number: Page 19 of 136

20 2.4 Business Use Cases In this section the following steps are taken: Taking inventory of use cases Derivation of a limited number of generalized sub-use cases from the use cases Derivation of high level user requirements from the generalized sub-use cases Derivation of functional architectural requirements for the icore architecture This section talks about how icore could be used in a business setting. The term business should be understood in broad terms, not necessarily in the financial sense. For example, a public authority may provide IoT Services to a public university in exchange for research results. We would like to address this from two main angles The use of sensor data by third parties The provision of services based on the platform. Use of sensor data Assume an organization wants to build an application and has a choice between two options: 1. Deploy, operate and maintain the required sensors and the Sensor and Actuator Network (SAN). 2. Buy the sensor data from an organization that already operates and maintains a SAN with the required sensors and a suitable availability. Option 1 is costly and Option 2 would be a good business case provided an agreement can be made regarding, among others, safety, security, and (open) interfaces. Also for the owner of the operational SAN an agreement would be attractive if the costs of opening up the sensors to another organization are lower than the revenues from the sensor data and provided the agreement would also address security from the SAN owner perspective. The second option can best be illustrated with a number of cases, the majority of them derived from the actual icore WP6 use cases. Case 1. Consider two cameras a few kilometers apart with a view on a highway section and deployed by the road management authority. The road management authority uses the cameras for traffic flow observation and counting the number of cars in the section to set an appropriate speed limit. The same cameras, augmented with license plate recognition can be used to identify speeding vehicles by the police department. Case 2. Given the scenario of Case 2, the same cameras can be used by the general public to check if it is time to go to the office or better to continue working from home while waiting for the traffic jam to dissolve. Case 3. Consider a Smart Home equipped with many sensors, such as cameras, smoke detectors, burglar alarms, CO2 detectors. The dwellers of this smart home also turn out to be elderly people. A health care service could offer a monitoring system, coupling the sensors to the icore platform and somehow commercialize or provide this service to the authorities. Case 4. Consider case X. The residents of the smart home is in this case are well to do people. Insurers or security companies can use the icore platform, couple the sensors and provide a monitoring service or offer discounts on the insurance premium. Grant Agreement number: Page 20 of 136

21 Case 5. Transport operators can also deploy many sensors in their warehouses, their transported goods and in their distribution centers. They can use the icore platform for intelligent monitoring of the transported goods Service Provision Case 6. Consider a service provider wants to leverage the icore platform by providing applications on top of it. It could sell the complete package to third parties, either as a service or on premise. Case 7. Consider a service provider that runs and maintains an icore installation, without being concerned with the applications that run on it. Third parties will pay the provider for the maintenance while providing the applications and sensor / actuator devices themselves. Use Cases Requirements In WP6 a number of use cases is studied. For a complete description we would like to refer to [5] and [7] Smart Home Ambient Assisted Living The output streams of medical sensors are analyzed and exceptional combinations of values are reported for further action. Smart Business Monitoring in the logistic chain The sensors in containers with perishable food give warnings that allow for optimized transport and minimizing food waste. Smart Meeting Organizing a business meeting. This use case is not about selling sensor data to other parties, but rather to combine the cognitive capabilities and the virtual sensor and actuator objects of the icore Platform with the purpose to make life easier for the meeting organizer and participants. The use cases are a, often complex, composition of a number of generalized sub-use cases. The following generalized sub-use cases have been identified. Sell/Buy generalized use case Party S is an owner of a sensor network, and prepared to Sell sensor data to recover part of the costs of operating and maintaining the sensor network. Party B is reluctant to install and operate his own sensor network and prefers to Buy sensor data. Data Enrichment generalized use case Party E offers services (such as anomaly detection, recognition services, storage) to Enrich streams of sensor data. Party A has access to sensor data streams and needs for an application to Apply the enrichment service. Party A prefers to buy the data enrichment service (Software as a Service, SaaS) over building the enrichment functions himself. Non-commercial generalized use cases The Sell/Buy generalized use case and the Data Enrichment generalized use case occur in noncommercial settings. When the selling and buying parties are applications within the same organization, the non-commercial user requirements remain, but now from a sound software engineering point of view. Device manufacturers generalized use case Party D produces Devices (sensors and/or actuators) and wants to innovate these devices without cannibalizing his own products. Furthermore, D wants to compete with other device suppliers without customers objecting to change of supplier due to high change costs. Grant Agreement number: Page 21 of 136

22 Party M has multiple customers that bring devices from different device manufacturers into his environment; per device manufacturer there might even be multiple generations of devices. M wants to interact with all devices in a uniform way. Mobile sensors generalized use cases Party U uses sensors 1 that are by nature mobile (cars, mobile phones). Every time U uses his application the appropriate set of sensors needs to be established. Service Provider Use Cases In the case of Smart Meeting, an application is sold that is based on the icore platform, and delivered as a turnkey solution on the customer s site. In this case, the software and the platform would be sold as a bundle. Alternatively, the application could be offered in a SaaS setting, the icore platform potentially serving multiple applications. Other generalized use cases? A service provider could offer the icore platform to third parties so they can write and run their applications. The third party is also responsible for providing and connecting the sensors. From the Sell/Buy generalized use case we can derive a number of user requirements: Commercial user requirements For S the costs of opening up the sensor data to B and other interested parties should be lower than the revenues from selling the sensor data. For B the business case of buying the sensor data should be better than the case of installing and operating a sensor network. And B wants to avoid a lock-in by S if in the future the sensor data can be bought from other parties at a better price the switching costs should not be prohibitive. For both, flexibility in billing arrangements, from a lump sum for access to all sensors of S for an agreed period to an amount per single sensor reading, is required. Usage information is needed to set the right type of financial arrangement for each sensor or group of sensors. Security and safety requirements For S the integrity of the sensors and the network of S should not be violated. Furthermore, only B should get access to the sensor data bought. And B should only get access to the sensor data bought, and not to data from other sensors operated by S. Note that access to sensor readings and assess to other sensor data (such as the last calibration date or geo location) might be different. For B the access to the sensor data of S should not create a hole in the security system of B, allowing malware to enter the network of B. Operational requirements For S the normal operations of the sensor network should not be hampered by opening up the sensor network to B, and possibly other parties. For instance, replacement of old (types of) sensors by new ones, potentially with more functionality and higher accuracy, should remain possible. 1 Mobile sensors should be distinguished from (smart) tags. Tags (bar code, RFID, smart tag) are attached to products and are being sensed by a sensor. Mobile sensors, such as a GPS sensor, themselves sense their environment and produce and communicate their sensor values. In particular for smart tags, e.g. logging temperature over time until being read, might be confused with sensors. Grant Agreement number: Page 22 of 136

23 For B it is essential to get all relevant (API) documentation of the functions offered by the sensors of S. Furthermore, an SLA per type of sensor is required. B also needs means to establish the current set of deployed sensors, since sensors might be temporarily out of order (for instance due to maintenance). From the Data Enrichment generalized use case we can derive a number of user requirements: Commercial user requirement For E/A it should be possible to sell/buy SaaS for well-defined input streams. The enriched data stream should be considered a higher level sensor. For A SaaS provider lock-in should be avoided. Security and safety requirements For A it should be possible to transfer the access rights to the input streams to E or E should be able to provide the enrichment services including access to the required input stream(s) of sensor data. From the Device manufacturers generalized use case we can derive a number of user requirements: Operational requirements Different sensor and actuator hardware with the same functionality should have, as much as possible the same interface. From the Mobile sensors generalized use case we can derive a number of user requirements: Operational requirements Party U should be able to establish the current set of relevant deployed sensors, even if the sensors are mobile. From the Service Provider use case we can derive a number of user requirements: Operational requirements The icore platform should be able to be deployed as a service. The icore platform must have the possibility to deployed as a complete package. From the user requirements that originate from the generalized use cases, the following high level functional architectural requirements can be derived. a. Virtualization The hardware and embedded software of the sensor or actuator should be encapsulated by a virtual counterpart and this virtual counterpart should offer a well-documented interface to the IoT. b. Enrichment The architecture should provide means to combine and enrich the data from one or more sensors into higher level sensors. c. Discovery The architecture should provide means to discover the relevant set of ordinary and higher level sensors, given certain relevance criteria. The discovery mechanism should take into account that the party doing the discovery (buyer) might have a different frame of reference than the party offering the sensor data (seller); Grant Agreement number: Page 23 of 136

24 the sensors can be mobile. d. Security The architecture should offer a fine-grained security mechanism, allowing authorised access to just the sensor data sold/bought in a multi-party setting (i.e. distributed and multidomain). e. Platform multi-tenancy. The architecture should allow the running of multiple compartmentalized vertical application stacks. It is recognized that the same could be achieved at a higher level by running different instances in different virtual machines. Still, there must be a mechanism to associate the sensors and actuators to the correct instance. 2.5 High Level Architectural Principles The icore architecture is designed in order to support virtualization and enriching the IoT environment with cognitive capability. The icore virtualization architecture is based upon the concept of Virtual Object (VO). It is a representation of real world objects capable of providing ICT related functionality. A Virtual Object (VO) is a semantically and functionally enriched representation of a real world object. Semantic enrichment means that the object is described in its state, location, functionality, potential uses or features/functionalities. Semantic enrichment may be the outcome of learning and knowledge generation. The precise definition of a VO is given below. Definition: A VO is the virtual (abstract) representation of an ICT object that is associated with a non ICT object. Indeed, the act of installation brings the ICT object in a specific real-world context, which in icore modeled as an association to one (or more) non-ict objects. VOs indeed help in accessing the real world objects and helps interfacing them (after abstraction) to the external world. Virtual Objects can be seen as providing a very basic set of functionalities representing the actual functions of a Real World Object. Actually the icore architecture introduces three more basic principles for dealing and transforming Virtual Objects into more usable and service bound entities, they are: Aggregation: The functionalities of a group of virtual objects can be collected, represented, coordinated and controlled by other virtualized entities. In addition these functionalities can be mashed up, i.e., the functionalities of different Virtual Objects can be integrated in order to make them generic and available in more applicable ways to applications or to create new functionalities made possible by the integration of the functions of the single Virtual Objects. Abstraction: The functionalities of a Virtual object can be generalized and abstracted in order to make them applicable to a large set of situations and requirements. Functional Enrichment: A virtualized entity trying to mimic and represent the functions offered by a real world object, this entity can be further extended by adding new functionalities that are made available in the virtual environment. These functions can be of cognitive nature allowing in this way to add intelligence and other properties to a virtualized entity or aggregation of them. With this in mind, another basic component of the icore architecture can be introduced now; it is called Composite Virtual Object (CVO), i.e., a virtual aggregation of different Virtual Objects that can be functionally extended by introducing new functionalities and logic with respect to the functional requirement of applications or the systematic needs of the icore environment. 
In a more formal way:

Definition: A Composite Virtual Object (CVO) is a mash-up of VOs that renders services in accordance with user/stakeholder perspectives and application requirements. Note that a CVO may indeed contain only one VO and still be a CVO, if it adds cognitive functionalities that are not possible or available in the included VO.
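To make the aggregation and enrichment principles more concrete, the following minimal Python sketch shows how a CVO might mash up the functions of two VOs and add a capability that neither VO offers on its own. The class names, function names and temperature values are illustrative assumptions for this sketch only; they are not part of the icore specification.

```python
# Illustrative sketch only: names and values are hypothetical, not the icore specification.

class VirtualObject:
    """Virtual counterpart of a single real-world (ICT) object."""

    def __init__(self, name, functions):
        self.name = name
        self.functions = functions  # e.g. {"read_temperature": callable}

    def invoke(self, function_name, *args):
        return self.functions[function_name](*args)


class CompositeVirtualObject:
    """Mash-up of one or more VOs, enriched with extra (possibly cognitive) logic."""

    def __init__(self, name, vos):
        self.name = name
        self.vos = vos

    def average_temperature(self):
        # Aggregation: combine the readings of the underlying VOs.
        readings = [vo.invoke("read_temperature") for vo in self.vos]
        return sum(readings) / len(readings)

    def overheating_alert(self, threshold=30.0):
        # Functional enrichment: a capability none of the single VOs offers by itself.
        return self.average_temperature() > threshold


# Usage example with two simulated temperature sensors.
vo1 = VirtualObject("sensor-1", {"read_temperature": lambda: 21.5})
vo2 = VirtualObject("sensor-2", {"read_temperature": lambda: 35.0})
room_cvo = CompositeVirtualObject("room-temperature", [vo1, vo2])
print(room_cvo.average_temperature(), room_cvo.overheating_alert())
```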

Though a VO becomes existent immediately after installation, a CVO is formed only when a service needs to be served. In order to understand a service request (from a user/stakeholder or an application), a set of rules needs to be defined to form an appropriate CVO. In icore terminology, the Service Logic (SL) accomplishes this task: it analyses the various service requests and maps them dynamically to a CVO.

Definition: Service Logic is the logic reflecting what a particular service (application) does according to the icore Service Model at the various implementation stages of the service in the icore system. In general, Service Logic is the cognitive capability that helps in breaking the requested services into functions and in scheduling them in the proper order with the help of CVOs and VOs.

Another important goal of the icore architecture is to allow applications to access a well-structured and rich set of service functions that ease application design while ensuring the fulfillment of strict functional goals of the applications. In order to support this requirement, the icore architecture adopts a new concept called the servitization principle.

Definition: Servitization is the capability of creating a link between a (physical) product and a set of services and enriched functionalities that extend, complement, and add value to the product itself. Servitization is a means to create a continuum between a product and a set of remote functionalities that augment the capabilities and the properties of the product and relate it to other products and functions, either in the physical or in the virtualized environment.

That is, the architecture is structured in such a way as to identify very early a set of generic services (i.e., widely reusable and accessible functionalities and capabilities stemming from the functions offered by VOs and CVOs or, generically, by the icore entities). Another way to consider this principle is the possibility of building a set of services around CVOs or even VOs. Figure 4 represents these concepts.

Figure 4. An initial View on some icore Principles and Entities

The advantage of icore with respect to other Internet of Things related activities is strongly related to the cognitive enrichment of virtualized entities, but it also lies in the application of several principles, such as abstraction, aggregation and extension of functionalities, that a virtualized environment usually allows. In addition, the servitization principle helps the icore architecture in striving towards the definition of widely reusable functions that can be organized into consolidated services that applications can use in order to achieve their goals.
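As a hedged illustration of the Service Logic idea, the sketch below decomposes a service request into an ordered list of required functions and binds each one to a CVO that advertises it. The request format, the function names and the selection rule (first matching CVO) are assumptions made for this example; the actual icore Service Logic relies on the Service Model and the cognitive mechanisms described later in this document.

```python
# Hypothetical sketch of Service Logic: decompose a request into ordered functions
# and bind each function to a CVO able to provide it. All names are illustrative.

def service_logic(service_request, available_cvos):
    """Map a service request onto an ordered plan of (function, CVO) bindings."""
    plan = []
    for step in service_request["functions"]:          # e.g. ["sense", "aggregate", "notify"]
        # Pick the first CVO advertising the required function; real icore logic
        # would instead use cognitive selection, SLAs and situation awareness.
        candidates = [cvo for cvo in available_cvos if step in cvo["offers"]]
        if not candidates:
            raise LookupError(f"no CVO offers function '{step}'")
        plan.append((step, candidates[0]["name"]))
    return plan


request = {"goal": "monitor room comfort", "functions": ["sense", "aggregate", "notify"]}
cvos = [
    {"name": "TemperatureCVO", "offers": {"sense", "aggregate"}},
    {"name": "AlertingCVO", "offers": {"notify"}},
]
print(service_logic(request, cvos))
# [('sense', 'TemperatureCVO'), ('aggregate', 'TemperatureCVO'), ('notify', 'AlertingCVO')]
```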

2.6 The Architectural Vision

To explain the methodology we followed in developing the icore architecture, we first provide an abstracted view in Figure 5. We refer to this abstraction as the icore Cube; it helps in creating a mind map to position the various initial architectural results, as they will be presented further on. Figure 6 helps in breaking up the whole, complex research space (in which such results have been produced) into a number of dimensions / planes, and its objective is to ease the communication and understanding of the relation between the various aspects of the icore architecture at a macro level.

To achieve our main objective, as stated above, an icore system is expected to support users and applications with sensed data from the IoT, which must be elevated to create knowledge through an understanding of what it represents and the inference of what situation it describes. Further, the knowledge we obtain could also provide different meanings and insights depending on how it is used and on the situation in which it is used. Here we make the first level of approximation in the icore Cube. In order to improve the way the Application and User plane works (i.e. achieve the objective of empowering IoT with Cognitive Technologies), we broke the whole problem space down into three major planes: one (the Cognitive plane) where the action is, and the other two where the information that needs to be acted upon resides and/or comes from (the Data and Metadata planes). The System plane, where systematic processes keep executing, supports these.

Figure 5: icore Cube

Let us take the Cognitive plane (see Figure 6): it encompasses all results that are available to provide an icore system with the ability to behave, to a large extent, autonomously. The action within the Cognitive plane relates to the ability to appropriately process data or metadata for one of the following purposes:

Figure 6: icore Cube Detailed

1. Produce Real World Knowledge (used, e.g., to acquire Situation Awareness) and feed the Data plane to store it, or feed the Apps/Usr plane with inferred real world knowledge (which can be used outside the icore scope).
2. Derive metadata from acquired real world knowledge (e.g. used to enhance the way objects are tagged or semantically enriched) and then feed the Metadata plane.
3. Select the most relevant resources, such as VOs and Composite VOs (CVOs), likely to meet Apps/Usr plane requirements.
4. Produce System Knowledge.
5. Control system resource allocation.

The Data plane (Figure 6), as hinted at earlier, is representative of the real world data that can either be sensed directly or be derived and inferred from such data indirectly. Data plane information is stored in appropriate icore architectural components, in dedicated registries or even in the cloud. In Figure 6, the Metadata plane represents a placeholder for all registries dedicated to metadata (such as location, ownership, access rules, semantic annotation, etc.) about icore objects, i.e., Virtual Objects (VOs) or Composite Virtual Objects (CVOs) (explained in the later sections). As illustrated, Cognitive plane functionality helps to integrate the information held in the various metadata registries by deriving newer sets of metadata either from real world knowledge or from system knowledge.

In order to frame the various icore entities, a segmentation of functionalities has been carried out in the architecture. The segmentation in icore is a grouping of functions corresponding to and emphasizing the properties offered by the different icore entities (i.e., VO, CVO and SL). Segmentation is meaningful for aggregating functionalities or for accessing them in a well-structured way. This leads to three basic levels in the icore architecture: the VO Level, the CVO Level and the Service Level. Objects are framed in a set of levels (Figure 7), each level aggregating the functionalities offered by a certain type of icore entity: the VO level collects all the functionalities provided by the Virtual Objects as well as the entities and functionalities of the icore system needed to support and use the VOs. On top of the VO level, icore frames the CVO related functionalities and how they can be related to each other and to entities of other levels (namely the VOs and the Service Enabling Functions). In the upper most level, the icore architecture considers all the service enabling functions and their relations to applications and CVOs. At each level, the icore architecture envisages an increasing number of functionalities and systematic entities used to support the architecture.

These further empower the system capabilities as well as the functionalities that icore makes available to the applications. Moreover, integration and interaction with real world objects are also taken into account.

Figure 7: Segmentation of icore Functionalities

This segmentation adheres to the architectural principles introduced before. In fact, it supports virtualization and abstraction, as well as integration and composition of functions and entities. Servitization of functionalities is also represented (especially at the service level). In addition, Figure 7 introduces another important architectural principle: the entities at each level have to support interfaces and APIs in order to allow other entities to access and program the functionalities made available. Each level should expose a set of well-formed APIs.

This segmentation of functionalities is not to be intended as a layering of functionalities as done in an OSI-like architecture. It has to be considered as a grouping of functions that is meaningful for aggregating functionalities or for accessing them in a well-structured way. In fact (right part of Figure 7), application developers can select the APIs of whichever level they prefer. A lower level will simply provide less systematic support to programmers, and programming lower level functions will probably be more complex, but it will in any case provide a closer mapping to real world objects. Summarizing, levels are not prescriptive, i.e., objects can access the functions of any other level if they have been granted access to those specific elements.

So far icore has not chosen a unique way of describing APIs. They should be defined according to the functionalities offered by the VOs and CVOs. For instance, many VOs/CVOs could be used by means of Web Services (possibly in a stateless, i.e., REST, manner); however, some functionalities (and even some systematic functionalities) could be used and accessed by means of different APIs (e.g., event based APIs or even stream based ones). This choice is due to the fact that icore applications, and IoT in general, need to use different interaction models for gathering data and functionalities from sensors and actuators. For instance, a kind of streaming programming is emerging as a means to create applications that deal with sensors or other objects generating continuous flows of information to be processed as streams.

In other terms, icore will provide entities able to support a pure client-server type of interaction as well as Publish/Subscribe, in order to allow the dispatching (possibly in real time) of streams of data. It is envisaged that a Complex Event Processing (CEP) engine will be a systematic part of the icore architecture for applications requiring this type of interaction. Lastly, the architecture also encompasses the cognitive functions that are the major goal of icore. These functions will increase their ability to help applications and programmers, and they grow in number and power in the upper levels.

Figure 8. The Concept of Template

Entities within the icore architecture can be seen from two different perspectives:

1. Functional: here the functional goals and the business goals of an entity are described, represented and programmed. This is the intended behavior of the entity for coping with and acting in the real world.
2. Systematic: here functionalities and data are represented in order to frame the entity in an icore system. In order to work properly in an icore system, an entity should behave in certain ways and should represent certain data and properties, which are useful for helping the entity fulfill its functional goals.

One architectural principle in icore is that entities should be designed and instantiated from templates. Templates are a means to allow developers to create functionally oriented objects (functional view) that are compatible with the icore architecture (systematic view) and can take advantage of services and functions provided at the different architectural levels. Figure 8 represents this concept. The icore systematic view varies at the different levels (more cognitive functions are introduced at the upper levels), so it is envisaged that at each level the template could be specialized. Further work is needed in order to determine and design the template at each functional level of the icore architecture. However, as a principle, each icore template should inherit systematic functionalities from the functional ones. The collection of systematic information and data will allow the icore system to reason about how it is servicing the specific element, about the environment in which it is executed and, in a general view, about the relations between different application related environments. The service oriented template is useful to frame the functional behavior, data and functions. Moreover, it also helps the programmer by providing means to represent the cognitive aspect of the problem to be solved.

This organization will allow each instantiated entity to help the icore system in understanding the functional context (the context in which smart functionalities should help the application to fulfill its goals) and the systematic one (helping the icore system in understanding its own status and in taking action to optimize it).
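The following minimal sketch, under the assumption that a template can be reduced to a functional facet plus a systematic facet, illustrates how an entity instantiated from a template keeps both views available to the icore system. The Template/Facet classes and their field names are hypothetical, not the icore template format.

```python
from dataclasses import dataclass, field

# Hypothetical template sketch: a functional facet (what the entity does for the
# application) plus a systematic facet (what the icore system needs to manage it).

@dataclass
class SystematicFacet:
    level: str                      # "VO", "CVO" or "Service"
    lifecycle_state: str = "created"
    exposed_apis: list = field(default_factory=list)

@dataclass
class FunctionalFacet:
    goal: str
    functions: list = field(default_factory=list)

@dataclass
class Template:
    name: str
    functional: FunctionalFacet
    systematic: SystematicFacet

    def instantiate(self, instance_name):
        # An instantiated entity keeps both views, so the icore system can reason
        # about its own status while the entity serves the application goal.
        return {"instance": instance_name,
                "functional": self.functional,
                "systematic": self.systematic}

cvo_template = Template(
    name="RoomComfortCVO",
    functional=FunctionalFacet(goal="monitor room comfort",
                               functions=["average_temperature", "overheating_alert"]),
    systematic=SystematicFacet(level="CVO", exposed_apis=["REST", "publish/subscribe"]),
)
print(cvo_template.instantiate("room-42"))
```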

3. The icore Functional Architecture

3.1 General Overview

In its most generic sense, the interaction with an icore system is initiated through a Service Request generated for the purpose of activating data streams from IoT objects and continuously processing these to support an end-user / ICT application with a set of processes monitoring a situation and producing alerts when particular conditions are met. Such processes, derived from service templates, are orchestrated and bound to relevant VOs using icore functionality. The key point is that a Service Request is mapped, via the Service Level functionality, to a Service Execution Request, which is then passed to the lower CVO/VO levels for the selection and activation of the appropriate objects needed for satisfying the request (Figure 9). Behind this simple set of processes, the icore value stands in the loose coupling between service requests and the actual IoT objects (or combinations of these) which satisfy the request, as well as in the ability to select these dynamically, at runtime and purposefully, through the use of cognitive technologies. This is also reflected in the ability of an icore system to learn and to adapt the way it satisfies requests to changing situations.

Figure 9: A high Level Representation of the icore Architecture

Figure 9 shows the icore architecture at a first level of approximation, with interactions between levels aimed at activating an IoT supported set of processes. The loose coupling between the needs of the application and the IoT objects that are used to fulfill them is realized through a dynamic binding able to select the most relevant ones as the situation / context surrounding the application / user issuing the Service Request changes. The figure also shows the rough interactions between the icore levels cascaded after the Service Request, which, following a Service Template selection (details to be found later), generates a Service Execution Request used to generate appropriate CVOs, which in turn (CVO/VO Execution Request) rely on features of underlying VOs. The picture also shows how these interactions result in a set of running processes that is expected to produce notifications throughout their lifetime, delivered to the application.

Besides fulfilling the Service Request needs, icore is designed to improve its ability to grow, and therefore to represent, Real World Knowledge as well as System Knowledge (Figure 10). The first of these abilities is realized within the Service level, where external user feedback or other assessments of the accuracy of the service behavior are used to tweak parameters in the selected service template. Within the CVO level a similar ability to grow System Knowledge is realized.

This helps, for example, to select or refresh the best objects that can fulfill a CVO/VO request. In both cases, the objective is to continuously refine the (RWK / SK) models, thereby minimizing the discrepancy between the way an icore system can represent the real world and the system itself, for the needs of the requested services and for the resource-optimal fulfillment of those by the system. The figure also depicts the Real World Information (RWinfo) and System Information (Sinfo) databases, which comprise real-time information about the real world and system instances, in contrast to the respective RWK and SK models.

Figure 10: Semantic enrichments of VOs

As far as the usage of cognitive technologies is concerned, some first insights are:
- Semantic tagging will help reasoning over VOs at a real-world semantic level.
- Reinforcement learning (a type of Machine Learning) can help in tuning the Real World Knowledge (RWK).

In the remainder of this section we delve deeper behind the scenes of the icore interactions and features, describing the first results which led to the icore draft functional architecture. The functional architecture mainly comprises functional blocks that stand for the functionalities needed for the proper operation of an icore system, basic interfaces for the communication of several actors with the icore system and for the information exchange among functional blocks, as well as some diagrams in the form of message sequence charts (MSCs) trying to shed more light on the coordinated operation of the functional blocks and their interconnection aspects in some operational phases of the icore system.

An important note here is that the functional architecture depicts the perspective of the icore consortium of what is needed to fulfill the icore objectives. However, this does not imply that the icore project intends to reinvent the wheel when existing solutions may be used for accomplishing some needed functionalities. The project is currently in the process of studying several standardized solutions [8] and investigating their potential use. Furthermore, icore intends to provide extensions to these standards, mainly by means of adding cognitive functionalities to deal with the large scale of the Internet of Things (IoT). In this respect, icore can make a breakthrough in the area of IoT. The functional architecture is a first step, a summary of what icore is, before doing the mapping and possible extensions to standards.

Some functionalities are more core than others; that means that some are part of the core icore system, while others may be generated functionality.

Such aspects will be investigated after the functional view, by providing a software view. The software view will depict how the functional blocks will be built as software components. For instance, one or more functional blocks may be included in one software component, or one functional block may be split into several software components. At a final stage, a system view will be built by investigating in which real world system components the software needs to be installed. This will bring icore close to field trial testing. Both the software and the system views will be the subject of future WP2 deliverables.

The structure of this chapter is as follows. Section 3.2 defines the main actors of the functional architecture. Section 3.3 introduces the functional architecture in terms of functional blocks. Section 3.4 provides some insights into the consistency of the functional architecture with the architectural features of the WP6 work plan [7][4]. Section 3.5 includes some tentative message sequence charts (MSCs) and their description, in order to help understand the operation of the functional blocks. Section 3.6 investigates the naming and addressing issues, while section 3.7 describes in detail the templates, data and metadata used in the functional architecture. Section 3.8 specifies the functional interfaces. Section 3.9 gives some significant hints about the cognitive aspects envisioned in icore, while section 3.10 deals with the security aspects.

3.2 Actors of the icore Architecture

The icore system defines seven actors in the functional architecture, also corresponding to distinguished business roles.

The Service Requester is the actor, normally a human user or a software application, who issues a Service Request to the icore system. The Service Requester may be a developer that has knowledge of specific programming languages (e.g. SPARQL) to issue the request through them, or a non-technical user speaking a natural language. More information about this kind of actor may be retrieved from [10].

The Observed Entity is the Real World Object (RWO) which the icore system wishes to sense and/or act upon according to an issued Service Request. The icore approach at this stage does not distinguish between objects and humans, as it intends to investigate a generalized Observed Entity solution. However, in case the observed entity is a human, preferences and policies may also be expressed by that person. Preferences are rules that humans consider desirable (e.g. "I prefer not to be monitored when I have a visitor"), while policies stand for constraints that the human user imposes on the icore system (e.g. "don't use my camera in the bedroom after 8 p.m."). More information about this kind of actor may be retrieved from [9], [10].

The Domain Expert/Knowledge Engineer is an expert of a specific application domain that designs in an icore specific format (Service Template) how basic services can be built and/or provides base models of a range of specific application or knowledge domains by means of domain ontologies (Real World Knowledge (RWK) Model). More information about this kind of actor, service templates and the icore RWK model may be found in [10]; templates are also further discussed in section 3.7 of this document.
The Data Processing Domain Expert/Developer is the developer that designs in a specific format (CVO Template) how basic functions offered by specific VO types, themselves described in a specific format (VO Template), can be combined to build a more complex function based on specific processing logic for the CVOs represented by the template. More information about (C)VO templates is given in section 3.7. The actor is further analysed in [11].

The Device Installer is the user that installs the physical sensor/actuator/resource devices in the icore system, as registered at the icore VO Level. The difference between sensor, actuator and resource is introduced in section 3.7 and will be part of the VO information model [13]. More information about this kind of actor may be retrieved from [13].

The Device Manufacturer stands for the industrial actor that creates the devices (sensors, actuators, resources) and describes their capabilities (e.g. offered functions) in a specific format (VO Template). More information about this kind of actor may be found in [13].

Finally, the icore Administrator is the supervisor of the actual icore operation in the three icore levels (Service, CVO and VO). This actor provisions administration and management configurations for each level and for the icore system as a whole.

3.3 Functional Blocks and basic Interfaces among them

For the sake of clarity, we split the description of the functional blocks into four subsections: Service Level, CVO Level, VO Level, and Security and Privacy. Templates and respective repositories are considered in the corresponding level (and subsection), although they are described in detail in section 3.7.

3.3.1 Service Level

Figure 11 depicts the functional architecture of the Service Level. At the end of section 3.3, Figure 14 illustrates the whole functional architecture consolidating all icore levels. The link of the functional architecture with potential technologies is provided in chapter 4.

Figure 11: Functional Architecture of the Service Level

The Service Requester may be a technical or a non-technical icore user. In case the Service Requester is a non-technical user, a user interface is used to help him or her communicate with the icore system. However, the user interface is left outside the icore specification. In case the Service Requester is a technical user, icore intends to provide a specification of the Application Programming Interface (API) that is offered to the Service Requester.

Natural Language Processing (NLP): This functional block is applicable only when the user of the icore system is a non-technical user that issues a request in natural language form (either typed in a user interface as free text or potentially through speech recognition software). The functional block translates these non-technical human language queries and statements into the formal icore service request SPARQL query. To this end, NLP technology jointly with a domain ontology is used to analyse the input and autonomously extract parameters from it to be used in the SPARQL query. Cognitive mechanisms such as reasoning on the ontology and Semantic Similarity between a pair of concepts can be used in this functional block. More information about this functionality may be found in [10].

Service Request Analysis: This functional block comprises three functional sub-blocks. The Goal Analysis is the central point of contact of the block. It receives the SPARQL query and asks for the retrieval (through the Intent Recognition) of the current situation in which the query is performed.

Then, it forwards the query (request parameters) and the situation (situation parameters) to the Semantic Query Matcher. The Semantic Query Matcher comprises semantic alignment/learning enhancements as a potential pre-processing step for the standard SPARQL matching of the query to Service Template concepts, as done in the RDF Rules Inference Engine. The Semantic Query Matcher is required since, in practice, there is hardly ever a perfect match from a query to the set of templates, because of the incompleteness of the (stored snapshot part of the) domain model in covering all semantic concepts used in the service requests. The RDF Rules Inference Engine (in WP5 terminology also identified with the more generic term Service Factory) is the actual sub-block matching a Service Request to a Service Template and instantiating the Service Templates. The outcome of the RDF Rules Inference Engine is actually a logical mash-up of CVO-selection criteria (i.e. CVO Template names, in its simplest form), called Service Execution Request, which is to be handed over to the CVO Level for service execution. The Service Execution Request is the final outcome of the Service Request Analysis and comprises, apart from CVO-selection criteria, Service Level Agreement (SLA) criteria that express quality demands and cost criteria. More information about this functionality may be retrieved from [10].

Service Template Repository: The Service Template Repository contains a semantically query-able collection of Service Templates, as provided in the repository by the Domain Expert/Knowledge Engineer. The Service Templates are expressed as rules for RDF inference. The Service Template Repository contains a lot of implicit RWK, and so expressing it as RDF rules can be seen as a virtual extension of the RWK Model. More information about this functionality may be found in [10].

Intent Recognition: The Intent Recognition comprises the cognitive functionality that is used to determine what the intent of the user is (e.g. what is the user interested in?). This assists in identifying the Monitoring Goals (application specific) needed for the Situation Detection sub-block of the Situation Awareness block.

User Characterization: The User Characterization comprises the determination of a range of facts concerning a human user, including user context, profile, preferences and policies. Furthermore, this block comprises the cognitive functionality to build user-related knowledge, in order to eventually act on behalf of the user. As stated before, however, icore aims at a generalized solution which can handle the observation of humans and objects by the same mechanisms. Therefore, the user-related knowledge is also stored in the RWK model. Part of the user characterization is also related to access control; an access level may be associated to a particular user based on the application of predefined rules to the user attributes (as an example, an adult can be provided access to e-commerce services for buying liquor if the user is older than 18 years of age).

Situation Awareness: The Situation Awareness block is responsible for the creation of the RWK, which is then stored in the RWK model. The situation awareness process is generated by a logical sequence of steps/sub-blocks, namely (i) Situation Detection, (ii) Situation Recognition, (iii) Situation Classification and (iv) Situation Projection. Cognition adds the element of intelligence which helps in discerning the situation and thereby effecting action.
Each of the above sub-blocks follows the cognition cycle in terms of Perception (Input), Comprehension (Processing) and Adaptation (Response), tuned to specific goals.

The Situation Detection comprises the cognitive functionality to extract features of interest from event streams. The process of situation detection makes it possible to identify the event streams which will evolve into a situation of interest. The process requires an aggregation of the subscribed event streams, synchronization of the time-spatial granularities and validation of the causal relationship of the events. Extracting features of interest is the key to identifying whether the events that are generated will converge to a situation. Cognition in this process is the substantiation of the causal, spatial and temporal constraints. This eliminates false triggers of a potential situation.

The Situation Recognition comprises the cognitive functionality to assist in the recognition of the relations that exist among the features of interest for the situation being monitored. Once a potential situation is positively identified, reasoning mechanisms are employed to establish the facts of the evolving events as relevant and valid. Based on the events being semantically annotated and on the availability of information about the goals and other policies, a correlation of the various features of interest is performed. Reasoning mechanisms can help establish the facts based on predefined rules.

The Situation Classification relies on reasoning and learning mechanisms to classify the established facts as a situation that needs to be addressed. The learning methods employed depend on the situation type.

The Situation Projection comprises the cognitive functionality to analyse which facts best serve the Monitoring Goal derived from the analysis of the service request (Goal Analysis) and provided by the Intent Recognition.

WP5 considers specific CVOs that are devoted to the particular task of observation of events through CVO processing, as particularly meaningful and relevant to the situation that a particular person or service is in. These CVOs are called Situation Observers (SO). The SOs are considered to provide runtime input to the Situation Awareness through the Queried Fast Collector. More information about this functionality may be found in [10]; a summary is also given in the CVO level functional blocks discussion in this document. icore is currently investigating the case in which the Situation Awareness functional block is fully substituted, in terms of functionality, by SOs with respect to the needed runtime operation (i.e. real-time situational events are handled by CVOs, as prescribed by the RWK and the Service Request analysis at the Service Level).

Queried Fast Collector: This assistant block is used to aggregate the subscribed event streams and to deliver the outcome to the Situation Detection of the Situation Awareness, enabling the latter to create the RWK. The information is stored in the Real World Information (RWinfo) Database.

Real World Knowledge Model: For the internal data representation of RWK, we assume that knowledge can be captured in RDF graphs, and so an RWK Model Store can be used to memorize the reflection of the real world's rules of behavior in the icore system. (Note that this does not include the real-time events reflecting the actual status of the real world, the RWinfo.) More information about the RWK Model and RWK Model Store may be found in [10].

3.3.2 CVO Level

Figure 12 depicts the functional architecture of the CVO Level. At the end of section 3.3, Figure 14 illustrates the whole functional architecture consolidating all icore levels. The link of the functional architecture with potential technologies is provided in chapter 4.

Figure 12: Functional Architecture of the CVO Level

The CVO level receives a Service Execution Request from the Service level. The first point of contact in the CVO level is the CVO Management Unit.

CVO Management Unit: The CVO Management Unit is mostly concerned with the execution / runtime phase. The first point of contact in the CVO Management Unit after a Service Execution Request is the CVO Lifecycle Manager. The CVO Lifecycle Manager may be considered as an intelligent monitoring unit keeping track of the changing states of the collection of running CVOs. Based on several trigger events and conditions, the states of CVOs may change, causing a specific action. Moreover, the lifecycle manager may decide to keep some CVOs alive for some time (SLA- or System Knowledge-based), even though the service execution to which they initially belonged has already ended. At the time of a Service Execution Request, the lifecycle manager needs to check whether CVO instances for a specific CVO template are already running and could be reused; otherwise it needs to instantiate a new CVO through the CVO Factory. So, some CVO instances may be jointly in use by multiple services, as they were reused across multiple service execution requests, but large parts of the graphs of CVO instances being executed are expected to belong to one single service instance only at a given moment in time. Nevertheless, when CVOs are used in the context of multiple service execution requests (and especially when VO resources are involved), the potentially different service objectives may affect the same resources of the icore system configuration and/or sensor/actuator/resource configuration, so that, when operating simultaneously, they may result in undesired conflicts and system instabilities. In this respect, the Coordination functional block is foreseen as a means of conflict resolution. The Performance Management intends to guarantee the proper performance of not only the CVO Level (functional blocks and CVO instances) but also the VO Level (functional blocks and VO instances) in terms of satisfying specific Key Performance Indicator (KPI) thresholds. The Quality Assurance targets mainly the satisfaction of the SLAs as delivered by the Service Level. Both the Performance Management and the Quality Assurance may trigger reconfiguration actions in order to improve the performance and the quality of service, respectively. More information about the CVO Management Unit may be retrieved from [11].

CVO Factory: While the CVO Management Unit is more about the execution / runtime aspects of resource management, the CVO Factory is concerned with the production of new running CVO instances. When a service execution request is handed over by the Service Level to the CVO Lifecycle Manager, containing a number of CVO template names as part of the service description, the first activity is the Approximation and Reuse Opportunity Detection. This functional block first performs a search in order to discover potentially available, relevant CVO instances of the requested CVO template names that can be reused. For this purpose, the functional block compares the parameters in the Service Execution Request with the corresponding recorded parameters in the CVO Registry (a registry where metadata about logged CVOs are stored). Functional Similarity / Proximity, possibly derived from cognitive mechanisms, is determined in order to decide on the reuse or concurrent use of an existing CVO instance.
This may even allow for approximation when no exact match exists between the templates available in the CVO Template Repository and the requested CVO template names. If a match is found, the functional block triggers the use of the existing CVO instance, reusing it from a previous service execution or concurrently with another still running service. If no match is found, then the CVO instance is created from scratch. After having produced or identified each needed CVO instance for the particular service requested to be executed, the remainder of the service execution request is passed to the CVO Composition Engine, which will mash up the appropriate VO instances and CVOs to form the complete service graph, ready for execution.

The CVO Templates contain information about how basic functions offered by specific VO types, described in a specific format (as stipulated in VO Templates), can be combined to build a more complex CVO composition according to the specific logic and constraints of the service requested. The CVO Composition Engine also registers any new CVO instances in the CVO Registry, along with their status (running, in use by which requests, etc.). The CVO Composition Engine then communicates with the VO Level to ask for VO instances of the required VO templates. The Orchestration and Workflow Management is one means the icore system has to further manage the dynamics of the running graphs of CVOs. Furthermore, System Knowledge (SK) based cognition can be exploited in order to identify further optimizations. For that purpose, the functional blocks of the CVO Factory are supported by Learning Mechanisms in order to build SK for faster and more sophisticated decisions. More information about the CVO Factory may be retrieved from [11].

System Knowledge Model: The System Knowledge that is built by the Learning Mechanisms is stored in the System Knowledge (SK) Model. More information about the SK Model and SK Model Store may be retrieved from [11].

CVO Template Repository: The CVO Template Repository contains a (semantically and) functionally query-able collection of CVO Templates. The CVO Templates are stored by the Data Processing Domain Expert/Developer. More information about this functionality may be found in [11] and in section 3.7 of this document.

CVO Registry: The CVO Registry contains metadata for each deployed CVO instance, which is preserved for a specific time period. This metadata includes: (a) the CVO identifier, (b) the request parameters that led to the creation of the CVO, (c) the situation parameters that represent the context in which the CVO was created and (d) the VOs that are connected to the CVO instance. More information about this functionality may be found in [11] and in section 3.7 of this document.

CVO Container: The CVO Container stands for the actual execution environment of the CVO instances. The CVO instances and the CVO container are monitored, controlled and managed by the CVO Management Unit. Event streams coming from the CVO container are (under certain conditions) aggregated by the Queried Fast Collector, bringing new facts, relevant to the RWK, to the Service Level.

Situation Observers: As stated previously in section 3.3.1, Situation Observers (SO) are specific CVOs devoted to the particular task of observing events through CVO processing, as particularly meaningful and relevant to the situation that a particular person or service is in. The design goal here is that eventually an organically grown set of SOs is applicable generically to many applications and (derived) situational aspects. Again, this is of high importance, as it makes the growing of RWK more directed towards those observations most relevant to people and to the services they define or launch. The SOs are considered to provide runtime input to the Situation Awareness through the Queried Fast Collector. icore ultimately targets leveraging cognition (learning) techniques to identify universally relevant / generic SOs beyond (but using as a starting point the RWK already available in) the explicit observations auto-decomposable from particular service requests. Note, however, that the CVO Level does not need to explicitly distinguish SOs from any other CVOs. That distinction is in fact a Service Level matter, requiring just regular CVOs for execution. More information about this functionality may be found in [10].

3.3.3 VO Level

Virtualization is apparent at all levels of icore.
However, the VO level is the drive wheel of the virtualization, in the sense that the VO level is responsible for virtualizing the sensor (and actuation) data for any service needs. Besides the virtualization capabilities, the VO level comprises additional management capabilities for the control and management of virtual and real world objects.

Figure 13 depicts the functional architecture of the VO Level. At the end of section 3.3, Figure 14 illustrates the whole functional architecture consolidating all icore levels. The link of the functional architecture with potential technologies is provided in chapter 4.

Figure 13: Functional Architecture of the VO Level

The VO Level receives a VO Execution Request from the CVO Level. The VO Execution Request is issued by the CVO Factory (CVO Composition Engine) towards the VO Management Unit.

VO Management Unit: Like the CVO Management Unit, the VO Management Unit is concerned with managing the VO runtime instances, i.e. at execution / runtime. The first point of contact in the VO Management Unit after a VO Execution Request is the VO Lifecycle Manager. The VO Lifecycle Manager may be considered as an intelligent monitoring function keeping track of the lifecycle states of the individual VOs in the collection at a given moment. Based on trigger events and conditions, the states of VOs may change, causing a specific action. Moreover, the lifecycle manager may decide to keep some VOs alive for some additional time (SLA- or System Knowledge-based), although they are not currently used. At the time of the VO execution request, the lifecycle manager just needs to be informed about the request for a specific VO template and forwards the request to the VO Factory. The Coordination functional block is required in order to ensure normal runtime operation by resolving conflicts and instabilities deriving from different concurrently invoked VOs that wish to control the same facets of the icore system configuration and/or sensor/actuator/resource configuration. The Resource Optimization optimizes, by means of cognition, the operation of the underlying sensors, actuators and resources, e.g. by reducing the energy consumption. The Data Manipulation / Reconciliation takes care of the data management (e.g. also considering big data) and ensures the quality of the data, e.g. by interpolating missing data using machine learning techniques. The VO Management Unit is controlled by the CVO Management Unit. For instance, the CVO Lifecycle Manager cannot consider a CVO as alive if a VO used to compose this CVO is not alive in the VO Lifecycle Manager. Another example is that the Performance Management of the CVO Management Unit is responsible for the proper performance of the VO level (VO level functional blocks and VO instances), in terms of satisfying specific Key Performance Indicator (KPI) thresholds, through the VO Management Unit. More information about the VO Management Unit may be retrieved from [13].

VO Factory: While the VO Management Unit is more about the execution / runtime phase, the VO Factory concerns the design / bootstrapping phase. Let us assume a request that is forwarded by the VO Lifecycle Manager and contains the VO template names for which VO instances are searched. The VO Factory performs a search in the VO Registry (a registry where metadata about logged VO instances are stored), in order to discover potentially available, relevant VO instances of the requested VO template names (that exist in the CVO Template and are given through the CVO Execution Request) that can be reused.

The VO instance information, e.g. the Uniform Resource Identifiers (URIs) of the VO instances if seen as Web Services (WS), is returned to the VO Factory, which in turn forwards it to the VO Lifecycle Manager and subsequently to the CVO Composition Engine. More information about the VO Factory may be retrieved from [13].

VO Template Repository: The VO Template Repository contains a semantically query-able collection of VO Templates. The VO Templates are stored by the Device Manufacturer, who stands for the industrial actor that creates the devices (sensors, actuators, resources) and describes their capabilities (e.g. offered functions) in such templates. More information about this functionality may be retrieved from [13] and in section 3.7.

VO Registry: The VO Registry contains metadata for each installed VO, which is preserved for a specific time period. VO Registries store the semantically enriched data used for the description of the VOs, in order to be available anytime from anywhere. The stored information may include the instance name and the installation context, such as the VO identifier, associations with ICT and non-ICT objects, location and offered functions. The semantically enriched data of the VOs are written in the Resource Description Framework (RDF), which has been designed as a model for representing metadata about web resources, and stored in the RDF graph databases which implement the VO Registries. VO (and CVO) registries may be distributed across several domains. More information about this functionality and the VO information model may be retrieved from [13] and in section 3.7.

VO Container: The VO Container stands for the execution environment (WS host) of the VO instances. The VO instances and the VO container are monitored, controlled and managed by the CVO Management Unit through the VO Management Unit. Data are transmitted from the VO Container to the CVO Container. Each VO comprises two parts: the Front End and the Back End. The Front End is the abstract part of the VO, making it interoperable. It comprises the VO Template filled with the information specific to the VO instance. The Front End also helps in checking the access rights and in communicating with the IoT based on IETF protocols on top of IP. The Back End calls Device Manufacturer / vendor provided libraries for communicating with the RWO. However, icore need not store those libraries, since they are integrated into and encapsulated by the Back End code and are therefore not part of the VO Template. When the Back End needs to communicate with more than one RWO (sensors, actuators, resources), this can be done through a Gateway/Controller. More information about this functionality may be retrieved from [13].

3.3.4 Security and Privacy

Authentication and Authorization: Each actor stated in section 3.2 needs to be authenticated and authorized before interacting with the icore system. The authentication function is needed to ensure that a user or application is identified securely by the icore framework. The authorization function is used to determine the level of access of the authenticated party (role-based) and to grant access accordingly. The authorization function is linked to the access control function, which regulates the access to the services and resources (i.e., VOs and CVOs) in a sticky fashion. Further hints on how such an approach works are given in section 3.10.

Access Control: Before using a VO/CVO instance (and hence its data), the Service Requester needs to be granted access rights to access the VO/CVO.
This is typically most important in relation to the ownership of VOs and their data, but it also applies to CVOs, with respect to the VOs connected to them (providing the CVO with data) and with respect to the service and resource access implied by the use of the CVO's data processing logic.

Key Management System: Such a system will be needed in an icore system in a similar fashion as in existing solutions, but its exact positioning with respect to each of the blocks of the icore Functional Architecture still needs to be determined. The access control for resources is implemented by encrypting the VO/CVO with a set of keys, each related to a specific level of access.

The VO/CVO encryption is useful for the inter-domain distribution of data when VOs/CVOs are distributed across different domains based on the icore framework. It is also needed to store VOs/CVOs in storage areas which cannot be fully trusted (e.g., the memory of a commercial mobile device). The Key Management System is connected to the Access Control because each level of access is associated to a specific set of keys. Further hints are given in section 3.10.

The Functional Architecture of icore is depicted in Figure 14.


Figure 14: icore Functional Architecture
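To make the interplay between Access Control and the Key Management System described in section 3.3.4 more concrete, the following sketch assumes one symmetric key per access level and uses the Python cryptography library (Fernet) for the encryption. The level names and the sample data are placeholders, and the real icore key management is still to be positioned in the architecture.

```python
from cryptography.fernet import Fernet, InvalidToken

# Hypothetical sketch: each access level has its own key; VO/CVO data is encrypted
# with the key of the level required to read it, so only authorised parties
# (holding that level's key) can decrypt it, even in untrusted storage.

level_keys = {level: Fernet.generate_key() for level in ("public", "premium", "owner")}

def encrypt_for_level(data: bytes, level: str) -> bytes:
    return Fernet(level_keys[level]).encrypt(data)

def try_decrypt(token, granted_keys):
    for key in granted_keys.values():
        try:
            return Fernet(key).decrypt(token)
        except InvalidToken:
            continue
    return None  # the requester holds no key for the required access level

vo_reading = encrypt_for_level(b'{"temperature": 21.5}', "premium")
requester_keys = {"public": level_keys["public"]}      # authorised for "public" only
print(try_decrypt(vo_reading, requester_keys))         # None: access denied
requester_keys["premium"] = level_keys["premium"]      # after an authorisation upgrade
print(try_decrypt(vo_reading, requester_keys))         # b'{"temperature": 21.5}'
```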

Figure 15: Relationships with WP6 Use Cases

3.4 Consistency of the Functional Architecture with the WP6 work plan

This section provides some insights into the consistency of the functional architecture with the architectural features of the WP6 work plan [7], which are depicted in Figure 15. The mapping of the functional blocks to the architectural features of the WP6 work plan, which represent the sub-use case flows seen in the WP6 use cases, is as follows:

Service Level:
SL.1 Service Request Generation -> Service Requester Authentication - (Natural Language Processing) - Service Request Analysis - Service Template Repository
SL.2 Service Template Creation -> Domain Expert/Knowledge Engineer Authentication - Service Template Repository
SL.3 Service Factory -> Semantic Query Matcher - RDF Rules Inference Engine
SL.4 Real World Knowledge Generation -> Queried Fast Collector - Intent Recognition - User Characterization - Situation Awareness - Real World Knowledge Model
SL.5 Service Execution Request Interface -> Service Execution Request

CVO Level:
CVO.1 CVO Creation -> CVO Factory - CVO Template Repository
CVO.2 CVO Discovery -> CVO Registry - Access Control Function
CVO.3 Self-healing -> CVO - CVO Management Unit - CVO Factory
CVO.4 Registry and Templates -> Data Processing Domain Expert / Developer Authentication - CVO Template Repository - CVO Registry - Access Control Function
CVO.5 Active Interfacing with Upper (SL) and Lower (VO) Levels -> Interfaces of CVO level with Service and VO levels

VO Level:
VO.1 VO Self-Management -> VO Management Unit
VO.2 VO Dynamic Metadata Tagging -> Device Installer Authentication - VO Registry - Access Control Function
VO.3 VO Registry -> Device Installer Authentication - VO Registry - Access Control Function
VO.4 VO Northbound Interfacing -> Interfaces of VO level with CVO level
VO.5 VO Creation -> Device Manufacturer Authentication - VO Template Repository - VO Factory

3.5 High-Level Message Sequence Diagrams

This section includes some tentative MSCs (Figure 16, Figure 17, Figure 18, Figure 19, and Figure 20), in order to help understand the operation of the functional blocks.

Figure 16: VO Template provisioning in the VO Template Repository

Figure 17: Service Template provisioning in the Service Template Repository

CVO Template provisioning in the CVO Template Repository follows a similar process as the VO Template provisioning, involving the appropriate actor (Data Processing Domain Expert/Developer) and the CVO Template Repository; the corresponding figure is omitted for simplicity.

Figure 18: VO Installation in the VO Registry

Figure 19: Service Request handling part 1



Figure 20: Service Request handling part 2

3.6 VO Naming and Addressing

The sheer number of RWOs that will be part of the icore system through their virtual representations (VOs) requires suitable mechanisms in the icore architecture to cover the needs that arise for the naming and addressing of VOs. There are some key requirements on this topic that should be taken into account, namely (a) the need for survivability (e.g. the persistence of names and addresses in case of storage location changes, owner changes, etc.), (b) the need for automatic/autonomous naming of VOs and (c) the need to address the migration of VOs. The main vision around this topic approaches the use of VO Names as a short description of the referred object, and the use of VO Addresses as the location of the object.

In order to cover the above needs, as well as to realize the vision about the naming and addressing process in the icore system, appropriate centralized mechanism(s) should be designed and developed as part of the icore architecture as Naming and Addressing mechanism(s). The VO Naming and Addressing mechanisms should take over the generation of unique and persistent VO names and addresses, so as to cover the need for survivability and automatic naming of VOs while simultaneously addressing the migration of VOs. The persistence of names and addresses implies that they will remain valid in case of storage location changes, owner changes, etc. In addition, the names should be meaningful for machines, not necessarily for humans, allowing the effective use of VOs by machines and enhancing the M2M communication in the system. A potentially effective solution could be the use of Uniform Resource Identifiers (URIs). Moreover, devices may also be semantically tagged, automatically or by human intervention at installation time.

Considering the VOs as Web Resources that are available in the IoT, their naming and addressing, and consequently their unique identification, can be realized using URIs [14]. Using URIs, both naming and addressing can be satisfied in an efficient way, since a URI can be classified as a Uniform Resource Locator (URL), a Uniform Resource Name (URN), or both. Moreover, the exploitation of solutions that come from the Object Naming Service (ONS) [15], which is a kind of DNS extension, and from Unique ID (UID), which proposes various prototypes of Unique Identifier approaches, can enhance the functionality and the effectiveness of the Naming and Addressing mechanisms.

Another high priority requirement is VO migration, namely the capability of a VO to change administrative domain and to become available under a different address. Additional mechanisms regarding VO Naming and Addressing thus need to be included in the icore architecture. A solution based on Persistent URLs (PURLs) [16] may solve the migration challenge. PURLs are Web addresses that act as permanent locators and identifiers in the face of a dynamic and changing Web infrastructure. The main idea is that each Web Resource has a PURL that redirects HTTP requests to the current (potentially temporary) URL where the resource is available. The most relevant solution for the icore VOs is the use of PURLs of Type 303, which are used for Web Resources and are essentially Persistent URIs. When this solution is selected, the PURL mechanism will perform dynamic monitoring of the VO addresses, updating the current temporary addresses and maintaining the persistent address for each VO.
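As an illustration of the PURL Type 303 approach, the following sketch (using the Python requests library; the PURL host and the VO identifier are made-up placeholders) resolves a persistent VO identifier to the current, possibly temporary, address of the VO by following a single 303 "See Other" redirect.

```python
import requests

# Hypothetical example: resolve a persistent VO identifier (PURL) to the current,
# possibly temporary, address of the VO. The URLs below are illustrative placeholders.

def resolve_vo_address(persistent_uri: str) -> str:
    """Follow a single 303 'See Other' redirect from the PURL server."""
    response = requests.get(persistent_uri, allow_redirects=False, timeout=5)
    if response.status_code == 303 and "Location" in response.headers:
        return response.headers["Location"]      # current address of the VO
    return persistent_uri                         # no redirect: the PURL is the address

# current = resolve_vo_address("http://purl.example.org/icore/vo/temperature-sensor-42")
# print(current)  # e.g. "http://gateway.example.org/vo/temperature-sensor-42"
```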
3.7 Templates, Data and Metadata

This section provides further information about the templates, data and metadata used and considered in the icore functional architecture. Sections 3.7.1 and 3.7.2 detail the VO Model. Sections 3.7.3, 3.7.4 and 3.7.5 give insights into the VO, CVO and Service Templates respectively. Finally, section 3.7.6 provides some brief information about (C)VO registries.

3.7.1 VO Model

A Virtual Object (VO) provides the virtual abstract representation of Real World Objects (RWOs) sensed or actuated by means of Information and Communication Technology (ICT), such as network connected sensors, actuators, and even mobile devices such as smartphones, and more. The actual RWOs, such as buildings, rooms, persons, cars, vegetables, etc., are distinguished from the ICT Objects and are therefore called non-ICT Objects in icore. However, the terminology used in the VO model (and in general in icore) should be considered with respect to the terminology (and standards) commonly used at this IoT level. D7.2 ([8]) identifies the standards relevant to icore, with details on how, where and which standards can contribute to and/or benefit from the icore framework and functional architecture. It should be noted that icore intends to be compliant with some of these standards; but even if there is no complete terminology reuse, a full mapping between icore's terminology and that of these standards will be attempted.

A VO thus is ICT Object-centric and is owned by the icore user that owns the ICT Object. The VO may have one or more VO Parameters, each of which, depending on its type, may have specific access rights as well as specific use charges, issued by the owner, associated with it. A VO also reflects the specific functionality that the ICT Object may have, each function being exposed as a capability, again possibly each being associated with a usage cost. The VO representation also includes information for the further description of the ICT as well as the associated non-ICT Objects. Such information can be subdivided into ICT Parameters and Installation Context Parameters of the VO. An ICT Parameter can include information about the specifications of the ICT Object and other necessary data regarding the ICT. For instance, in the case where the ICT Object is a sensor, a potential ICT Parameter could be the range or the accuracy of the sensor. The other parameters reveal which actual RWO context is associated with it and how. With respect to the Installation Context, the RWOs (statically or dynamically) associated with the VO may for example have a geographical location. (Note that an ICT and a non-ICT object in a VO do not need to have strictly the same geo-location. A typical example is a camera observing a building some meters away from the actual camera location.)

Many of the VO parameters discussed above could simply be expressed in a human readable and understandable manner; structured semantic representations of this data, such as RDF and OWL, make this data also machine readable and understandable [35]. For machine reasoning over VO properties, and for systematic interoperability across icore (and even beyond icore), this semantic VO enrichment is key.

3.7.2 VO Model meta-data analysis

Different types of meta-data have been defined for the description of the VO Model concepts and their relations. Figure 21 represents the VO Model as a graph data model comprising all concepts and their properties, as they can be used for the semantic enrichment of VOs and their components.

54 hasictparameter ICT Parameter hasuri :URI hasname :String hasfeaturetype :URI hasfeaturevalue :Literal represents ICT Object hasuri :URI isassociatedto hasictlocation Virtual Object hasuri :URI hastype :URI hasdeploymentinfo :URI hasstatus :String ismobile :boolean hasvoparameter VO Parameter hasuri :URI hasname :String hasfeaturetype :URI hasfeaturevalue :Literal non-ict Object hasuri :URI hasname :String hastype :URI hasowner hasvoparameteraccessright offersfunction User hasuri :URI hasfunctionaccessright Access Right hastype :URI hasvalue :Literal VO Function hasuri :URI hasname :String hasdescription :String hasinput :xsd^^string[] hasoutput :xsd^^string[] hasinputparameter hasoutputparameter hasvofunctionfeature Input parameter hasname :String hasfeaturetype :URI hasfeaturevalue :Literal Output parameter hasname :String hasfeaturetype :URI hasfeaturevalue :Literal hasvoparameterbillingcost hasfunctionbillingcost Geo Location haslongitude :float haslatitude :float hasaltitude :decimal hasnonictlocation Billing Cost hastype :URI hasvalue :Literal VO Function Feature hastype: URI hasname :String hasvalue :xsd:decimal Figure 21: Graph data model of the VO Model As shown, the VO Model distinguishes three main parts; (a) Virtual Object, (b) ICT Object and non-ict Object and (c) VO Function. In the following subsections there is the detailed description of each part Virtual Object meta-data analysis The first part of information that is included in the VO Model refers to the VO, its properties and its direct associations with other entities. The Virtual Object concept, that constitutes the root element in the VO Model, is directly associated with the following concepts/entities; (a) ICT Object, (b) VO Parameter, (c) VO Function and (d) User. Figure 16 depicts the VO Graph Data Model, while further details about VO direct relationships/associations and VO properties can be found in [14]. Grant Agreement number: Page 54 of 136

55 Virtual Object hasuri :URI hastype :URI hasdeploymentinfo :URI hasstatus :String ismobile :boolean represents ICT Object hasuri :URI offersfunction hasowner hasvoparameter User hasuri :URI refers to icore User Instance Instance of User USER URI refers to ICT Instance Instance of ICT Object ICT Object URI VO Parameter hasuri :URI hasname :String hasfeaturetype :URI hasfeaturevalue :Literal VO Function hasuri :URI hasname :String hasdescription :String hasinput :xsd^^string[] hasoutput :xsd^^string[] refers tovo Function Instance Instance of VO Function VO FUNCTION URI hasvoparameterbillingcost Billing Cost hastype :URI hasvalue :Literal Access Right hastype :URI hasvalue :Literal hasvoparameteraccessright ICT Object and non-ict Object meta-data analysis Figure 22: Virtual Object Graph Data Model part The second part of information that is included in the VO Model refers to the ICT object that is represented by a VO, its properties and its direct associations with other entities. The ICT Object concept is directly associated with the following concepts/entities; (a) ICT Parameters, (b) Geo Location and (c) non-ict Object. In addition the non-ict Object concept in turn is associated with a specific physical location, which in some cases can be different by the ICT Object physical location. Figure 17 depicts the ICT Object & non-ict Object Graph Data Model, while further details about the ICT Object & non-ict Object direct relationships/associations and properties can be found in [14]. ICT Object hasuri :URI isassociatedto hasictparameter hasictlocation ICT Parameter hasuri :URI hasname :String hasfeaturetype :URI hasfeaturevalue :Literal Geo Location haslongitude :float haslatitude :float hasaltitude :decimal non-ict Object hasuri :URI hasname :String hastype :URI hasnonictlocation Figure 23: ICT Object & non-ict Object Graph Data Model part Grant Agreement number: Page 55 of 136

56 VO Function meta-data analysis The third part of information that is included in the VO Model refers to the VO Function(s) that is/are offered by a VO, its properties and its direct associations with other entities. VO Function hasuri :URI hasname :String hasdescription :String hasinput :xsd^^string[] hasoutput :xsd^^string[] hasinputparameter hasoutputparameter Input parameter hasname :String hasfeaturetype :URI hasfeaturevalue :Literal Output parameter hasname :String hasfeaturetype :URI hasfeaturevalue :Literal hasvofunctionfeature hasfunctionbillingcost Billing Cost hastype :URI hasvalue :Literal hasfunctionaccessright Access Right hastype :URI hasvalue :Literal Figure 24: VO Function Graph Data Model part VO Function Feature hastype: URI hasname :String hasvalue :xsd:decimal The VO Function concept is directly associated with the following concepts/entities; (a) Input Parameter, (b) Output Parameter, (c) VO Function Feature, (d) Access Rights and (e) Billing Costs.. Figure 18 depicts the VO Function Graph Data Model, while further details about the VO Function direct relationships/associations and properties can be found in [14]. VO Template The VO Template is the key icore concept for the automated provisioning of VOs in an icore system, and enhances the development of VOs. As depicted in Figure 25, VO Templates constitute two parts: the VO Type and the VO Code.. Initially, the Device Manufacturer creates a VO Template, e.g. based on icore prototypes that show the VO model and abstract interfaces, and stores the new template in the VO Template Repository.. In a second step, a device installer uses the VO Template to create an instance for the installed ICT Object, defining semantic annotations referring to the installation context, as well as other information such as future access rights for the VO (possibly defaulting to open access or no access if not explicitly specified). Figure 25: VO Template Grant Agreement number: Page 56 of 136

57 The VO Type essentially is a partial instantiation of the VO Model (VO Type VO Model), which comprises data about the description of the software specifications of the VO and based on the VO Model concepts we could say that it comprises information for the followings; (a) ICT Object, (b) ICT Parameters, (c) VO Parameters and (d) VO Functions. The VO Type is developing by the Device Manufacturer who introduces in the icore system a new type of a device, which will be represented in the virtual world through a VO. The other concepts, namely the User (VO owner), the non-ict Object, the Geo Location of ICT and non-ict as well as the concepts of specific Access Rights and Billing Costs for VO Parameters and VO Functions are not complemented with data by the Device Manufacturer but they are described by the Device Installer who uses the VO Template in order to create a VO for a specific application domain that is characterized by domain-specific information. It is easy to observe that the full description of a VO is developed both by the Device Manufacturer and the Device Installer and its completed form constitutes a VO Model instance, which is stored in the VO Registry during the registration process (Figure 26). The VO Code is the implementation of the VO as a software entity, implementing the functions as specified in the VO Type. It is the RWO driver so that the RWO becomes compatible with the icore system. In particular we could define the VO Code as the combination of (a) the device software that is provided by device manufacturer (VO Back-end) and (b) the implementation of abstract interface (VO Front-end) that is defined in the VO API and it is provided by icore to RWO manufacturers/owners, rs, etc. After the installation of a VO, the VO Code can be deployed in a VO Container where is running. Figure 26: VO Template and VO Installation, Registration and Deployment process CVO Template The CVO Template (Figure 27) can be used to create and find CVOs that offer specific services and include specific VO Types with specific functions. In addition each CVO has its logic and based on this, renders services executable in the context of service execution requests from the icore Service Level. In general, the CVO Template is comprised of three main parts (a) the VO Template(s) definitions, (b) Search Constraints and (c) the CVO Logic. Grant Agreement number: Page 57 of 136

58 The VO Templates definitions represent the VOs that have been included in the CVO composition and provide specific functionalities that are implemented by the VO Code, which in turn it is described by the VO Type. Furthermore, a CVO Template is associated with a set of search constraints that define some criteria for to discover the CVO. Finally, the CVO Template comprises the CVO Logic that is developed as a set of conditions and actions. CVOType FIND CVO VO Template VOType :: X-sensor getmeasurementx() Subject to constraints: - Search Constraint - X -. - Search Constraint - Z VOx VO Template VOType :: Y-actuator setswitchy() USE Logic: if(condition){ perform this action; } If getmeasurementx() <Logical Operator> thresholdx then setswitchy() VOy Service Template Figure 27: CVO Template As mentioned in 3.3.1, in a basic scenario, a Service Request (as identified earlier), made by a User (be it a non-expert human, a programmer or a piece of application software), specifies how to instantiate a particular icore Service Template, i.e. the request specifies: - the template type, and - which (range of) streams and possible other parameters to use as instantiation arguments. From the Service Request, the template can thus be selected and be fed with the instantiation arguments, resulting in the expression of a logical graph mashing up specific (possibly at run time determined) streams (VOs) with specific execution units (CVOs). In the SPARQL-based implementation, a Service Template is expressed by a SPARQL CONSTRUCT statement, which is triggered in the RDF Rules inference engine as a result of the service request (SPARQL query) using the RDF concept introduced by the template. An example Service Template ( crowd ) may look like this: CONSTRUCT {_:c base:a base:crowd; :input?stream; base:city?city.} WHERE {?stream base:streamof base:person; base:city?city. #!CEP_PROCESS (name= crowd_detector, stream=?stream) } Grant Agreement number: Page 58 of 136

59 The example expresses that for answering queries about potential crowds in a particular geographical area (simplified to city names here), a crowd_detector (a type of CVO) must be applied to (actually: will be deployed for) each person stream in that geographical area. So, the template expresses a specific logical graph connecting person streams to crowd_detectors. Note that, in contrast to icore Service Templates, CVO Templates essentially specify execution-time stream processing procedures, such as crowd_detector, that are active during the actual service graph execution. So, CVO Templates are not concerned with the overall Service Template-specified (static part of) the CVO/VO logical graph for a (to-be-)executed icore Service, and rather consider run-time processing (e.g. CEP) only. In fact, the example illustrates that Service Templates also can reference (i.e. use) other Service Templates. In the example, person is another RW concept of the more complicated type, i.e. it is defined by a corresponding Service Template. Using the concept crowd in a Service Request therefore implies a logical graph using VOs and CVOs as needed to construct live person streams, according to the person template, extended with crowd_detector CVOs as stipulated in the crowd template. In this way, a Service Template can leverage and build upon other Service Templates to describe quite extensive logical graphs of VOs and CVOs. [Furthermore, next to literal instance qualifiers, Service Templates may have Service Templates, CVO Templates and VO Templates as arguments, next to other complex constraint expressions.] As already implied before, icore Service Requests are translated into actual service descriptions by matching them to an appropriate icore Service Template, as available in a Service Template Repository. Like with the application domain ontologies that are assumed to be part of the RWK Model, it is assumed that a skilled domain expert or knowledge engineer populates the Service Template Repository with the range of templates as applicable and useful to the domain of services that can be fulfilled by an actual icore system deployment. Tools can be envisioned to help such external stakeholder to define and create specific templates or even entire template hierarchies, including actual coding of reusable code-fragments part of it. However, like also again ontology engineering (by itself), template creation tooling seems to lie outside the main focus of the icore research. We therefore do not address it (yet) at this stage of the icore Service Level analysis. For the SPARQL-based implementation currently considered and the current experiments conducted with it, it is practically workable to directly express the SPARQL CONSTRUCTS statements by which the Service Templates are expressed (internally in fact resulting in storage of rules applicable to RDF arguments). The population of the Service Template thus is done by making SPARQL CONSTRUCT statements to the system. As the system has a repository and a service request mechanism, there is no requirement to pre-populate the Service Template Repository at system deployment and configuration time. Indeed, as the need arises during the icore system s lifespan, new Service Templates can be added according to the evolving needs to allow for a broader set of supported Service Requests. 
Further information can be found in [11] (C)VO Registry The (C)VO Registry is a RDF Graph Database (RDF GDB) that stores RDF Triples for the description of available (C)VOs in the icore system and it is hosted on a semantic repository e.g. on Sesame Server. It provides most of its functionality via its Application Programming Interface (API) that makes it quick and easy; (a) the creation of semantically enriched descriptions of (C)VOs, (b) the interaction with stored (C)VO data through SPARQL requests (query, update, insert, delete) and (c) the communication with external entities in the system. The (C)VO registry can also include the level of access of the (C)VO to refine the searching capabilities based on the level of access. For example, only the (C)VOs, which can be accessed by a user are provided by a search in a (C)VO registry. Please, note that this is an optional feature because a Domain manager may also decide to provide to the user the list of (C)VOs with no access, so that the user can specifically ask the access to the icore administrator or a Domain manager. Grant Agreement number: Page 59 of 136

60 Further details can be retrieved in [12] and [14]. 3.8 Functional Interfaces This section presents the functional interfaces of icore. The interfaces are separated in two kinds: the external interfaces, used by the various users of icore (the actors), and the internal interfaces between the various components. This section will present only an overview of the interfaces to complement the architecture. Only functional interfaces are presented, supervision oriented interfaces (for ex. Start/Stop component) are not represented for the sake of comprehension. The detail of the interfaces can be found in the respective deliverables for WP3, 4 and 5. The diagrams are using the following color convention: blue for Service level, green for CVO level and orange for VO level External Interfaces As already mentioned before, the Service Level also hosts the User Interface of icore. It is the main entry point of icore as a whole. icore is a middleware: the End User will not access it directly but will use an application that will, in turn, connect to icore. Store service template Read RWK Store ontology Service Level Write RWK Store CVO template Store VO template Install VO CVO Level VO Level To RWO Figure 28: External Interfaces The Figure 28 is showing the external interfaces of icore. The VO level is exposing Install VO, used by the Device Installer actor, and Store VO template used by the Device Manufacturer actor. The CVO level is exposing the Store CVO template interface, used by the Data Processing Domain Expert actor. The Service Level is accessible through several interfaces: the Store ontology, Write RWK and Store service template are used by the Domain Expert/Knowledge Engineer, and finally the Service Request, NL Service Request, Service Data and Read RWK used by the Service Requester actor, through an application in most of the cases. All these interfaces will be described more in details in the following section Component interfaces The components interfaces will be presented in the following diagrams. For the sake or comprehension, the components have been grouped by level (Service, CVO and VO) and by phase (design, instantiation and execution), resulting in 9 diagrams. At the end of the section a table is summarizing the interfaces exposed by all components. Grant Agreement number: Page 60 of 136

61 The VO Level Interfaces Design phase VO Installation phase Figure 29: Interfaces of components involved in VO design phase VO Management Unit VO Factory VO Container VO Template Repository Store VO template VO creation request VO creation Addnew VO Store metadata Retrieve template VO Registry VO Template Repository Figure 30: Interfaces of components involved in VO installation phase As seen in Figure 29, the only component used during the VO design phase is the VO template repository, used to store the VO template. This component exposes a Store VO Template interface. As shown in Figure 30, the VO Management Unit exposes an interface called VO creation request, which will be used during the VO Installation phase. The VO creation request contains both the identification of the VO to instantiate, its instantiation parameters and some metadata about the VO (like a position if it s a non mobile VO). They are provided by the VO Installer actor (not displayed in this picture). The VO Management Unit then uses the Write metadata interface of the VO Registry to create an entry for the VO and populate it with the data provided. It also uses the VO Creation interface of the VO Factory to ask for the creation of the VO. The VO Factory, based on the identifier of the VO template, will then retrieve it from the VO Template Repository using its Retrieve Template interface and use the Add new VO interface of the VO Container to insert the VO instance into its running environment. Grant Agreement number: Page 61 of 136

62 VO execution phase VO Container RWO VO data VO Management Execution Unit control RWO interface Store Metadata VO Registry Figure 31: Interfaces of components involved in VO execution phase During Execution (Error! Reference source not found.), the VO Container is serving the VO data through its VO data interface (to upper layers). It reads the data from the RWO through their proprietary interfaces. The VO Management Unit uses the Execution Control interface of the VO Container to start or stop the VO. During execution, some metadata about the running VO can be stored/updated using the Store Metadata interface of the VO Registry, for example in the case the sample rate of the VO happen to change. The CVO Level Interfaces CVO design phase CVO Template Repository Store CVO template Figure 32: Interfaces of components involved in CVO design phase As shown in Figure 32, during the CVO design phase, the data processing expert actor uses the Store CVO template interface of the CVO Template repository to store the designed CVO template. CVO instantiation phase CVO Management Unit CVO Factory CVO Container CVO instantiation request CVO creation Addnew CVO Retrieve template CVO Template Repository As shown in Figure 33, the CVO Management Unit exposes a CVO instantiation request interface used at CVO instantiation phase (by the upper layers). The CVO instantiation process works quite similarly than at VO level: the CVO Factory is triggered through the CVO Creation interface; it retrieves the CVO template from the CVO Template Repository and adds the CVO instance in the CVO Container. Figure 33: Interfaces of components involved in CVO instantiation phase Grant Agreement number: Page 62 of 136

63 CVO execution phase SK Database Fact read Fact write Store metadata VO registry CVO Management Unit CVO Container Search VO Execution control VO Data Store Metadata Fact read Fact write VO Container CVO Registry RWK Database Figure 34: Interfaces of components involved in CVO execution phase During Execution as visible in Figure 34, the data is served through the CVO Data interface of the CVO container. The running CVOs are controlled with the Execution Control interface by the CVO Management Unit, which also store some metadata about the running CVOs in the CVO Registry. The CVO Container reads the VO data through the corresponding interface of the VO Container. During execution, the VO used by the CVO may be changed. In order to do that, the CVO container is performing a search in the VO registry for the appropriate running VO. Furthermore, some metadata can be derived by the CVOs about a specific VO, and this metadata will be written in the VO registry. For example, a temperature sensor can be attached to a GPS sensor. The data representing positions extracted from the GPS can be in turn considered as metadata representing the position of the temperature sensor, and be stored in the entry for that sensor in the VO registry. This metadata will then be available for queries in the VO registry about temperature sensors in a certain area. This is the implementation of the loop back between data and metadata in icore. The CVO Container is also notably containing a special kind of CVOs called CVO Situation Observers which are able to monitor other CVOs or RWOs and collect facts about them. These data are written in the RWK database through the Fact write interface. Some CVO may also read RWK in order to take decisions though the Fact read interface. In parallel, the CVO container is also maintaining System Knowledge in the SK Grant Agreement number: Page 63 of 136

64 Database, though the corresponding interfaces. Grant Agreement number: Page 64 of 136

65 The Service Level Interfaces Design phase Service Template Repository Store Service Template Write RWK RWK Database Store Ontology Figure 35: Interfaces of components involved in Service and Knowledge design phase At design phase as shown on Figure 35, two components are used. The Service Template repository is accessed through the Store Service Template interface to store new service templates. The RWK database is accessed to store a domain Ontology, and some preliminary RWK (static RWK) can also be written at design time, expressed using the previously stored ontology. For example, the RWK database can be initialized with an ontology for traffic monitoring, and then using this ontology cartographic information about streets can be entered. These interfaces are used by the Domain Expert/Knowledge Engineer actor (not displayed). Service Request phase Service execution phase Service Template Repository Query template Service Request Service Request Analysis NLP CVO Management Unit Natural Language Request Service Request Read U.C. Write U.C. CVO instantiation User Characterisation Figure 36: Interfaces of components involved in Service Request phase In Figure 36, a service request is performed using the Service Request interface of the Service Request Analysis Unit. It can also be performed in natural language using the corresponding interface. The Service Request Analysis component will then query the adequate template from the Service Template Repository. To refine this query with a specific user profile, it will also need the data from the User characterization module. In turn, the User characterization can be enriched on the basis of the services triggered by him. Finally, the CVOs described in the Service Template are instantiated using the corresponding interface of the CVO Management Unit. During the Service Execution phase, as shown in Figure 37, the CVO container is presenting an interface called Service data used by the Application to Grant Agreement number: Page 65 of 136

66 Service Management Unit Service data Execution control Application CVO Container read RWK RWK Database Situation Awareness Read Situation Observers Write RWK Figure 37: Interfaces of components involved in Service execution phase retrieve the corresponding data. The Application can also directly read the RWK that has been built by the service level in a reusable way using the read RWK interface. This RWK is built during the execution phase by the Situation Awareness component, using the information provided by the Situation Observers in the CVO Container. Additionally, the service represented by running CVOs is controlled by the Service Management Unit component Summary of the interfaces of the components Table 2 is presenting the interfaces exposed by each component. Component name VO Template Repository VO Management Unit VO Registry VO Container CVO template repository CVO Factory CVO Container CVO Management Unit CVO Registry SK Database Service template repository RWK database NLP Service request analysis User characterization Service Management Unit Situation Awareness Interface Store VO template Retrieve VO template VO creation request Store metadata Read metadata Search VO Add new VO VO data Execution control Store CVO template Retrieve CVO template CVO creation Add new CVO Execution control CVO instantiation request Store metadata Read metadata Search CVO Fact read Fact write Store service template Query template Store Ontology Read RWK Write RWK Natural language request Service request Read UC Write UC Manage Service Detect Situation Grant Agreement number: Page 66 of 136

67 Table 2: Components Interfaces 3.9 Cognitive Aspects Besides virtualization and large scale support, cognition is another cornerstone of the icore architecture. This section reviews some classic cognitive approaches and shows how they enrich the functionalities exposed by various icore elements, and support the systematic activities. The knowledge comes either from human expertise or from machine learning approaches. The cognitive cycle section exposes the framework that fits icore. By making a loop on the current context and the existing knowledge, one expects that an icore instance be able to adapt to various situations, have a pro-active attitude and alleviate the overall behavior. The next section, "Situation awareness targets the real world knowledge through three series of mechanisms, layered according to the involved degree of intelligence. The Complex event processing section describes some building blocks that may be used by situation awareness or by machine learning processes. Its role is to perform filtering/aggregation/simple pattern detection on a flood of data, based on human-developed rules. The Machine learning part deals with one cognitive mechanism, based on automatic pattern detection from data. We present some successful/classical ML approaches and their links to functional and systematic behavior are sketched. The "Embedding domain knowledge into icore section is complementary to the previously mentioned machine-derived intelligence. This knowledge model may be related either to Real World Knowledge or System Knowledge. The next section describes automatic reasoning mechanisms that shall be found in icore, e.g. by starting from sensor data and taking into account the context and/or requirements. Decision making is a mandatory aspect of icore cognitive management framework, e.g. by supporting the CVO composition engine block, the performance management block or the reuse opportunity detection. While the previous sections in chapter 3 describe the functional architecture in terms of functional blocks and functional interfaces, this section intends to describe cognitive mechanisms to populate these functional blocks. Since these cognitive mechanisms are currently under development in the WP3 (VO level), WP4 (CVO level), WP5 (Service Level), more insights on where, how and which cognitive mechanisms are located on the architectural components are expected in detail in deliverables of WP3, WP4 and WP5. However, this section intends to give some significant hints on the envisioned cognitive aspects and their relation to the icore functional architecture, even without a strict and complete positioning The cognitive cycle In general, incorporating a cognitive cycle into a system such as icore enables it to retain knowledge from observing the external environment (i.e. real world events/data) which is continually evolving and deciding upon its future behavior based (i) on the knowledge created, (ii) other goals and also Grant Agreement number: Page 67 of 136

68 (iii) policies, so as to optimize the performance (e.g. A direct actuation), as shown in Figure 38. Cognitive Cycle Observe Evolving Real World icore Mach ine learni ng Knowledge Figure 38: A high-level overview of the cognitive cycle and relation to icore Generating knowledge is the most crucial part of cognitive cycle process, typically starting from the observation of new real world context features (initial state) that are either triggered as consequents by sensor data processing tasks (i.e. event-condition based processing) at the service execution Level (i.e. CVO level), as well as the explicit modeling of the user s interaction with icore, brought about by the observation and characterization of service requests by requesting actors, with time (i.e. service level). As a result, new situations can be extracted from basic or raw (streams of) sensor data, with additional machine learning techniques embedded to help create the necessary awareness concerning the real world being observed. The primary need for a cognitive cycle in icore is therefore to allow domain experts to include their specific domain knowledge into the process (e.g. by means of expert systems, constraint based rules, and Bayesian graphical models, each coming with specific inductive bias) so that a degree of control can be influenced onto the icore system and predictive behavior developed, in order to generate relevant real world knowledge and actuation. Beside the reactive nature of these approaches based on the embedded intelligence, one can consider other some mechanisms that are able to learn from data and automatically build predictive models Situation awareness A pervasive computing system, such as icore, should not be concerned with individual pieces of sensor data, but have the ability to build on events (i.e. low level situations) that are detected and interpreted from real world sensor data, into a higher domain relevant representation. This representation should encapsulate and describe the current abstract state of affairs concerning the monitored environment, which is relevant to the application in question (e.g. user is outside and is exercising ) and is generally referred to as situation awareness (SA) and eventually becomes embedded as real world knowledge. Figure 39 below illustrates the position of SA in icore. As shown, the opportunity of having SA within icore therefore lies in its ability to provide a simple, human understandable and real world knowledge representation of sensor data to icore applications, whilst shielding the application from Grant Agreement number: Page 68 of 136

69 the complexities of the different data streams used and the uncertainties (i.e. false alarm constraints) of the associated sensor readings themselves. Real World Knowledge (icore SA) A representation about the current real world facts and dynamic status of the monitored environment icore Applications SITUATIONAL AWARENESS icore Level 3: Projection Current Situation Level 2: Comprehension Extracted Context features Level 1: Perception SITUATION PROJECTION SITUATION CLASSIFICATION SITUATION RECOGNITION SITUATION DETECTION Real World Sensor Data Data Processing Developer/Domain Expert Provides Event Based Rules Inference on Context Features Domain Expert / Knowledge Engineer Provides Application Domain Ontology Figure 39: Position of Situation Awareness in icore As shown in Figure 39, achieving icore SA requires a series of mechanisms (levels 1 to 3) to extract real world knowledge from sensor data. This involves the aggregation and analysis of different pieces of observed context over time, in order of intelligence, as follows: Level 1: Low level situations detected using mechanisms such as CEP, using templates provided by a data processing developer, which outline the rules required to extract the required context. Level 2: Recognition and classification of what the detected situation then implies to the wider application domain, facilitated by domain-specific background knowledge, in order to explain (infer) the behavior of the context being observed ( context awareness ) and determine the current situation. Level 3: Situation Awareness is generated here and deals with how the context awareness from level 2 may change (projection) based on evidence provided by observed historical data and previously observed events, in order to aid the eventual decision making processes. In each of the above mentioned levels, functional blocks embedded with pre-defined goals (machine learning) and prior knowledge (domain knowledge), can help to improve the SA process. Cognition adds further intelligence to the functional elements within SA, which helps discern the situation and thereby eventually effect actuation. Integrating cognitive techniques into the SA levels can build and enhance the SA process, in a series of steps: Grant Agreement number: Page 69 of 136

70 Situation Detection (level 1): Sensor data being processed can satisfy any of the event-based criteria conditions (CEP) provided by a domain developer and thus trigger a potential low-level situation. Machine learning maybe applied here to eliminate false triggers (alarms) of a potential situation and provide an association mechanism to correlate sensor data streams, with view towards identifying new event-based rules. While CEP rules can embed human designed mechanisms, machine learning is able to extract automatically the features of interest from historical event stream data and provide an update mechanism (e.g. adapt event condition values) to the overall CEP workflow. Situation Recognition and Classification (level 2): Once a situation is positively identified, reasoning mechanisms can be employed to establish/recognize which low-level situations are relevant and valid. Recognition of the most appropriate situation can be brought about by applying reasoning mechanisms that incorporate pre-defined rules provided by a domain knowledge expert, to establish the relationships and causes (i.e. to bridge the knowledge gap between detected situations) about the real world environment being monitored. Traditionally a domain knowledge model is stored in a RDF database, with existing semantic and ontology reasoning engines (e.g. Jena, Sesame) providing a query interface (e.g. SPARQL interface endpoint) to the database, to help query/substantiate the spatial and temporal meanings for a given sequence of detected situations, from level 1. Causal reasoning (e.g. Bayesian reasoning) can be used to enable classification of positive relationships that exist between two or more recognized situations. The classification process here helps to choose a probable single selection (output) from a list of pre-determined potential inputs that are provided by events triggered by the detection rules used in level 1. Situation Projection (level 3): Machine learning can be applied to analyses observed/historical data to aid the process of extracting useful information to determine, for example, future trends and behavioral patterns. Learning mechanisms can be applied to provide a projection estimate concerning the situation that has been classified in level 2. This level might contribute to producing RWK and SK Complex event processing Complex Event Processing (CEP) is defined as a method of tracking and analyzing streams of information/data about events. The complex attribute in CEP refers to the events to be processed, not to the process itself. CEP represents the techniques and tools which target the processing in near real-time of event streams, to extract simple situations (e.g. hot or cold, fire alarm, traffic) that exist within the event streams. It provides powerful yet facile mechanisms for expressing structural and temporal relations between the events, and the also ability to define and detect event patterns. The main categories of event processing applications are: observation, information dissemination, dynamic operational behavior, active diagnostic and predictive processing. These might overlap and some of them are rather less supported by CEP e.g. prediction. The following are the major types of agents (i.e. a cognitive function) that CEP can use: Filter agents used to eliminate uninteresting events; the interestingness is encoded by a human-defined test. Pattern detect agents determines whether a collection of events contains a predefined combination of particular interest. 
Transformation agents to modify or augment the content of the event objects. These agents can be chained in a human-defined workflow, based on specific domain expertise. Moreover, based on a set of previous experiences (i.e. contexts and associated CEP solutions), and on Grant Agreement number: Page 70 of 136

71 a suitable similarity measure dictated by the domain knowledge, a case-based reasoning module might automatically support the creation of such workflows. The CEP functionality matches the CVO and Service levels and might be a building block for Situation awareness. For example, as a CVO aggregates multiple VOs, it could use the following CEP workflow defined as below: Filtering of the data (e.g. remove data with missing /outlier value, low-pass/high-pass filters, etc.). Transform the resulted values (e.g. compute aggregated values, doing scale transformation, correct values based on human defined or ML-deduced rules etc.). Perform a match between the resulted stream and some predefined queries. Such a workflow is composed of standardized CEP building blocks, which themselves are deployed and mapped to as icore functional objects. By including these blocks into the cognitive plane, they can be reused by making them available both to CVO and to the service levels. A typical CEP workflow is represented by a service oriented template and can be combined at runtime with the systematic template to produce the final object template. Such a CEP workflow can be defined by a human expert and saved for further reuse, thus allowing for embedding and reusing RWK Machine learning Machine learning (ML) starts from empirical data and by inductive process, discovers the underlying patterns and models. Informally, machine learning is a program s feature to improve its performance at some task through experience with data thus making ML an appealing feature to be added within icore. The ML approach can be applied by building data-driven models, starting either from raw input or based on output of CEP workflows. The cases below show how ML could enrich icore abilities: Automatically building models to detect anomalies/outliers in a stream of data. The resulting models are the counterparts of the CEP filter agents and in some cases they can be automatically built. The derived models might be further linked to specific scenarios and used as metadata. Time series forecasting, predicting values that are supplied when a specific RWO temporarily stops to deliver data; this scenario is useful when an estimate of most likely future values is required instead of no value at all. Beside this, the forecast mechanism can be applied to predict icore behavioral pattern for some components (e.g. to forecast an out of service status for a VO, based on previously recorded activity) and thus to avoid selecting that VO to serve data in a time range when it s likely to be unavailable; this later scenario enriches the SK. Predicting classes or continuous output values which are most likely to be associated to current input values, based on the model learnt so far. Depending on the model one employs for this task, the knowledge might be further exposed as rules, formulas, decision tables, etc., all being valid forms of RWK. Learning associations between values and producing association rules; they may be further interpreted by human experts (i.e., RWK) or used for automatic proactive behavior (prediction, estimating missing values or detecting concept drifting; all these features are instances of servitization). As an example of using learning associations for generating system knowledge, one might start from the recorded history of successful links between various VO Grant Agreement number: Page 71 of 136

72 objects and, based on the derived associations, allowing for selecting the most actual resources for a CVO. Automatic feature extraction and feature learning are mostly used as pre-processing mechanisms, but might be also further exposed as metadata, feeding the metadata plane. The features having the most beneficial impact on the process at hand are to be further used, instead of the ones that are originally provided. Supplementary, the resulted features can significantly reduce the feature space dimension, with obvious computational benefits. Uncertainty management through the most classical approaches: probabilistic models and fuzzy systems, which offer support for various types of uncertainties noise, errors, degree of beliefs, both for I/O values, and for internal model structure. Beside the intrinsic support for uncertainty, both fuzzy systems and probabilistic models allow previously elicited domain knowledge to be embedded inside the icore. Graphical models allow deriving the most probable causes for specific evidence, and this might be valuable both as RWK and as SK. By allowing a human or an icore procedure to experiment what-if scenarios, the most relevant resources might be optimally selected for a specific process, based on the current context and past experience. The available learning methodologies are considerable and are to be chosen based on the application. Some of the commonly employed machine learning algorithms includes: ANNs, HMMs, Case-Based Learning, Decision Trees, Support Vector Machines, etc. A few of the methods are elucidated below: Artificial Neural Networks (ANN) - Computational mathematical models based on the natural neurons. Typical applications include: classification of activities, home automation/device control, user profiling, energy conservation, and prediction. Some of the limitations of the ANNs include considerable long training time and designing the optimal architecture could be tedious. Hidden Markov Models (HMM) - Finite state machines augmented with state transition probabilities. Each state affects the probability of the outcome for each specific event. HMM systems are assumed to be a Markov process with unobserved (hidden) states. Learning data patterns which are represented as a sequence of events over time, user behavior recognition. The disadvantage being HMMs are very complex to build and scalability is a concern. Decision Trees: they classify instances based on the feature values and on an automatically built set of rules. Decision tree consists of nodes that together form a rooted tree. Each node in a decision tree represents a feature in an instance to be classified and each branch represents a value that the node can assume. Instances are classified starting at the root node and sorted based on their feature values. DTs might be employed useful for monitoring the sequence of events. By means of service oriented templates (built by a human expert, see for example the cognitive side of Service level, as described in [11] one can add them as workflow blocks for different objects within the three icore levels Embedding domain knowledge into icore Section 3.2 mentions the importance of defining user roles and essentially categorizing the entities that deal with the system. In this section, we discuss issues about the interaction of an entity with the knowledge that resides in the icore system. In this respect, we talk about the relationship between entities and the knowledge model of icore. 
The knowledge model of icore may be related Grant Agreement number: Page 72 of 136

73 to either Real World Knowledge (RWK - representing the real world objects) or System Knowledge (SK - representing the operation of the icore system). In this section, we analyze issues regarding the insertion of knowledge in the icore System. We assume that the actor is a Domain Expert/Knowledge Engineer (3.2.1). This actor wants, in some way, to embed knowledge in the system. The Knowledge Engineer is responsible for upgrading the Service Templates of the Service Level or updating the RWK [10]. This process is carried out by exploiting the API of the icore system in the Service Level. In particular, the Knowledge Engineer might have the capability to use the API in order to insert some new rules or facts (if we speak in OWL/RDF terms) that correspond to a situation. This part of procedure is applied in the research area of Semantics technology. Then, inference/reasoning mechanisms can use these rules/facts in order to form the corresponding knowledge and help a Situation Observer (SO) to detect some event streams and infer about a situation (CEP). In the Service Level, reasoning / inference mechanisms can be applied to User Characterization and Service Request Analysis blocks (3.2.2). Moreover, we assume that the actor is a Data Processing Domain Expert/Developer (3.2.1) in the CVO Level, as it is indicated by the general architecture in section The Developer follows similar procedure to embed knowledge in icore as he uses the icore API to update CVO templates. The reasoning / inference mechanisms in the CVO Level can be applied into the CVO Factory block. In the VO Level, we assume the Device Installer as one of the main actors. The Device Installer, as mentioned above in section 3.2, manages to update through an appropriate API the VO registry with information about the installed device and the installation context. Besides, the Device Manufacturer provides a VO Template which becomes part of the knowledge of the icore system containing information about a specific device type and its capabilities. The latter actor might use the icore API, in order to insert the VO Template. In addition, domain knowledge in Machine Learning terms might mean to insert assumptions (meaning "inductive bias") for a Machine Learning technique. This will help the inference about a situation that is being observed by a Situation Observer. Moreover, domain knowledge might mean the insertion of conditional probabilities in a Bayesian Network. Moreover, we could imagine the icore administrator (3.2.1) through the Administration and Management I/F to select an appropriate Machine Learning technique for the observation of a situation. Although the icore envisions an automatic / autonomic operation, the actor s intervention should be allowed in terms of governance Reasoning in icore The reasoning mechanisms used for extracting value from sensor data are employed and built based on the context or the requirements. In general the reasoning mechanisms might be one of the following types: 1. Rule based/logical Reasoning Reasoning based on the pre-defined rules. The rules could be statements, constraints, or conditions to determine the outcome of a query. 2. Semantic/Ontology Reasoners Use of existing semantic and ontology reasoners such as, Jena, Sesame, and Description Logic based reasoners e.g. Pellet, RacerPro. The query languages like SPARQL, RQL, etc., are employed to query the database based on icore operational constraints. 3. 
Spatio-temporal and Causal Reasoning The spatio-temporal reasoning mechanism to substantiate the spatial and temporal properties of a given sequence of events. And the causal reasoning establishes if causality exists between two successive events i.e. logical sequence e.g. flipping on the switch turns on the light. These are essential in determining behaviour and are extensively used in smart environments. In addition, some of the reasoning mechanisms can incorporate learning which evolve over time exploiting prior user statistics. Grant Agreement number: Page 73 of 136

74 1. Neuro-Fuzzy Reasoning - Fuzzy logic provides an inference mechanism under cognitive uncertainty and is useful when vague or imprecise information is present. Neural Networks modifies the fuzzy parameters based on experience [18]. 2. Bayesian Reasoning A probabilistic reasoning model built on the Bayes principle and computes the probabilities for uncertain inputs based on the interactions of the variables in the environment which are conditioned based on prior usage statistics. 3. Case Based Reasoning - Uses prior knowledge/experience to make new decisions. Employed widely to study behavior patterns Decision Making Decision Making is a valuable aspect of the icore Cognitive Management Framework, which lies in a distributed manner in various parts of the CVO and the VO levels. In particular, Decision Making mechanisms can be found (indicative examples) in the Approximation & Reuse Opportunity Detection block and the CVO Composition Engine block of the CVO Factory, in the Performance Management block of the CVO Management Unit and in the Resource Optimization block of the VO Management Unit. Essentially, these mechanisms can be considered as optimization processes (such as Linear / Non-linear Programming, Constraint Programming, Meta-heuristics etc.), consisting typically of an objective function, which needs to be maximized (or minimized) and a number of constraints that must be satisfied with respect to the current service request and situation. Decision Making is very crucial in order to handle the large scale IoT comprising a huge amount of objects in an optimal manner. The Approximation and Reuse Opportunity Detection block performs a search in order to discover potentially available and relevant CVO instances of the requested CVO template names that exist in the Service Template and can be reused. In detail, the Decision Making mechanism of this block compares the current service request and current situation with past ones in the search for an adequate match. Past request records that contain CVO components (VOs) with functions that are unavailable in the current situation (either them or approximate ones) are filtered out, as they definitely cannot fulfill the application goals. One possible implementation could be by means of using a ranking of the remaining records based on a satisfaction-rate similarity metric and the highest ranked one is tested against the similarity threshold. The satisfaction rate depends on the amount of total requested functions that are available as well as their correlations and it is implemented as a score (i.e. sum) of these correlations between the set of the requested and the required CVO functions. Besides the functions, the overall similarity metric considers also the rest of the situation and request parameters. When this overall similarity metric exceeds a specific threshold, then the CVO is considered suitable for the newly request. Other possible implementations comprise ontology alignment, semantic similarity / semantic proximity, optimization techniques to minimize the difference between the new and past records etc. Regarding the CVO Composition Engine block, the mechanism is responsible for evaluating the virtual counterparts of real world objects, i.e. VO instances and finding their most suitable composition that fulfills the service requirements, maximizing the value of the composed CVO. 
For its operation, information regarding the request (functions, policies) and the situation parameters, derived from the Situation Awareness block or CVO Situation Observers (SO), is necessary. Furthermore, it communicates with the CVO Template Repository to request the CVO templates and with the VO level to ask for VO instances of the VO types / templates names that are included in the CVO template. Situation parameters are exploited, in order to identify the VOs that are compliant to them. Through this process, there is a first filtering of the available VOs, preventing unnecessary evaluation of VOs that cannot be utilized. In case that a specific function cannot be provided from any available VO, the Decision Making mechanism searches for VOs that can offer a similar (approximate) function. For this purpose, information about the correlation between different Grant Agreement number: Page 74 of 136

75 functions and whether a function is acceptable to be utilized instead of another is required. The service execution request (3.3) is formalized as an objective function accompanied with some constraints that some of them stand for the policies / Service Level Agreements (SLAs). The Performance Management and the Resource Optimization blocks are responsible for guaranteeing the proper performance of the CVO/VO level blocks as well as instances and for optimizing the operation of the real world objects, respectively. Decision Making mechanisms of these blocks issue commands for reconfiguration actions (e.g. for selecting and replacing VOs), when is needed, considering the desired policies of the system. Particularly, the service requester, when requesting a certain service, may specify the importance of certain features through a set of so-called policies. For instance, the requester may ask for an energy efficient service, a low delay transmission or a secure connection. To this end, VOs that have functions that are characterized by appropriate features need to be selected. The information about such features can be retrieved from the VO registry and considered accordingly. Each of the functions of each VO has a number of features that facilitate the VOs evaluation, such as performance, security level, imposed OPeration EXpenditures (OPEX), energy consumption, induced network latency, etc. For the optimization process, the features of each function of the considered VOs may be weighted according to the set of policies in order to depict the relative importance of the features. The ultimate objective is to find the composition of VOs and VOs functions that maximizes the value of the composed CVO taking into account the correlation of available VO functions and requested functions, as well as the provided VO function features and the defined policies Security Aspects Security and privacy are an important part of any system, and icore is no exception. Access level security and privacy is addressed here. In addition, we have presented the cognitive aspect that influences the access levels. Building cognition need not spoil the security features and they can coexist. This co-existence of the cognition along with access levels of icore services is discussed in this section. The icore framework is based on the concept that access to the VO/CVOs must be regulated through a sticky policy management approach. The underlying notion behind Sticky Policy [37] is that the policy applicable to a piece of data (e.g. VO) travels with it and is enforceable at every point it is used. In our proposal the framework combines the concept of sticky policies with the concept of VO/CVO, which are created and managed with their associated policies and access rights. Users will therefore be able to declare privacy statements defining when, how and to what extent their personal information (also stored in a VO) can be disclosed. Note that there are different levels and types of access rights: create, read, modify. The application of sticky policies allows the VO/CVO to be distributed across different domains in several operational scenarios, preserving the information stored in them. The concept is that the VO/CVOs are encrypted and signed in one domain and not accessible by unauthorized users, even if they are distributed across untrusted networks and domains. 
An authorized user in another domain (which is icore enabled) can access the encrypted VOs/CVOs through an icore portal, which internally receives the keys through a secure distributed key management system. Under specific rules, access rights can be delegated: a virtual object can acquire, for a specific time, space or context, the access rights of another virtual object. This is particularly useful for the creation of automatic agents. The access level rights can also be used in the ontology and lookup mechanisms of the CVO/VO registries: if a virtual object has higher access rights than the entity accessing the registry, the virtual object will not appear.

The VO is created by the user (which can be a person or an application) with specific levels of access for different operations, i.e., read (e.g. read a sensor value or stream), write (e.g. change a VO parameter value such as lastCalibrationDate) and execute (e.g. actuator function access). Each access level is directly matched to a specific cryptographic key, which is used to ensure that only authorized parties can access the data and execute specific operations on the VO. The VO encapsulates the data and provides a public interface to access this data for the read-write-execute operations. A VO may provide different public interfaces depending on the levels of access. For example, it may provide the read and execute operations on the data for a specific level of access but not the write operation, which requires a different level of access.

Figure 40: Access levels in icore framework

Figure 40 shows the lifecycle of a VO object from its creation by a user (i.e., User1) to its access by another user (i.e., User2). The steps are described below:
1. A user (e.g., a person or an application), identified as User1 in Figure 40, accesses the icore web portal. The user is authenticated with a specific level of access. The user would like to store sensitive data (e.g., financial data) or an executable (e.g., an analysis tool) in the icore framework. The web portal asks the user what type of access should be defined on the data to be inserted in the system and whether it can be modifiable or executable. Depending on the information provided by the user, the icore framework requests from the Access function the set of keys to be used for the creation of the VO and its encryption. The VO is created together with its set of sticky policies.
2. After the VO is created and encrypted, the VO creator requests the VO registry to create a new entry for the VO, so that it can be looked up by any party interested in the VO content or functions. The VO registry creates an entry with the specific level of access indicated by the VO creator and with a set of attributes that can be used for lookup by other parties. For example, if the VO is a set of financial data, the entry will be created with the parameters/keywords "finance" and "investment" and with the name/surname of User1. At this point the phase of VO creation is completed.
3. Another user, identified as User2 in the picture, would like to use the icore framework to access specific information (e.g., the financial data) of User1. The user can be a real person (e.g., the personal financial accountant of User1) or an application (e.g., an automated investing application previously set up by User1). In the example, the user accesses the icore web portal and searches for information on financial data related to the name/surname of User1. A transaction is started by the icore framework to track the requests by User2.
4. The VO registry identifies the VO created by User1 and matches the level of access of User2 with the VO. If User2 has an adequate level of access, the VO registry confirms to User2 that matching information exists. Note that the VO registry may not return a match to User2 if the level of access of User2 is not appropriate, even if an entry is present in the registry.
5. User2 then requests the icore framework to access the contents of the VO (i.e., a read operation). The portal asks the Access function to provide User2 with access to the VO.
6. The Access function checks with the VO registry whether User2 can perform a read operation on the VO. In this example, the operation is allowed.
7. The Access function creates a Wrapper object, which contains the key needed to perform a read operation on the VO. User2 does not see the key but only the Wrapper object.
8. User2 reads the content of the VO through the Wrapper object.
9. The Wrapper is destroyed at the end of the transaction or by a timeout if no activity by User2 is detected. This operation removes all temporary data that could jeopardize the privacy of the user.

The concept of a Wrapper object mediating between the VO and the user is introduced to avoid providing the keys directly to the users. The keys are stored in the Wrapper object as private attributes and are deleted when the transaction is completed. The direct provision of the keys to the user could create security vulnerabilities if the role or level of access of that user changes in the future. For example, the level of access of User2 may be downgraded or removed (e.g., an employee leaving a company), but User2 could still access the VO even though it is no longer authorized.

An example of the application of the security and privacy model to the cognitive capability of the icore framework, described in the previous sections, is presented in Figure 41. A user specifies the needs for a specific service (e.g., notification on the status of an office building) to an application. The application defines goals for a service definition function, which uses the proximity engine to retrieve VOs and CVOs that can be useful to satisfy the needs of the user, for example VOs related to temperature sensors used for fire notification, electricity consumption, alarms and so on. Some VOs can be accessed by the application (which operates on the level of access of the user) and some other VOs cannot (e.g., private offices). The proximity engine uses the level of access to identify the nearby objects: if the user does not have permission to access a VO, this VO has an infinite distance. A typical scenario can include both data stored in the icore systems and external sensors and remote applications, which contribute to achieving a specific goal.
The main steps of Figure 41 are described below:
1. A user specifies his preferences and needs to an application. For example, in an industrial setting (e.g., a chemical plant) the user wants to be informed when the values of specific parameters like temperature and pressure are above thresholds. In turn, the definitions of the threshold values are based on the specific configurations of the components of the system. These configurations and the design of the overall system are stored as VOs and CVOs. An external application may also provide additional data in real time (e.g., the provision of water or energy from a utility). The user also asks the application to identify mitigation actions in case the thresholds are exceeded.

Figure 41: Description of the cognitive capability with the access model

2. The application defines the goals to satisfy the requests of the user and translates them into service specifications for the icore Service Definition function. The application also sends information about the user, such as the level of access and the type of access and protocols (e.g., UMTS, Bluetooth). This information is needed by the icore framework in the following steps.
3. On the basis of the required goals, the Service Definition function specifies and forwards requests to other functions in the icore framework.
4. The Situation Awareness function is responsible for collecting and recording the status of the real sensors and devices and of the external environment in general. The Situation Awareness function can also interface with other applications external to the icore framework through gateways mapped to VOs.
5. The Proximity Engine function provides a list of VOs and CVOs that are semantically similar on the basis of specific criteria. The Service Definition function can specify the criteria and the Proximity Engine returns the list of VOs and CVOs. For example, the Proximity Engine can provide the list of components in the system (i.e., VOs and CVOs) that can be impacted by an excessive increase in temperature through cascading effects.
6. Both the Proximity Engine and the Situation Awareness functions operate on the basis of the level of access of the user, which is passed as a parameter to the Service Definition function. For example, a user may not have the adequate level to access a sensor or an external application. Additionally, VOs or CVOs may not be accessible by the user. If the Service Definition function is not able to provide the requested services to the Application, because of the access level or because of a lack of available information, the user is notified that the needs cannot be satisfied.
7. Once the application has completed the requests to the icore framework functions (either positively or negatively), the icore framework is responsible for dynamically collecting information on the status of the physical devices and external applications and for raising a notification if a threshold is passed. The functions described in the icore framework can also compensate for changes in the components of the system. If a component, represented by a VO, has an internal failure, the Situation Awareness function requests the Proximity Engine to identify another VO or CVO that can provide a similar service through the proximity concept. All these operations are executed by the functions of the icore framework in an autonomous way, without the need for user intervention.
8. Once a threshold is exceeded, the Service Definition function triggers a notification to the Application.
9. As requested by the user, the application collects information from the icore framework to suggest mitigation techniques. The request for the information is still executed through the Service Definition and Proximity Engine functions, which identify the VOs/CVOs that can be useful to provide mitigation solutions to the user. As in the previous cases, access levels and sticky policies are still applied and the Application can only use the VOs/CVOs that are adequate to the level of access of the user. The application sends the mitigation solutions to the user in the form of data or potential actions. The features of the mobile device are used to mediate the type of information sent to the user. For example, if the user has a mobile device with poor capabilities or connectivity, the application will only send essential data and actions that are possible with limited capabilities.

The encryption of VOs and CVOs is needed to provide data protection for the inter-domain distribution of data or when VOs/CVOs are stored in areas that cannot be fully trusted (e.g., the memory of a mobile device). In a distributed icore environment, where there are different domains or ICT infrastructures furnished with icore functions, some systematic functions like the VO registry or the Access Control function are distributed and connected through trusted connections. VOs and CVOs can be securely moved or copied across domains through untrusted connections because they are encrypted. Once the VO is received by a user (e.g., the vending machine), the VO registry will provide the level of access to the VO and the Access Control function the corresponding key to get access to the VO data through a transaction wrapper. The overall flow is described in Figure 42.

Figure 42: Distribution and access of VO in a distributed domain.
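To make the Wrapper idea more tangible, the following is a minimal, illustrative sketch of the pattern described above. It is not the icore implementation: the class and attribute names are hypothetical, and symmetric Fernet encryption (from the Python cryptography package) merely stands in for whatever key management the distributed Access function actually uses.

```python
from cryptography.fernet import Fernet


class TransactionWrapper:
    """Mediates read access to an encrypted VO; the key never reaches the user."""

    def __init__(self, encrypted_payload: bytes, key: bytes):
        self._key = key                      # private: held only for the transaction
        self._payload = encrypted_payload

    def read(self) -> bytes:
        return Fernet(self._key).decrypt(self._payload)

    def destroy(self) -> None:
        self._key = None                     # drop all temporary data at the end
        self._payload = None


# Creation side (User1 / Access function): encrypt the VO content with a per-access-level key.
read_key = Fernet.generate_key()
encrypted_vo = Fernet(read_key).encrypt(b'{"lastCalibrationDate": "2013-05-01"}')

# Access side (User2): the Access function hands out a wrapper, not the key itself.
wrapper = TransactionWrapper(encrypted_vo, read_key)
print(wrapper.read())
wrapper.destroy()                            # end of transaction or timeout
```

The essential design point is that the key lives only inside the wrapper for the duration of the transaction, so a later change in the user's access level cannot be bypassed with a previously obtained key.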

4. The icore Technology Architecture

According to [14], an architecture is the fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution. As such, the icore Architecture defines a set of basic principles, building blocks, functional entities and guidelines for building an icore compliant system. These definitions are taken from [19] and adapted to the icore architecture.

Definition: a system is a collection of components organized to accomplish a specific function or set of functions. The architecture of a system is the system's fundamental organization, embodied in its components, their relationships to each other and to the environment, and the principles guiding its design and evolution.

Definition: a technology architecture (synonymous with platform) is a combination of technology infrastructure products and components that provides the prerequisites to support the functioning of a system, i.e., the collection of technology software components that provide the services used to support the specific set of functions defined according to the architectural guidelines. This includes IT infrastructure, middleware, networks, communications, processing, and standards.

The icore technology architecture offers a set of functionalities that sit on top of a set of protocol stacks, operating systems, adaptation mechanisms, software solutions and middleware platforms. This set of solutions shapes a technology platform that supports and eases the execution of icore functions. This platform can be designed in different ways, depending on the specific application domain, the provider's willingness to address a specific segment of the market or the available technologies that enable an implementation. The technology architecture can be built incrementally in order to extend and enrich the support and the capabilities offered to the icore functions. Examples of these supporting functionalities are: specific sensor protocol stacks allowing the interaction of icore virtualized objects with ZigBee solutions; communication protocols that support message exchange between icore objects and the external world; software platforms and interfaces (e.g., REST interfaces and the underlying HTTP-based infrastructure for communication with remote objects); or even more complex platforms based on SOA or the like. A major value of a technology platform is its alignment with existing standards. Using cutting edge solutions that support standards gives a sound basis for the evolution of the functional architecture of icore. Figure 43 sketches out the relation between the Functional and the Technology Architectures.

Figure 43: Functional and Technology Architectures

The aim of the Technology Architecture is to enable the development and instantiation of the icore Functional Architecture, i.e., the set of functions that applications can use in order to achieve their goals. As said, an architecture also comprises components and building blocks; they need to access and use the functionalities that the Technology Architecture makes available. The icore Technology Architecture can be seen as a sort of operating system that supports the proper execution of the icore functional architecture. icore has identified a number of technological areas to be considered when putting in place a technology platform:
- Communication Technologies, i.e., those technologies that allow icore components and building blocks (such as VOs and CVOs) to communicate with external entities (e.g., the TCP/IP protocol stack). They can also comprise multimedia communications (e.g., for camera and security control).
- Specific sensor related protocol stacks (e.g., ZigBee and the like) or languages [2].
- Middleware Technologies: such as virtualization platforms (e.g., hypervisors), SOAP or REST based solutions, operating systems, up to mechanisms for dealing with massive message passing (e.g., specific solutions for Pub/Sub).
- Knowledge Representation Technologies. These comprise XML languages, mechanisms and solutions for storing large chunks of data (e.g., NoSQL DBs) and other solutions used in order to represent and reason about objects.
- Management of systems, i.e., those functions, mechanisms and platforms needed to support the management of software systems.
- Identity and Addressing technologies [3]. These technologies support the identification and the naming/addressing of objects and the mapping between icore objects and external ones.

- Security frameworks [4], i.e., the set of already developed mechanisms for providing and supporting security within systems.

These technological areas will tend to increase in number (offering new capabilities to the icore functional architecture) or in size (adding new features and capabilities related to a specific technological area). Figure 44 depicts the possible technological areas that form the technology platform.

Figure 44: Technology areas comprised in the icore Technological Platform

The technological areas identified so far have to be integrated in order to provide the functional architecture with a consistent environment for executing the icore functionalities and to support the external interaction of icore functions and applications. As an example, Figure 45 represents a schematic view of some technological capabilities supporting the Urban Security use case (see Section 4.5 for more details).

[2] Sensor protocols could fit in the category of communication protocols; however, in order to emphasize the need to create interoperable solutions, sensor related protocols have been decoupled from the general communication capabilities such as TCP/UDP/IP.
[3] Middleware solutions typically support Identity and Addressing; these functionalities are valuable in an icore system and have been emphasized in the list because some of them could require some adaptation in order to be used in the context of IoT applications.
[4] See the previous notes.

Figure 45: The Urban Security communication stack

In Figure 45, communication, sensor specific and middleware related solutions are provided in a single environment that the specific icore functions can use in order to execute properly and provide the right responses. Having a technology platform poses a major technical issue for the icore Functional Architecture: how do the icore functions benefit from the features provided by the technology platform? What are the interfaces and the mechanisms used to integrate the two sets of functionalities? In other terms, the building blocks and components of the icore functional architecture need to access and use the functionalities offered by the components of the underlying Technology Architecture. They can be accessed by means of interfaces. These interfaces could be proprietary or standard ones, and there could be many of them, even for similar functionalities. For instance, in the case of sensors using different protocols, VOs could need to access two different protocol stacks. In order to solve this issue, there are two viable approaches (see Figure 46):
- To create a sort of adaptation layer between the technology and the functional platform, i.e., the VOs use a single generic interface representing the needed functionalities and the adaptation layer maps these functionalities onto the specific ones supported by the protocols in use (a minimal sketch is given below);
- To integrate the needed interfaces and mechanisms directly in the icore objects; in this case Virtual and Composite Virtual Objects will directly integrate in their code the needed interfaces and mechanisms to govern different protocols.
Both approaches have merits and drawbacks that are briefly described in the following.
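As an illustration of the first option, the sketch below shows how a VO could be written against a single generic sensor interface while protocol-specific adapters hide the underlying stacks. This is only a conceptual sketch under the assumptions of this section: the interface, class and method names are hypothetical and do not correspond to a defined icore API.

```python
from abc import ABC, abstractmethod


class SensorProtocolAdapter(ABC):
    """Generic interface the VO sees, independent of the underlying protocol stack."""

    @abstractmethod
    def read_value(self, sensor_id: str) -> float:
        ...


class ZigBeeAdapter(SensorProtocolAdapter):
    def read_value(self, sensor_id: str) -> float:
        # Here the adaptation layer would talk to the ZigBee stack (omitted).
        raise NotImplementedError


class RestAdapter(SensorProtocolAdapter):
    def read_value(self, sensor_id: str) -> float:
        # Here the adaptation layer would issue an HTTP GET to a device gateway (omitted).
        raise NotImplementedError


class VirtualObject:
    """The VO depends only on the generic interface, not on any specific protocol."""

    def __init__(self, adapter: SensorProtocolAdapter):
        self.adapter = adapter

    def current_value(self, sensor_id: str) -> float:
        return self.adapter.read_value(sensor_id)
```

With this structure, supporting a new sensor technology means adding one adapter; with the second (direct integration) option, the equivalent code would instead live inside each VO or CVO.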

Figure 46: Possible Adaptation Mechanisms in icore

An adaptation layer has the merit of providing a homogeneous view of the underlying technological platform: icore components can use a normalized set of mechanisms, interfaces and languages in order to access the needed functionalities. However, there is the need to continuously update and upgrade the common interface with respect to new technologies that evolve over time. This is a time-consuming activity that also has to take care of the normalization of interfaces, creating a sort of further delay: identification of the right technological solution, integration in the technology platform and mapping, adaptation to the common interface. On the other hand, the direct integration of interfaces, mechanisms and models in the icore components can result in a faster development response at the additional cost of duplication of solutions. In the long run, the duplication can become very complex and cumbersome to understand and to maintain. Putting together a technology architecture is a process to be taken step by step. The icore technological platform is to be derived and consolidated starting from the different use cases, defining and developing it piece by piece. In addition, the technologies and the standards underlying the platform can vary very quickly, and hence a lot of maintenance is required in order to keep it up to date and running smoothly.

4.1 Smart Home - Ambient Assisted Living (AAL) Use Case

Description
The Smart Home AAL use case elaborates the needs in the smart home domain and aims at providing value added services to the user in his smart home, focusing on the e-health sector. The dynamic and automatic creation of services, as well as their exploitation through easy-to-use interfaces, can be supported in a smart home environment. The needs of the users in a smart home, such as elderly people, people with special needs, impairments or general health problems, can be satisfied through a smart system that supports autonomous living, while guaranteeing proper assistance in case of need and tailoring itself to user preferences. The expected outcome of this use case is a helpful environment supported by the icore architecture. The service offers medical care as well as smart home functionalities with cognitive capabilities that make life easier and safer for the smart home users.

Different technology solutions are considered in order to support the various proposed functionalities that are included in the architecture. Specifically, starting from the VO level, there is a block of Semantic Web technologies that are associated with the VO Registry and its functionalities.

Specifically, the Web Ontology Language (OWL) is used in order to support the development of the VO Information Model ontology by the Knowledge Engineers, which will be populated with appropriate meta-data for the description of the available VOs. In addition, the meta-data will be structured using the Resource Description Framework (RDF) and stored in the VO Registry by the VO registration mechanisms. The VO description data will be available for querying / modification by external entities in the icore system, using the SPARQL query language. The VO Registry will be implemented as an RDF graph database that supports the storing and management of RDF data, and it will be hosted on a Semantic Repository that supports different semantic functionalities, such as SPARQL Endpoints allowing the execution of SPARQL requests. Furthermore, the RESTful Web Services approach will be followed to satisfy the requirements that arise for the communication between the entities at the VO Level. Specifically, both the communication between the VO Level mechanisms and that of the VOs with higher level mechanisms will be carried out through RESTful WS. Additionally, WSDL 2.0 could be used in order to describe the properties of the RESTful WS, such as the service endpoints, and it could partly contribute to the development of the VO Templates. As with the VO Registries, Semantic Web technologies could be used for the development of the CVO Registries and their functionalities. In turn, RESTful WS could be used for the communication both between the entities at the CVO level and with entities from higher or lower levels. Various mechanisms at the CVO Level require additional technologies to support their functionalities. The CVO Composition Engine will be developed using IBM ILOG CPLEX, while concepts from the Business Process Execution Language (BPEL) will enhance the functionality of the Orchestration / Workflow Management mechanism. These mechanisms will be used for the composition of the CVOs and for the development of the CVO Logic, respectively. A variety of machine learning techniques will be used for the development of the machine learning mechanisms that will be available in the CVO Factory; concepts from Bayesian Networks, Neural Networks (NNs) and Self-Organizing Maps will be used for the implementation of these functionalities. The Service Level gathers a set of technologies, some of which are also used at other levels and some of which are introduced only at this level. In particular, RDF will be used for the development of the Service Templates by the Domain Experts and Knowledge Engineers, as well as for the structuring of the RWK model that will comprise RDF facts. Moreover, SPARQL will be used for the implementation and execution of SPARQL requests, both by the Service Request Analysis mechanisms against the Service Templates repositories and by the Intent Recognition mechanism against the RWK Model DB. Additionally, the RDF Rule Inference Engine will include mechanisms for reasoning over the semantics of the Service Templates, which will be acquired from the Service Templates repository. Mechanisms for machine learning at the service level will be developed using concepts from Bayesian Networks, Neural Networks (NNs) and Self-Organizing Maps. Finally, Natural Language Processing will be based on a WordNet implementation accessed through the JWNL Java libraries.
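To give a feel for the RESTful communication style envisaged at the VO Level, here is a minimal, hypothetical sketch of a VO front end exposing a single reading over HTTP. Flask is used purely as an example web framework; the route, object name and values are illustrative and not part of the icore specification.

```python
from flask import Flask, jsonify

app = Flask(__name__)


# Toy VO front end: in the AAL setting this could stand for a body-temperature sensor VO.
@app.route("/vo/temperature", methods=["GET"])
def read_temperature():
    return jsonify({"vo": "temperature-sensor-1", "value": 36.8, "unit": "Celsius"})


if __name__ == "__main__":
    app.run(port=8080)
```

A higher level mechanism (or a CVO) would then consume this interface with a plain HTTP GET, and a WSDL 2.0 document could describe such an endpoint, as mentioned above.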
The next section gives a detailed description of each technology, whereas Figure 47 represents the mapping of the technologies onto specific architectural components.

Figure 47: Mapping of icore Architecture Functions and needed Technologies for Ambient Assisted Living

Technical Solution / Technology Description
To support the icore system as well as to enable its functionalities, a set of different technologies has been proposed as the technology enablers in the icore AAL use case. Each technology is described below.

Web Ontology Language (OWL)
The OWL Web Ontology Language [20] is designed for use by applications that need to process the content of information instead of just presenting information to humans. OWL facilitates greater machine interpretability of Web content than that supported by XML, RDF, and RDF Schema (RDF-S) by providing additional vocabulary along with a formal semantics.

Resource Description Framework (RDF)
RDF [21] is a standard model for data interchange on the Web. RDF has features that facilitate data merging even if the underlying schemas differ, and it specifically supports the evolution of schemas over time without requiring all the data consumers to be changed. RDF extends the linking structure of the Web to use URIs to name the relationship between things as well as the two ends of the link (usually referred to as a "triple"). This linking structure forms a directed, labeled graph, where the edges represent the named link between two resources, represented by the graph nodes. This graph view is the easiest possible mental model for RDF.
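As a small, hedged illustration of how a VO description could be expressed as RDF triples and retrieved with SPARQL (using the Python rdflib library; the namespace, property names and URIs are invented for the example and are not the actual VO Information Model):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

ICORE = Namespace("http://example.org/icore#")   # hypothetical namespace for the example
g = Graph()
vo = URIRef("http://example.org/vo/temp-sensor-1")
g.add((vo, RDF.type, ICORE.VirtualObject))       # one triple: the resource is a VirtualObject
g.add((vo, ICORE.observes, Literal("temperature")))

# SPARQL lookup, of the kind a VO discovery mechanism could issue against the registry
results = g.query("""
    PREFIX icore: <http://example.org/icore#>
    SELECT ?vo WHERE { ?vo a icore:VirtualObject ; icore:observes "temperature" . }
""")
for row in results:
    print(row.vo)
```

In the AAL use case the same kind of query would be answered by the Semantic Repository hosting the VO Registry rather than by an in-memory graph.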
Semantic Repositories / RDF Graph Databases
Semantic repositories [22][23], or RDF graph databases, are engines similar to database management systems (DBMS): they allow for the storage, querying and management of structured data. The major differences with respect to a DBMS can be summarized in two main points: (a) they use ontologies as semantic schemes, allowing them to automatically reason about the data, and (b) they work with flexible and generic physical data models, such as RDF graphs, allowing them to easily interpret and adopt "on the fly" new ontologies or metadata schemes. One of the key features of semantic repositories is that they offer easier integration of diverse data and more analytical power.

Sesame Server / Sesame API
Sesame [24] is an open source Java framework for storing, querying and reasoning with RDF and RDF Schema. It can be used as a database for RDF and RDF Schema or as a Java library for applications that need to work with RDF internally. For example, suppose there is a need to read a big RDF file, find the information relevant to one's application and use that information. Sesame supports RDF Schema inference: this means that, given a set of RDF and/or RDF Schema, Sesame can find the implicit information in the data. Sesame supports this by simply adding all implicit information to the repository as data is being added.

SPARQL Language, SPARQL Endpoints / SPARQL Engines
SPARQL [25] can be used to express queries across diverse data sources, whether the data is stored natively as RDF or viewed as RDF via middleware. SPARQL contains capabilities for querying required and optional graph patterns along with their conjunctions and disjunctions. SPARQL also supports extensible value testing and constraining queries by source RDF graph. The results of SPARQL queries can be result sets or RDF graphs. In addition, components such as Jena ARQ [26] and the Sesame SPARQL parser [24] can be used for the creation of SPARQL Endpoints that offer an entire SPARQL service for the querying and update of RDF data.

Reasoning on Data - Pellet Reasoning
Pellet [27] is an open-source Java based OWL DL reasoner. It can be used in conjunction with both the Sesame and Jena API libraries and also provides a DIG interface. It comes with an API that provides functionalities for species validation, checking the consistency of ontologies, classifying the taxonomy, etc.

REST Architecture, URIs, PURLs
REST [28] defines a set of architectural principles by which Web services can be designed to focus on a system's resources, including how resource states are addressed and transferred over HTTP by a wide range of clients written in different languages. RESTful WS allow the communication between different entities on the Web and can be easily described using WSDL 2.0, enabling the automation of processes and the interoperability between web resources.

WSDL 2.0 - Description of RESTful Web Services
WSDL is short for Web Services Description Language [29]. WSDL is used to describe the interface of a web service and is written in XML, which makes WSDL documents platform independent. Most programming languages and platforms have XML parsing tools these days, so whatever language or platform is used, WSDL files can be parsed.

BPEL - Workflow Management
Business Process Execution Language for Web Services (BPEL or BPEL4WS) [30] is a language used for the definition and execution of business processes using Web services. BPEL enables the top-down realization of Service Oriented Architecture (SOA) through composition, orchestration and coordination of Web services. BPEL provides a relatively easy and straightforward way to compose several Web services into new composite services called business processes.

IBM ILOG CPLEX - Decision Making based on Mathematical Formulations
IBM ILOG CPLEX [31] Optimizer's mathematical programming technology enables analytical decision support for improving efficiency, reducing costs and increasing profitability. IBM ILOG CPLEX Optimizer delivers the power needed to solve very large, real world optimization problems, as well as the speed required for today's interactive analytical decision support applications.

WordNet, Java WordNet Library (JWNL) API, Apache OpenNLP and Java WordNet Similarity
WordNet is a lexical reference system whose design is inspired by current psycholinguistic theories of human lexical memory. English nouns, verbs and adjectives are organized into synonym sets, each representing one underlying lexical concept; different relations link the synonym sets [32]. In order to interact with WordNet, the NLP mechanism deploys JWNL, an API for accessing WordNet-style relational dictionaries. It also provides functionality beyond data access, such as relationship discovery and morphological processing. JWNL is a pure Java implementation of the WordNet API, which means that all that is required are the Java libraries and the dictionary files. The Apache OpenNLP library implements the basic subtasks of Natural Language Processing, such as Sentence Detection, Sentence Tokenization and Part-of-Speech Tagging. Moreover, the Java WordNet Similarity library is used in order to perform Word Sense Disambiguation and Semantic Similarity processes combined with WordNet.

Machine Learning techniques - Bayesian Networks, Neural Networks (NNs), Self-Organizing Maps (SOM)
Different Machine Learning (ML) techniques come from the field of AI, enabling cognitive capabilities for the systems. One of the most popular ML techniques is Bayesian networks (BNs), which are graphical models for reasoning under uncertainty, where the nodes represent variables (discrete or continuous) and arcs represent direct connections between them. These direct connections are often causal connections. In addition, BNs model the quantitative strength of the connections between variables, allowing probabilistic beliefs about them to be updated automatically as new information becomes available [33]. Another technique is Neural Networks (NNs) [34], which belong to the supervised ML techniques. The inherent power of neural networks lies in their ability to recognize the underlying relationship between input and output data. The prototypical use of neural networks is in structural pattern recognition. Through a preset learning algorithm and a series of training iterations, the network learns to recognize patterns in the data sets and assigns weights to each variable (node). Neural network architectures employ multiple layers of nodes; a node is where the data is converted into values between 0 and 1 using a sigmoid transfer function. Finally, another ML technique is Self-Organizing Maps (SOM), an unsupervised method. Training of the PLGSOM is the shaping of a so-called map, which is basically a 2-dimensional representation of high-dimensional input data. In the training phase, input samples are mapped onto the 2D grid in a competitive manner, a process also known as vector quantization. The map is the knowledge base containing the knowledge acquired from past experience. The SOM algorithm is used to solve classification problems, which implies, among other things, that there exists a set of classes in which input data are clustered/classified (each input sample belongs to one of the classes).
The term "label" in this case refers to the value specifying the class to which a vector belongs.

Mapping of the envisioned technologies in the icore Technological Platform
Following the icore Technological Platform approach, Figure 48 below represents the mapping of the technologies described above onto the groups included in the icore Technological Platform.

Figure 48: icore Technology Platform and Technologies for Ambient Assisted Living

4.2 Smart Meeting Use case

Introduction
The aim of the Smart Meeting use case is to offer a seamless meeting experience to meeting participants, with both physical and remote (connected) meeting presence. The core scenario will be that of an external meeting visitor who identifies himself/herself by means of a meeting boarding pass, presented to an identity scanner (e.g. a QR code reader, a barcode reader or even NFC). After the visitor has been recognized, icore can proceed to give them access and present a map of the premises indicating where the meeting takes place. The goal is to leverage the cognitive capacities of icore in order to provide new services to users. For example, after an end user organizes the meeting, the situational awareness will detect the different situations and act accordingly, thus relieving the end user of tedious tasks by automating the progress of the meeting.

The next section will discuss some technology choices that are being considered for its implementation (based on the premise that icore is not yet completely available) and that are proposed here as possible candidates for finding their way into the icore platform. Because this is an architectural document, the technologies are an indication only, intended to help materialize the icore architecture, and should not be interpreted as the final choice for the use case demo, which will ultimately aim to leverage the interfaces exposed by icore.

Technology Description and use in icore
SOL/CEP Complex Event Processor - Detection of complex situations
The Smart Objects Lab Complex Event Processor (SOL/CEP) [38] allows the correlation of events of different types in real time, based on a functional description language called Dolce. The correlated events can be emitted as so-called Complex Events. In the scope of icore, SOL/CEP could be leveraged by accepting different stimuli (either sensor events or higher level, pre-aggregated events) and generating new stimuli for other systems by means of the generated complex events. A direct application of this is the reception of positive QR readings and confirmations of the assistants, leading to complex events such as "all users have arrived" or "these users will be attending". SOL/CEP supports adapters for different kinds of formats and transport protocols.

RESTful paradigm - Communication infrastructure
REST (Representational State Transfer) [39] is not a strict protocol in itself, but rather a communication style that leverages the concepts used in HTTP. Using HTTP/1.1 as the underlying protocol, it is based on the premise that HTTP operations like GET, PUT, POST and DELETE operate on remote resources specified using URIs. It can be used for stateless, synchronous communications, and security can be achieved by using HTTPS. It can be used in icore as one of the possible interfaces (as discussed at the beginning of this chapter), although in our evaluation of Smart Meeting it is being considered for the administrative interface.

0MQ - Communication infrastructure
ZeroMQ (0MQ) [40] is a software framework for low level, broker-less, non-blocking messaging. It allows low latency communications and is very suitable for scenarios where a big number of events needs to be propagated. In Smart Meeting it is used in conjunction with SOL/CEP for the rapid delivery of sensor events. The framework supports memory based interprocess communication as well as distributed configurations by means of TCP/IP. It also supports asynchronous communications. In icore it can be used to pass events between VOs and CVOs.

Apache ServiceMix - Middleware
In order to coordinate the different services, Apache ServiceMix [41] is considered. It is a services platform that allows the creation of software infrastructures according to the SOA [42] paradigm. At its base there is an implementation of the OSGi runtime [43], an industry standard for modular service deployment. The vision for Smart Meeting is to connect the different information providers, such as the meeting organizing app, the device gateway, the complex event processor and the administration interface, to this Middleware. In this scenario, icore could be invoked as a service but, depending on the final deployment of an icore instance, the described functionality could be delivered by icore itself.

ZXing - Multiformat barcode image processing
The ZXing ("Zebra Crossing") [44] project provides an open source library for the processing of different kinds of barcode images. The aim is to use it for QR code scanning, coupled to the appropriate hardware. It is proposed as an additional icore technology, although it is realized that it may simply have to be implemented outside icore as a non-ICT RWO coupled to a VO.

UUID - Universally unique identifier
This identifying technology [45] is pondered for usage within Smart Meeting to uniquely identify objects. UUID brings the promise of uniqueness across multiple boundaries, so objects can be distributed physically, instead of depending on a non-unique behind-the-firewall IP address, or using MAC addresses, which not every device may have.
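A quick illustration of how cheap such identifiers are to generate (Python's standard uuid module; the choice of a random, version-4 UUID is just one possible option):

```python
import uuid

vo_identifier = uuid.uuid4()   # random 128-bit identifier, collision-free in practice
print(str(vo_identifier))      # something like "3f2b9c1e-...", usable to tag a distributed object
```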

In icore, however, this identification functionality will be provided by the more advanced Naming and Addressing functionality specified elsewhere in this document.

Mapping of icore
Figure 49 below visually describes how the aforementioned technologies could be mapped to the icore architecture.

Figure 49: Mapping the Smart Meeting Requirements to the icore Technology Architecture

4.3 Car Services Use case

Introduction
The Smart City case will demonstrate the application of the icore architecture and concepts to the domain of ICT Services for Cars and Drivers. The Smart City case will be a proof of concept providing a clear statement of the benefits coming from the deployment of an icore IoT architecture in the context of the city environment, demonstrating the added value for all the actors in such a context. In particular, the case will focus on the implementation of icore features that specifically address the interaction between the following actors:
- The Car;
- The Driver;
- The City infrastructure (parking systems, road infrastructure);
- Other types of service providers not directly linked with the city infrastructure but normally providing services in the context of the city, such as Healthcare services.
In the Smart City case, the Car is seen as a thing which can be connected to the IoT infrastructure both to provide information and to consume information. The information provided by the Car will benefit both the other Drivers and the City Infrastructure, enabling for example better planning of the car parking sites. The icore-connected Driver will be able to take advantage both of the services provided by the City Infrastructure, getting for example real time information on parking sites and traffic conditions, and of the Healthcare Services, enabling a fast intervention in case of bad health conditions.

4.3.2 Technology Description and use in icore
The following technologies are considered for the implementation of the Smart City case.

VO/CVO Registries (OWL / RDF / SPARQL / Sesame Server & API / RDF Graph Databases)
The Smart City Use Case will adopt the same common technology introduced in the icore architecture and already mentioned in the previous Use Cases (section 4.1.2). In general, the OWL and RDF technologies will be used for the creation of the semantic descriptions of VOs/CVOs, whilst RDF Graph Databases will be used to store the data about VOs/CVOs and will be hosted on a Sesame server that provides, through its API, various capabilities for managing and interacting with the stored data.

VO/CVO Templates Repositories (RESTful Web Services, WSDL 2.0)
For VO and CVO Templates as well, the Smart City Use Case will benefit from the technologies already introduced in the icore architecture for describing the templates of (Composite) Virtual Objects. In particular, the RESTful Web Services technology and WSDL 2.0 will be adopted for describing the Smart City Use Case virtual objects.

Machine Learning Techniques - Knowledge Building about User Profiles
Different Machine Learning (ML) techniques, already described in the other Use Case sections (section 4.1.2) and introduced in the icore architecture, will be used for building the knowledge base related in particular to User Profiling. Different user profiles, possibly related to specific user preferences, can be defined in the Smart City use case, in particular concerning the preferred daily trip path of the User or the preferred Parking Places.

Orchestration - Composition Environment
The Composition Environment is the software component used to create Composite Virtual Objects to be executed within the M3S Orchestration Runtime. The Composition Environment is a Web Application hosted in a J2EE compliant servlet container which provides the developer with the tools needed to specify the orchestration composition. The orchestration composition is specified through a (proprietary) XML file. In order to produce the composition, the Orchestration Composition Environment will have to access the VO Template Repository in order to provide the CVO developer with a palette of VOs to be composed to define the specific CVO.

CVO Container (Orchestration Runtime)
The Orchestration Runtime is based on the M3S Component Orchestration platform, which enables the creation of event-based workflows for the composition of basic software components, exposed as Web Services. The Orchestration Runtime will be the container for the Composite Virtual Objects which will implement the logic for setting up the services provided by the City Infrastructure (e.g. a parking site management service, receiving the events related to a car occupying a certain parking position or leaving the parking slot) and the services provided by the Healthcare services (e.g. for elaborating the events related to the health conditions of a driver and his/her geographic position). The Orchestration Runtime will assume that VOs (orchestrated components) are accessible via RESTful or SOAP Web Services.

Onboard VO Container (Onboard Interface Unit)
The Onboard Interface Unit will provide the platform for executing the orchestrated basic building blocks (Virtual Objects, in icore terminology) that connect the Car and the Driver to the icore infrastructure. The Onboard Interface Unit will receive events via Bluetooth both from the In-Vehicle sensors and from the Healthcare sensors, and it will forward these events towards the icore infrastructure (towards the Composite Virtual Objects). Also, the Onboard Interface Unit will connect the icore infrastructure to the In-Vehicle communication system for forwarding to the Driver the advice, notifications and alerts coming from the icore infrastructure services. The Onboard Interface Unit will host icore Virtual Objects exposed as SOAP Web Services.

In-vehicle Sensors
All modern cars are equipped with different sensors. In the past, in-vehicle sensors were designed for safety applications. Nowadays, in-vehicle sensors are used not only for safety applications but also for navigation, entertainment and generic service applications. One of the technologies used to manage the data coming from the car is the CAN bus, through which sensors can expose physical measurements to a host processor and CAN controller. Finally, the information coming from the sensors will be translated into icore virtualized objects.

Healthcare Sensors
Specific sensor protocol stacks will allow the interaction of icore virtualized objects with ZigBee solutions.

(Envisioned) Technology use in the icore Functional Architecture
The mapping between the proposed technologies in the Smart City use case and the icore architecture functional components is presented in Figure 50 below.

Figure 50: icore Architecture and Technologies in the Smart City use case
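As a hedged illustration of the in-vehicle sensor side described above, the following sketch reads a single frame from a CAN interface with the python-can library. The channel name, interface type and frame interpretation are placeholders; a real onboard unit would map the decoded values onto the corresponding VO rather than print them.

```python
import can

# Open a (virtual) CAN channel; on Linux, "vcan0" can be created with the socketcan tools.
bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

msg = bus.recv(timeout=1.0)                # one raw frame from the in-vehicle network, or None
if msg is not None:
    # In a real deployment the arbitration ID would be decoded into a named signal
    # (speed, coolant temperature, ...) and pushed to the Onboard Interface Unit.
    print(hex(msg.arbitration_id), msg.data.hex())
bus.shutdown()
```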

4.3.4 Mapping of the envisioned technologies in the icore Technological Platform
Figure 51 below depicts the mapping of the technologies included in the Smart City use case onto the icore Technological Platform.

Figure 51: icore Technology Platform and Technologies in the Smart City use case

4.4 Smart Business Use Case

Description
The aim of the Smart Business use case is to provide valuable insight into the logistics chain of valuable goods, such as pharmaceutical products. Throughout the manufacturing process and the logistics chain, up to the point where the products are delivered to the end-user, high value products must be handled and stored correctly. To prevent product spoilage or damage, goods are monitored by (ICT-enabled) sensors (attached to goods and opportunistically placed in vehicles, warehouses, etc.) and stakeholders are alerted when storage conditions are not met. Two aspects are of importance for the use case:
- Acceptance reports: the system should provide the end-user of products with a report at delivery of a product, showing the conditions of the product during transport and storage. Based on these reports, the end-user must be able to decide whether to accept the shipment or not. At the same time, the reports can be used by transporters to deny claims for spoiled goods.
- Real-time alerts/early warnings: the system should provide warnings to transporters when the products are outside their optimum storage conditions. The system provides these warnings to guarantee the quality of products throughout the supply chain and enables transporters to act before damage occurs. The system may also provide alerts on the estimated arrival time of shipments and may even exhibit automatic control of storage conditions, such as temperature.

Technical Solution / Technology Description
To support the icore system as well as to enable its functionalities, a set of different technologies has been proposed as the technology enablers in the icore Smart Logistics use case. Each technology is described below.

ZigBee (IEEE 802.15.4) / 6LoWPAN / Bluetooth / IEEE 802.11 b/g/n - Communication between sensors, XML over TCP/IP
Wireless sensors monitor the conditions of products and travel with the products from manufacturing facilities to the end-user. Wireless sensors are also envisioned to be opportunistically placed in vehicles and storage facilities. Typically, wireless sensor networks apply ZigBee/XBee (IEEE 802.15.4), 6LoWPAN, Bluetooth or even IEEE 802.11 b/g/n as the underlying communication stack to create star/mesh networks. Sometimes proprietary communication stacks are used. Typically, wireless sensor networks use so-called gateway devices to create a bridge between energy-consumption optimized protocol stacks and the Internet. In this use case, we intend to use an XML over TCP/IP interface to encapsulate sensor readings as well as configuration data for sensors.

MQTT - middleware and communication technology
Using MQTT as the message protocol, the icore platform provides the information as soon as a value changes, so an early reaction to value changes is possible. A RESTful Web Service is not sufficient, because it would have to be polled at a high rate to react in time. Using an event based middleware has the advantage that multiple partners can subscribe to an event, and distribution is reduced to publishing the events; this reduces the energy necessary for transmitting the events.

CEP - Complex event processing
A fast reaction to events is necessary. The event based infrastructure supports Complex Event Processing (CEP) for analyzing and aggregating the events in real time. Sensor data from different sensor producers (temperature, location, location based weather forecast, historical temperature values) can be used in predicting temperature values. The use case relies on several information streams as input: weather forecasts and traffic information.

CHOCO - Data and Knowledge Management
The smart transportation scenario must consider the storage requirements of the goods (e.g. avoid storing products that need different conservation temperatures in the same location). These compatibility problems can be solved using constraint satisfaction solvers. A constraint satisfaction problem (CSP) is defined as a set of components whose state must satisfy several constraints or limitations. CHOCO is a Java library for constraint satisfaction problems and constraint programming, which is based on an event-based propagation mechanism with backtrackable structures.

ARIMA - time series forecasting
Time series forecasting in non-stationary environments consists of a broad area of techniques targeting the prediction of what might happen over non-specific time-periods in the future. For icore, it may be used to estimate the arrival time of a transport, the temperature evolution in a system without active cooling (in both cases based on direct measurements or on CEP-provided streams), etc. Another less popular, but still useful, usage is understanding the past [50].

Probabilistic models, learning machines
Probabilistic reasoning is a solid tool for inductive inference. It allows expert knowledge to be elicited and deployed into a predictive model, or conditional probability estimations and the model's structure to be learned automatically from data. Through the three reasoning patterns exposed by Bayesian networks (prediction, evidential reasoning, explaining away) one can make predictions starting from the SK. Moreover, by combining the estimations and the expected utility of different outcomes, one arrives at following the optimal actions (decision theory).
Sesame, Pellet, OWL, RDF, SPARQL - Semantic web technologies
For describing, discovering and reasoning based on metadata, one can use Sesame, Pellet, OWL, RDF and SPARQL, as described in one of the sections above (Smart Home use case).
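To make the event-driven integration described above more concrete, here is a minimal, hypothetical subscriber built with the paho-mqtt Python client (1.x style API). The broker address and topic layout are invented for the example; in the use case the received events would be handed to the CEP engine rather than printed.

```python
import paho.mqtt.client as mqtt  # paho-mqtt 1.x style; 2.x additionally takes a CallbackAPIVersion


def on_message(client, userdata, msg):
    # React as soon as a monitored value changes, e.g. forward the reading to the CEP engine.
    print(msg.topic, msg.payload.decode())


client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.org", 1883)           # hypothetical broker
client.subscribe("icore/shipments/+/temperature")    # hypothetical topic layout, one branch per shipment
client.loop_forever()                                # push-based: no high-rate polling of a REST service
```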

(Envisioned) Technology use in the icore Functional Architecture
The mapping between the proposed technologies in the Smart Business use case and the icore architecture functional components is presented in Figure 52.

Figure 52: Technology use in icore

Mapping of the envisioned technologies in the icore Technological Platform
Figure 53 depicts the mapping of the technologies included in the Smart Business use case onto the icore Technological Platform.

Figure 53: Technology Architecture for the Logistic Use Case

4.5 Urban Security for smart cities Use Case
The overall business context is the safe and secure smart city. The city infrastructure might be composed of:
- Cameras (providing event data on traffic conditions, emergencies...);
- Sensors for car parks, weather, pollution and public lighting, as well as fire, smoke and CBRNE (Chemical, Biological, Radiological, Nuclear and Explosive) sensors for emergencies or security forces surveillance;
- Public transportation (buses, subways...) as well as bike and electric car rental systems, but also smart police cars and ambulances;
- Personnel equipped with professional smartphone applications.
Such infrastructure can be used by several stakeholders:
- By local government agencies to manage city services (public transportation, traffic lights, weather, pollution, events...), which in turn can be offered as near real-time information to citizens;
- By third-party companies offering services possibly built on top of public city services (infrastructure operators, car park and electric car rental availability...);
- By all the agencies dealing with crisis operations (State, civil security, fire-fighters, police forces...), at every level of command: strategic, operative or tactical.
Thus, one of the key opportunities of this ecosystem of interdependent services and objects is to leverage the potential of a city-wide infrastructure to enhance public services, but also private services and, in our case, emergency prevention and management. Technically this requires mechanisms to ensure secure resource sharing and coordination between applications with different constraints in terms of criticality and data protection.

4.5.1 Use case description The urban security use case addresses a CBRNE crisis within a city district. The goal is the control of this medium-scale area with several interconnected sensors networks. The most prominent ICORE unique technical features, detailed further in this chapter, are (1) the automated set-up and control of decisions and optimizations, and (2) the ability to use any relevant means available on the field (typically sensors) in an opportunistic but controlled and secured manner (Figure 54). Figure 54: Urban security use case overview According to the Thales security businesses such as public safety, area and critical infrastructure protection for both security forces and military forces, the urban security use case described here clearly raises many new challenges in terms of system adaptability and system scalability. Indeed the business value relies on a medium-scale sensors-network based system solution with the following main operational requirements: The managed mission is 24/7 and temporary (a few days to a few weeks) with exceptional events and a high-intensity situation (CBRNE urban crisis); The number and training of C2 (Command and Control) and C4I (Command, Control, Communications, Computers and Intelligence) operators are limited as much as possible; C2 here must be understood as a command post that collects sensor detections and provides local and dedicated (e.g. CBRNE) situation awareness, decision support and controls, while C4I aggregates C2 situations and provides high-level decision support and controls; Sensor and network heterogeneity is total and fully managed (flexible interoperability, aka plug-n-play); The sensors system is deployed on demand and may be redeployed easily at run-time; The solution is as low cost as possible, but with a high degree of reliability, performance, flexibility and ease of use. To deal with this type of crisis and requirements, adequately coordinating all the necessary means and processing a massive set of events in real time is mandatory, so the unique ICORE technical features promoted by this use case are twofold: 1. Adaptable and reliable 24/7 crisis management processes; 2. On-the-fly (dynamic) plug-and-play of new sensors and sensors networks.
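Before detailing the two features, the purely illustrative sketch below gives a first idea of the second one: a newly announced sensors network is admitted only after an authorisation check and its nodes are then exposed to the running system. The class, interface and method names are hypothetical and not part of the icore specification.

```java
import java.util.*;

// Illustrative only: admission of a new sensors network into the running surveillance system.
public class PlugAndPlayGateway {

    public interface AuthorityService {             // e.g. backed by the public authorities' approval
        boolean isAuthorised(String networkId, String credential);
    }

    private final AuthorityService authority;
    private final Map<String, List<String>> registeredNetworks = new HashMap<>();

    public PlugAndPlayGateway(AuthorityService authority) { this.authority = authority; }

    /** Called when a sensors network (e.g. a smart-home network or the city CCTV) announces itself. */
    public boolean attach(String networkId, String credential, List<String> advertisedSensors) {
        if (!authority.isAuthorised(networkId, credential)) {
            return false;                            // refused: the network stays outside the security domain
        }
        registeredNetworks.put(networkId, new ArrayList<>(advertisedSensors));
        advertisedSensors.forEach(s -> System.out.println("VO registered for " + networkId + "/" + s));
        return true;
    }

    /** Called for instance when the crisis situation evolves and the network is no longer needed. */
    public void detach(String networkId) {
        registeredNetworks.remove(networkId);
    }
}
```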

The role of the first unique ICORE technical feature is highlighted (in red) in the functional architecture model of Figure 55. Figure 55: Urban security use case ICORE unique technical features overview Decision making for adaptable, evolvable, scalable and quickly deployable medium-scale urban security with multiple heterogeneous sensors is not available yet, so distributed decision-making technologies should bring positive arguments and answers to decision and optimisation concerns: At potentially all architecture levels (sensors, network, gateways, services, applications); To master medium-scale crisis situations where uncertainty, a huge number of events to process and the criticality of decisions are the norm. Decision making is used for operational mission management as a way to partially automate the mission workflow management that supports operational decisions; for instance, setting up rescue services (the ORIENT and DECIDE steps of the well-known OODA closed control loop). It is also used for system management as a way to partially automate system management workflows (the Analyse and Plan steps of the well-known MAPE Autonomic Computing closed control loop), for instance to support self-healing of mission-related workflows. The second unique ICORE technical feature, the plug-and-play of new sensors and sensors networks, is also critical in this context to support system adaptation, system evolution, scalability and deployment decisions where all available means have to be quickly assessed and safely plugged into the running system if they are relevant. 4.5.2 Use case scenario overview The scenario illustrates several emergency preparedness and response challenges after an accidental IED (Improvised Explosive Device) explosion dispersing a highly toxic and persistent liquid (sulphur mustard) along with radioactive caesium-137 (Cs-137) in an urban environment. The delayed symptoms of sulphur mustard add an extra difficulty for first responders and medical services. Also, the compound is persistent and will remain on buildings and vegetation for a prolonged period of time. Challenges include detection and assessment of the hazard area, tracking

101 possible victims, dealing with and restoring the contaminated area, and secondary contamination of health care workers and facilities; so two main missions, a medical mission and a security mission. The purpose of the scenario is to evaluate: The ability of emergency services to handle a mass casualty event ; The ability and plans for registration and tracking of possible victims as well as hostile people; The communication and information strategy to inform the public and possible victims as well as hostile people. The main stakeholders here are the security forces (police), first responders, state agencies dealing with urban CBRNE crisis and citizens leaving or just passing through the district. The use case is broadly described hereafter as a five steps story. This use case requires technically: The deployment of a wireless unattended sensors network (CBRNE sensors and perimeter protection sensors) and mobile sensors, here UAV (Unmanned Aerial Vehicle) and UGV (Unmanned Ground Vehicle); The connection of the wireless sensors network to a set of video cameras permanently deployed in the city district; The deployment of a mobile GSM base station; The deployment of mobile C2 with surveillance applications connected to the wireless unattended sensors network, mobile sensors and mobile GSM base station; The set-up of a C4I with surveillance applications and its connection to mobile C2. The connection of the C4I with other sensors and sensors networks in the city district; for instance a given smart home. Figure 56 illustrates the system deployment. Grant Agreement number: Page 101 of 136

102 Five steps story Figure 56: Urban security use case system deployment overview Step 1: CBRNE indoor surveillance of a given suspicious building Based on information acquired from intelligence activities, the police monitors a building where it is expected to detect a clandestine laboratory producing dirty bombs (IEDs) made of chemical and radiological products. A CBRNE detection system based on sensors wireless network has been deployed within the building communal areas (e.g. corridors, lifts). A mobile C2 deployed near the building receives the detections in near real-time. Step 2: accidental explosion of IED in a room (building under surveillance) A detonation is heard. The bomb blast and fragments causes several fatalities to people being in the clandestine laboratory. The detonation also disseminates about 5 kg of sulphur mustard in the form of small droplets along with radioactive caesium-137. The slight breeze carries the cloud of droplets across the street. Droplets are inhaled and also deposited on persons and surfaces. This is, however, not noticed until casualties from the bomb blast, first responders and other persons experience eye irritation, inflammation of the respiratory tract and rashes and blisters on the skin. CBRN detection system immediately identifies: Bis (2-chloroethyl) sulfide and caesium-137. Step 3: crisis response decision and operational capabilities required After a police report describing the situation, fast decisions is taken by city authorities to control the concerned area (a whole district) and restrict access to it; it mainly means managing: evacuation, confinement, public information, medical aid transportation, hostile actions prevention and control. Step 4: crisis response implementation with sensors networks enhancement The CBRNE wireless sensors network initially deployed indoor is enhanced and extended with additional outdoor detection, classification, identification, and tracking capabilities: Outdoor CBRNE sensors; Vehicles and pedestrians detection based on wireless sensors (PIR, acoustic, magnetic, ) and actuators (speakers, flash lights, ); Use of existing CCTV system (wired network) that provides still and streaming video imagery for identification and tracking of vehicles and pedestrians; the CCTV system is connected to the wireless network through a dedicated Micro UAV with on-board sensors (chemical, hygrometer, temperature, video imagery) that enables meteorological based IED compound dispersion prediction plus identification and tracking of vehicles and pedestrians; the micro UAV is remotely controlled by perimeter protection C2, but C4I may take the control over; Grant Agreement number: Page 102 of 136

103 Low-power low-capacity GSM base station for public information but also for identification and tracking through geo-location; the geo-location function is indeed important since it enables the detection and tracking of groups of people; Micro UGV to transport safely pharmaceutical products; the micro UAV is remotely controlled by C2. It is very clear that the use of the CCTV system and the use of a GSM base station to track people is subject to public authorities authorisation and delivered to the police because of the situation but with clear legal limits. An additional mobile C2 for perimeter protection is then deployed that receive detections from all the sensors above. The C2 provides dedicated situation awareness focussed on people being in the city district while the first CBRNE C2 continues to deliver CBRNE specific situation awareness. Finally a C4I is set-up and connected to both C2 in order to provide integrated and full city district situation awareness. The overall strategy to tackle the crisis situation is decided at the C4I level; main actions implemented on the field by first responders are based on the C4I decisions. Step 5: Area control capabilities in action A few examples of area control (and related) capabilities are: Identifying people in the area (e.g. hostile or not, requiring help or not, leaving in the area or not, ); Tracking hostile people (e.g. thieves), first responders, victims, ; Providing relevant information to people in the city district through SMS, audio messages, etc. for confinement, evacuation, medical procedure (e.g. use of mask, iodine), ; Slow down intruders (hostile or not) progression in the area, prevent and deny unauthorised access to the area, ; Citizens rescue. Rest of the paragraph is an example of actions supporting a given citizen rescue, here Sarah, an elderly person living in the city district. This example (Figure 57) promotes the integration of all ICORE use cases; it also highlights the two ICORE unique technical features, automated workflows management and new sensors network secured plug-and-play. Grant Agreement number: Page 103 of 136

104 Figure 57: Urban security use case and Sarah rescue Assisted set-up and use of crisis management workflows based on pre-defined processes templates. According to decision making mechanisms, the rescue management workflows are set-up based on a set of pre-defined processes templates; a given workflow being one or a combination of several processes templates. Initial steps start at C4I level, based on a given predefined processes template, a first workflow is setup that: Build-up a list of people to rescue based on known citizens living in the district, etc.; Select most critical people; Sarah is selected; Locate Sarah (e.g. at home, out her home); Sarah is found being at home; Set-up secured smart meeting (authentication, encryption, etc. is used) with Sarah at home; Sarah smart home: o o o Sarah is warned that she must stay at home (confinement) for a certain amount of time and that a remote monitoring and control of her home and health is required; She accepts, so the sensors network of Sarah home is dynamically plugged into the surveillance system deployed; all sensors and actuators are recognized and manageable by the C4I through a second new end-to-end encrypted monitoring and control workflow; C4I gets sensors real-time data (e.g. temperature, CO2, luminosity, body pulse sensors) and controls actuators (e.g. doors, blinders, windows) adequately. Figure 58 highlights a subset of the second workflow. Grant Agreement number: Page 104 of 136
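A minimal sketch of the template-based workflow set-up described above is given below; the template, step labels and context keys mirror the Sarah example but are hypothetical names, not icore-defined artefacts.

```java
import java.util.*;
import java.util.function.Consumer;

// A workflow instance is one pre-defined process template (or a combination of templates)
// applied to the current crisis context; steps are executed in order and enrich the context.
public class CrisisWorkflow {

    static class ProcessTemplate {
        final String name;
        final List<Consumer<Map<String, Object>>> steps = new ArrayList<>();
        ProcessTemplate(String name) { this.name = name; }
        ProcessTemplate step(String label, Consumer<Map<String, Object>> action) {
            steps.add(ctx -> { System.out.println("[" + name + "] " + label); action.accept(ctx); });
            return this;
        }
    }

    public static void main(String[] args) {
        Map<String, Object> context = new HashMap<>();
        context.put("district", "D7");

        ProcessTemplate rescue = new ProcessTemplate("citizen-rescue")
            .step("build list of people to rescue", ctx -> ctx.put("candidates", List.of("Sarah")))
            .step("select most critical person",    ctx -> ctx.put("selected", "Sarah"))
            .step("locate selected person",         ctx -> ctx.put("location", "home"))
            .step("set up secured smart meeting",   ctx -> ctx.put("session", "encrypted"));

        rescue.steps.forEach(step -> step.accept(context));
        System.out.println("Workflow finished with context: " + context);
    }
}
```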

Figure 58: Urban security use case workflow between C4I and Sarah's home In this example, the workflow set up connects the C4I and Sarah's home; at the same time some other workflows are also running, for instance within Sarah's home to support medical assistance. Smart business (supply chain): Sarah is warned that medical assistance (iodine) is sent to her through an automated vehicle (e.g. micro UGV). Smart car: Sarah is warned that evacuation is required with her car; remote monitoring of the car applies, as well as remote monitoring of her health with a dedicated body sensors network; the same kind of sensors network plug-and-play process happens to connect Sarah's body sensors network and the C4I; Sarah's home sensors network may be disconnected from the C4I or not, depending on the evolution of the CBRNE situation. 4.5.3 Technical Solution We first describe here the functional architecture (aka system architecture) of the system addressing the operational use case introduced above. Then we detail the technical (aka physical) architecture. The system architecture is clearly a refinement of the use case, literally the context of use (CONUSE), but also a refinement of operational requirements into system requirements; we won't detail here the system requirements but rather provide only the main operational requirements (shortly introduced above). Because of this, the mapping between system requirements and system functions is also not provided. Regarding the technical architecture, we also won't detail the state of the art of the technologies supporting the technical solution retained, so the description of the selected technologies has to be taken as a result, but with reasonable assumptions and feasibility based on a strong business and studies background. Main operational requirements We split the requirements into the following subsets: mission related requirements, dependability related requirements, cost related requirements and system validation; we don't provide here the requirements themselves, because they are included in deliverables D2.1 and D2.2 gathering all requirements, but rather the rationales behind them. 1 Mission related requirements

106 System deployment The requirements here are related to Usability and Performance and scalability categories; they are supported and/or validated through: Field deployment and trial with the provision of sensors and network as well as simulation software that extends the sensors network and enables large scale validation, so it means hybrid simulation; Plug-n-play of sensors associated with deployment assistance as the mean that enable installation in a reasonable time; The resistance to specific environment conditions won t be demonstrated. Sensors, actuators and detections The requirements here are related to Interoperability category and Performance and scalability category; they are supported and/or validated through: Usage of various heterogeneous sensors (detection sensors as well as identification and tracking sensors); Scenario mentioned below in section Situation awareness and Control and Command ; Vehicles detections, identification and tracking. Data communication network and gateway The requirements here are related to Interoperability category and Performance and scalability category; they are supported and/or validated through: Usage of ad-hoc low-speed (IEEE ) and high-speed (IEEE ) network supporting a mesh topology; Usage of IETF standards (6LowPAN / IPv6, CoAP, etc.) with strict compliance to the standards. Situation awareness and Control and Command The requirements here are related to Usability category and Situation Awareness category; they are supported and/or validated through: Scenario that switch on and off the false positive and false positive events reduction; then compare the quantitative benefit (e.g. number of alarms received by operators); Scenarios that provides detection, then identification and tracking of a given person including a short loop scenario where no operator is in the loop; Scenario that performs a few rescue processes with automated support such as the Sarah rescue introduced above ; Specific environmental conditions won t be demonstrated. 2 Dependability related requirements The requirements here are related to Security & Privacy category and Usability category; they are supported and/or validated through: An extensive usage of standards (IETF, OGC, W3C, IEEE, etc.) for maintenance and acquisition; Grant Agreement number: Page 106 of 136

107 Network and sensors failures scenario as well as security attack scenario for availability and C2 and C4I access control; Power consumption estimation and simulation for long term availability. 3 Cost related requirements The rationales here come from exploitation optimization, so target cost is less than 1Keuros per wireless node. 4 System validation related requirements In addition to all kind of description related to validation provided above, hybrid simulation where part of the surveillance system is simulated shall be used for validating system scalability; simulation here shall also be used for generating sensors events Functional architecture (refinement for use case) Regarding functional architecture we go in the document through several steps adding details and zooming on part of the architecture. Figure 59 introduces a simplified functional architecture model as a layered stack. This model is a generalisation of common IoT and M2M (machines to machines) related international standards such as OGC SWE (Sensor Web Enablement), ETSI M2M and IETF Constrained RESTful environments (CoRE) Working Group. We use this model to describe the urban security use case functional architecture rather than the icore functional architecture because it is simpler to do so; nevertheless this representation is fully compliant with the icore functional architecture, sensors network gateway representing both icore VO and CVO layers, while the applications correspond to icore services layer. Figure 59: Functional architecture model generalisation from standards In the model above the sensors network gateway gathers a network access gateway providing interface to specific sensors network, as well as several services that enable generalized access to sensors network or that provides management for security concerns, administration concerns, interoperability concerns and so on. In other words a gateway role is twofold: Grant Agreement number: Page 107 of 136

108 1. A sensors and actuators 5 network interface; 2. A services layer providing at least a virtual representation (VO layer of icore conceptual model) of sensors, actuators and network to enable observations gathering and system management of all these: so a devices access and control service; Other services might provide security (e.g. access control), sensors data semantic mediation, sensors workflow composition management (function of CVO layer of icore conceptual model), dedicated decision making functions (functions of services layer of icore conceptual model), and so on. Applications (services and applications of icore conceptual model) typically represent domain applications that provide for instance situation awareness and control to operational end-users; the police (security concerns) and the first responders (medical concerns) in the urban security use case. Coming back to icore fundamentals such as decision making and cognitive technologies to tackle IoT concerns, this layered architecture has a strong implication in term of optimization; the optimization scope and potential depends at which layer (or level) it is performed: optimization performed by network nodes are indeed more focussed than optimization performed at network or gateway level that is itself more restricted than optimization performed by dedicated applications at system level with the most complete view on possible improvements. This means that several optimizations may be running concurrently with different timing constraints, so short term loops and more long term loops. Figure 60 introduces the overall functional architecture of the urban security use case as a first refinement of the functional architecture model generalisation above. 5 In the rest of the section we only use the word sensors in place of sensors and actuators. Grant Agreement number: Page 108 of 136
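As an illustration of the devices access and control service built on the IETF standards mentioned above (CoAP over 6LoWPAN/IPv6), the sketch below uses the Eclipse Californium library as one possible CoAP implementation; the resource URI and payloads are illustrative assumptions.

```java
import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapHandler;
import org.eclipse.californium.core.CoapResponse;

// One-shot read plus an observe relation on a constrained sensor node exposed over CoAP.
public class CoapSensorAccess {
    public static void main(String[] args) {
        CoapClient client = new CoapClient("coap://[aaaa::1]:5683/sensors/chem");

        // Devices access part of the gateway role: read the current observation on demand.
        CoapResponse reading = client.get();
        if (reading != null) {
            System.out.println("Current value: " + reading.getResponseText());
        }

        // CoAP observe: the node pushes updates, saving energy compared to repeated polling.
        client.observe(new CoapHandler() {
            @Override public void onLoad(CoapResponse response) {
                System.out.println("Notification: " + response.getResponseText());
            }
            @Override public void onError() {
                System.out.println("Observe relation failed");
            }
        });
    }
}
```

The observe relation lets the constrained node push notifications instead of being polled, which matches the publish-subscribe and power-consumption concerns discussed for the wireless network.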

109 Figure 60: Urban security use case overall functional architecture The functional architecture here encompasses and refines the main functions introduced above: Sensors and actuators network; this is the wireless sensors network deployed by the police and the wired video cameras network permanently installed in the city district. This global network here connect all kind of sensors whatever they are CBRNE specific or perimeter protection specific specifically deployed for the given crisis or whatever they are plugged onthe-fly into the network and fully managed exactly like the other ones; indeed in our case the mission of the city district video cameras (aka CCTV) network is re-allocated for the time of the crisis and managed by C4I. GSM mobile phones present within the district are also part of this wireless sensors network but rather than controlling them, they are only tracked meaning that C2 get all related geo-location information; another difference is that they are not obviously deployed by the police; Network gateway; the function is distributed among the network, that is the northbound interface (called here server), and the C2 and C4I connected to the network, that is the southbound interface (called here client); in the use case, several (instances of) network gateways are defined and set-up: for interconnecting the video cameras network along with the wireless network deployed and for interconnecting the two different C2 and the C4I with the wireless network deployed. The network gateway is about a common interface to devices and network whatever the network technologies and protocols used (e.g. wireless, wired) and the devices technologies used (e.g. mobile devices or unattended devices such as infra-red sensors, seismic sensors, video camera sensors, acoustic sensors). The network gateway of the C4I will be also used to interconnect the other icore use cases; this way the C4I will gather all information available and will be able to deliver complete situation awareness as well as crisis management; Services; for the C2 it is mainly about providing a domain specific and standardized interface such as a CBRNE domain specific interface, so a VO layer. For the C4I, the services layer provides also a standardized interface, but the goal here is to provide a more open then generalized interface to domain applications that enables a wider usage and management of all heterogeneous sensors network we may need; within this function specially within C4I, we may also integrate various contributions from icore partners; a good example would be the use of workflow management engine for group of sensors tasking and reasoning engines for sensors and sensors network optimization (e.g. performance, availability, power consumption, routing); Applications; here it is about dedicated software used by operational teams both for mission management and system management; within this function specially within C4I, we may also integrate various contributions from icore partners; a good example is the Smart Query SPARQL based engine that enable smart and easier management of video cameras network. C2 and C4I functions integrate both sensors network gateway and applications functions. CBRNE C2 and perimeter protection C2 along with their dedicated set of sensors (respectively CBRNE sensors and perimeter protection sensors) may be seen as different Community of Interest (CoI) since they receive only dedicated sensors information. 
Figure 61 introduces the functional architecture of the sensors network with two abstraction layers, sensing components and network. The wired and wireless sensors here are typically decomposed into platforms (network nodes) and sensing components connected to the platforms, where the platform, which may be mobile or not, provides the wired or wireless communication means; so the network is composed of several wired

110 and wireless platforms. For testing and validation (e.g. scalability of the system solution) purposes, some of the sensing components and some of the network nodes may be simulated and connected to the rest of the (physical) system; simulation is also useful when there is no easy way to validate the real sensors, for instance explosive or radiological sensors that for sure detect only this kind of very sensitive material. Figure 61: Urban security use case sensors network functional architecture All functions are managed through a management interface that enable fine tuning and access to all the other functions; this is particularly useful to monitor the sensors network status and to possibly adapt its behaviour according to non-functional concerns such network performance, network availability, power consumption and so on. Sensing components exposes most of the time a specific proprietary interface that requires a dedicated protocol adaptation. Security function is preferably hop by hop protecting data communication between network nodes (e.g. routing messages) but also end-to-end between the network gateway and the sensors nodes to protect applications communication, typically the collection of sensors observations; the protection of the network nodes themselves is also a concern with anti-tampering for instance. Finally wireless security is a key point because of wireless communications that may be quite easily intercepted or jammed; it is also difficult to ensure because in our case the network nodes are very low power and have tiny processing performance, so trade-offs have to be found. In the urban security use case, the wireless network is a security domain, while the wired video cameras network is another security domain. Plug-n-play function is about self-configuration both of the network and the connection between the sensing components and the network nodes; the last one is very similar in principles to USB devices plug-n-play where an automated recognition of devices is performed as long as the automated lookup and use of the right devices drivers; network plug-n-play is focussed on the automated ad-hoc routing self-organisation with zero-configuration; this is particularly important for an efficient on-thefield deployment. Ad-hoc routing and data transmission functions are common and basic features to sensors network; ad-hoc-routing here provides the same level of flexibility and availability than Internet hop-by-hop networking with multiple paths opportunity and automated routing reconstruction in case of network nodes failures; data transmission for sensors observations and commands may be point-to-point between network nodes or point-to-multi-point based on the publish-subscribe communication model providing an efficient data distribution mean; this is particularly critical in this kind of resources constrained network; the in-network observations filtering make use of this communication model to set-up a distributed hierarchical sensors Grant Agreement number: Page 110 of 136

111 observations filtering. Both of the Ad-hoc routing and data transmission functions, because of the power consumption constraint, have optimized behaviour with a dedicated performance function to reduce the overall power consumption; they provide also hooks for the self-management function in order to enable network level optimization. The goal of interworking between wireless network and wired network function (network gateway server in Figure 60) is to enable the connection between the wireless network with specific constraints (i.e. power consumption, bandwidth ) and C2 and video cameras wired network that have no such constraints. The functions with green colour such as in-sensor observations filtering and in-network observations filtering represent possible decision making (cognitive) processing that can be performed at sensing components level to minimize the amount of false positive events or at network level by dedicated network nodes to lower even more false positive events through spatial or temporal correlations of detections. Also the self-management function may perform automated optimization using the administration interface. The clear goal (and requirement) here is to decrease even more the C2 and C4I operators workload whatever they perform mission management (e.g. observations filtering adaptations) or system management with self-* (e.g. data communication performance or security levels adaptations with self-optimization, self-protection). It is important to note that self-management function here performs optimization either at network node level or at network level only; in this last case however only the sensors network gateway is able to perform self-management of the entire network. The sensors network gateway is depicted in Figure 62. Figure 62: Urban security use case sensors network gateway functional architecture As introduced above, the role of the gateway is to provide to the business domain applications a useable abstraction of sensors networks, one (for C2) or several (for C4I). The C2 and C4I gateways goal is therefore similar, the main difference being that C4I gateway is designed to be more open and flexible to support more applications on one hand and easy integration of new functions, either related to decision making and connection of new sensors network on the other hand; concretely it Grant Agreement number: Page 111 of 136
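The in-network observations filtering just described can be pictured with the following self-contained sketch, in which an alarm is forwarded only when detections from several distinct sensors fall within a short time window; the window length and sensor threshold are illustrative values, not icore requirements.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Temporal/spatial correlation of detections: an alarm is forwarded to the C2 only when at
// least `minSensors` distinct sensors report a detection within `windowMillis` of each other.
public class DetectionCorrelator {

    static final class Detection {
        final String sensorId; final long timestamp;
        Detection(String sensorId, long timestamp) { this.sensorId = sensorId; this.timestamp = timestamp; }
    }

    private final Deque<Detection> window = new ArrayDeque<>();
    private final long windowMillis;
    private final int minSensors;

    DetectionCorrelator(long windowMillis, int minSensors) {
        this.windowMillis = windowMillis; this.minSensors = minSensors;
    }

    /** Returns true only when the correlated evidence is strong enough to forward an alarm. */
    boolean onDetection(Detection d) {
        window.addLast(d);
        while (!window.isEmpty() && d.timestamp - window.peekFirst().timestamp > windowMillis) {
            window.removeFirst();                   // drop detections outside the time window
        }
        Set<String> distinctSensors = new HashSet<>();
        window.forEach(x -> distinctSensors.add(x.sensorId));
        return distinctSensors.size() >= minSensors;
    }

    public static void main(String[] args) {
        DetectionCorrelator correlator = new DetectionCorrelator(10_000, 2);
        System.out.println(correlator.onDetection(new Detection("pir-3", 1_000)));       // false: one sensor only
        System.out.println(correlator.onDetection(new Detection("acoustic-1", 4_000)));  // true: two sensors within 10 s
    }
}
```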

112 also means that C4I gateway will have more functions than C2 gateway but this is not discussed here. The integration points in Figure 62 are explicitly mentioned (green boxes) while details are provided below; these integration points represent both interfaces for integration with the rest of the functions as well as new functions. Also green colour is used for decision making functions. The network gateway (client) is the set of functions responsible for connecting sensors network. It is about providing secure but raw access and control to one or several sensors networks then to the sensors themselves: the network protocols adapters are mainly wireless or wired protocols end points in our case; here integrations points mean that we may add new protocols adapters if required depending on sensors or sensors network integration; also the new adapters have to be interfaced both with the security function and the sensors and observations protocols adapter; these end points are used by the sensors and observations protocols adapter that translates specific encoding of sensors observations, sensors responses to commands and sensors commands into a standardized encoding. This translation is based on the use of predefined standardized templates kept in a sensors and observations templates registry. Because we have to manage huge heterogeneity of sensors with an even more huge heterogeneous representation of observations and commands, the translation uses also semantic mediation engines with dedicated (preferably standardized) sensors and observations ontologies. Here, integration points enable to add new ontologies and semantic engines in order to support management of new sensors; these new functions have to be interfaced with the sensors and observations protocols adapter. Security function is the peer end-point of the network nodes security function and also the access point of the sensors network security domain; the function provides key management compatible with wireless network constraints. The sensors and observations repositories manage the storage and access of sensors, sensors observations and sensors workflows models. We have here two kind of models, instances (or objects) models and types (or meta) models; while the first one deals with information about real world such as sensors and sensors observations, the last ones provide definitions about real world such as sensors capabilities description, observations semantic and so on; both enable control and reasoning on the underlying real world. Integration points here provide a mean to extend the repositories with new specific decision making information registries in case of extension of the services with new decision making functions; the new registries are interfaced with the services and adapters synchronisation in order to be updated and linked consistently with the other repositories; they must be also interfaced with security policies registry because the policies are related to the models in the repositories. The services and adapters synchronisation function has two main objectives: consistency of repositories as along as scheduling of requests (top-down) and updates (bottom-up). 
About consistency, it ensures that all information coming from the network gateway is dispatched to the right registries (sensors and observations repositories), many registries may be implicated in; it manages also conflicts between updates (about sensors and observations) from network gateway and requests from sensors and observations services, for instance to retrieve sensors information such as sensors availability; in this case scheduling may be necessary for instance when no sensors observations is available to satisfy a request coming from an application; the synchronisation function then first suspends the application request and sends a request to the network gateway; when the sensors observations are received the application request is resumed and returned these sensors observations. The sensors and observations services provide a sensors networks standardized interface to the domain applications; it is essentially based on the same principles than the well-known web-services pattern publish-find-bind where information is managed to be available to domain applications, then searched (e.g. discovery of sensors and observations) and retrieved by the applications; last Grant Agreement number: Page 112 of 136

113 step is the control of sensors by applications, to enable system management of them (e.g. information about sensors state and configuration, configuration updates). In addition to these core basic functions, the next few functions may be set-up and used at this level in conjunction or not with similar ones set-up and used at the applications higher level, it is about architecture trade-off and decision; a rather good example is the filtering and correlation function that may be performed by dedicated applications, by the gateway and by both but with a clear splitting of scope. Sensors workflow management is the way to provide large scale processes integration and scheduling of sensors processing along with various sensors data related processing services. Regarding decision making, sensors observations filtering and correlation is used to manage the vast amount of information provided by sensors networks; as such filtering operates on topics of interests (e.g. observations but not only) while correlation provides a first level of observations aggregations through patterns and dedicated operators (e.g. temporal, spatial, causal, logical). Sensors network self-management function may be set-up here to perform network wide management focussing on non-functional concerns taking care of sensors observations collection efficiency, so in short the sensors network performance and availability. Integration points here enable to add new sensors related services if required. The sensors and observations services are accessed through the security function that implements access control against domain applications requests; the granularity of this access control is defined according to the kind of objects, e.g. observations, sensors, sensors workflows, sensors network, security domains, the applications requests are targeting. The security policies registry function associated to the security function takes care of this granularity; also because of the dynamic availability of the sensors, sensors network and sensors observations, the security policies registry function has to be very efficiently updated and organized. Figure 63 describes the domain applications function within C2 and C4I. Figure 63: Urban security use case domain applications (C2 / C4I) functional architecture Within the overall functional architecture introduced by Figure 60, the domain applications functions are clearly much focussed to the specific needs of operational end-users such as the police (with perimeter protection or CBNRE C2) and all the other first responders than the rest of the functions in this architecture: here the level of abstraction is the highest but at the same time the scope of interest is kept much more narrow; e.g. not all observation may be relevant. Depending on the domain and mission they are focusing on, the applications are obviously quite different between the CBRNE C2, the perimeter protection C2, both with their dedicated surveillance mission and the C4I with its global crisis management mission but the principles are similar. Grant Agreement number: Page 113 of 136
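Returning to the access control granularity described above, the sketch below shows one possible shape of the security policies registry, with a policy granting an application role access to one kind of object within one security domain; all names and values are illustrative assumptions.

```java
import java.util.HashSet;
import java.util.Set;

// Coarse sketch of a security policies registry keyed on (role, object kind, security domain).
public class AccessControl {

    record Policy(String role, String objectKind, String securityDomain) { }

    private final Set<Policy> policyRegistry = new HashSet<>();

    void grant(String role, String objectKind, String securityDomain) {
        policyRegistry.add(new Policy(role, objectKind, securityDomain));
    }

    boolean isAllowed(String role, String objectKind, String securityDomain) {
        return policyRegistry.contains(new Policy(role, objectKind, securityDomain));
    }

    public static void main(String[] args) {
        AccessControl ac = new AccessControl();
        ac.grant("cbrne-c2-operator", "observations", "wireless-network");
        System.out.println(ac.isAllowed("cbrne-c2-operator", "observations", "wireless-network")); // true
        System.out.println(ac.isAllowed("cbrne-c2-operator", "observations", "cctv-network"));     // false
    }
}
```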

114 Figure 63 is a unified view of C2 and C4I functional architectures for the domain applications layer; if there is no fixed rule regarding architecture, for the urban security use case we choose to have both system management and mission management applications for C2 while at the same time, the C4I has only mission management applications; behind that choice the rationales are clear: C2 is dedicated to a given local sensors network and it is the most relevant function to perform local and efficient system management while C4I has an integrated view of all sensors network and is focused on global situation awareness and high level decision making; the system management is not the point in there. With the mission management functions, the core applications represent the basic functions identified in section about use case storyline and capabilities set-up. The main operational goal is the control of a city district so it requires complete evacuation from the district and access restriction to the district. To manage these we use: Identification and location of people (e.g. based on video cameras, GSM mobile phones) to know if they are in the district or not, if there is still people in the district, etc.; Tracking of people (e.g. based on video cameras, GSM mobile phones) to know where they are going, for instance leaving the district or going into the most polluted sector, etc.; Information to inform them (e.g. based on GSM mobiles phones, city district speakers) of what to do or what is not allowed, etc.; Situation awareness function provides a global repository aggregating and linking all the information above; this is then used by UI and crisis management processes to infer and manage the situation. Integration points would be the mean to add new mission core capabilities as well as dedicated UI; a concrete example for C4I would be the inventory of rescue means available and deployed on-thefield such as tents and so on. The crisis management processes functions represent the automation of crisis management processes; these processes are typically managed by experts that decide of actions according to their shared understanding of the situation and CONOPS/CONUSE. Based on the experience, CONOPS/CONUSE the objective is to set-up and use automated processes, namely crisis management workflows that reduce operators and experts workload while enhancing the pace of actions. The processes are defined and kept in the crisis management workflows templates repository. At this level the workflows encompass sensors network workflows and domain applications workflows to provide global surveillance and reactions capabilities; an example of this is provided in step 5 of use case storyline at section The crisis management optimizations are further enhancement of crisis management automation where optimization, normally performed by experts may be partly delegated to the function. The system management functions are operated by network experts and deal with two critical concerns. The first one is the supervision of the sensors network which means the monitoring with a of the detailed behaviour of the system with very strong availability and security in mind because of urban crisis; the sensors network has it-self built-in real-time auto-reparation that provides a first level of resilience and reduces the network experts workload and stress using the supervision UI (User Interface). 
The sensors network monitoring enables re-configuration actions, typically when the system availability or security is no longer adequate to support the missions; availability monitoring here includes monitoring the power consumption of the wireless network nodes. Optimized deployment of the sensors network is also a critical question because of the crisis context; it has to be as quick as possible, typically here less than one or two hours for the global system and less than 5 min. per sensor. Quick deployment implies preparation, so, first, deployment planning and

second, on-the-field self-configuration. The deployment planning is performed through a deployment UI that provides an efficient way of defining and estimating the value of a given sensors network deployment before its realization on the field. Two typical deployment optimization functions are relevant here: analysis of the deployment from the network communication point of view or from the sensors point of view; the first tries to find the best location of network nodes while the second tries to find the best location of sensors, so trade-offs finally have to be found. System self-management is a potential enhancement to the experts' management towards sensors network optimizations, mainly for availability and security.
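To illustrate the self-management loops (MAPE-style) referred to throughout this section, the sketch below monitors the average battery level of the wireless nodes, analyses it against a threshold and, if needed, plans and executes a reconfiguration of the reporting period; the interface, threshold and action are illustrative assumptions rather than icore-defined behaviour.

```java
// Monitor-Analyse-Plan-Execute loop trading observation freshness for network lifetime.
public class SelfManagementLoop {

    interface Network {
        double averageBatteryLevel();          // Monitor
        void setReportingPeriodSeconds(int s); // Execute
    }

    private final Network network;
    private int reportingPeriod = 10;

    SelfManagementLoop(Network network) { this.network = network; }

    void runOnce() {
        double battery = network.averageBatteryLevel();          // Monitor
        boolean critical = battery < 0.3;                        // Analyse
        if (critical && reportingPeriod < 60) {
            reportingPeriod *= 2;                                // Plan: halve the reporting rate
            network.setReportingPeriodSeconds(reportingPeriod);  // Execute
        }
    }

    public static void main(String[] args) {
        SelfManagementLoop loop = new SelfManagementLoop(new Network() {
            double level = 0.25;
            public double averageBatteryLevel() { return level; }
            public void setReportingPeriodSeconds(int s) { System.out.println("reporting every " + s + " s"); }
        });
        loop.runOnce();   // prints "reporting every 20 s"
    }
}
```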

5. Expected Benefits of icore Architecture This section has a twofold goal: on one side, to show how the icore Architecture fits the requirements put forward by the Stakeholders and how they can benefit from it; on the other, to show how icore can also bring value to other architectures in the realm of the Internet of Things. The value that icore brings to the ICT industry needs to be evaluated from the points of view of several stakeholders. The documents produced by WP1 [5][6] have considered a number of actors. In the next section the value brought by icore to the major stakeholders is briefly highlighted; in section 5.2, the general business requirements are mapped onto high level capabilities of the icore architecture. Then a general analysis of how the icore requirements put forward by D2.1 [2] and D2.2 [3] are fulfilled is provided in section 5.3. Section 5.4 discusses how the Cognitive infrastructure supported by icore is capable of providing value to the stakeholders. Section 5.5 illustrates and discusses the contribution of icore to existing architectures for IoT services. 5.1 High Level Value of icore There are several stakeholders that can take advantage of the icore architecture. As an architecture, icore makes clear, by means of design principles and functional separations, how to access valuable functionalities and services. These can be instrumental for many stakeholders in creating and providing applications on an icore platform, or can be used by end users to take advantage of the rich functional set. The major Stakeholders and some of their functionalities are depicted in Figure 64. Figure 64: Stakeholder roles and related benefits enabled by icore architecture Value to Users Users will benefit from icore in terms of personalized services (cognitive functions can be fundamental in order to tune the service functions to the real, specific customer needs) and interoperable and open services: users will not be forced into a specific ecosystem, but can use different solutions, services and devices from the market (virtualization is the major enabler for this feature). In addition, users can add their own value to services either through programming (by means of APIs) or by means of refinement of the Real World Knowledge. Value to Service Providers

117 The value for a Service Provider resides in: integration of different device ecosystems (interoperability by means of virtualization); the possibility to access to different levels of programmability. From the Service provider standpoint icore supports, at the Knowledge level, the specialization of the Real World Knowledge databases (e.g., providing new ontologies or Knowledge representation of a specific problem domain) as well as the specialization of specific functions and components (e.g., VO and CVOs). These capabilities enable a Service Provider to build his or her own problem space with specialized Knowledge that can be a distinctive value within an icore system. At the Component level, icore components can be specialized according to the needs of applications or for reflecting properties and features of the problem domains. Also gateways and low level objects can be programmed in order to integrate in the icore infrastructure with new devices and real world objects. At the Application level, a Service Provider can capitalize its Knowledge content and specialized components, by creating compelling applications using the icore interfaces. Value to Domain Context Providers This is a new Actor in the icore ecosystem. Some companies can specialize in order to well define a problem domain (e.g., logistic) and use the icore programmability to create the right context for application developers and companies to build/create/use rich applications. In this case the Context Providers are highly skilled people that know what phenomena / events and measurements are important in the specific domain and will be able to sell their Domain representation. Value to Platform Providers The platform providers can put in place an icore framework and make it available to other Providers that will (by segmenting and virtualizing the different problem domains) build specific solutions or will open up to a wide ecosystems of developers. Several instances of the icore architecture will run in parallel satisfying the requirement of individual Service Providers. Value to Developers and Marketplace Providers Developers have the option to develop components to specialize the icore functions or to develop applications in a generic or problem specific manner. Applications (if service providers open their application spaces) could be developed once and be used in different problem domains. Marketplace Providers can nurture a market of specialized applications and relate those to different service providers in order to populate the service offering but still maintaining compatibility with different Service Providers 5.2 Mapping of the icore Architecture to Business Requirements From the user requirements that originate from the generalized use cases, the following high level functional architectural requirements can be derived. a. Large Scale system Some Service and Platform Providers will need to manage a large number of real world objects. The icore architecture helps this daunting management task by means of several functionalities: cognition can help in allocate the right resources for the real time context in which applications are operating and self-organization capabilities of the platform will help in lessening this issue. Programmability of the platform and full availability of interfaces at the level of VO and CVO can allow introducing the mechanisms that the Provider deem appropriate to manage their resources in the case that the management functions are not enough to support the requirements. b. 
Virtualization The hardware and embedded software of the sensor or actuator should be encapsulated by a

virtual counterpart, and this virtual counterpart should offer a well-documented interface to the IoT. In the icore architecture the virtualization is achieved by the VO, being the virtual counterpart of the RWO device. The VO Type, as stored in the VO Template Repository in the icore architecture, provides the well-documented interface to the IoT. c. Enrichment The architecture should provide means to combine and enrich the data from one or more sensors into higher level sensors. In the icore architecture the CVO is the key element for the data enrichment. Combined with the CVO Template, the enrichment functions are unambiguously defined. d. Discovery The architecture should provide means to discover the relevant set of ordinary and higher level sensors, given certain relevance criteria. The discovery mechanism should take into account that the party doing the discovery (buyer) might have a different frame of reference than the party offering the sensor data (seller); the sensors can be mobile. In the icore architecture, the VO Registry and CVO Registry store metadata with each VO and CVO, thus providing the criteria for the discovery of VOs/CVOs. The use of ontologies in the objects in the VO Templates Repository and the CVO Templates Repository, and more precisely in the VO Types and the CVO Templates, bridges the different frames of reference of the parties using the icore system. e. Security The architecture should offer a fine-grained security mechanism, allowing authorized access to just the sensor data sold/bought in a multi-party setting (i.e. distributed and multi-domain). Security in the icore architecture is not confined to a limited number of concepts: security by design is pervasive and details can be found in the dedicated section. 5.3 Mapping the Reference Architecture to Technical Requirements The mapping between the Technical Requirements put forward in D2.1 [2] and D2.2 [3] and the icore Architecture is represented in Table 3 (each requirement category is followed by the corresponding icore functional elements). Functional - Data acquisition and categorization. Data acquisition relates to fulfilling all those actions needed to instantiate the icore system (i.e. the need for formal icore entity descriptions to enable acquiring information about their status, ownership, purpose etc.). Data categorisation (i.e., to classify data according to their value) must be possible in a very flexible way according to the structure envisaged for formal descriptions of icore entities. icore entity descriptions are performed through templates and models. There are three kinds of templates, each one corresponding to the identical icore level, namely Service, CVO and VO templates. They are used to describe a specific type of Service, CVO and VO respectively. They have a direct relation to the Real World Knowledge Model, the CVO information model and the VO information model, which describe concepts and associations between concepts for these

119 entities. Situation Awareness. The icore framework shall provide the capability to describe the contexts and to react to changes in the context. Situation awareness includes the concept of proximity among data and services and the capability to formalize proximity areas and assess membership levels of objects within a proximity area (i.e. geographically close, same owner, granted access, connected, same domain). Semantic searching. The icore framework shall provide the capability for controlled searching and access to data and services. The controlled term is related to the access control function. The semantic searching capability shall be distributed and interoperable across different domains. Autonomic and cognitive service lifecycle management. The icore framework should be able to adapt to changes in an autonomic way. This autonomic function is based on a continuously executed cognitive cycle (i.e. monitors, analyse, decide, actuate) continuously improving the outcome of service actuations and the weight of influence from sensing / participating objects (e.g. energy use optimisation). icore has the Situation Awareness block to process the gathered information from the Situation Observers (SOs), which are specific CVOs that are devoted to the particular task of observation of events though CVO processing, as particularly meaningful and relevant to the situation that a particular person or service is in. Situation awareness is responsible for the creation of the Real World Knowledge, which is then stored in the RWK model. icore investigates the possibility that the Situation Awareness functional block is fully substituted in terms of functionality by SOs especially for runtime operation. Semantic searching exists in each one of the icore layers. In the Service level, The Semantic Query Matcher comprises semantic alignment/learning enhancements as a potential pre-processing for the standard SPARQL matching of query to Service Template concept as done in the RDF Rules Inference Engine. In the CVO level, the Approximation and Reuse Opportunity Detection performs a search in order to discover potentially available, relevant CVO instances of the requested CVO template names using Semantic Similarity / Proximity and other cognitive mechanisms. A similar mechanism is envisioned for the VO factory. The controlled term is related to the access control function, which is related to the Access Server. The Performance Management (CVO Management Unit) intends to guarantee the proper performance of not only the CVO level (CVO level functional blocks and CVO instances) but also the VO level (VO level functional blocks and VO instances) in terms of satisfying specific Key Performance Indicators (KPIs) thresholds. The Quality Assurance (CVO Management Unit) targets mainly at the satisfaction of the SLAs as Grant Agreement number: Page 119 of 136

120 Directory Services. Maintain and log service usage history, useful compositions of objects, rating of icore objects. Performance and scalability. icore shall provide means and functions to support scalability and support the validation of performance requirements and their matching to systems constraints. Interoperability. icore shall be interoperable for data and functions among different domains. Security & Privacy: Availability. icore shall provide the capabilities to support service continuity. Non Functional delivered by the Service level. Both the Performance Management and the Quality Assurance may trigger reconfiguration actions in order to improve the performance and the quality of service respectively. The Resource Optimization (VO Management Unit) optimizes by means of cognition the operation of the underlying sensors, actuators and resources e.g. by reducing the energy consumption. Learning mechanisms and relevant databases exist in each one of the icore levels. In the Service level, there are learning statistics in the Semantic Query Matcher and learning mechanisms about the user preferences and the rating of icore objects in the User Characterization. Learning Mechanisms also exist in the CVO factory to learn about the composition of objects etc. The Performance Management (CVO Management Unit) intends to guarantee the proper performance of not only the CVO level (CVO level functional blocks and CVO instances) but also the VO level (VO level functional blocks and VO instances) in terms of satisfying specific Key Performance Indicators (KPIs) thresholds. Scalability is ensured by distributed CVO/VO registries, the semantic descriptions of objects and the reuse of existing CVOs through the Approximation and Reuse Opportunity Detection. Interoperability is ensured by the VO Front End, which is the abstract part of the VO making it interoperable. It comprises the VO template filled with the specific to the VO instance information. The front-end helps also checking the access rights and communicating with the IoT based on IETF protocols on top of IP. Both the Performance Management and the Quality Assurance in the CVO Management Unit may trigger reconfiguration actions in order to improve the performance and the quality of service respectively. Grant Agreement number: Page 120 of 136

Security & Privacy: Confidentiality/Data protection and Privacy. icore shall provide capabilities to regulate access to data and services. This category also includes data protection and privacy requirements, which imply anonymization or pseudonymization on the one hand, and the use of the minimum set of data needed for the use case on the other hand.

Security & Privacy: Integrity. icore shall provide the capability to guarantee the logical correctness of the processes and the consistency of the data structures and the occurrence of the stored data.

Security & Privacy: Authentication/Authorization. icore should provide the means to establish the validity of a transmission, message or originator, and a means of verifying an individual's authorization to receive specific categories of information.

Security & Privacy: Non-repudiation. icore shall provide functions for proof of delivery, where the recipient is provided with proof of the sender's identity, so that neither party can later deny having processed the data.

The Access control functional area of the Access Server imposes that, before using a VO/CVO instance logged in the VO/CVO registry, the Service Requester needs to be granted access rights to that VO/CVO. The access rights depend on the user or application that made the initial request to the Service Requester. Data Manipulation / Reconciliation takes care of the data management (considering also big data) and ensures the quality of the data, e.g. by interpolating missing data using machine learning techniques. The Performance Management function (CVO Management Unit) guarantees the proper performance of both the CVO level (CVO-level functional blocks and CVO instances) and the VO level (VO-level functional blocks and VO instances), in terms of satisfying specific Key Performance Indicator (KPI) thresholds. Authentication and Authorization ensures that each actor is authenticated and authorized before interacting with the icore system: the authentication function ensures that a user or an application is recognized by the icore framework, while the authorization function defines the level of access of the authenticated party and grants access. The Access control function of the Access Server is envisioned to offer this functionality.

Table 3: Mapping of Technical Requirements towards the icore Architecture
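
To make the SPARQL-based matching of a query concept to a Service Template (Table 3, Semantic searching row) a little more concrete, the following minimal sketch shows how such a lookup could be expressed in Python with the rdflib library. The namespace, class and property names (icore:ServiceTemplate, icore:providesConcept, icore:alignedWith) and the template export file are illustrative assumptions, not the actual icore vocabulary or deployment.

    from rdflib import Graph, Namespace

    ICORE = Namespace("http://example.org/icore#")   # hypothetical namespace

    g = Graph()
    # Assumed local RDF export of the Service Template registry.
    g.parse("service_templates.ttl", format="turtle")

    # Find service templates whose declared concept is aligned with the requested one.
    query = """
    PREFIX icore: <http://example.org/icore#>
    SELECT ?template
    WHERE {
        ?template a icore:ServiceTemplate ;
                  icore:providesConcept ?concept .
        ?concept icore:alignedWith icore:RoomTemperatureMonitoring .
    }
    """

    for row in g.query(query):
        print("Candidate template:", row.template)

In a full deployment the semantic alignment/learning enhancements of the Semantic Query Matcher would run before this step, relaxing the query when no exact concept match exists.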

5.4 The Cognitive advantage

Cognition is one of the three cornerstones of the icore platform, besides virtualization and the capacity to handle large-scale IoT systems. It is a distinguishing feature because it enriches the icore platform with the capability of cognitive reasoning over large systems and large amounts of data. Users and stakeholders of icore can take advantage of this by creating applications, services and even components that are capable of behaving intelligently in specific contexts and with specialized knowledge bases. In this sense, icore offers an infrastructure that can easily be customized to encompass reasoning mechanisms and knowledge bases developed by specialized entities or service providers.

The value of cognition is also related to the difficulty of the tasks and to the amount of data and the number of entities to be controlled and managed. In this respect, the cognition framework, the virtualization capability and the scalability to large numbers fit together well and provide high value for customers. In addition, icore is capable of organizing knowledge by learning. This means that a knowledge provider gains a triple benefit from using its knowledge base in an icore system: a) the possibility to leverage (also from an economic point of view) its knowledge base; b) the possibility to enrich the knowledge base with more relationships and more concepts stemming from its application to a dynamic environment; and c) the possibility to test the knowledge base against real cases. The icore cognitive approaches are further discussed in this section.

Although icore supports execution scenarios based on imperative scripts, one should exploit the opportunities offered by cognitive approaches. The knowledge mechanisms act on, and improve, both the RWK and the SK. The following three forms of cognition are supported by icore and their usage is described below:

domain knowledge
deductive reasoning
machine learning

Note that each of the above approaches is optional and they are not mutually exclusive. It is expected that, at the beginning, an icore instance starts with the first two forms. After data accumulate (e.g. intra/outer icore data, user feedback, etc.), machine learning can come into play to detect patterns and produce predictions, rules or explanations.

Domain knowledge

For the task at hand, domain knowledge might be elicited from experts. Based on their experience, deployed into the icore platform, or on their insight in some scenarios, the icore platform can reach the desired state given the stimulus and the current context. A well-known and widely practiced approach to including domain knowledge is based on script files, which allow workflows to be specified in imperative or functional languages. This approach is widely used in computer science and in the IT industry. Scripts have the incontestable advantages of being popular communication bridges between a human expert and the machine, and of being easy to debug, maintain and deploy on various platforms.

As a mechanism used inside icore, complex event processing (CEP) is a specific form of human-expressed knowledge. The input streams that feed a CEP engine are processed by various so-called agents, which are specialized in filtering, pattern detection and transformation (such a CEP engine can be found at Level 1 of the Situation Awareness module, but it can also be added at the CVO level, if needed). The rules / formulas / workflows deployed inside the CEP are an expression of human knowledge that becomes an icore asset; a toy example of such a rule is sketched below.

Another icore-related example is the template concept. A template represents an orchestration of operations aiming to solve specific problems (for the SL and CVO levels). Although icore could eventually create some templates automatically (e.g. through reasoning or transfer learning, see below), the natural premise is that, at least in the beginning, an expert creates such templates and deploys them into the icore instance.

The Situation Awareness module is dependent on a domain expert/knowledge engineer, as depicted in Figure 39. Nevertheless, the situation recognition or classification might sometimes be supported by deductive or inductive reasoning. The SA output feeds the RWK.
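
As an illustration of the kind of human-expressed CEP rule mentioned above, the sketch below implements a trivial filtering and pattern-detection agent in Python: it raises a situation event when three consecutive temperature readings exceed a threshold. The event format, threshold and window size are assumptions made purely for illustration; a real deployment would express such rules in the project's CEP engine rather than in hand-written code.

    from collections import deque

    THRESHOLD = 30.0   # assumed threshold, degrees Celsius
    WINDOW = 3         # number of consecutive readings that triggers the pattern

    class OverheatDetector:
        """Toy CEP-style agent: filtering + pattern detection over a stream of VO readings."""

        def __init__(self):
            self.window = deque(maxlen=WINDOW)

        def on_event(self, reading):
            # Filtering agent: only temperature readings are of interest.
            if reading.get("type") != "temperature":
                return None
            self.window.append(reading["value"])
            # Pattern-detection agent: WINDOW consecutive values above the threshold.
            if len(self.window) == WINDOW and all(v > THRESHOLD for v in self.window):
                return {"situation": "overheating", "evidence": list(self.window)}
            return None

    detector = OverheatDetector()
    stream = [{"type": "temperature", "value": v} for v in (28.5, 31.0, 32.4, 33.1)]
    for event in stream:
        situation = detector.on_event(event)
        if situation:
            print(situation)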

The following case shows how domain expertise might support a specific reasoning mechanism. It concerns probabilistic graphical models, whose salient representatives are Bayesian networks and Markov models. This active area of research has, at the time of writing, a well-known set of success stories, founded on a strong mathematical basis. For a Bayesian network, a human expert is asked to enumerate the attributes of interest (which are modeled as random variables) and the causal relationships between them (which define a graph of relations). The expert can also provide the conditional probability distributions between the children and their parent nodes (otherwise, they are estimated from statistical evidence). The result is a graph that allows for prediction, evidential reasoning and giving the most plausible explanations. The central role here belongs to the human expert, as she explicitly states the qualitative and quantitative influences between the enumerated factors. The SK can benefit greatly from this approach.

When pattern recognition models are further embedded into icore, a difficult question is which of the candidate models is to be used for a specific task. Linear regression might be suitable for specific predictive tasks, while for other classes of problems non-linear inductive mechanisms are more appropriate. The no free lunch theorem for search and optimization [36] shows that no algorithm can dominate all other candidate approaches in terms of performance for, say, all classification problems. The problem of choosing the best learning model can be mitigated by taking into account that, for a specific problem (or class of problems), there are machine learning algorithms that are more appropriate than others. A human expert (e.g. one with deep expertise in ML approaches and with domain knowledge of the problem at hand) is needed to propose the actual machine learning approach(es) to be employed. Once we acknowledge the inherent inductive bias of every ML algorithm, we realize the necessity of this human expertise inside icore.
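
The following self-contained sketch shows, with made-up numbers, the kind of expert-provided Bayesian model described above: a single cause (room occupancy) with two sensor effects, and inference by enumeration of the posterior probability of occupancy given a motion detection. Variable names and probabilities are purely illustrative and not taken from any icore knowledge base.

    from itertools import product

    # Expert-provided structure and conditional probabilities (illustrative values).
    p_occupied = 0.3
    p_motion_given = {True: 0.9, False: 0.05}   # P(motion detected | occupancy)
    p_light_given = {True: 0.7, False: 0.2}     # P(light on | occupancy)

    def joint(occupied, motion, light):
        """Joint probability of one full assignment of the three variables."""
        p = p_occupied if occupied else 1 - p_occupied
        p *= p_motion_given[occupied] if motion else 1 - p_motion_given[occupied]
        p *= p_light_given[occupied] if light else 1 - p_light_given[occupied]
        return p

    # Evidential reasoning by enumeration: P(occupied | motion detected).
    numerator = sum(joint(True, True, light) for light in (True, False))
    denominator = sum(joint(occ, True, light) for occ, light in product((True, False), repeat=2))
    print("P(occupied | motion) =", round(numerator / denominator, 3))
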
Deductive reasoning

Deductive reasoning proceeds from the general to the specific, i.e. it draws particular conclusions from general principles. As an expression language, deductive reasoning uses (but is not limited to) first-order logic and propositional logic, and ultimately defines syllogisms, which are procedures that receive a set of valid input statements and produce valid output statements. A deductive mechanism relies on a knowledge base: when it receives a current input (context, situation), it produces a logically supported conclusion, which ideally should be the premise for a useful action. This approach is used in expert systems, which emulate the decision making of a human expert based on a declarative description and an inference engine; the use of expert systems is one of the success stories of applying Artificial Intelligence to practical tasks. In icore, both the RWK and the SK can act as knowledge bases. As an example, one can start from rules linking generic situations to parameterized templates, and then call a deductive mechanism to select a concrete template to be used for the task at hand (a minimal sketch of such a rule base is given below). Additionally, deductive reasoning is the standard technique used by semantic reasoners, which infer logical consequences from a set of asserted facts or axioms. We recall here that the semantic web is one of the technologies to be used for describing icore assets, one of the motivations being its standardized support for deductive reasoning.
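
A minimal sketch of the rule-based deduction described above (generic situations linked to parameterized templates) could look as follows; the situation predicates and template names are invented for illustration and do not correspond to actual icore templates.

    # Knowledge base: rules mapping asserted situation facts to a parameterized template.
    RULES = [
        # (required facts, template name, parameters derived from the situation)
        ({"room_empty", "lights_on"}, "EnergySavingTemplate", {"action": "switch_off_lights"}),
        ({"smoke_detected"}, "EvacuationTemplate", {"priority": "high"}),
    ]

    def deduce_template(facts):
        """Single deduction step: return the first template whose premises are all satisfied."""
        for premises, template, params in RULES:
            if premises.issubset(facts):
                return template, params
        return None

    # Current situation asserted by the Situation Awareness block (illustrative facts).
    current_facts = {"room_empty", "lights_on", "window_closed"}
    print(deduce_template(current_facts))   # -> ('EnergySavingTemplate', {'action': 'switch_off_lights'})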

Machine learning

Machine learning (ML) can be seen as the counterpart of deductive reasoning: starting from a set of data, it aims to automatically discover patterns and infer behavioural models. One of the expected outcomes is improved performance on some task through the gained experience. Several candidate applications (anomaly detection, time series forecasting, classification, regression, association learning, feature learning) by themselves bring value to an icore process. For example, estimating the expected value of a time series recorded by a VO, or detecting anomalies for a group of VO sensors and performing data reconciliation, are forms of servitization. One should take into account that the above-mentioned ML approaches are the most popular ones, with acknowledged advantages and added value for various tasks. Many of the existing ML approaches are functionally equivalent but have a different inductive bias, and choosing between them is a matter of human expertise, trial and error, and hypothesis validation.

Besides the enumerated applications, nothing prevents us from imagining two other potential and realistic usages of machine learning:

When the Service level is confronted with a specific situation, it has to search through the available SL templates. Deductive reasoning can return no template; in that case one might use case-based reasoning or inductive transfer (transfer learning) to combine and adapt existing solutions to the newly encountered situation. Obviously, a proper metric measuring the difference between the given situation and pre-existing templates (alternatively: automatically detecting which features are relevant in the comparison) is a critical issue. Here, distance/metric learning, i.e. deciding on the right distance function to measure the similarity between previous experience and the current context, is an active research area with clear potential.

Reinforcement learning has been successfully used to improve resource management/allocation based on critic signals. If we assume that user feedback is collected, or that the environment somehow communicates the degree of success of the proposed approach, these signals can be fed into a reinforcement learning algorithm to maximize a long-term reward. Obviously, this approach can be extended to the CVO level, as long as a form of critic signal is provided to it.

Beyond the ML approaches existing at the time of writing, we expect that more ML algorithms will be developed and become candidate techniques to improve the system's output. A simplified anomaly-detection example is sketched below.
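
As one concrete, deliberately simplified instance of the anomaly-detection application mentioned above, the sketch below flags unusual readings from a group of VO sensors using scikit-learn's IsolationForest. The data are synthetic, and the choice of model is just one of several reasonable options (cf. the remark on inductive bias above).

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=0)

    # Synthetic readings from three co-located VO temperature sensors (rows = time steps).
    normal = rng.normal(loc=21.0, scale=0.5, size=(200, 3))
    faulty = np.array([[21.1, 20.8, 35.0]])   # one sensor drifts badly at the last time step
    readings = np.vstack([normal, faulty])

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(readings)

    labels = model.predict(readings)           # +1 = normal, -1 = anomalous
    anomalous_rows = np.where(labels == -1)[0]
    print("Anomalous time steps:", anomalous_rows)

The anomalous rows would then be candidates for the data reconciliation step mentioned earlier (e.g. interpolation of the faulty sensor's value).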

5.5 Contributions to other Architectures

Contributions to IoT-A

The IoT-A Architectural Reference Model (ARM) is an abstract design concept for helping and guiding the design of application-domain-specific and concrete IoT architectures. The ARM contains examples of business scenarios and stakeholders and derives an IoT Reference Model and an IoT Reference Architecture. The IoT Reference Model describes IoT domain concepts that form a common ground for learning IoT-A, together with guidelines for generating concrete IoT architectures. In this respect, the icore domain is one application domain for IoT-A. In this section we first compare IoT-A and icore concepts and identify the concepts they have in common. Then, using the IoT-A and icore concepts, we provide a functional architecture alignment between IoT-A and icore.

Conceptual Mapping

The conceptual mapping is based on the comparison of terms and their relations found in the IoT-A domain and information models of the ARM [46] and the icore models in Section 3. The mapping assumes that the IoT-A abstraction level is somewhat (90%) equivalent to the icore VO level; thus an application in IoT-A would be a CVO in icore. While this is not fully true, it helps the positioning of terms and the alignment between IoT-A and icore. Table 4 presents the important concepts and shows how IoT-A terms can be mapped and used in icore.

IoT-A Ref. Model Concept | icore Concept | Comments
User (Human User or Digital Artefact) | ICT Object (ICT or non-ICT); the Human User is one class in the non-ICT class (the VO has a User as owner); Digital Artefact: User
Physical Entity (PE): a discrete, identifiable part of the physical environment that the User interacts with | ICT Object / Real World Object (RWO): either ICT or non-ICT, any physical object that exists in the real world | A PE requires User interaction but a RWO does not
Virtual Entity (VE): a digital representation of a PE | Virtual Object (VO) | The VO represents an ICT Object that isAssociatedTo a non-ICT Object
Augmented Entity: a composite of VEs and PEs | Does not have a use case in icore; Augmented Entities could be implemented in the application
Device: a technical artefact that provides the link between VE and PE; Tags are Devices that explicitly identify a PE; Sensors are Devices that provide information (identity and measures of the physical state) about the PE they monitor; Actuators are Devices that can modify the physical state of a PE | ICT Object (the ICT Object isAssociatedTo a non-ICT Object); one class for ICT or non-ICT Objects; one class for ICT Objects; one class for ICT Objects; is active/passive, ICT/non-ICT
Services: SW mechanisms by which needs and capabilities are brought together; non-IoT Services are other, general Services; IoT Services are well-defined and standardized interfaces | ICT Object Services (general definition); icore Services (WP5) fit here | Appears to be the same concept with a different name
Resource Services are IoT Services that expose the functionality of a Device; VE Services are IoT Services that provide access to VE information (lookup, discover, resolve); Integrated Services are IoT Services that are a combination of Resource Services and VE Services | Proprietary Services (related to the VO Function); IoT Services; VO FE Services | Appears to be the same concept with a different name; appears to be the same concept with a different name, related to identity, naming and addressing

Natural feature identification (primary identification) is sensor-based PE identification | Involves reasoning and pattern recognition, and thus the icore CVO and Service levels
Tag or label identification (secondary identification) is assertional identification
Resource: a SW component that provides some functionality (general definition); Network Resources are Resources hosted by network nodes that are not IoT Devices; On-Device Resources are Resources hosted by a Device
Location is a basic attribute of PEs, Devices and Human Users | ICT and non-ICT objects hasLocation | Same in both

Table 4: Conceptual mapping of IoT-A and icore

Functional IoT-A Alignment

The functional alignment is based on the functional architecture views found in the IoT-A deliverables for the ARM [46] and for IoT-A Discovery and Associations [47], and on the icore architecture in chapter 3. The alignment with IoT-A happens between the icore VO and the IoT-A IoT Service layers. The VO structure has a Front-End layer towards icore and a Back-End layer towards the IoT Service. Figure 65 presents both the icore and IoT-A functional architectures: the icore-specific parts are on the left side and IoT-A on the right; in the middle there are IoT-A functions up to the IoT Service, followed by the icore-specific VO IoT Back-End and other functionalities. The figure also illustrates how IoT-A Devices (bottom right) can be icore ICT objects with sensors and/or actuators. Both IoT-A and icore use resources that are not related to devices with sensing and/or actuation capabilities, but rather provide computational, communication and memory services for the applications, e.g. cloud services. IoT-A calls these Network Resources, as opposed to On-Device Resources with sensors, actuators and tags. In icore, the association between a non-ICT object and an ICT object is a higher-order function that requires some cognitive capabilities, as described in chapter 3.9.

Figure 65: Functional IoT-A alignment

In IoT-A, the association is between the VE, the VE service description and the IoT Service. VEs and VE descriptions can be mapped to VOs and VO descriptions, and after this mapping the IoT-A discovery and association functionalities can be used in icore. The discovery contains resolution and lookup query functions for finding VOs (VEs), IoT Services and their associations. The association contains functions for creating, modifying and deleting associations. In IoT-A there are six flavours of discovery approaches:

Geo-Discovery
Semantic Discovery
P2P Lookup Discovery
Federation Based Discovery
Domain based approach
M3-uid Discovery

Contributions to M2M Service Enablement Frameworks

M2M service enablement frameworks are designed to provide horizontal support for different M2M applications. Rather than having a vertical technology solution for every individual M2M application, the idea behind M2M service enablement frameworks is to define common functionality for multiple M2M applications. Different standardization bodies have specified, or are specifying, M2M service enablement frameworks. One of the best-known standardized M2M service enablement frameworks is that of ETSI TC M2M [48]. A new standardization body in the area of M2M service enablement frameworks is oneM2M, a partnership between ETSI and other regional/national standardization organizations. Launched in July 2012, the purpose of oneM2M is to develop a worldwide standard for a common M2M Service Layer. It is to be expected that the results from ETSI TC M2M will form the starting point for work in oneM2M, and that oneM2M will take over the further development of this architecture from ETSI TC M2M. However, a oneM2M architecture has not yet been defined. Therefore, we use the current ETSI TC M2M service enablement framework as the basis for our discussion here.

icore mapping to the ETSI TC M2M service enablement framework

The ETSI TC M2M service enablement framework is shown in Figure 66. The architecture defines M2M Service Capability Layers for the network (NSCL, Network Service Capabilities Layer), the M2M Device (DSCL, Device Service Capabilities Layer) and the M2M Gateway (GSCL, Gateway Service Capabilities Layer). M2M Devices are connected to the Network Service Capabilities Layer either directly, via a wide area network (wireless, mobile, fixed or other), or first via an M2M area network (e.g. ZigBee) and an M2M Gateway.

Figure 66: The ETSI TC M2M Service Enablement Framework

The Service Capability Layers provide horizontal support for M2M Applications that run partly on the M2M device and partly in the network. Multiple M2M applications can make use of a single Service Capability Layer. The mIa interfaces between the Network Service Capabilities Layer and the network M2M Application, as well as the dIa interfaces between the Device/Gateway Service Capability Layer and the device M2M Application, are based on REST. From a conceptual point of view, they are structured in the same way as the icore interfaces towards the VO and/or CVO. The mId interface between the Network Service Capabilities Layer and the Device/Gateway Capability Layer is also based on REST principles (a purely illustrative sketch of such a REST interaction is given after Table 5).

Looking at the mIa and mId interfaces, there are two basic mappings possible between the icore architecture and the ETSI TC M2M service enablement framework; the two mappings are shown in Table 5. In Option 1, the mId interface is mapped onto the VO interface. In Option 2, it is the mIa interface that is mapped onto the VO interface.

ETSI TC M2M                        | icore Option 1           | icore Option 2
M2M Device                         | Real World Object        | Real World Object
Device/Gateway Service Capability  | Virtual Object           | Real World Object
Network Service Capabilities       | Composite Virtual Object | Virtual Object

Table 5: Two different mappings between icore and the ETSI TC M2M service enablement framework
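
Purely as an illustration of the REST style shared by these interfaces and the icore VO/CVO interfaces, the sketch below retrieves the latest content instance of a container over HTTP. The host name, resource path and accepted media type are assumptions for illustration and do not reproduce the normative ETSI TC M2M resource tree or security procedures.

    import requests

    # Hypothetical NSCL endpoint and container holding measurements from an M2M Device.
    BASE = "https://nscl.example.com"
    PATH = "/applications/tempMonitor/containers/readings/contentInstances/latest"

    response = requests.get(BASE + PATH,
                            headers={"Accept": "application/json"},
                            timeout=5)
    response.raise_for_status()
    print(response.json())

A VO front end exposing the same Real World Object would offer an analogous resource-oriented interface, which is what makes the mappings in Table 5 possible.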

One of the functions of ETSI's Network Service Capabilities Layer is to hide the complexities of establishing and maintaining wide area network data connections from the M2M Applications. Following that goal suggests adopting the mapping of Option 2, where a Virtual Object could hide the complexities of the connectivity to the Real World Object it represents. Mapping such hiding functionality onto a Composite Virtual Object works much less well: the Virtual Object in the M2M Device would likely connect to multiple Composite Virtual Objects, and Virtual Objects also communicate with other functions in the icore architecture (e.g. with the Registry). In the ETSI TC M2M Service Enablement Framework, the Network Service Capabilities Layer would be able to multiplex multiple M2M Applications onto a single data communication link to the M2M Device. With the mapping of Option 1, however, the M2M Device needs multiple wide area network communication links to different network entities; not impossible, but certainly a complicating factor.

The ETSI TC M2M service enablement architecture assumes a business model in which a central M2M provider offers M2M service enablement services to a multitude of M2M application providers. That implies a centralized Network Service Capabilities Layer. Additional functionality (e.g. authentication, bootstrapping) not shown in Figure 66 is also based on this centralized paradigm. These functions of the ETSI TC M2M architecture will not easily map to the much more distributed icore architecture.

The icore architecture does not have an interface similar to the dIa interface. Within icore, the Real World Object is seen as a monolithic entity. In the ETSI TC M2M Service Enablement Framework, on the other hand, the M2M functionality of the M2M Device is split into Device Service Capability Layer functionality, common to all M2M Applications, and the application-specific Device M2M Application functionality.

icore contributions to M2M service enablement frameworks

Both icore and the ETSI TC M2M service enablement framework use REST-based interfaces, and both need to handle the same issues with such interfaces:

Ontologies: the REST concept defines a basic set of functions to handle all kinds of data, but ontologies are needed on top of that to give meaning to the data. ETSI TC M2M has identified the need for ontologies [49], but so far has only specified the basic data containers. A lot of work will go into defining ontologies for various areas and application scenarios. It is not completely clear who will define ontologies for all kinds of application scenarios, but it is clear that a lot of specification work needs to be done.

Streaming data: REST-based interfaces are good at exchanging attribute-value pairs. They are less suitable for exchanging streaming data, such as a video stream. Different solutions are possible (e.g. dividing the video stream into data blocks, or using REST to control a separate video stream), but more specification work is needed.

Contributions to Open Geospatial Consortium Sensor Web Enablement

Planetary scientists first proposed the concept of standardized description files for sensor location in the early 1990s. Subsequent work by NASA, the University of Alabama Huntsville and CEOS (Committee on Earth Observation Satellites) was brought into the OGC in 2001 for prototyping, testing and promotion as the OGC's Sensor Web Enablement (SWE) activity [51].
The OGC, a geospatial standards organization, took on the task of standardizing sensor communication because every sensor, whether in situ (such as a rain gauge) or remote (such as an Earth imaging device), has a location, and the location of a sensor is highly significant for many applications. The resulting suite of SWE standards, now being widely implemented around the world, enables developers to make all types of networked sensors, transducers and sensor data repositories discoverable, accessible and usable via the Web or other networks. OGC standards are downloadable at no charge, for use by anyone.

Figure 67: Open Geospatial Consortium Sensor Web Enablement

The OGC SWE standards address sensor interoperability from a broader, community-wide and sensor-agnostic perspective. The SWE standards provide web service interfaces and XML-based encodings that enable the emerging vision of sensor webs introduced above. The SWE suite of encodings and web services provides a foundation for sensor webs by offering standard functionality for:

Discovering sensors
Describing sensors and the methods by which those sensors derive observations
Retrieving sensor observations (both archived and real-time)
Tasking sensors
Subscribing to and being notified of sensor alerts in real time

Figure 68: OGC SWE (layered) architecture

The SWE suite consists mainly of three encodings and four web services, described in the next table. The CS/W, while not traditionally classified as an SWE web service, is included in the table since it provides important functionality for discovering SWE and other OGC web services. Other OGC web services, such as the Web Map Service (WMS) and the Web Feature Service (WFS), are important in that they provide geographic context (e.g. background imagery and geographic features) to SWE sensors and data.

Table 6: OGC SWE Encodings and Web Services

Some of the services are also described and implemented as RESTful services.

52 North open source reference implementation of the OGC standards

Amongst all the implementations of the OGC standards, that of 52 North is particularly noticeable as a reference implementation. The open source software initiative 52 North is an open international network of partners from research, industry and public administration. Its main purpose is to foster innovation in the field of Geoinformatics through a collaborative R&D process. The 52 North R&D communities develop new concepts and technologies, e.g. for managing near real-time sensor data, integrating geoprocessing technologies into SDIs, and making use of Grid and Cloud technologies. They evaluate new macro trends, such as the Internet of Things, the Semantic Web or Linked Open Data, and find ways to unfold their use in practice. All 52 North partners have a long and outstanding record in the Geo-IT domain and actively contribute to the development of international standards, e.g. at W3C, ISO, OGC or INSPIRE. All software developed within this collaborative development process is published under an open source license. 52 North is a trusted and well-established entity in the Geoinformatics arena. Its software is widely used in operational IT environments, research labs and education.

Figure 69: OGC SWE 52 North implementation overview

In practice, 52 North provides a complete sensor network gateway with sensor adapters, a sensor bus, OGC services and several applications. The 52 North implementation also makes use of the W3C SSN-XG ontology for semantic matchmaking.

OGC standards and architecture matching with icore concepts

The VO/CVO/services concepts and implementations currently developed in icore are all compatible and compliant with a layered architecture common to OGC SWE, the IETF Constrained RESTful Environments (CoRE) Working Group and ETSI M2M, which is essentially based on a sensors and actuators layer, a network layer, a gateways-with-services layer and an applications layer. According to this layered architecture, VOs/CVOs/services are mainly supported by, and related to, the gateways and services layer. Looking at a more precise mapping, the VO layer corresponds to the Sensor Observation Service (SOS) and the Sensor Planning Service (SPS), the CVO layer is close to the Web Processing Service (WPS), possibly used in combination with workflow (BPEL) engines, while sensor web applications map well onto the icore services layer (a minimal sketch of an SOS request is given at the end of this subsection).

Regarding the icore objective of tackling the scalability of systems that gather a huge number of heterogeneous data sources and control applications, the OGC SWE architecture and its various operational implementations around the world address this, because enabling sensor usage at Web scale is a core goal of OGC: through the definition of common abstraction and data models irrespective of the sensors, through the definition of registries for sensor observations and sensor meta-information, through the definition of web services that provide simplified and standardized access and retrieval all around the world, etc. In addition, OGC SWE and its implementations provide foundations (e.g. sensor metadata, the use of standardized ontologies, 52 North semantic mediators) for the Semantic and Cognitive Web. Based on this, the contribution and added value of icore to OGC SWE should clearly be the definition and prototyping of decision-making interfaces and core functions that extend the OGC framework.
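
To illustrate how a VO could be backed by a Sensor Observation Service as suggested above, the following sketch issues an SOS 2.0 GetObservation request using the KVP binding. The service endpoint, offering and observed property identifiers are placeholder values, not references to any actual deployment.

    import requests

    SOS_ENDPOINT = "https://sensors.example.org/sos"   # placeholder SOS endpoint

    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": "urn:example:offering:weather-station-1",        # placeholder
        "observedProperty": "urn:example:property:air-temperature",  # placeholder
        "responseFormat": "http://www.opengis.net/om/2.0",
    }

    response = requests.get(SOS_ENDPOINT, params=params, timeout=10)
    response.raise_for_status()
    print(response.text[:500])   # raw Observations & Measurements (O&M) XML

A VO back end could parse the returned O&M document and expose the latest value through the VO front end, while a CVO combining several such VOs would be the natural place for WPS-style processing.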

6. Conclusions and steps forward

This deliverable presented the first complete icore Architecture, defined according to a set of basic principles, building blocks, functional entities and guidelines needed for building icore-compliant systems. The definition of these architectural principles has been carried out to satisfy the requirements of different stakeholders (business perspective), top-down technical requirements (technology perspective) and bottom-up implementation requirements (use-case perspective). This has produced a functional architecture capable of representing Real World Objects and their virtualization, and of providing support for more complex aggregations and combinations of these, promoting the easy creation of compelling IoT applications that also exploit cognitive technologies to adapt over time to changing situations. The illustrated icore Architecture also covers security concerns, where access to VOs/CVOs must be regulated through a sticky-policy management approach, i.e. the policy applicable to a piece of data (e.g. originating from a VO) travels with the data and is enforceable at every point where it is used. In order to implement an icore system in compliance with the functional architecture definition, this document also described the technology architecture, as derived from the needs and requirements exposed by several use cases.

This deliverable also assessed the value of the icore architecture with respect to the different stakeholders that have actively participated in the elicitation of requirements. The merits of the icore architecture were also evaluated with respect to possible integration with, and contributions to, other architectures such as IoT-A and ETSI's M2M.

The next steps towards the fine-tuning of the presented architecture will consist of the elicitation of the envisaged governance model and associated procedures from the various stakeholders' points of view. Such a model will benefit from a more detailed analysis of the icore architecture implementation design and deployment challenges across business domains and stakeholders' roles. A further step will be a more detailed analysis of the expected behaviour of the functional blocks over time, of the expected interactions and communication interfaces, and of the associated service and application lifecycle management in the context of the above-mentioned governance model and roles. The work on the icore architecture will be completed through a validation exercise resulting in a refinement that will also consider the progress made in each of the technical work packages, as the validation of functional and non-functional architecture features is performed.

7. References

[1] Network Functions Virtualization, Introductory White Paper
[2] D2.1 Technical Requirements for the icore Cognitive Management and Control Framework, Contractual Deliverable of icore Project
[3] D2.2 Security requirements for the icore cognitive management and control framework, Contractual Deliverable of icore Project
[4] D6.1 Specification of proof of concept prototypes for four application domains, Contractual Deliverable of icore Project
[5] D1.1 Use cases definitions and scenarios, Contractual Deliverable of icore Project
[6] D1.2 Socio-economic requirements, Contractual Deliverable of icore Project
[7] WP6 Workplan document
[8] D7.2 First report on dissemination, standardization and exploitation activities, Contractual Deliverable of icore Project
[9] D5.1 Application and user requirements, Contractual Deliverable of icore Project
[10] D5.2 Final version of the Application knowledge inference toolkit, Contractual Deliverable of icore Project
[11] D4.2 First version of CVO and management fabric design, Contractual Deliverable of icore Project
[12] D4.1 Requirement and dependencies for CVOs, Contractual Deliverable of icore Project
[13] D3.2 Real Object Awareness and Association, Contractual Deliverable of icore Project
[14] URIs, URLs, and URNs: Clarifications and Recommendations 1.0, Report from the joint W3C/IETF URI Planning Interest Group. Accessed: Feb 2013
[15] EPC Global, GS1, "Object Naming Service (ONS)", EPCglobal Ratified Specification
[16] Persistent Uniform Resource Identifier. Accessed: Feb 2013
[17] ISO/IEC IEEE Std, First edition (15 July 2007), pp. c1-24
[18] Robert Fuller, Neural Fuzzy System, Åbo Akademi University, ESF Series A: 443, 1995, 249 pages
[19] TOGAF Version 9 Enterprise Edition: A Pocket Guide (TOGAF Series), Van H. Publishing, 19 February 2009
[20] OWL - Web Ontology Language Overview. W3C Semantic Web Standards. Accessed: March 2013
[21] RDF - Resource Description Framework. W3C Semantic Web Standards. Accessed: March 2013
[22] Bishop B., et al. (2010). OWLIM: A family of scalable semantic repositories. Semantic Web Journal, 2(1)
[23] W3C RDB2RDF Incubator Group (2009). A Survey of Current Approaches for Mapping of Relational Databases to RDF. Accessed: March 2013
[24] openrdf.org - home of Sesame. Accessed: March 2013
[25] SPARQL Query Language for RDF. W3C Semantic Web Standards. Accessed: March 2013

[26] ARQ - A SPARQL Processor for Jena. Apache Jena. Accessed: March 2013
[27] Pellet: OWL 2 Reasoner for Java. Accessed: March 2013
[28] RESTful Web services: The basics. Accessed: March 2013
[29] Describe REST Web services with WSDL. Accessed: March 2013
[30] Juric, M. A Hands-on Introduction to BPEL. Accessed: March 2013
[31] IBM ILOG CPLEX Optimizer. Accessed: March 2013
[32] Introduction to WordNet: An On-line Lexical Database, George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, Katherine Miller
[33] Bayesian Artificial Intelligence, Second Edition, Kevin B. Korb, Ann E. Nicholson, Chapman & Hall/CRC
[34] Khajanchi, A. (2003). Artificial Neural Networks: The next intelligence. USC, Technology Commercialization Alliance
[35] Taye, M. (2010). Understanding Semantic Web and Ontologies: Theory and Applications. Journal of Computing 2(6)
[36] Wolpert, David H., and William G. Macready. "No free lunch theorems for optimization." IEEE Transactions on Evolutionary Computation 1.1 (1997)
[37] S. Pearson and M. Casassa-Mont, "Sticky Policies: An Approach for Managing Privacy across Multiple Parties," Computer, vol. 44, no. 9, Sep. 2011
[38] SOL/CEP Complex Event Processor
[39] REST - Representational State Transfer
[40] ZeroMQ - The guide
[41] Apache ServiceMix enterprise service bus
[42] Understanding Service Oriented Architecture
[43] OSGi Alliance - The OSGi architecture
[44] Zebra Crossing multi-protocol barcode library
[45] X.667: Information technology - Procedures for the operation of Object Identifier Registration Authorities: Generation of Universally Unique Identifiers (UUIDs) and their use in object identifiers

[46] Internet of Things Architecture (IoT-A) Project Deliverable D1.4 - Converged architectural reference model for the IoT v2.0
[47] Internet of Things Architecture (IoT-A) Project Deliverable D4.3 - Concepts and Solutions for Entity-based Discovery of IoT Resources and Managing their Dynamic Associations
[48] ETSI Technical Standard, Machine-to-Machine communications (M2M): Functional architecture, ETSI TS V1.1.1
[49] ETSI Technical Report, Machine to Machine Communications (M2M): Study on Semantic support for M2M Data, ETSI TR V0.5.0
[50] Time series prediction: Forecasting the future and understanding the past. In Santa Fe Institute Studies in the Sciences of Complexity, Proceedings of the NATO Advanced Research Workshop on Comparative Time Series Analysis, Santa Fe, New Mexico, May 14-17, 1992. Reading, MA: Addison-Wesley, 1994. Edited by A. S. Weigend and N. A. Gershenfeld
[51] Open Geospatial Consortium Sensor Web Enablement (OGC SWE)


More information

Vortex White Paper. Simplifying Real-time Information Integration in Industrial Internet of Things (IIoT) Control Systems

Vortex White Paper. Simplifying Real-time Information Integration in Industrial Internet of Things (IIoT) Control Systems Vortex White Paper Simplifying Real-time Information Integration in Industrial Internet of Things (IIoT) Control Systems Version 1.0 February 2015 Andrew Foster, Product Marketing Manager, PrismTech Vortex

More information

Sensing, monitoring and actuating on the UNderwater world through a federated Research InfraStructure Extending the Future Internet SUNRISE

Sensing, monitoring and actuating on the UNderwater world through a federated Research InfraStructure Extending the Future Internet SUNRISE Sensing, monitoring and actuating on the UNderwater world through a federated Research InfraStructure Extending the Future Internet SUNRISE Grant Agreement number 611449 Announcement of the Second Competitive

More information

Service management evolution

Service management evolution management evolution Vilho Räisänen 1, Wolfgang Kellerer 2, Pertti Hölttä 3, Olavi Karasti 4 and Seppo Heikkinen 4 Abstract This paper presents an outline for the evolution of service management. The outline

More information

Table of Contents. 1 Executive Summary... 2 2. SOA Overview... 3 2.1 Technology... 4 2.2 Processes and Governance... 8

Table of Contents. 1 Executive Summary... 2 2. SOA Overview... 3 2.1 Technology... 4 2.2 Processes and Governance... 8 Table of Contents 1 Executive Summary... 2 2. SOA Overview... 3 2.1 Technology... 4 2.2 Processes and Governance... 8 3 SOA in Verizon The IT Workbench Platform... 10 3.1 Technology... 10 3.2 Processes

More information

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material,

More information

Data Analytics as a Service

Data Analytics as a Service Data Analytics as a Service unleashing the power of Cloud and Big Data 05-06-2014 Big Data in a Cloud DAaaS: Data Analytics as a Service DAaaS: Data Analytics as a Service Introducing Data Analytics as

More information

Essential Elements of an IoT Core Platform

Essential Elements of an IoT Core Platform Essential Elements of an IoT Core Platform Judith Hurwitz President and CEO Daniel Kirsch Principal Analyst and Vice President Sponsored by Hitachi Introduction The maturation of the enterprise cloud,

More information

HP SOA Systinet software

HP SOA Systinet software HP SOA Systinet software Govern the Lifecycle of SOA-based Applications Complete Lifecycle Governance: Accelerate application modernization and gain IT agility through more rapid and consistent SOA adoption

More information

Machina Research Viewpoint. The critical role of connectivity platforms in M2M and IoT application enablement

Machina Research Viewpoint. The critical role of connectivity platforms in M2M and IoT application enablement Machina Research Viewpoint The critical role of connectivity platforms in M2M and IoT application enablement June 2014 Connected devices (billion) 2 Introduction The growth of connected devices in M2M

More information

Industrial Roadmap for Connected Machines. Sal Spada Research Director ARC Advisory Group [email protected]

Industrial Roadmap for Connected Machines. Sal Spada Research Director ARC Advisory Group sspada@arcweb.com Industrial Roadmap for Connected Machines Sal Spada Research Director ARC Advisory Group [email protected] Industrial Internet of Things (IoT) Based upon enhanced connectivity of this stuff Connecting

More information

Internet of Things Value Proposition for Europe

Internet of Things Value Proposition for Europe Internet of Things Value Proposition for Europe European Commission - DG CONNECT Dr Florent Frederix, (Online) Trust and Cybersecurity unit 7 th European Conference on ICT for Transport Logistics 5 th

More information

Collaborative Open Market to Place Objects at your Service

Collaborative Open Market to Place Objects at your Service Collaborative Open Market to Place Objects at your Service D6.4.1 Marketplace integration First version Project Acronym COMPOSE Project Title Project Number 317862 Work Package WP6 Open marketplace Lead

More information

Telco s role in Smart Sustainable Cities

Telco s role in Smart Sustainable Cities Telco s role in Smart Sustainable Cities Turin, May 6th 2013 TILAB G. Rocca Introduction Smart Sustainable City is a great concept but needs to be supported by infrastructures and enabling platforms to

More information

Developing SOA solutions using IBM SOA Foundation

Developing SOA solutions using IBM SOA Foundation Developing SOA solutions using IBM SOA Foundation Course materials may not be reproduced in whole or in part without the prior written permission of IBM. 4.0.3 4.0.3 Unit objectives After completing this

More information

Considerations: Mastering Data Modeling for Master Data Domains

Considerations: Mastering Data Modeling for Master Data Domains Considerations: Mastering Data Modeling for Master Data Domains David Loshin President of Knowledge Integrity, Inc. June 2010 Americas Headquarters EMEA Headquarters Asia-Pacific Headquarters 100 California

More information

Accenture and Oracle: Leading the IoT Revolution

Accenture and Oracle: Leading the IoT Revolution Accenture and Oracle: Leading the IoT Revolution ACCENTURE AND ORACLE The Internet of Things (IoT) is rapidly moving from concept to reality, as companies see the value of connecting a range of sensors,

More information

Utilizing big data to bring about innovative offerings and new revenue streams DATA-DERIVED GROWTH

Utilizing big data to bring about innovative offerings and new revenue streams DATA-DERIVED GROWTH Utilizing big data to bring about innovative offerings and new revenue streams DATA-DERIVED GROWTH ACTIONABLE INTELLIGENCE Ericsson is driving the development of actionable intelligence within all aspects

More information

Mobile Edge Computing: Unleashing the value chain

Mobile Edge Computing: Unleashing the value chain Mobile Edge Computing: Unleashing the value chain August 21.2015 Nan Zhong [email protected] Note :The content has been taken from an official ETSI Presentation on MEC but put in an Huawei template,

More information

- M2M Connections & Modules: Network connections, sim-cards, module types.

- M2M Connections & Modules: Network connections, sim-cards, module types. Brochure More information from http://www.researchandmarkets.com/reports/2785177/ Internet of Things (IoT) & Machine-To-Machine (M2M) Communication Market by Technologies & Platforms, M2M Connections &

More information

A PLATFORM FOR SHARING DATA FROM FIELD OPERATIONAL TESTS

A PLATFORM FOR SHARING DATA FROM FIELD OPERATIONAL TESTS A PLATFORM FOR SHARING DATA FROM FIELD OPERATIONAL TESTS Yvonne Barnard ERTICO ITS Europe Avenue Louise 326 B-1050 Brussels, Belgium [email protected] Sami Koskinen VTT Technical Research Centre

More information

Information Services for Smart Grids

Information Services for Smart Grids Smart Grid and Renewable Energy, 2009, 8 12 Published Online September 2009 (http://www.scirp.org/journal/sgre/). ABSTRACT Interconnected and integrated electrical power systems, by their very dynamic

More information

In the pursuit of becoming smart

In the pursuit of becoming smart WHITE PAPER In the pursuit of becoming smart The business insight into Comarch IoT Platform Introduction Businesses around the world are seeking the direction for the future, trying to find the right solution

More information

WHITE PAPER Business Performance Management: Merging Business Optimization with IT Optimization

WHITE PAPER Business Performance Management: Merging Business Optimization with IT Optimization Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com WHITE PAPER Performance Management: Merging Optimization with IT Optimization Sponsored by: IBM Paul

More information

Architectural Reference Model (ARM) Presenter: Martin Bauer (NEC Europe)

Architectural Reference Model (ARM) Presenter: Martin Bauer (NEC Europe) Architectural Reference Model (ARM) Presenter: Martin Bauer (NEC Europe) Overview Motivation Architectural Reference Model Reference Model Reference Architecture Best Practice / Guidelines Summary and

More information

Master Data Management

Master Data Management Master Data Management Managing Data as an Asset By Bandish Gupta Consultant CIBER Global Enterprise Integration Practice Abstract: Organizations used to depend on business practices to differentiate them

More information

Enabling Data Quality

Enabling Data Quality Enabling Data Quality Establishing Master Data Management (MDM) using Business Architecture supported by Information Architecture & Application Architecture (SOA) to enable Data Quality. 1 Background &

More information

Cisco Process Orchestrator Adapter for Cisco UCS Manager: Automate Enterprise IT Workflows

Cisco Process Orchestrator Adapter for Cisco UCS Manager: Automate Enterprise IT Workflows Solution Overview Cisco Process Orchestrator Adapter for Cisco UCS Manager: Automate Enterprise IT Workflows Cisco Unified Computing System and Cisco UCS Manager The Cisco Unified Computing System (UCS)

More information

Business Integration Architecture for Next generation OSS (NGOSS)

Business Integration Architecture for Next generation OSS (NGOSS) Business Integration Architecture for Next generation OSS (NGOSS) Bharat M. Gupta, Manas Sarkar Summary The existing BSS/OSS systems are inadequate in satisfying the requirements of automating business

More information

The Bosch IoT Suite: Technology for a ConnectedWorld. Software Innovations

The Bosch IoT Suite: Technology for a ConnectedWorld. Software Innovations The Bosch IoT Suite: Technology for a ConnectedWorld. Software Innovations 2 Bosch IoT Suite The Bosch IoT Suite: Technological basis for applications in the Internet of Things. Our world is changing fast.

More information

CUMULUX WHICH CLOUD PLATFORM IS RIGHT FOR YOU? COMPARING CLOUD PLATFORMS. Review Business and Technology Series www.cumulux.com

CUMULUX WHICH CLOUD PLATFORM IS RIGHT FOR YOU? COMPARING CLOUD PLATFORMS. Review Business and Technology Series www.cumulux.com ` CUMULUX WHICH CLOUD PLATFORM IS RIGHT FOR YOU? COMPARING CLOUD PLATFORMS Review Business and Technology Series www.cumulux.com Table of Contents Cloud Computing Model...2 Impact on IT Management and

More information

Cloud Computing: Computing as a Service. Prof. Daivashala Deshmukh Maharashtra Institute of Technology, Aurangabad

Cloud Computing: Computing as a Service. Prof. Daivashala Deshmukh Maharashtra Institute of Technology, Aurangabad Cloud Computing: Computing as a Service Prof. Daivashala Deshmukh Maharashtra Institute of Technology, Aurangabad Abstract: Computing as a utility. is a dream that dates from the beginning from the computer

More information

An Analysis of Reference Architectures for the Internet of Things

An Analysis of Reference Architectures for the Internet of Things 2015 An Analysis of Reference Architectures for the Internet of Things Everton Cavalcante 1,2, Marcelo Pitanga Alves 3, Thais Batista 1, Flavia C. Delicato 3, Paulo F. Pires 3 1 DIMAp, Federal University

More information

Smart Cities Solution Overview Innovation Center Network, Research & Innovation. SAP SE Reiner Bildmayer

Smart Cities Solution Overview Innovation Center Network, Research & Innovation. SAP SE Reiner Bildmayer Smart Cities Solution Overview Innovation Center Network, Research & Innovation SAP SE Reiner Bildmayer Why Cities need to be Run Better Challenges and Opportunities ~50% of the world s population currently

More information

HOW TO DO A SMART DATA PROJECT

HOW TO DO A SMART DATA PROJECT April 2014 Smart Data Strategies HOW TO DO A SMART DATA PROJECT Guideline www.altiliagroup.com Summary ALTILIA s approach to Smart Data PROJECTS 3 1. BUSINESS USE CASE DEFINITION 4 2. PROJECT PLANNING

More information