Andreas Harth, Katja Hose, Ralf Schenkel (eds.) Linked Data Management: Principles and Techniques
List of Figures

1.1  Component diagram for the example application in section 1.5 using the components and connectors of our conceptual architecture
1.2  Result of the architectural and functional analysis of an example application, the SIOC explorer
List of Tables

1.1  Implementation details by year, top 3 entries per cell
1.2  Data integration by implementation strategy and year
1.3  Results of the binary properties from the application questionnaire
1.4  Results of the architectural analysis by year and component
1.5  Summary of components and connectors
Contents

Part I  Introduction

1  Architecture of Linked Data Applications
   Benjamin Heitmann, Richard Cyganiak, Conor Hayes, and Stefan Decker
   1.1  Introduction
   1.2  An empirical survey of RDF-based applications
        Research method of the survey
        Findings
        Discussion of threats to validity
   1.3  Empirically-grounded conceptual architecture
        Architectural style of the conceptual architecture
        Decomposing the surveyed applications into components
        Description of components
             Graph access layer
             RDF store
             Data homogenisation service
             Data discovery service
             Graph query language service
             Graph-based navigation interface
             Structured data authoring interface
   1.4  Implementation challenges
        Integrating noisy and heterogeneous data
        Mismatch of data models between components
        Distribution of application logic across components
        Missing guidelines and patterns
   1.5  Analysis of an example application
   1.6  Future approaches for simplifying Linked Data application development
        More guidelines, best practices and design patterns
        More software libraries
        Software factories
   1.7  Related work
        Empirical surveys
        Architectures for Semantic Web applications
   1.8  Conclusions
   Acknowledgements
   Bibliography
Part I

Introduction
Chapter 1

Architecture of Linked Data Applications

Benjamin Heitmann, Digital Enterprise Research Institute, National University of Ireland, Galway
Richard Cyganiak, Digital Enterprise Research Institute, National University of Ireland, Galway
Conor Hayes, Digital Enterprise Research Institute, National University of Ireland, Galway
Stefan Decker, Digital Enterprise Research Institute, National University of Ireland, Galway
1.1 Introduction

Before the emergence of RDF and Linked Data, Web applications were designed around relational database standards for data representation and service. A move to an RDF-based data representation introduces challenges for the application developer, who must rethink the Web application outside the standards and processes of database-driven Web development. These include, but are not limited to, the graph-based data model of RDF [16], the Linked Data principles [5], and formal and domain-specific semantics [7].

To date, investigating the challenges which arise from moving to an RDF-based data representation has not been given the same priority as research on the potential benefits. We argue that the lack of emphasis on simplifying the development and deployment of Linked Data and RDF-based applications has been an obstacle to the real-world adoption of Semantic Web technologies and the emerging Web of Data. However, it is difficult to evaluate the adoption of Linked Data and Semantic Web technologies without empirical evidence. Towards this goal, we present the results of a survey of more than 100 RDF-based applications, as well as a component-based conceptual architecture for Linked Data applications which is based on this survey. We also use the survey results to identify the main implementation challenges of moving from database-driven applications to RDF-based applications. Lastly, we suggest future approaches to facilitate the standardisation of components and the development of software engineering tools to increase the uptake of Linked Data.
In this chapter, we first perform an empirical survey of RDF-based applications over most of the past decade, from 2003 to 2009. As the Linked Data principles were introduced in 2006, this allows us to describe the current state of the art for developing and deploying applications using RDF and Linked Data. It also allows us to determine how far the adoption of signature research topics, such as data reuse, data integration and reasoning, has progressed.

We then use the results of this survey as the empirical foundation for a component-based conceptual architecture for Linked Data applications. Our conceptual architecture describes the high-level components which are most often observed among the surveyed applications. These components implement the functionality that differentiates applications using RDF from database-driven applications. For each component we describe the most common implementation strategies that are suited to specific application goals and deployment scenarios. The existence of such an architecture can be seen as evidence
of a convergence towards a common set of principles for solving the recurring development challenges that arise from the use of RDF and Linked Data.

Based on the empirical survey and the conceptual architecture, we discuss the main implementation challenges facing developers using RDF and Linked Data. We provide an example analysis of an application to show how our architecture and the implementation challenges apply to a real application. In addition, we suggest future approaches for using the conceptual architecture to facilitate the standardisation of components and the development of software engineering tools that simplify the development of Linked Data applications.

The chapter presents this perspective through the following sections: Section 1.2 presents the findings of the empirical survey, including our research method. We also discuss potential counter-arguments to the validity of our results. Based on this empirical foundation, section 1.3 proposes a conceptual architecture for Linked Data applications and lists all components of this architecture. We also list related chapters in this book for each component, as well as other related literature. We discuss the main implementation challenges which arise from using RDF and Linked Data in section 1.4. A representative example of the analysis of an application is presented in section 1.5. We present several approaches to ease the development of Linked Data applications in the future in section 1.6. Finally, section 1.7 discusses related work, and section 1.8 concludes the chapter and presents future work.

1.2 An empirical survey of RDF-based applications

The evolution of Linked Data and Semantic Web technologies is characterised by the constant introduction of new ideas, standards and technologies, and thus appears to be in a permanent state of flux. However, more than a decade has passed since Tim Berners-Lee et al.
published their comprehensive vision for a Semantic Web [6] in 2001. This allows us to look back on the empirical evidence left behind by RDF-based applications that have been developed and deployed during this time.

Sjoberg et al. [46] argue that there are relatively few empirical studies in software engineering research, and that the priority for evaluating new technologies is lower than that of developing new technologies. We can transfer this observation to research on the engineering of RDF-based and Linked Data applications. Without empirical evidence it is difficult to evaluate the adoption rate of Linked Data and Semantic Web technologies. To provide such insights, we collect evidence from a representative sample of the population of developers deploying applications using RDF or Linked Data. In particular, we perform an empirical survey of over a hundred applications from two demonstration challenges that received much attention from the research community.
The results of our survey enable us to determine the state of adoption of key research goals such as data reuse, data integration and reasoning. In this section, we first explain our research method. Then the findings of our empirical survey are presented, after which we discuss threats to the validity of the survey.

1.2.1 Research method of the survey

The empirical evidence for our survey is provided by 124 applications that were submitted to two key demonstration challenges in the Semantic Web domain: (1) the Semantic Web challenge [31] in the years 2003 to 2009, with 101 applications, which is organised as part of the International Semantic Web Conference; and (2) the Scripting for the Semantic Web challenge in the years 2006 to 2009, with 23 applications, which is organised as part of the European Semantic Web Conference.

As the Semantic Web was an emerging research topic, the majority of applicants were from research centres or university departments, though there were some industrial participants. Each challenge awards prizes for the top entries, as selected by the judges of the respective contest. In order to apply, the developers of an application were required to provide the judges with access to a working instance of their application. In addition, a scientific paper describing the goals and the implementation of the application was also required.

The applications which were submitted to the Semantic Web challenge had to fulfill a set of minimal requirements, as described in [31]: the information sources of the application had to be geographically distributed and should have had diverse ownership, so that there was no control of the evolution of the data; the data was required to be heterogeneous and from a real-world use case; and the applications should have assumed an open world model, meaning that the information never was complete.
Lastly, the applications were required to use a formal description of the data's meaning. Applications submitted to the Scripting for the Semantic Web challenge only had to be implemented using a scripting programming language.

The applications from the two challenges are from very different domains, such as life sciences, cultural heritage and geo-information systems, as well as media monitoring, remote collaboration and general application development support. The Bio2RDF [37] project has converted 30 life science data sets into RDF and then generated links between the data sets in a consistent way. DBpedia Mobile [4] is a location-aware client for mobile phones which displays nearby locations taken from DBpedia. As a related project, LinkedGeoData [3] transforms data from the OpenStreetMap project into RDF and interlinks it with other spatial data sets. Then there is the Semantic MediaWiki [32] platform, which extends the MediaWiki platform used by Wikipedia with semantic
capabilities, such as templates for entities and consistency checks. On the more applied side of the spectrum, there is MediaWatch [44], which crawls news articles for concepts related to climate change, extracts ontology concepts and provides a unified navigation interface. The MultimediaN E-Culture demonstrator [45] enables searching and browsing of the cultural heritage catalogs of multiple museums in the Netherlands. As a final example, NASA has developed the SemanticOrganiser [29], which supports the remote collaboration of distributed and multidisciplinary teams of scientists, engineers and accident investigators with Semantic Web technologies.

We collected the data about each application through a questionnaire that covered the details of how each application implemented Semantic Web technologies and standards. It consisted of 12 questions which covered the following areas: (1) usage of programming languages, RDF libraries, Semantic Web standards, schemas, vocabularies and ontologies; (2) data reuse capabilities for data import, export or authoring; (3) implementation of data integration and schema alignment; (4) use of inferencing; (5) data access capabilities for the usage of decentralised sources, usage of data with multiple owners, data with heterogeneous formats, data updates and adherence to the Linked Data principles. The full data of the survey and a description of all the questions and possible answers is available online.

For each application the data was collected in two steps: firstly, the application details were filled into the questionnaire based on our own analysis of the paper submitted with the application. Then we contacted the authors of each paper and asked them to verify or correct the data about their application. This allowed us to fill in questions about aspects of an application that might not have been covered in the paper.
For instance, the implementation details of the data integration are not discussed in most papers. 65% of authors replied to this request for validation.

1.2.2 Findings

The results of our empirical survey show that the adoption of the capabilities that characterise Semantic Web technologies has steadily increased over the course of the demonstration challenges. In this section we present the results of the survey and analyse the most salient trends in the data. Tables 1.1, 1.2 and 1.3 summarise the main findings.

Table 1.1 shows the survey results about the programming languages and RDF libraries that were used to implement the surveyed applications. In addition, it shows the most implemented Semantic Web standards and the most supported schemas, vocabularies and ontologies. Java was the most popular programming language choice throughout the survey time-span. This accords with the fact that the two most mature and popular RDF libraries, Jena and Sesame, both require Java. On the side of the scripting languages, the most
TABLE 1.1: Implementation details by year, top 3 entries per cell

2003 | 2004 | 2005 | 2006
Programming languages: Java 60%, C 20% | Java 56%, JS 12% | Java 66% | Java 10%, JS 15%, PHP 26%
RDF libraries: Jena 18%, Sesame 12% | Lucene 18%, RAP 15%, RDFLib 10%
SemWeb standards: RDF 100%, OWL 30% | RDF 87%, RDFS 37%, OWL 37% | RDF 66%, OWL 66%, RDFS 50% | RDF 89%, OWL 42%, SPARQL 15%
Schemas/vocabularies/ontologies: RSS 20%, FOAF 20%, DC 20% | DC 12%, SWRC 12% | FOAF 26%, RSS 15%, Bibtex 10%

2007 | 2008 | 2009 | overall
Programming languages: Java 50%, PHP 25% | Java 43%, PHP 21% | Java 46%, JS 23%, PHP 23% | Java 48%, PHP 19%, JS 13%
RDF libraries: Sesame 33%, Jena 8% | Sesame 17%, ARC 17%, Jena 13% | Sesame 23% | Sesame 19%, Jena 9%
SemWeb standards: RDF 100%, SPARQL 50%, OWL 41% | RDF 100%, SPARQL 17%, OWL 10% | RDF 100%, SPARQL 69%, OWL 46% | RDF 96%, OWL 43%, SPARQL 41%
Schemas/vocabularies/ontologies: FOAF 41%, DC 20%, SIOC 20% | FOAF 30%, DC 21%, DBpedia 13% | FOAF 34%, DC 15%, SKOS 15% | FOAF 27%, DC 13%, SIOC 7%

mature RDF library is ARC, which explains the popularity of PHP for scripting Semantic Web applications. From 2006 there is a noticeable consolidation trend towards supporting RDF, OWL and SPARQL, reflecting an emergent community consensus.

The survey data about vocabulary support requires a different interpretation: while in 2009 the support for the top three vocabularies went down, the total number of supported vocabularies went from 9 in 2003 to 19 in 2009. This can be explained by the diversity of domains for which Linked Data became available.

The simplification of data integration is claimed as one of the central benefits of implementing Semantic Web technologies. Table 1.2 shows the implementation of data integration grouped by implementation strategy and year. The results suggest that, for a majority of applications, data integration still requires manual inspection of the data and human creation of rules, scripts and other means of integration.
However, the number of applications that implement fully automatic integration is on the rise, while the number of applications which require manual integration of the data is steadily declining. The steady number of applications that do not require data integration can be explained by the availability of homogeneous sources of RDF and Linked Data that do not need to be integrated.

TABLE 1.2: Data integration by implementation strategy and year

                2003   2004   2005   2006   2007   2008   2009
manual           30%    13%     0%    16%     9%     5%     4%
semi-automatic   70%    31%   100%    47%    58%    65%    61%
automatic         0%    25%     0%    11%    13%     4%    19%
not needed        0%    31%     0%    26%    20%    26%    16%

Table 1.3 shows the survey results on the support for particular features over the time period. Some capabilities have been steadily supported by a majority of applications throughout the whole period, while other capabilities have seen recent increases or decreases. Support for decentralised sources, data from multiple owners and sources with heterogeneous formats is provided each year by a large majority of the applications. We can attribute this to the central role that these capabilities have in all of the early foundational standards of the Semantic Web, such as RDF and OWL. While support for importing and exporting data has fluctuated over the years, it remains a popular feature. Since 2007, the SPARQL standard has been increasingly used to implement APIs for importing data. The Linked Data principles are supported by more than half of the contest entries in 2009, after being formulated in 2006 by Tim Berners-Lee [5].

Interestingly, the increasing usage of the Linking Open Data (LOD) cloud may explain some of the positive and negative trends we observe in other capabilities. For example, the negative correlation (-0.61) between support for the Linked Data principles and inferencing is probably explained by the fact that data from the LOD cloud already uses RDF and usually does not require any integration. Support for the creation of new structured data is strongly correlated (0.75) with support for the Linked Data principles.
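These correlation figures can be re-derived from the yearly percentages in Table 1.3. The sketch below uses only the rounded table values (columns 2003 to 2009), so the computed coefficients land close to, but not exactly on, the figures quoted in the text:

```python
# Recomputing the feature correlations discussed in the text from the
# rounded yearly percentages in Table 1.3. Standard library only.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

linked_data = [0, 0, 0, 5, 25, 26, 65]      # Linked Data principles
creation    = [20, 37, 50, 52, 37, 52, 76]  # Data creation
inferencing = [60, 68, 83, 57, 79, 52, 42]  # Inferencing
updates     = [90, 75, 83, 78, 45, 73, 50]  # Data updates

print(round(pearson(linked_data, creation), 2))     # close to 0.75
print(round(pearson(linked_data, inferencing), 2))  # close to -0.61
print(round(pearson(linked_data, updates), 2))      # strongly negative
```

Small deviations from the quoted coefficients are expected, since the table reports rounded percentages rather than the underlying raw counts.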
This can be explained by the growing maturity of RDF stores and their APIs for creating data, as well as increased community support for adding to the Linked Data cloud. On the other hand, support for updating data after it was acquired is strongly negatively correlated (-0.74) with the growth in support for the LOD principles. This could reflect a tendency to treat structured data as being static.

TABLE 1.3: Results of the binary properties from the application questionnaire

                        2003   2004   2005   2006   2007   2008   2009
Data creation            20%    37%    50%    52%    37%    52%    76%
Data import              70%    50%    83%    52%    70%    86%    73%
Data export              70%    56%    83%    68%    79%    86%    73%
Inferencing              60%    68%    83%    57%    79%    52%    42%
Decentralised sources    90%    75%   100%    57%    41%    95%    96%
Multiple owners          90%    93%   100%    89%    83%    91%    88%
Heterogeneous formats    90%    87%   100%    89%    87%    78%    88%
Data updates             90%    75%    83%    78%    45%    73%    50%
Linked Data principles    0%     0%     0%     5%    25%    26%    65%

To summarise: support for features which differentiate RDF-based from database-driven applications is now widespread amongst the development communities of the challenges. Support for the graph-based data model of RDF is virtually at 100%, and most applications reuse formal domain semantics such as the FOAF, Dublin Core and SIOC vocabularies. Support for the Linked Data principles has reached more than half of the contest entries in 2009. And while the choice of implemented standards has crystallised around RDF, OWL and SPARQL, the choice of programming languages and RDF libraries has gravitated towards Java, Sesame and Jena, as well as PHP and ARC. In addition, the majority of applications still require some human intervention for the integration service.

1.2.3 Discussion of threats to validity

We will discuss and deflect the three most commonly raised threats to the validity of our empirical survey. These are (i) the representativeness of the sample, (ii) the number of authors validating the analysis of their applications and (iii) the reliance on analysing academic papers instead of source code.

The first threat to validity concerns the selection of applications for our empirical survey. We chose to use only applications from two academic demonstration challenges for our survey. This may raise the question of representativeness, as most of the applications are academic prototypes. However, as both challenges explicitly also invited and received some industry submissions, the two challenges do not have a purely academic audience. In addition, several of the applications from the Semantic Web challenge
were successfully commercialised in the form of start-ups. The Sindice [40] crawler and lookup index for Linked Data resources became the foundation for SindiceTech.com, which provides products in the area of knowledge data clouds. The Seevl application [41], a challenge winner in the year 2011, provides music recommendations and additional context for musical artists on YouTube, and became a successful start-up. The Event Media [30] application aggregates information about media events and interlinks them with Linked Data. Event Media was a winner of the challenge in 2012, and entered a partnership with Nokia.

Furthermore, we believe that the data of our survey is representative for the following reasons: firstly, the applications could not just be tools or libraries, but had to be complete applications that demonstrate the benefits of RDF or Linked Data to the end-user of the application. Secondly, the applications submitted to the challenges already follow baseline criteria as required in [31], e.g. most of them use real-world data. Thirdly, the authors of each submitted application were also required to submit a scientific paper focusing on the implementation aspects of their application, which allows them to explain the goals of their application in their own terms. Finally, each challenge was held over multiple years, which makes it easier to compare the submissions from different years in order to see trends, such as the uptake of the Linked Data principles.

The second threat to validity is the number of authors who verified the data about their applications. Only 65% of authors replied to this request for validation. This may be due to the short-lived nature of academic email addresses. In several instances, we tried to find an alternative address if an address had expired.
Every author that we successfully contacted validated the data about their application, which usually involved only small corrections to one or two properties. We were unable to validate the data for authors that could not be reached. As the first challenge was already held in 2003, we do not believe that a much higher validation rate would have been possible at the time of writing.

The third threat to validity is that, for each application, only the associated academic paper and not the source code was analysed for the survey. There are several reasons for this. At the time of writing, most demonstration prototypes, except those from the most recent years, were no longer deployed. Publication of the source code of the applications was not a requirement of either challenge. In addition, it is very likely that most applications would not have been able to make their source code available due to IP or funding restrictions.
1.3 Empirically-grounded conceptual architecture

Based on our empirical analysis, we now present a conceptual architecture for Linked Data applications. As defined by Soni et al. [47], a conceptual architecture describes a system in terms of its major design elements and the relationships among them, using design elements and relationships specific to the domain. It enables the discussion of the common aspects of implementations from a particular domain, and can be used as a high-level architectural guideline when implementing a single application instance for that domain.

Our conceptual architecture describes the high-level components most often used among the surveyed applications to implement functionality that substantially differentiates applications using RDF or Linked Data from database-driven applications. For each component we describe the most common implementation strategies that are suited to specific application goals and deployment scenarios. We first explain the choice of architectural style for our conceptual architecture, and we describe our criteria for decomposing the surveyed applications into components. Then we provide a detailed description of the components. This is followed by a discussion of the main implementation challenges that have emerged from the survey data. In addition, we suggest future approaches for applying the conceptual architecture.

1.3.1 Architectural style of the conceptual architecture

Fielding [18] defines an architectural style as a coordinated set of architectural constraints that restricts the roles and features of architectural elements and the allowed relationships among those elements within any architecture that conforms to that style. Our empirical survey of RDF-based applications showed that there is a range of architectural styles among the surveyed applications, depending on the constraints of the respective software architects.
For example, as existing Web service infrastructure can be reused as part of an application, many applications chose a service-oriented architecture. Applications focused on reusing existing databases and middleware infrastructure preferred a layered style. The client-server architecture was used when decentralised deployment across organisations or centralised storage was important. Applications that prioritised robustness, decentralisation or independent evolution favoured a peer-to-peer architecture. Finally, the architecture of all other applications was usually defined by the reuse of existing tools and libraries as components in a component-based architecture.

As the component-based architectural style is the lowest common denominator of the surveyed applications, we chose it for our conceptual architecture. It allows us to express the most
common high-level functionality that is required to implement Semantic Web technologies as separate components.

TABLE 1.4: Results of the architectural analysis by year and component
(access = graph access layer; store = RDF store; nav. = graph-based navigation interface; homog. = data homogenisation service; query = graph query language service; authoring = structured data authoring interface; discovery = data discovery service)

year    access  store  nav.   homog.  query  authoring  discovery
2003    100%     80%    90%    90%     80%    20%        50%
2004    100%     94%   100%    50%     88%    38%        25%
2005    100%    100%   100%    83%     83%    33%        33%
2006    100%     95%    89%    63%     68%    37%        16%
2007    100%     92%    96%    88%     88%    33%        54%
2008    100%     87%    83%    70%     78%    26%        30%
2009    100%     77%    88%    80%     65%    19%        15%
total   100%     88%    91%    74%     77%    29%        30%

1.3.2 Decomposing the surveyed applications into components

In order to arrive at our component-based conceptual architecture, we started with the most commonly found functionality amongst the surveyed applications: a component that handles RDF, an integration component and a user interface. With these components as a starting point, we examined whether the data from the architectural analysis suggested splitting them into further components. Table 1.4 shows the results of this architectural analysis. The full data of the analysis is available online.

Every surveyed application (100%) makes use of RDF data. In addition, a large majority (88%), but not all, of the surveyed applications implement persistent storage for the RDF data. In practice many triple stores and RDF libraries provide both functions, but there are enough cases where this functionality is de-coupled, e.g. if the application has no local data storage, or only uses SPARQL to access remote data. This requires splitting the RDF-handling component into two components: first, a graph access layer, which provides an abstraction for the implementation, number and distribution of the data sources of an application; second, an RDF store for persistent storage of graph-based RDF data.
A query service which implements SPARQL, or other graph-based query languages, in addition to searching on unstructured data, is implemented by 77% of the surveyed applications. Such high coverage suggested the need for a graph query language service. It is separate from the graph access layer, which provides native API access to RDF graphs.

A component for integrating data is implemented by 74% of the applications. It provides a homogeneous perspective on the external data sources
provided by the graph access layer. Usually external data first needs to be discovered and aggregated before it can be integrated; an integration service would offer this functionality. However, only 30% of the surveyed applications required this functionality. For this reason, we split the integration functionality into a data homogenisation service and a data discovery service.

Most (91%) applications have a user interface. All user interfaces from the surveyed applications allow the user to navigate the graph-based data provided by the application, but only 29% provide the means for authoring new data. Thus the user interface functions were split between two components: the graph-based navigation interface and the structured data authoring interface.

FIGURE 1.1: Component diagram for the example application in section 1.5 using the components and connectors of our conceptual architecture

1.3.3 Description of components

The resulting seven components describe the largest common high-level functionality shared between the surveyed applications in order to implement the Linked Data principles and Semantic Web technologies. For each component we give a name, a description of the role of the component, and a list of the most common implementation strategies for that component. In addition, we list related chapters in this book for each component, as well as other related literature. Figure 1.1 shows the component diagram for the example application in section 1.5 using the components and connectors of our conceptual architecture, which are summarised in table 1.5.

1.3.3.1 Graph access layer

Also known as data adapter or data access provider.
This component provides the interface needed by the application logic to access local or remote data sources, with the distinction between them based on physical, administrative or organisational remoteness. In addition, this component provides a translation or mapping from the native data model of the programming language to the graph-based data model of RDF. All (100%) of the applications have a graph access layer. This component is separate and distinct from the RDF store and graph query language service components, as it provides an abstraction layer on top of other RDF stores, query interfaces and legacy document storage or relational databases.

TABLE 1.5: Summary of components and connectors

Components: Graph access layer; RDF store; Data homogenisation service; Data discovery service; Graph query language service; Graph-based navigation interface; Structured data authoring interface
Connectors: HTTP; HTTPS; SPARQL over HTTP; SQL connection; binary API

Libraries for accessing RDF stores locally or remotely via SPARQL are available for all major programming and scripting languages. Oren et al. [39] describe the ActiveRDF approach for mapping between object-oriented programming languages and the RDF data model. Another, more recent approach to mapping between object-oriented programming languages and the RDF data model is the object triple mapping (OTM) described by Quasthoff et al. [42]. In chapter ??, Calbimonte et al. discuss mapping the information generated from e.g. mobile devices and sensors to RDF, and querying the streams of subsequently generated Linked Data.

Implementation strategies: Accessing local data is implemented via programmatic access through RDF libraries by at least 50% of the applications. 47% use a query language for accessing local or remote data sources; most of these applications use the SPARQL standard. Decentralised sources are used by 85%; 91% use data from multiple owners; 73% can import external data; and 69% can export their data or provide a SPARQL end-point to make their data reusable.
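As a sketch of the query-language implementation strategy, the following minimal graph access helper sends a SELECT query to a remote endpoint using the SPARQL 1.1 Protocol over HTTP. The endpoint URL and query are placeholders, and only Python's standard library is used:

```python
# Minimal graph access layer sketch: SELECT queries via the SPARQL 1.1
# Protocol (HTTP GET, JSON results). The endpoint URL is hypothetical.
import json
import urllib.parse
import urllib.request

def sparql_select(endpoint, query):
    """Send a SELECT query and return its bindings as plain dicts."""
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    req = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        return parse_bindings(json.load(resp))

def parse_bindings(result):
    """Flatten the SPARQL JSON results format into variable/value dicts."""
    return [{var: cell["value"] for var, cell in row.items()}
            for row in result["results"]["bindings"]]

# Example query: people and their names from a FOAF graph.
QUERY = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person ?name WHERE { ?person foaf:name ?name } LIMIT 10
"""
# rows = sparql_select("http://example.org/sparql", QUERY)  # not run here
```

Because the layer returns plain dictionaries, the application logic stays independent of whether the data comes from a local store or a remote SPARQL end-point.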
65% of applications support data updates during run-time.

1.3.2 RDF store

Also known as triple store, persistent storage or persistence layer. This component provides persistent storage for RDF and other graph-based data. It is accessed via the graph access layer. 88% of applications have such functionality. The current state-of-the-art approaches for RDF storage, indexing and query processing are described in chapter ??, and chapter ?? describes how to store RDF in the Amazon Web Services (AWS) cloud.
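The core service an RDF store provides, matching triple patterns against a stored graph, can be sketched in a few lines. This is an illustrative in-memory version; production stores such as Virtuoso or Jena TDB add persistent indexes over several triple orderings, transactions and full SPARQL support.

```python
class TripleStore:
    """Toy in-memory triple store with wildcard pattern matching."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        # None acts as a wildcard, like a variable in a SPARQL pattern.
        return [(ts, tp, to) for (ts, tp, to) in self.triples
                if s in (None, ts) and p in (None, tp) and o in (None, to)]

store = TripleStore()
store.add("ex:alice", "foaf:knows", "ex:bob")
store.add("ex:alice", "foaf:name", "Alice")
store.add("ex:bob", "foaf:name", "Bob")
print(store.match(p="foaf:name"))
```

A SPARQL basic graph pattern is essentially a conjunction of such wildcard matches joined on shared variables; the store's job is to answer them efficiently over persistent data.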
Bizer et al. [9] provide an overview of the features and the performance of RDF stores as part of the Berlin SPARQL Benchmark results. Notable examples of RDF stores include OpenLink Virtuoso, which is described in chapter ??, the Bigdata RDF store, which is described in chapter ??, as well as Jena [50] and the Jena TDB backend.

Implementation strategies: Possible supported standards include, but are not limited to, data representation languages (XML, RDF), meta-modelling languages (OWL, RDFS) and query languages (SQL, SPARQL). RDF is explicitly mentioned by 96% of applications; OWL is supported by 43%; RDF Schema is supported by 20%. Inferencing or reasoning on the stored data is explicitly mentioned by 58% of the applications. Another implementation strategy is to use a relational database to store RDF data, which is described in more detail in chapter ??.

1.3.3 Data homogenisation service

Also known as integration service, aggregation service, mediation service or extraction layer. This component provides a means for addressing the structural, syntactic or semantic heterogeneity of data that results from accessing multiple data sources with different formats, schemas or structure. The goal of the service is to produce a homogeneous view on all data for the application. The data homogenisation service often needs to implement domain- or application-specific logic. 74% of the applications implement this component. [17] provides a general overview of semantic integration, and [1] provides an overview of using Semantic Web technologies for integration. Chapter ?? goes into the details of the current state of the art in aligning ontologies for integrating Linked Data. In addition, chapter ?? describes incremental reasoning on RDF streams, which can be used for integrating Linked Data from different sources.
Implementation strategies: Integration of heterogeneous data is supported by 87% of the applications; 91% support data integration from sources with different ownership. Integration of data from distributed, decentralised data sources is supported by 85%. These three properties are orthogonal: it would be possible, for example, to support just SIOC data, which is not heterogeneous, but which may be aggregated from personal websites, so that the data sources are distributed and under different ownership. The four major styles of implementing the data homogenisation service are:

- Automatic integration (11%), which is performed using heuristics or other techniques to avoid human interaction.
- Semi-automatic integration (59%), which requires human inspection of the source data, after which rules, scripts and other means are manually created to integrate the data automatically in the future.
- Manual integration (10%), in which the data is completely edited by a human.

Finally, 20% do not need any data integration because they operate on homogeneous sources.
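Semi-automatic integration, the most common strategy at 59%, typically means hand-written per-source mapping rules that thereafter run without human intervention. A schematic sketch (the source and field names are hypothetical):

```python
# Hand-written mapping rules translate heterogeneous source records into
# one target schema; once written, they run automatically on new data.
def from_blog(record):
    # Hypothetical blog export: 'post_title' and 'by' fields.
    return {"title": record["post_title"], "author": record["by"]}

def from_forum(record):
    # Hypothetical forum export: 'subject' and 'user' fields.
    return {"title": record["subject"], "author": record["user"]}

RULES = {"blog": from_blog, "forum": from_forum}

def homogenise(source, record):
    """Apply the manually created rule for a given source."""
    return RULES[source](record)

print(homogenise("blog", {"post_title": "Hello", "by": "alice"}))
print(homogenise("forum", {"subject": "Hi", "user": "bob"}))
```

The human effort lies in inspecting each new source and writing its rule; the automatic part is the repeated application of those rules during aggregation.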
1.3.4 Data discovery service

Also known as crawler, harvester, scutter or spider. This component implements automatic discovery and retrieval of external data. This is required where data should be found and accessed in a domain-specific way before it can be integrated. 30% of applications implement a data discovery service. Chapter ?? describes how to discover and access Linked Data from the Web as a distributed database. Harth et al. [22] introduce different research issues related to data crawling. Käfer et al. [28] provide an empirical survey of the change frequency of Linked Data sources. Chapter ?? describes how to represent and store provenance information for Linked Data. Finally, chapter ?? by Speiser describes the Linked Data Services (LIDS) approach, which enables (semi-)automatic discovery and integration of distributed Linked Data services.

Implementation strategies: The component should support different discovery and access mechanisms, like HTTP, HTTPS and RSS. Natural language processing or expression matching to parse search results or other web pages can be employed. The service can run once if data is assumed to be static, or continuously if the application supports updates to its data (65%).

1.3.5 Graph query language service

Also known as query engine or query interface. This component provides the ability to perform graph-based queries on the data, in addition to search on unstructured data. Interfaces for humans, machine agents or both can be provided. 77% of applications provide a graph query language service. Mangold [34] provides a survey and classification of general semantic search approaches. Arenas et al. [2] provide an introduction to SPARQL and how it applies to Linked Data. Chapter ?? discusses optimisation of SPARQL query processing and provides a survey of join and path traversal processing in RDF databases. Chapter ??
provides an overview of the state of the art in concepts and implementation details for federated query processing on Linked Data. Chapter ?? introduces data summaries for Linked Data, which enable source selection and query optimisation for federated SPARQL queries. Finally, chapter ?? describes peer-to-peer based query processing over Linked Data as an alternative approach.

Implementation strategies: Besides search on features of the data structure or semantics, generic full-text search can be provided. An interface for machine agents may be provided by a SPARQL, web service or REST endpoint. SPARQL is implemented by 41% of applications, while the preceding standards SeRQL and RDQL are explicitly mentioned by only 6%.

1.3.6 Graph-based navigation interface

Also known as portal interface or view. This component provides a human-accessible interface for navigating the graph-based data of the application. It
does not provide any capabilities for modifying or creating new data. 91% of applications have a graph-based navigation interface. Dadzie and Rowe [15] provide an overview of approaches used for visualising Linked Data. One of the most popular JavaScript libraries for displaying RDF and other structured data as faceted lists, maps and timelines is Exhibit [27]. The NautiLOD language for facilitating semantic navigation on Linked Data is described in chapter ??.

Implementation strategies: The navigation can be based on data or metadata, such as a dynamic menu or faceted navigation. The presentation may be in a generic format, e.g. in a table, or it may use a domain-specific visualisation, e.g. on a map. Most navigation interfaces allow the user to perform searches; Hildebrand et al. [26] provide an analysis of approaches for user interaction with semantic search.

1.3.7 Structured data authoring interface

This component allows the user to enter new data, edit existing data, and import or export data. The structured data authoring component depends on the navigation interface component, and enhances it with capabilities for modifying and writing data. The separation between the navigation interface and the authoring interface reflects the low number of applications (29%) implementing write access to data. The Linked Data Platform (LDP) specification, which enables accessing, updating, creating and deleting resources on servers that expose their resources as Linked Data, is described in chapter ??. The Semantic MediaWiki project [32] is an example where a user interface is provided for authoring data. Another example is the OntoWiki [24] project.

Implementation strategies: The annotation task can be supported by a dynamic interface based on the schema, content or structure of the data. Direct editing of data using standards such as RDF, RDF Schema, OWL or XML can be supported.
Input of weakly structured text using, for example, wiki formatting can be implemented. Suggestions for the user can be based on the vocabulary or the structure of the data.

1.4 Implementation challenges

When comparing the development of database-driven and RDF-based applications, different implementation challenges arise that are unique to Linked Data and Semantic Web technologies. Based on our empirical analysis we have identified the four most visible challenges: integrating noisy data, mismatch of data models between components, distribution of application logic across components, and missing guidelines and patterns.
Identifying these challenges has two benefits. It allows practitioners to better estimate the effort of using RDF or Linked Data in an application. In addition, it allows tool developers to anticipate and mitigate these challenges in future versions of new libraries and software frameworks.

1.4.1 Integrating noisy and heterogeneous data

One objective of RDF is to facilitate data integration [25] and data aggregation [7]. However, the empirical data from our analysis shows that the integration of noisy and heterogeneous data contributes a major part of the functional requirements for utilising RDF or Linked Data. Our analysis demonstrates the impact of the integration challenge on application development: the majority (74%) of analysed applications implemented a data homogenisation service. However, some degree of manual intervention was necessary for 69% of applications. This means that, prior to integration, data was either manually edited or data from different sources was manually inspected in order to create custom rules or code. Only 11% explicitly mention fully automatic integration using e.g. heuristics or natural language processing. 65% allowed updating of the data after the initial integration, and reasoning and inferencing was also used for 52% of integration services.

1.4.2 Mismatch of data models between components

The surveyed applications frequently had to cope with a software engineering mismatch between the internal data models of components. Accessing RDF data from a component requires mapping an RDF graph to the data model used by the component [39]. Most of the analysed applications (about 90%) were implemented in object-oriented languages, and many of the analysed applications stored data in relational databases. This implies that these applications often had to handle the mismatch between three different data models (graph-based, relational and object-oriented).
This can have several disadvantages, such as introducing a high overhead in communication between components and inconsistencies in round-trip conversion between data models.

1.4.3 Distribution of application logic across components

For many of the components identified by our analysis, the application logic was not expressed as code but as part of queries, rules and formal vocabularies. 52% of applications used inferencing and reasoning, which often encode some form of domain and application logic; the majority of applications explicitly mentioned using a formal vocabulary, and 41% make use of an RDF query language. This results in the application logic being distributed across the different components.
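The point that logic migrates out of code and into declarative artefacts can be made concrete with a toy contrast: the same piece of domain logic expressed once as ordinary program code and once as a rule evaluated by a (toy) inference step. The rule format below is invented for illustration and is not an actual rule language.

```python
# The same domain logic twice: as code, and as a declarative rule that
# a toy forward-chaining step evaluates over a set of triples.
def colleagues_in_code(triples):
    # Logic lives in the program: worksWith implies colleague.
    return {(s, "ex:colleague", o) for (s, p, o) in triples
            if p == "ex:worksWith"}

# Logic lives in data: a (body-predicate, head-predicate) rule.
RULE = ("ex:worksWith", "ex:colleague")

def apply_rule(triples, rule):
    body, head = rule
    return {(s, head, o) for (s, p, o) in triples if p == body}

facts = {("ex:alice", "ex:worksWith", "ex:bob")}
assert colleagues_in_code(facts) == apply_rule(facts, RULE)
print(apply_rule(facts, RULE))
```

Both variants derive the same triples, but in an application one half of such logic may sit in a reasoner's rule base and the other half in program code, which is precisely the distribution problem described here.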
The distribution of application logic is a well-known problem for database-driven Web applications [33]. Current web frameworks such as Ruby on Rails or the Google Web Toolkit allow the application developer to control the application interface and the persistent storage of data programmatically through Java or Ruby, without resorting to a scripting language for the interface and SQL for the data storage. Similar approaches for centralising the application logic of Linked Data applications still have to be developed.

1.4.4 Missing guidelines and patterns

Most Semantic Web technologies are standardised without explicit guidelines for implementing the standards. This can lead to incompatible implementations, especially where the interplay of multiple standards is involved. The establishment of a community consensus generally requires comparison and alignment of existing implementations. However, the waiting period until agreed guidelines and patterns for a standard have been published can have a negative impact on its adoption. To illustrate this implementation challenge, we discuss the most visible instances from our survey data.

Publishing and consuming data: All of the applications consume RDF data of some form; 73% allow access to or importing of user-provided external data, and 69% can export data or can be reused as a source for another application. However, the analysis shows that there are several incompatible possible implementations for publishing and consuming RDF data. Only after the publication of the Linked Data principles [5] in 2006 and the best practices for publishing Linked Data [8] in 2007 did an interoperable way of consuming and publishing data emerge. This is also reflected by the increasing use of these guidelines by the analysed applications from 2007 on.
Embedding RDF in web pages: While the majority of applications in our survey have web pages as user interfaces, RDF data was published or stored separately from the human-readable content. This resulted in out-of-sync data, incompatibilities and difficulties in automatically discovering data. Our analysis shows that, after the finalisation of the RDFa standard in 2008, embedding of RDF in web pages increased strongly.

Writing data to a remote store: While SPARQL standardised remote querying of RDF stores, it did not include capabilities for updating data. Together with the lack of other standards for writing or updating data, this has resulted in many applications (71%) in our survey that do not include a user interface for authoring data. The lack of capabilities for creating, editing or deleting data is addressed by SPARQL Version 1.1.

Restricting read and write access: One capability that is currently mostly
missing from all Semantic Web standards is the management of permissions for read or write access to resources and RDF stores. Our empirical analysis shows that, if such capabilities are required for an application, they will be implemented indirectly via other means. We hope that this will be addressed in the near future by the successful standardisation of current efforts such as WebID [48], which provides a generic, decentralised mechanism for authentication through the use of Semantic Web technologies. At the time of writing, the W3C is working on standardising write access for Linked Data via the Linked Data Platform (LDP) working group, which is described in chapter ?? of this book.

1.5 Analysis of an example application

The challenges of implementing RDF-based applications become apparent even for small applications. Figure 1.2 shows the architecture of an application from the authors' previous work, the SIOC explorer [10].

FIGURE 1.2: Result of the architectural and functional analysis of an example application, the SIOC explorer. The diagram shows the following components: graph-based navigation interface (BrowseRDF, Ruby; primary); application logic (Ruby); graph access layer (ActiveRDF; object-oriented data model); data homogenisation service (OWLIM+Sesame, Java, accessed via SPARQL); persistent storage (Redland, graph data model; MySQL, relational data model); data discovery service (command line utilities controlled by Ruby).

It aggregates content from weblogs and forums exposing their posts and comments as RDF data using the SIOC vocabulary [11]. This allows following multiple blogs or forums from a single application. The application logic and most parts of the application are implemented using the Ruby scripting language and the
Ruby on Rails web application framework. The graph-based navigation interface allows faceted browsing of the SIOC data and is implemented through the BrowseRDF Ruby component [38]. The graph access layer is provided by ActiveRDF [39], an object-oriented Ruby API for accessing RDF data. The data homogenisation service is implemented in two steps: (1) generic object consolidation is provided by the OWLIM extension for Sesame, written in Java and accessed via SPARQL, and (2) domain-specific integration of SIOC data is implemented as Ruby code. The Redland library is used as an RDF store. Other (non-RDF) application data is persisted to a MySQL relational database. A data discovery service is implemented through several Unix command line utilities which are controlled by Ruby. The SIOC explorer does not implement a graph query language service or a structured data authoring interface.

All four identified implementation challenges affect the SIOC explorer: (1) Even though all data sources use RDF and the SIOC vocabulary, the data is noisy enough to require two steps of data homogenisation: the OWLIM extension of Sesame provides generic object consolidation, and integration specific to SIOC data is implemented as Ruby code. (2) The components are mismatched, regarding both the data models (object-oriented, relational and graph-based) and the programming language APIs (Ruby and Java). This requires mapping RDF to Ruby objects (ActiveRDF) and mapping relational data to Ruby objects (ActiveRecord). Sesame has no Ruby API, so SPARQL is used to access Sesame, resulting in slow performance for large numbers of concurrent read operations.
(3) Unclear standards and best practices affect the crawler implementation: different SIOC exporters require different methods to discover and aggregate the SIOC data, as RDFa and GRDDL were not in wide use when the SIOC explorer was developed. (4) The application logic is distributed across the primary application logic component, the data interface, the rules of the integration service and the code which controls the crawler.

1.6 Future approaches for simplifying Linked Data application development

We propose software engineering and design approaches which facilitate the implementation of Linked Data applications in the future and mitigate the identified implementation challenges. These approaches are: (1) guidelines, best practices and design patterns; (2) software libraries; and (3) software factories.
All of these approaches can be used to provide ready-made solutions and to lower the entry barriers for software that consumes from or publishes to the Web of Data. In addition, the reuse of such existing solutions can enable greater interoperability between Linked Data applications, thus leading to a more coherent Web of Linked Data.

1.6.1 More guidelines, best practices and design patterns

The Linked Data community has already developed some guidelines and collections of best practices for implementing Semantic Web technologies. These include best practices for the publishing of Linked Data [23] and the naming of resources [43], and the Linked Data principles themselves. In the future these can be complemented by implementation-oriented guidelines such as design patterns. A design pattern is a description of a solution to a recurring implementation problem [19]. A first collection of implementation-oriented design patterns is provided by the Linked Data patterns collection, while [36] provides guidelines for requirements engineering and design of Linked Data applications.

1.6.2 More software libraries

In order to go beyond libraries for accessing and persisting RDF data, more software libraries will in the future directly provide reusable implementations of guidelines and patterns. Several such libraries are currently being developed, such as ActiveRDF, which maps the native data model of a programming language to RDF [39], and the SIOC PHP API for implementing access to SIOC data. Best practices for converting relational databases to RDF are implemented in the D2R server, while any23 implements best practices for converting many structured data formats to RDF. The Silk framework can be used for interlinking data sets.
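Interlinking tools such as Silk essentially compare resource descriptions across data sets and emit owl:sameAs links when a similarity threshold is exceeded. A toy illustration of the idea (naive label matching with the standard library; real frameworks use declarative link specifications, many similarity metrics and blocking strategies):

```python
import difflib

# Naive interlinking: propose owl:sameAs links between two data sets
# when resource labels are sufficiently similar. Illustrative only;
# this is not the Silk framework's actual approach or API.
def interlink(dataset_a, dataset_b, threshold=0.9):
    links = []
    for uri_a, label_a in dataset_a.items():
        for uri_b, label_b in dataset_b.items():
            score = difflib.SequenceMatcher(
                None, label_a.lower(), label_b.lower()).ratio()
            if score >= threshold:
                links.append((uri_a, "owl:sameAs", uri_b))
    return links

a = {"http://ex.org/a/Dublin": "Dublin"}
b = {"http://ex.org/b/city42": "dublin", "http://ex.org/b/city7": "Cork"}
print(interlink(a, b))
```

The emitted owl:sameAs triples are themselves Linked Data and can be published alongside the data sets, which is how interlinking contributes to a more coherent Web of Data.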
Additional projects for encoding best practices can be found on the LOD community wiki.

1.6.3 Software factories

Software factories provide the means to create complete and fully functioning applications by building on patterns, best practices and libraries [21]. They allow the assembly of complete applications from existing components and libraries that implement community best practices and patterns. Customisation for a domain or a specific application is possible through pre-defined extension points and hooks. While the majority of analysed applications use RDF libraries, some components (e.g. the navigation interface, homogenisation and data discovery services) are usually custom-made for each application. A software factory for Linked Data applications could provide a pre-made solution for each of the components from our conceptual architecture. The application developer could then add the application- and domain-specific logic and customise each of the components. This would allow rapid assembly of Linked Data applications. Similar benefits are already provided by modern Web application frameworks such as Ruby on Rails, CakePHP and Django. [38] presented a first prototype of such a software factory for Linked Data applications.

1.7 Related work

We now discuss related work in the areas of empirical surveys about the adoption of Semantic Web technologies and architectures for Semantic Web applications.

1.7.1 Empirical surveys

Cardoso [12] presents the results of a survey of 627 Semantic Web researchers and practitioners carried out in January. The questions of the survey covered the categories of demographics, tools, languages and ontologies. The goal of the survey was to characterise the uptake of Semantic Web technologies and the types of use-cases for which they were being deployed. Concrete applications were not taken into account. As the survey was carried out over a two-month period, it does not allow conclusions about long-term trends before or after the survey was taken. Another similar survey of 257 participants (161 researchers and 96 industry-oriented participants) was published online. The data for this survey was taken from a three-month period. This survey had the goal of measuring the general awareness of Semantic Web technologies and social software in academia and enterprise.
As such, the questions did not go into any implementation-specific details. Cunha [13] performed a survey of 35 applications from the Semantic Web challenges in 2003, 2004 and 2005. The goal of the survey was to cluster the applications based on their functionality and their architecture. The result is a list of 25 application categories with the number of applications per category for each year. This survey covers only three years, and it is not concerned with the
adoption of Semantic Web technologies and capabilities, whereas our empirical survey provides empirical data on the uptake of, e.g., data integration or the Linked Data principles.

1.7.2 Architectures for Semantic Web applications

García-Castro et al. [20] propose a component-based framework for developing Semantic Web applications. The framework consists of 32 components aligned on 7 dimensions, and is evaluated in 8 use-cases. While existing software tools and libraries that implement these components are identified, the identification of the components is not based on an empirical grounding. Tran et al. [49] propose a layered architecture for ontology-based applications that is structured into a presentation layer, a logic layer and a data layer. It is based on best practices for service-oriented architecture and on the authors' model of life-cycle management of ontologies, and is evaluated in a use-case. However, there is no empirical grounding for the architecture. A similar approach is taken by Mika and Akkermans [35], who propose a layered architecture for ontology-based applications, with layers for the application, the middleware and the ontology. The architecture is based on a requirements analysis of knowledge management applications. Cunha [14] builds on his earlier survey [13] and presents a UML architecture based on the 35 analysed applications. The goal of the architecture is to provide the foundation for a potential software framework, but no evaluation or implementation of the architecture is provided.

1.8 Conclusions

In this chapter we focused on the software architecture and engineering issues involved in designing and deploying Linked Data applications. We argue that software engineering can unlock the full potential of the emerging Web of Linked Data through more guidelines, best practices and patterns, and through software libraries and software factories that implement those guidelines and patterns.
Greater attention needs to be given to standardising the methods and components required to deploy Linked Data in order to prevent an adoption bottleneck. Modern Web application frameworks such as Ruby on Rails for Ruby, CakePHP for PHP and Django for Python provide standard components and architectural design patterns for rolling out database-driven Web applications. The equivalent frameworks for Linked Data applications are still in their infancy. As a guide for developers of Linked Data applications, we present an architectural analysis of 124 applications. From this we put forward a conceptual
architecture for Linked Data applications, which is intended as a template of the typical high-level components required for consuming, processing and publishing Linked Data. Prospective developers can use it as a guideline when designing an application for the Web of Linked Data. For experienced practitioners it provides a common terminology for decomposing and analysing the architecture of a Linked Data application.

Acknowledgements

The work presented in this paper has been funded by Science Foundation Ireland under Grant No. SFI/08/CE/I1380 (Líon-2). We also thank everybody from the Digital Enterprise Research Institute (DERI), National University of Ireland, Galway, for their feedback and support, especially Laura Dragan and Maciej Dabrowski.
Bibliography

[1] CROSI - Capturing, Representing and Operationalising Semantic Integration. The University of Southampton and Hewlett Packard Laboratories.
[2] Marcelo Arenas and Jorge Pérez. Querying Semantic Web data with SPARQL. In Proceedings of the Thirtieth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS '11, New York, NY, USA. ACM.
[3] Sören Auer, Jens Lehmann, and Sebastian Hellmann. LinkedGeoData: Adding a spatial dimension to the Web of Data. In The Semantic Web - ISWC 2009. Springer.
[4] Christian Becker and Christian Bizer. DBpedia Mobile: A Location-Aware Semantic Web Client. In Semantic Web Challenge at the International Semantic Web Conference.
[5] Tim Berners-Lee. Linked Data - Design Issues.
[6] Tim Berners-Lee, James A. Hendler, and Ora Lassila. The Semantic Web. Scientific American, 284(5):34–43.
[7] C. Bizer. The Emerging Web of Linked Data. IEEE Intelligent Systems, pages 87–92.
[8] Chris Bizer, Richard Cyganiak, and Tom Heath. How to Publish Linked Data on the Web. Technical report, FU Berlin.
[9] Christian Bizer and Andreas Schultz. The Berlin SPARQL Benchmark. International Journal on Semantic Web and Information Systems (IJSWIS), 5(2):1–24.
[10] Uldis Bojārs, Benjamin Heitmann, and Eyal Oren. A Prototype to Explore Content and Context on Social Community Sites. In International Conference on Social Semantic Web, September.
[11] John Breslin, Stefan Decker, Andreas Harth, and Uldis Bojārs. SIOC: An approach to connect web-based communities. The International Journal of Web-Based Communities.
[12] Jorge Cardoso. The Semantic Web Vision: Where Are We? IEEE Intelligent Systems, 22:84–88.
[13] Leonardo Magela Cunha and Carlos José Pereira de Lucena. Cluster The Semantic Web Challenges Applications: Architecture and Metadata Overview. Technical report, Pontifícia Universidade Católica do Rio de Janeiro.
[14] Leonardo Magela Cunha and Carlos José Pereira de Lucena. A Semantic Web Application Framework. Technical report, Pontifícia Universidade Católica do Rio de Janeiro.
[15] Aba-Sah Dadzie and Matthew Rowe. Approaches to visualising Linked Data: A survey. Semantic Web, 2(2):89–124.
[16] S. Decker, S. Melnik, F. van Harmelen, D. Fensel, M. Klein, J. Broekstra, M. Erdmann, and I. Horrocks. The Semantic Web: The roles of XML and RDF. IEEE Internet Computing, pages 63–73.
[17] AnHai Doan and Alon Y. Halevy. Semantic integration research in the database community: A brief survey. AI Magazine, pages 83–94.
[18] Roy T. Fielding. Architectural Styles and the Design of Network-based Software Architectures. PhD thesis, UC Irvine.
[19] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.
[20] Raúl García-Castro, Asunción Gómez-Pérez, Óscar Muñoz-García, and Lyndon J. B. Nixon. Towards a Component-Based Framework for Developing Semantic Web Applications. In Asian Semantic Web Conference.
[21] J. Greenfield and K. Short. Software factories: Assembling applications with patterns, models, frameworks and tools. In Conference on Object-Oriented Programming, Systems, Languages, and Applications.
[22] Andreas Harth, Katja Hose, Marcel Karnstedt, Axel Polleres, Kai-Uwe Sattler, and Jürgen Umbrich. Data Summaries for On-demand Queries over Linked Data. In World Wide Web Conference.
[23] Tom Heath and Christian Bizer. Linked Data: Evolving the Web into a Global Data Space. Synthesis Lectures on the Semantic Web: Theory and Technology, 1(1):1–136.
[24] Norman Heino, Sebastian Dietzold, Michael Martin, and Sören Auer. Developing Semantic Web applications with the OntoWiki framework. In Tassilo Pellegrini, Sören Auer, Klaus Tochtermann, and Sebastian Schaffert, editors, Networked Knowledge - Networked Media, volume 221 of Studies in Computational Intelligence. Springer Berlin Heidelberg.
[25] J. Hendler. Web 3.0 Emerging. IEEE Computer, 42(1).
[26] M. Hildebrand, Jacco van Ossenbruggen, and Lynda Hardman. An analysis of search-based user interaction on the Semantic Web. Technical report, CWI Information Systems.
[27] D. F. Huynh, D. R. Karger, and R. C. Miller. Exhibit: Lightweight structured data publishing. In World Wide Web Conference.
[28] Tobias Käfer, Ahmed Abdelrahman, Jürgen Umbrich, Patrick O'Byrne, and Aidan Hogan. Observing Linked Data dynamics. European Semantic Web Conference.
[29] Richard M. Keller, Daniel C. Berrios, Robert E. Carvalho, David R. Hall, Stephen J. Rich, Ian B. Sturken, Keith J. Swanson, and Shawn R. Wolfe. SemanticOrganizer: A Customizable Semantic Repository for Distributed NASA Project Teams. In Semantic Web Challenge at the International Semantic Web Conference.
[30] Houda Khrouf, Vuk Milicic, and Raphaël Troncy. EventMedia. In Semantic Web Challenge at the International Semantic Web Conference.
[31] M. Klein and U. Visser. Guest Editors' Introduction: Semantic Web Challenge. IEEE Intelligent Systems, pages 31–33.
[32] M. Krötzsch, D. Vrandečić, and M. Völkel. Semantic MediaWiki. In International Semantic Web Conference.
[33] A. Leff and J. T. Rayfield. Web-application development using the Model/View/Controller design pattern. International Enterprise Distributed Object Computing Conference.
[34] C. Mangold. A survey and classification of semantic search approaches. International Journal of Metadata, Semantics and Ontologies, 2(1):23–34.
[35] P. Mika and H. Akkermans. D1.2 Analysis of the State-of-the-Art in Ontology-based Knowledge Management. Technical report, SWAP Project, February.
[36] Óscar Muñoz-García and Raúl García-Castro. Guidelines for the Specification and Design of Large-Scale Semantic Applications. In Asian Semantic Web Conference, 2009.
[37] Marc-Alexandre Nolin, Peter Ansell, François Belleau, Kingsley Idehen, Philippe Rigault, Nicole Tourigny, Paul Roe, James M. Hogan, and Michel Dumontier. Bio2RDF Network of Linked Data. In Semantic Web Challenge at the International Semantic Web Conference.
[38] E. Oren, A. Haller, M. Hauswirth, B. Heitmann, S. Decker, and C. Mesnage. A Flexible Integration Framework for Semantic Web 2.0 Applications. IEEE Software.
[39] Eyal Oren, Benjamin Heitmann, and Stefan Decker. ActiveRDF: Embedding Semantic Web data into object-oriented languages. Journal of Web Semantics.
[40] Eyal Oren and Giovanni Tummarello. Sindice. In Semantic Web Challenge at the International Semantic Web Conference.
[41] Alexandre Passant. seevl: Mining music connections to bring context, search and discovery to the music you like. In Semantic Web Challenge at the International Semantic Web Conference.
[42] M. Quasthoff and C. Meinel. Supporting object-oriented programming of semantic-web software. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 42(1):15–24.
[43] L. Sauermann, R. Cyganiak, and M. Völkel. Cool URIs for the Semantic Web. W3C Note, W3C.
[44] Arno Scharl, Albert Weichselbraun, Alexander Hubmann-Haidvogel, Hermann Stern, Gerhard Wohlgenannt, and Dmytro Zibold. Media Watch on Climate Change: Building and Visualizing Contextualized Information Spaces. In Semantic Web Challenge at the International Semantic Web Conference.
[45] Guus Schreiber, Alia Amin, Mark van Assem, Victor de Boer, and Lynda Hardman. MultimediaN E-Culture demonstrator. In Semantic Web Challenge at the International Semantic Web Conference.
[46] Dag I. K. Sjøberg, Tore Dybå, and Magne Jørgensen. The Future of Empirical Methods in Software Engineering Research. Future of Software Engineering.
[47] Dilip Soni, Robert L. Nord, and Christine Hofmeister. Software architecture in industrial applications.
International Conference on Software Engineering, [48] Henry Story, Bruno Harbulot, Ian Jacobi, and Mike Jones. FOAF+SSL: RESTful Authentication for the Social Web. In Workshop on Trust and Privacy on the Social and Semantic Web, 2009.
39 Architecture of Linked Data Applications 31 [49] Thanh Tran, Peter Haase, Holger Lewen, Óscar Muñoz-García, Asunción Gómez-Pérez, and Rudi Studer. Lifecycle-Support in Architectures for Ontology-Based Information Systems. In International Semantic Web Conference, [50] K. Wilkinson, C. Sayers, H. Kuno, and D. Reynolds. Efficient RDF storage and retrieval in Jena2. In Workshop on Semantic Web and Databases, 2003.