Data collection architecture for Big Data: a framework for a research agenda
(Research in progress - ERP Sense Making of Big Data)
Wout Hofman, May 2015, BDEI workshop
Big Data success stories bias our thinking: proprietary, closed solutions
Problem statement
- Large-scale, controlled, open implementation of data analytics / data-driven innovation by organisations is lacking
- From offline to real-time
- Big Data versus data-driven innovation: volume, variety, velocity, veracity (, value)
- Collection, homogenisation, and integration are time-consuming:
  - (Too) many (un)structured (linked) open data sets
  - No clear data governance rules and data policies supported by interventions
  - Unknown features of data sets (quality, etc.)
  - Data with different technical formats (5-star model?)
  - Embedded data semantics
  - API-based data sharing platforms
- Research focuses on solving individual issues; an overall architecture is lacking
From offline to real-time: impact on IT architecture
- Descriptive: what happened (also known as supply chain visibility in logistics)
- Diagnostic: why did it happen (e.g. supply chain resilience)
- Predictive: what will happen (e.g. resilience in terms of arriving too late, waiting queues (Demanes case))
- Prescriptive analytics: how can we make it happen (prevention, etc.) (Gartner)
But also:
- Anomaly detection: combining the past with descriptive analytics (e.g. risk analysis)
- Query evaluation: search and find appropriate data
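Anomaly detection as described here, combining historical (descriptive) data with current observations, can be sketched minimally as a deviation test against a historical baseline. The transit-time figures and the three-sigma threshold below are illustrative assumptions, not part of the original slides.

```python
from statistics import mean, stdev

def detect_anomalies(history, current, threshold=3.0):
    """Flag current observations deviating more than `threshold`
    standard deviations from the historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    return [x for x in current if abs(x - mu) > threshold * sigma]

# Hypothetical example: transit times (hours) on a logistics lane
history = [24, 25, 23, 26, 24, 25, 24, 26, 25, 24]
print(detect_anomalies(history, [25, 40, 23]))  # the 40 h shipment is flagged
```

In practice a risk-analysis component would replace this single-variable test with multivariate or model-based detection, but the pattern of "past data in, flagged present observations out" stays the same.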
The data value chain (Esmeijer, Bakker & Munck, 2015)
Processing is considered as a sequence of steps:
1. Data generation and collection (inventory of data sources, quality features, etc.)
2. Data preparation (filtering, cleaning, verification, annotation)
3. Data integration
4. Data storage (local databases, cloud storage, ...)
5. Data analytics (multi-view clustering, deep learning)
6. Data visualisation
7. Data-driven action
8. Data governance and security
Lacking: a data collection policy
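The first three steps of this sequence can be sketched as a composable pipeline. The step functions and the record layout below are illustrative assumptions for a single numeric "value" field; a real implementation would plug in source-specific adapters.

```python
# Minimal sketch of collection -> preparation -> integration.
def collect(sources):
    """Data generation and collection: gather raw records from all sources."""
    return [rec for src in sources for rec in src]

def prepare(records):
    """Data preparation: drop incomplete records, clean string values."""
    return [{k: v.strip() if isinstance(v, str) else v for k, v in r.items()}
            for r in records if r.get("value") is not None]

def integrate(records):
    """Data integration: map records onto a common schema."""
    return [{"id": r["id"], "value": float(r["value"])} for r in records]

def pipeline(sources):
    return integrate(prepare(collect(sources)))

src_a = [{"id": 1, "value": " 3.5 "}, {"id": 2, "value": None}]
src_b = [{"id": 3, "value": "7.0"}]
print(pipeline([src_a, src_b]))  # [{'id': 1, 'value': 3.5}, {'id': 3, 'value': 7.0}]
```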
Data generation and collection
- (Too) many (un)structured (linked) open data sets
- No clear data governance rules and data policies supported by interventions
- Data with different technical formats (5-star model?)
- Embedded data semantics
- API-based data sharing platforms
- No standards for metadata, so no (automatic) annotation; quality dimensions (taken from Zaveri et al.):
  - Contextual: completeness, amount, relevancy
  - Trust: believability, verifiability, reputation, provenance, licensing
  - Representation: conciseness, consistency, understandability, interpretability, versatility
  - Intrinsic: accuracy, objectivity, validity, conciseness, interlinking, consistency
  - Dynamicity: timeliness, currency, volatility
  - Accessibility: availability, performance, security, response time
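A machine-readable annotation record covering these quality dimensions could look as follows. The field names and the 0..1 score scale are assumptions made for illustration; the slides only name the dimensions.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetAnnotation:
    """Sketch of dataset metadata along the Zaveri et al. dimensions."""
    name: str
    contextual: dict = field(default_factory=dict)     # completeness, relevancy, ...
    trust: dict = field(default_factory=dict)          # provenance, licensing, ...
    representation: dict = field(default_factory=dict)
    intrinsic: dict = field(default_factory=dict)
    dynamicity: dict = field(default_factory=dict)     # timeliness, volatility, ...
    accessibility: dict = field(default_factory=dict)  # availability, response time, ...

    def meets(self, dimension: str, feature: str, minimum: float) -> bool:
        """Check whether a quality feature reaches a required level."""
        return getattr(self, dimension).get(feature, 0.0) >= minimum

ann = DatasetAnnotation(
    name="port-calls-2015",  # hypothetical dataset
    contextual={"completeness": 0.9},
    dynamicity={"timeliness": 0.6},
)
print(ann.meets("contextual", "completeness", 0.8))  # True
```

With such annotations in place, automatic source selection (see the data collection policy later in the deck) becomes a matter of filtering on feature thresholds.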
Data preparation and integration
- Data quality features: completeness, conciseness, correctness, and consistency
- Quality improvement:
  - annotation
  - automatic detection and repair
  - comparing data sets from different sources
- Homogenisation:
  - matching and linking of data sets
  - OWL is considered for semantics
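Matching and linking of data sets can be sketched with simple string similarity over entity labels, emitting owl:sameAs-style links in the spirit of the OWL-based semantics mentioned above. The port names and the 0.8 threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

def match_labels(left, right, threshold=0.8):
    """Link labels from two datasets whose similarity exceeds the threshold."""
    links = []
    for l in left:
        for r in right:
            score = SequenceMatcher(None, l.lower(), r.lower()).ratio()
            if score >= threshold:
                links.append((l, "owl:sameAs", r))
    return links

# Hypothetical labels from two sources to be homogenised
ports_a = ["Rotterdam", "Antwerpen", "Hamburg"]
ports_b = ["rotterdam", "Antwerp", "Bremen"]
print(match_labels(ports_a, ports_b))
```

Real homogenisation would combine such lexical matching with the embedded semantics of each dataset (ontology matching), but the output, a set of links between datasets, is the same.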
Data governance and policies (Eckartz, Hofman & van Veenstra, 2014)
- Data categories: open data, community data, bilateral data, internal data
- Data ownership and stewardship
- Applying privacy-enhanced technologies (e.g. IAA, attribute-based access control, homomorphic encryption, ...)
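Attribute-based access control over the four data categories could be sketched as follows. The policy rules and attribute names are illustrative assumptions; real deployments would express them in a policy language rather than inline functions.

```python
# One access rule per data category; each rule inspects subject attributes.
POLICIES = {
    "open":      lambda attrs: True,
    "community": lambda attrs: attrs.get("community") == "logistics",
    "bilateral": lambda attrs: attrs.get("partner_of") == "data_owner",
    "internal":  lambda attrs: attrs.get("org") == "data_owner_org",
}

def may_access(category, subject_attrs):
    """Grant access when the subject's attributes satisfy the
    policy attached to the data category."""
    rule = POLICIES.get(category)
    return bool(rule and rule(subject_attrs))

print(may_access("open", {}))                               # True
print(may_access("community", {"community": "logistics"}))  # True
print(may_access("internal", {"org": "other_org"}))         # False
```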
Towards an architecture (diagram): a Data Usage layer (visualisation dashboard/analytics, data semantics, source registry), a Data Collection layer (subscription), and a Source Interface to distributed (open) data sources.
Architecture components (diagram):
- Data user: analytics, visualisation dashboard, (complex) event processing; modelling tools
- Connectivity: adapter interface support, query formulation, query decomposition, data workflow, semantic model(s), subscription management, subscription registry, subscription protocol, data linking, data fusion, data manipulation, link evaluation, events (state changes), audit trail, registry
- Data source adapter: source adapters, transformation, anonymization/filtering, data cleansing, temporary store, subscription management, security, APIs, SPARQL endpoint
- Data provision: provision adapters, source registration, subscription registry, identification & authentication, access control, transformation, anonymization/filtering, data cleansing, audit trail, data governance rules & interventions, source annotation, profiling
- Data source: open, closed, (un)structured
Research questions (rephrased)
1. How can privacy-enhanced technologies, semantics, and annotations of datasets improve large-scale, automatic data analytics?
2. What is the minimal required information to automatically integrate any dataset into a common format?
Privacy-enhanced technologies, semantics, and annotation to improve precision and recall of datasets
- Annotation and metadata
- Semantics and technical representation of a dataset
- Privacy-enhanced technologies: data governance, policies, and semantics
- Data collection policy:
  - how to search and find appropriate data (appropriate: according to semantics and metadata with particular quality features)
  - query decomposition
- Automatic data workflow composition
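Searching for appropriate data against a source registry, the first part of the data collection policy above, can be sketched as filtering annotated sources by required quality features. The registry contents and feature names below are illustrative assumptions.

```python
# Hypothetical source registry with per-dataset quality annotations.
REGISTRY = [
    {"name": "port-calls", "topic": "logistics",
     "quality": {"completeness": 0.9, "timeliness": 0.7}},
    {"name": "weather-obs", "topic": "weather",
     "quality": {"completeness": 0.6, "timeliness": 0.95}},
]

def find_sources(topic, required):
    """Return datasets on the topic whose quality features meet
    every required minimum."""
    return [d["name"] for d in REGISTRY
            if d["topic"] == topic
            and all(d["quality"].get(f, 0.0) >= v for f, v in required.items())]

print(find_sources("logistics", {"completeness": 0.8}))  # ['port-calls']
```

Query decomposition would then split an analytics query into one such registry lookup per required topic, before composing the data workflow automatically.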
Minimal required information to automatically transform and integrate datasets for analytics
- Syntax transformation
- Ontology learning:
  - text mining, NLP, etc.
  - networked ontology construction
- Semantic transformation:
  - ontology matching and linking
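Syntax transformation, the first item above, can be sketched as lifting differently formatted sources into one key/value representation before semantic transformation is applied. The sample CSV and JSON data and their field names are assumptions made for illustration.

```python
import csv
import io
import json

# Two hypothetical sources for the same kind of record, in different syntaxes.
csv_source = "id;vessel;eta\n1;MSC Oscar;2015-05-12\n"
json_source = '[{"id": 2, "vessel": "Emma Maersk", "eta": "2015-05-13"}]'

def from_csv(text, delimiter=";"):
    """Parse delimited text into a list of dict records."""
    return [dict(row) for row in csv.DictReader(io.StringIO(text),
                                                delimiter=delimiter)]

def from_json(text):
    """Parse a JSON array into a list of dict records."""
    return json.loads(text)

records = from_csv(csv_source) + from_json(json_source)
print([r["vessel"] for r in records])  # ['MSC Oscar', 'Emma Maersk']
```

Note that syntax transformation alone leaves value types and meanings untouched (the CSV "id" is still a string here); aligning those is the job of the semantic transformation step.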
Thank you for your attention. Questions?