Big Data Architecture Patterns
Repeatable Approaches to Big Data Challenges for Optimal Decision Making

By Gregory Harman, Managing Partner, BigR.io, LLC

Abstract

A number of architectural patterns are identified and applied to a case study involving the ingest, storage, and analysis of a number of disparate data feeds. Each of these patterns is explored to determine the target problem space for the pattern and the pros and cons of the pattern. The purpose is to facilitate and optimize future Big Data architecture decision making. The patterns explored are:

- Lambda
- Data Lake
- Metadata Transform
- Data Lineage
- Feedback
- Cross Referencing
About BigR.io

BigR.io is a technology consulting firm empowering data to drive analytics for revenue growth and operational efficiencies. Our teams deliver software solutions, data science strategies, enterprise infrastructure, and management consulting to the world's largest companies. We are an elite group with MIT roots, shining when tasked with complex missions: assembling mounds of data from a variety of sources, building high-volume, highly available systems, and orchestrating analytics to transform technology into perceivable business value. With extensive domain knowledge, BigR.io has teams of architects and engineers that deliver best-in-class solutions across a variety of verticals. This diverse industry exposure, and our constant run-ins with the cutting edge, empowers us with invaluable tools, tricks, and techniques. We bring knowledge and horsepower that consistently delivers innovative, cost-conscious, and extensible results to complex software and data challenges. Learn more at BigR.io.

Introduction

Modern business problems require ever-increasing amounts of data, and ever-increasing variety in the data that they ingest. Aphorisms such as "the three V's" [1] have evolved to describe some of the high-level challenges that Big Data solutions are intended to solve. An introductory article on the subject may conclude with a recommendation to consider a high-level technology stack such as Hadoop and its associated ecosystem. While this sort of recommendation may be a good starting point, the business will inevitably find that there are complex data architecture challenges both with designing the new Big Data stack as well as with integrating it with existing transactional and warehousing technologies.

This paper will examine a number of architectural patterns that can help solve common challenges within this space. These patterns do not rely on specific technology choices, though examples are given where they may help clarify the pattern, and they are intended to act as templates that can be applied to actual scenarios that a data architect may encounter.

The following case study will be used throughout this paper as context and motivation for application of these patterns: Alpha Trading, Inc. (ATI) is planning to launch a new quantitative fund. Their fund will be based on a proprietary trading strategy that combines real-time market feed data with sentiment data gleaned from social media and blogs. They expect that the specific blogs and social media channels that will be most influential, and therefore most relevant, may change over time. ATI's other funds are run by pen, paper, and phone, so for this new fund they are building their data processing infrastructure greenfield.

[1] Volume, Velocity, and Variety
Diagram 1: ATI Architecture Before Patterns

Pattern 1: Lambda

The first challenge that ATI faces is the timely processing of their real-time (per-tick) market feed data. While the most recent ticks are the most important, their strategy relies on a continual analysis of not just the most recent ticks, but of all historical ticks in their system. They accumulate approximately 5GB of tick data per day. Performing a batch analysis (e.g. with Hadoop) will take them an hour [2]. This batch process gives them very good accuracy: great for predicting the past, but problematic for executing near-real-time trades. Conversely, a streaming solution (e.g. Storm, Druid, Spark) can only accommodate the most recent data, and often uses approximating algorithms to keep up with the data flow. This loss of accuracy may generate false trading signals within ATI's algorithm.

[2] The actual processing time would vary by any number of factors; an hour is chosen here for simplicity of explanation.
In order to combat this, the Lambda Pattern will be applied. Characteristics of this pattern are:

- The data stream is fed by the ingest system to both the batch and streaming analytics systems.
- The batch analytics system runs continually to update intermediate views that summarize all data up to the last cycle time (one hour in this example). These views are considered to be very accurate, but stale.
- The streaming analytics system combines the most recent intermediate view with the data stream from the last batch cycle time (one hour) to produce the final view.

While a small amount of accuracy is lost over the most recent data, this pattern provides a good compromise when recent data is important, but calculations must also take into account a larger historical data set. Thought must be given to the intermediate views in order to fit them naturally into the aggregated analysis with the streaming data. With this pattern applied, ATI can utilize the full backlog of historical tick data; their updated architecture is as such:

Diagram 3: ATI Architecture with Lambda

The Lambda Pattern described here is a subset and simplification of the Lambda Architecture described in Marz/Warren [3]. For more detailed considerations and examples of applying specific technologies, this book is recommended.

[3] Marz, Nathan, and James Warren. Big Data: Principles and Best Practices of Scalable Real-time Data Systems. Shelter Island, NY: Manning, 2015.
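To make the batch/speed split concrete, the sketch below shows one way a serving layer might merge a stale-but-accurate batch view with a fast streaming window, here computing a volume-weighted average price (VWAP) per symbol. This is a minimal illustration under assumed names (LambdaServingLayer, query_vwap), not ATI's actual strategy logic or a specific framework's API:

    from collections import defaultdict

    class LambdaServingLayer:
        def __init__(self):
            # symbol -> (sum of price*volume, total volume); accurate but stale
            self.batch_view = {}
            # same aggregate, but only over ticks since the last batch cycle
            self.stream_window = defaultdict(lambda: [0.0, 0])

        def load_batch_view(self, view):
            # Swap in the latest hourly batch view and reset the streaming window.
            self.batch_view = view
            self.stream_window.clear()

        def on_tick(self, symbol, price, volume):
            # Fold each live tick into the fast, approximate streaming window.
            agg = self.stream_window[symbol]
            agg[0] += price * volume
            agg[1] += volume

        def query_vwap(self, symbol):
            # Answer a query by merging the stale batch view with the
            # recent streaming window.
            batch_value, batch_volume = self.batch_view.get(symbol, (0.0, 0))
            stream_value, stream_volume = self.stream_window[symbol]
            total_volume = batch_volume + stream_volume
            return (batch_value + stream_value) / total_volume if total_volume else None

In a deployment along these lines, the batch view would be rebuilt each cycle (e.g. by a Hadoop job) and swapped in via load_batch_view, while on_tick is driven by the live feed.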
Pattern 2: Data Lake

ATI suspects that sentiment data analyzed from a number of blog and social media feeds will be important to their trading strategy. However, they aren't sure which specific blogs and feeds will be immediately useful, and they may change the active set of feeds over time. In order to determine the active set, they will want to analyze the feeds' historical content. Not knowing which feeds might turn out to be useful, they have elected to ingest as many as they can find. They quickly realize that this mass ingest causes them difficulties in two areas:

1. Their production trading server is built with very robust (and therefore relatively expensive) hardware, and disk space is at a premium. It can handle those feeds that are being actively used, but all the speculative feeds consume copious amounts of storage space.
2. Each feed has its own semantics; most are semi-structured or unstructured, and all are different. Each requires a normalization process (e.g. an ETL workflow) before it can be brought into the structured storage on the trading server. These normalization processes are labor-intensive to build, and become a bottleneck to adding new feeds.

These challenges can be addressed using a Data Lake Pattern. In this pattern, all potentially useful data sources are brought into a landing area that is designed to be cost-effective for general storage. Technologies such as HDFS serve this purpose well. The landing area serves as a platform for initial exploration of the data, but notably does not incur the overhead of conditioning the data to fit the primary data warehouse or other analytics platform. This conditioning is conducted only after a data source has been identified as being of immediate use for the mainline analytics.

Diagram 4: Data Lake

Data Lakes provide a means for capturing and exploring potentially useful data without incurring the storage costs of transactional systems or the conditioning effort necessary to bring speculative sources into those transactional systems. Often all data may be brought into the Data Lake as an initial landing platform. However, this extra latency may result in potentially useful data becoming stale if it is time-sensitive, as with ATI's per-tick market data feed. In this situation, it makes sense to create a second pathway for this data directly into the streaming or transactional system. It is often a good practice to also retain that data in the Data Lake as a complete archive, and in case that data stream is removed from the transactional analysis in the future. Incorporating the Data Lake pattern into the ATI architecture results in the following:

Diagram 5: ATI Architecture with Data Lake
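As a concrete illustration of the landing area, the sketch below writes each raw feed payload to cheap, date-partitioned storage with no parsing or conditioning at all. This is a minimal sketch assuming an HDFS-style file system mounted at a local path; the root path and function name are illustrative:

    import os
    import time

    LAKE_ROOT = "/data/lake/landing"  # cheap bulk storage, e.g. an HDFS mount

    def land_raw_feed(feed_name, payload):
        # Persist the payload exactly as received: no parsing, no conditioning.
        # Partitioning by feed and ingest date keeps later exploration cheap.
        day = time.strftime("%Y/%m/%d")
        path = os.path.join(LAKE_ROOT, feed_name, day)
        os.makedirs(path, exist_ok=True)
        filename = os.path.join(path, "%d.raw" % int(time.time() * 1000))
        with open(filename, "wb") as f:
            f.write(payload)
        return filename

The corresponding ETL effort is deferred until a feed graduates into the mainline analysis.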
Pattern 3: Metadata Transform

By this time, ATI has a number of data feeds incorporated into their analysis, but these feeds carry different formats, structures, and semantics. Even discounting the modeling and analysis of unstructured blog data, there are differences between well-structured tick data feeds. For example, consider the following two feeds showing stock prices from NASDAQ and the Tokyo Stock Exchange [4]:

    Field                     | NASDAQ       | Tokyo Stock Exchange | Normalized?
    --------------------------|--------------|----------------------|------------------------
    Date                      | 01/11/2010   | 10/01/2008           |
    Time                      | 10:00:00.930 | 09:00:13.772 [5]     |
    Control Number            | N/A          | NULL                 | Field mismatch
    Price                     | 210.81       | 172.0                | Currency
    Volume                    | 100          | 7000                 |
    Exchange Code             | Q            | 11                   | Different encoding systems
    Sales                     | @F           | 0                    | Different encoding systems
    Volume Flag               | N/A          | NULL                 | Field mismatch
    Correction Indicator      | 00           | N/A                  | Field mismatch
    Sequence Number           | 155401       | N/A                  | Field mismatch
    Trade Stop Indicator      | NULL         | N/A                  | Field mismatch
    Source of Trade           | N            | N/A                  | Field mismatch
    Trade Reporting Facility  | NULL         | N/A                  | Field mismatch
    Exclude Record Flag       | NULL         | NULL                 | Encoding, usage (NASDAQ = exclusion flag; TSE = closing flag)
    Filtered Price            | NULL         | NULL                 | Currency

[4] These feed formats and samples are taken from the quote file format definitions published by Tick Data, Inc. as of 6/28/15. Note that even coming from the same organizational source, these feeds have significant representational differences.
[5] Only true for data after October 1; previously, millisecond data was not available, which may be considered a normalization difference.
The table above reveals a number of formatting and semantic conflicts that may affect data analysis. Further, consider that the ordering of these fields in each file is different:

    NASDAQ: 01/11/2010,10:00:00.930,210.81,100,Q,@F,00,155401,,N,,,
    TSE: 10/01/2008,09:00:13.772,,0,172.0,7000,,11,,

Typically, these normalization problems are solved with a fair amount of manual analysis of source and target formats, implemented via scripting languages or ETL platforms. This becomes one of the most labor-intensive (and therefore expensive and slow) steps within the data analysis lifecycle. Specific concerns include:

- Combination of knowledge needed: in order to perform this normalization, a developer must have or acquire, in addition to development skills, knowledge of the domain (e.g. trading data), specific knowledge of the source data format, and specific knowledge of the target data format.
- Fragility: any change (or intermittent errors or dirtiness!) in either the source or target data can break the normalization, requiring a complete rework.
- Redundancy: many sub-patterns are implemented repeatedly for each instance; this is low-value (re-implementing very similar logic) and duplicates the labor for each instance.
Intuitively, the planning and analysis for this sort of work is done at the metadata level (i.e. working with a schema and data definition) while frequently validating definitions against actual sample data. Identified conflicts in representation are then manually coded into the transformation (the T in an ETL process, or the bulk of most scripts).

Instead, the Metadata Transform Pattern proposes defining simple transformative building blocks. These blocks are defined in terms of metadata, for example: perform a currency conversion between USD and JPY. Each block definition has attached runtime code (a subroutine in the ETL/script), but at data integration time, the blocks are defined and manipulated solely within the metadata domain.

Diagram 6: Metadata Domain Transform

This approach allows a number of benefits at the cost of additional infrastructure complexity:

- Separation of expertise: developers can code the blocks without specific knowledge of source or target data systems, while data owners/stewards on both the source and target side can define their particular formats without considering transformation logic.
- Code generation: defining transformations in terms of abstract building blocks provides opportunities for code generation infrastructure that can automate the creation of complex transformation logic by assembling these pre-defined blocks.
- Robustness: these characteristics serve to increase the robustness of any transform. As long as the metadata definitions are kept current, transformations will also be maintained. The response time to changes in metadata definitions is greatly reduced.
- Documentation: this metadata mapping serves as intuitive documentation of the logical functionality of the underlying code.

Applying the Metadata Transform to the ATI architecture streamlines the normalization concerns between the market data feeds illustrated above, and additionally plays a significant role within the Data Lake. Given the extreme variety that is expected among Data Lake sources, normalization issues will arise whenever a new source is brought into the mainline analysis. Further, some preliminary normalization may be necessary simply to explore the Data Lake to identify currently useful data. Incorporating the Metadata Transform pattern into the ATI architecture results in the following:
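A minimal sketch of what such metadata-defined building blocks might look like follows. The registry, the block names, and the fixed conversion rate are illustrative assumptions, not part of the pattern itself; the point is that integration-time work happens entirely in the metadata mapping, while the runtime code for each block is written once:

    from datetime import datetime

    BLOCKS = {}

    def block(name):
        # Register a reusable transform block under a metadata-level name.
        def register(fn):
            BLOCKS[name] = fn
            return fn
        return register

    @block("date:mm/dd/yyyy->iso")
    def date_to_iso(value):
        return datetime.strptime(value, "%m/%d/%Y").date().isoformat()

    @block("currency:jpy->usd")
    def jpy_to_usd(value, rate=0.008):  # fixed rate for illustration only
        return round(float(value) * rate, 4)

    def build_transform(mapping):
        # Assemble a row transform purely from metadata: a list of
        # (source_field, target_field, block_name) triples.
        def transform(row):
            return {tgt: BLOCKS[name](row[src]) for src, tgt, name in mapping}
        return transform

    # Integration time: only metadata is manipulated, no new code is written.
    tse_mapping = [("Date", "trade_date", "date:mm/dd/yyyy->iso"),
                   ("Price", "price_usd", "currency:jpy->usd")]
    normalize_tse = build_transform(tse_mapping)
    print(normalize_tse({"Date": "10/01/2008", "Price": "172.0"}))

Because the mapping is plain data, it can also be generated, validated, and documented by tooling rather than read out of hand-written scripts.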
Pattern 4: Data Lineage

Not all of ATI's trades succeed as expected. These are carefully analyzed to determine whether the cause is simple bad luck, or an error in the strategy, the implementation of the strategy, or the data infrastructure. During this analysis process, not only will the strategy's logic be examined, but also its assumptions: the data fed into that strategy logic. This data may come directly (via the normalization/ETL process) from the source, or may be taken from intermediate computations. In both cases, it is essential to understand exactly where each input to the strategy logic came from: what data source supplied the raw inputs.

The Data Lineage pattern is an application of metadata to all data items to track any upstream source data that contributed to that data's current value. Every data field and every transformative system (including both normalization/ETL processes as well as any analysis systems that have produced an output) has a globally unique identifier associated with it as metadata. In addition, the data field will carry a list of its contributing data and systems. For example, consider the following diagram:
Note that the choice is left open whether each data item's metadata contains a complete system history back to the original source data, or whether it contains only its direct ancestors. In the latter case, storage and network overhead is reduced at the cost of additional complexity when a complete lineage needs to be computed.

This pattern may be implemented in a separate metadata documentation store to lessen the impact on the mainline data processing systems; however, this runs the risk of a divergence between documented metadata and actual data if extremely strict development processes are not adhered to. Alternately, a data structure that includes this metadata may be utilized at runtime in order to guarantee accurate lineage. For example, the following JSON structure contains this metadata while still retaining all original feed data [6]:

    {
        "data" : {
            "field1" : "value1",
            "field2" : "value2"
        },
        "metadata" : {
            "document_id" : "DEF456",
            "parent_document_id" : ["ABC123", "EFG789"]
        }
    }

In this JSON structure the decision has been made to track lineage at the document level, but the same principle may be applied on an individual field level. In the latter case, it is generally worth tracking both the document lineage and the specific field(s) that sourced the field in question. In the case of ATI, all systems that consume and produce data will be required to provide this metadata, and with no additional components or pathways, the logical architecture diagram will not need to be altered.

[6] A careful reader will note that fewer fields are represented here than in the original feed example given earlier in this paper. This reflects that the feed data has been transformed since its original ingest, and fields deemed unnecessary to the final analysis have been dropped.
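When only direct ancestors are stored, a complete lineage must be computed by walking parent links across the metadata store. A minimal sketch follows, assuming documents shaped like the JSON above and a fetch_document(doc_id) accessor; the accessor and the in-memory store are illustrative:

    def full_lineage(doc_id, fetch_document):
        # Walk parent_document_id links to collect every upstream ancestor.
        seen = set()
        stack = [doc_id]
        while stack:
            current = stack.pop()
            parents = fetch_document(current)["metadata"].get("parent_document_id", [])
            for parent in parents:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    # Example over an in-memory store:
    store = {
        "DEF456": {"metadata": {"parent_document_id": ["ABC123", "EFG789"]}},
        "EFG789": {"metadata": {"parent_document_id": ["ABC123"]}},
        "ABC123": {"metadata": {"parent_document_id": []}},
    }
    print(full_lineage("DEF456", store.__getitem__))  # {'ABC123', 'EFG789'}

The seen set also guards against cycles, which matters once internally generated data is fed back into ingest (see the Feedback pattern below).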
Pattern 5: Feedback

Frequently, data is not analyzed in one monolithic step. Intermediate views and results are necessary; in fact, the Lambda Pattern depends on this, and the Lineage Pattern is designed to add accountability and transparency to these intermediate data sets. While these could be discarded or treated as special cases, additional value can be obtained by feeding these data sets back into the ingest system (e.g. for storage in the Data Lake). This gives the overall architecture a symmetry that ensures equal treatment of internally generated data. Furthermore, these intermediate data sets become available to those doing discovery and exploration within the Data Lake, and may become valuable components to new analyses beyond their original intent. As higher-order intermediate data sets are introduced into the Data Lake, its role as a data marketplace is enhanced, increasing the value of that resource as well.

In addition to incremental storage and bandwidth costs, the Feedback Pattern increases the risk of increased data consanguinity, in which multiple, apparently different data fields are all derivatives of the same original data item [7]. Judicious application of the Lineage pattern may help to alleviate this risk.

ATI will capture some of their intermediate results in the Data Lake, creating a new pathway in their data architecture:

Diagram 10: ATI Architecture with Feedback

[7] As an aside, this is perhaps similar to the risk models applied to sub-prime mortgage derivatives. As new derivatives were created from aggregates of other derivatives, the consanguinity in which the seemingly uncorrelated risk/return models were actually all derived from the same underlying high-risk mortgages caused a divergence of perceived risk from actual risk.
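To show how little machinery the feedback pathway needs, the sketch below routes an internally generated view back through the same landing helper used in the Data Lake sketch earlier; the serialization format and the internal/ naming prefix are illustrative choices:

    import json

    def publish_intermediate(view_name, records, land_raw_feed):
        # Records are assumed to already carry their lineage metadata
        # (Pattern 4), so consanguinity stays traceable downstream.
        # The view is serialized and sent through the same ingest pathway
        # as any external feed, so it lands in the Data Lake alongside them.
        payload = json.dumps({"view": view_name, "records": records}).encode("utf-8")
        return land_raw_feed("internal/" + view_name, payload)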
Pattern 6: Cross Referencing

By this point, the ATI data architecture is fairly robust in terms of its internal data transformations and analyses. However, it is still dependent on the validity of the source data. While it is expected that validation rules will be implemented either as a part of ETL processes or as an additional step (e.g. via a commercial data quality solution), ATI has data from a large number of sources, and has an opportunity to leverage any conceptual overlaps in these data sources to validate the incoming data.

The same conceptual data may be available from multiple sources. For example, the opening price of SPY shares on 6/26/15 is likely to be available from numerous market data feeds, and should hold an identical value across all feeds (after normalization). If these values are ever detected to diverge, then that fact becomes a flag to indicate that there is a problem, either with one of the data sources or with the ingest and conditioning logic.

In order to take advantage of cross-referencing validation, the semantic concepts that will serve as common reference points must be identified. This may imply a metadata modeling approach such as a Master Data Management solution, but this is beyond the scope of this paper.

As with the Feedback Pattern, the Cross Referencing Pattern benefits from the inclusion of the Lineage Pattern. When relying on agreement between multiple data sources as to the value of a particular field, it is important that the values being cross-referenced are sourced (directly or indirectly) from independent sources that do not carry correlation created by internal modeling.

ATI will utilize a semantic dictionary as a part of the Metadata Transform Pattern described above. This dictionary, along with lineage data, will be utilized by a validation step introduced into the conditioning processes in the data architecture. Adding this cross-referencing validation reveals the final-state architecture:

Diagram 11: ATI Architecture Validation
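A minimal sketch of such a validation step follows, assuming normalized records keyed by a shared semantic concept (symbol, date, field); the record shape, source names, and sample values are illustrative, not taken from a real feed:

    from collections import defaultdict

    def cross_reference(records, tolerance=1e-9):
        # Group normalized values by semantic key, then flag any key whose
        # observed values diverge across sources beyond the tolerance.
        by_key = defaultdict(list)
        for rec in records:
            key = (rec["symbol"], rec["date"], rec["field"])
            by_key[key].append((rec["source"], rec["value"]))
        flags = []
        for key, observed in by_key.items():
            values = [v for _, v in observed]
            if max(values) - min(values) > tolerance:
                flags.append({"key": key, "observed": observed})
        return flags

    # Example: two feeds disagreeing on the same opening price.
    records = [
        {"source": "feedA", "symbol": "SPY", "date": "2015-06-26",
         "field": "open", "value": 210.81},
        {"source": "feedB", "symbol": "SPY", "date": "2015-06-26",
         "field": "open", "value": 211.05},
    ]
    print(cross_reference(records))  # divergence -> flag for investigation

A flag raised here does not say which source is wrong, only that the sources (or the conditioning logic) disagree; lineage data then guides the investigation.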
Conclusion

This paper has examined a number of patterns that can be applied to data architectures. These patterns should be viewed as templates for specific problem spaces of the overall data architecture, and can (and often should) be modified to fit the needs of specific projects. They do not require the use of any particular commercial or open source technologies, though some common choices may seem like apparent fits to many implementations of a specific pattern.

If you would like additional insight into designing a new data architecture, or assessing, expanding, and improving an existing architecture, please contact BigR.io to explore how we might be able to add lift to your initiative. Learn more at BigR.io.

Authored by: Gregory Harman, Managing Partner, BigR.io, LLC