SciDB: an open-source, array-oriented database management system




SciDB: an open-source, array-oriented database management system Philippe Cudré-Mauroux Massachusetts Institute of Technology / University of Fribourg, Switzerland & the SciDB Team October 29, 2010 CIS, Itaipava, Brazil

Outline
I. Context
- Exascale Data Deluge
- XLDB Workshops
- The SciDB Consortium
II. System Overview
- Data Model
- Storage Model
- Operators
- Optimal Chunking
- Array Versioning
- SS-DB: a Science DBMS benchmark

Exascale Data Deluge
- Science: biology, astronomy, remote sensing
- New data formats
- Peta- & exa-scale data sets
- Web companies: eBay, Yahoo
- Financial services, retail companies, governments, etc.
(Wired, 2009)

Obsolete Infrastructures
Data management infrastructures (DBMSs) were not prepared for this deluge:
- One application
- One data type
- One user
- One CPU
(IBM) Infrastructures collapse

Why is evolving DBMSs so hard?
Two fundamental problems to tackle:
- Obsolete physical model (impedance mismatch): n-ary storage, relational tuples, B/R-trees, SPJ queries; catastrophic performance in new application domains
- Impractical logical guarantees: ACID properties, Brewer's conjecture (the CAP theorem), inherent difficulty to scale out

New Era of Data Management
Users have moved away from DBMSs...
- File-centric solutions
- Procedural processing chains
...and have come back for:
- Data abstractions (physical & logical)
- Strong consistency guarantees

XLDB Workshops
Extremely Large Databases workshops:
- Practical issues related to extremely large databases (1 PB+)
- Focus on eScience: radio astronomy, geoscience, genomics, particle physics
- Invitation-only; organized since 2007 by Becla & Lim (SLAC)
- Database researchers invited from the start
Consensus formed around an array data model, and SciDB was born: an open-source, array-oriented database system (CIDR 2009 paper [S09]).

The SciDB Consortium
- Distributed R&D team
- Scientific Advisory Board
- Start-up (Zetics): Marilyn Matz (CEO) & Paul Brown (CTO)
- Beta version demoed at VLDB 2009 [CM09]
- First public release (V0.5) available now

II. System Overview

SciDB Philosophy
- Array-oriented
- Declarative
- Extensible
- Shared-nothing
- Open-source

High-Level Architecture
[Figure: architecture diagram showing a SciDB application layer (language-specific UI, runtime supervisor), a server layer (query interface & parser, plan generator), and a set of nodes, each running a storage layer with a local executor and storage manager]

Data Model
- Nested, multidimensional arrays as first-class citizens
- Basic data types
- User-defined types
[Figure: a 2-D array over Dimension 1 and Dimension 2, origin (0,0); each cell holds attributes a1 = 2, a2 = 3.2f, a3 = (5,2)]

Storage Model (1/3)
Vertical partitioning: each attribute of the array (a1, a2, a3, ...) is stored as a separate array over the same dimensions.
[Figure: the logical array with cells holding a1 = 2, a2 = 3.2f, a3 = (5,2) split into one per-attribute array each]

Storage Model (2/3)
Co-location of values:
- Adaptive chunking
- Dense packing
- Chunk co-location
- Compression
Two on-disk representations:
- Dense arrays: extremely compact; no positions stored, cells located by offsetting
- Sparse arrays: compactly store positions + values in array order
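The dense/sparse distinction above can be sketched in a few lines. This is an illustrative encoding choice in Python, not SciDB's actual on-disk format; the `None`-means-empty convention and the 50% density threshold are assumptions made for the example.

```python
def encode_chunk(cells, density_threshold=0.5):
    """Choose an on-disk encoding for one flattened chunk (a sketch).

    Dense: values only, in array order -- positions are implicit and
    recovered by offsetting.  Sparse: (position, value) pairs for the
    non-empty cells, still in array order.
    """
    nonempty = [(i, v) for i, v in enumerate(cells) if v is not None]
    density = len(nonempty) / len(cells)
    if density >= density_threshold:
        return ("dense", list(cells))   # compact: no positions stored
    return ("sparse", nonempty)         # positions + values, array order
```

A mostly-full chunk comes back tagged `"dense"` with values only, while a mostly-empty one comes back `"sparse"` with explicit positions, mirroring the two representations described on the slide.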

Storage Model (3/3)
Co-location of nested arrays: a nested array of rank M from a parent array of rank N is stored as an array of rank (M+N).
Redundancy:
- Array replication
- Chunking with overlap for parallelism (chunk 1 / overlap 1, chunk 2 / overlap 2)
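Chunking with overlap can be sketched in one dimension: each chunk carries a few extra cells from its neighbours so that window operations near a boundary can run locally. A minimal sketch, assuming a flat Python list stands in for one array dimension (SciDB applies this per dimension of an n-d array):

```python
def chunk_with_overlap(values, chunk_size, overlap):
    """Split a 1-D array into fixed-size chunks, each extended by up to
    `overlap` cells from its neighbours (a sketch of overlapping
    chunking).  Returns (chunk_start, cells) pairs.
    """
    chunks = []
    for start in range(0, len(values), chunk_size):
        lo = max(0, start - overlap)                       # borrow from left
        hi = min(len(values), start + chunk_size + overlap)  # and from right
        chunks.append((start, values[lo:hi]))
    return chunks
```

With `chunk_size=4` and `overlap=1`, the middle chunk of `range(10)` holds cells 3..8: its own cells 4..7 plus one borrowed cell on each side.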

Operators (1/2)
A declarative system supporting an array-oriented query language (AQL). Three types of operators:
- Structural (subsample, add dimension, concatenate, etc.)
- Content-dependent (filter, select, etc.)
- Both structural & content-dependent (joins)

Operators (2/2)
Scatter/gather: a generic operator to distribute & aggregate array chunks & values in distributed settings.

The Array Query Language (AQL)

CREATE ARRAY Test_Array
  < A: integer NULLS,
    B: double,
    C: USER_DEFINED_TYPE >
  [ I=0:99999,1000,10, J=0:99999,1000,10 ]
  PARTITION OVER ( Node1, Node2, Node3 )
  USING block_cyclic();

Here A, B, C are the attribute names; I, J are the index names; 1000 is the chunk size; 10 is the overlap.

The Array Query Language (AQL)

SELECT Geo-Mean( T.B )
FROM Test_Array T
WHERE T.I BETWEEN :C1 AND :C2
  AND T.J BETWEEN :C3 AND :C4
  AND T.A = 10
GROUP BY T.I;

Geo-Mean is a user-defined aggregate on attribute B of array T; the predicates on I and J subsample, the predicate on A filters, and the result is grouped by I. So far as SELECT / FROM / WHERE / GROUP BY queries are concerned, there is little logical difference between AQL and SQL.

User-Defined Functions (UDFs)
Extensibility through Postgres-style UDFs, which encode all domain-specific operations and are coded in C++.
- Logical definition: 1-N arrays in, 1-N arrays out
- Physical implementation: chunk granularity; a UDF receives chunks from the local query executor, creates cell iterators, consumes values, and returns chunks to the local executor
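The chunk-granularity contract above can be sketched abstractly. Real SciDB UDFs are C++ and operate through cell iterators; this Python sketch only illustrates the flow of chunks through a per-cell function, with chunks represented as lists of hypothetical (position, value) pairs:

```python
def apply_udf(chunks, cell_fn):
    """Apply a per-cell function chunk by chunk (a sketch of the UDF
    execution contract: receive a chunk from the local executor,
    iterate its cells, emit a transformed chunk back).
    """
    for chunk in chunks:                 # chunk = list of (position, value)
        yield [(pos, cell_fn(val)) for pos, val in chunk]
```

Because the function is a generator, chunks stream through one at a time, which is the point of the chunk-granularity design: the whole array never has to be materialized in memory.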

Parallel UDFs
[Figure: a Regrid 2::1 UDF running in parallel on Worker 1 and Worker 2, each consuming chunks from an input pipeline and emitting coarsened chunks to an output pipeline]
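As a concrete idea of what a "Regrid 2::1" operator does, here is a minimal sketch that coarsens a stream of cells by a factor of two, averaging adjacent pairs. The (position, value) representation and the averaging rule are assumptions for illustration; the real operator runs chunk by chunk inside each worker:

```python
def regrid_2_to_1(cells):
    """Coarsen (position, value) cells by a factor of two along one
    dimension, averaging each adjacent pair (sketch of a 2::1 regrid).
    A trailing unpaired cell is dropped in this simplified version.
    """
    out = []
    ordered = sorted(cells)                      # array order by position
    for i in range(0, len(ordered) - 1, 2):
        (p1, v1), (_, v2) = ordered[i], ordered[i + 1]
        out.append((p1 // 2, (v1 + v2) / 2))     # new cell on the coarse grid
    return out
```

Four fine-grid cells collapse into two coarse-grid cells, halving the resolution of the dimension, which is exactly the shape of the pipeline in the figure.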

Demo: preview of V0.75 on a 4-node installation

Optimal Chunking
How to chunk and index very large arrays? Chunk sizes & index size directly impact all array operations (most operations are highly I/O-bound). A multifaceted problem:
- Chunks must be relatively large to amortize disk seeks, but not too large to fit in main memory and to avoid reading unnecessary data
- Data can be highly skewed
- Queries can be highly skewed
- The index must be compact and easily updatable
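The seek-versus-wasted-read tension above can be made concrete with a toy cost model. The cost constants and candidate sizes below are illustrative assumptions, not SciDB's actual optimizer:

```python
import math

def chunk_cost(chunk_cells, query_cells, seek_ms=10.0, read_ms_per_cell=0.001):
    """Toy I/O cost of a query touching `query_cells` cells when the
    array is stored in chunks of `chunk_cells` cells.  Small chunks ->
    many seeks; large chunks -> many unnecessary cells read.
    """
    chunks_touched = math.ceil(query_cells / chunk_cells)
    return chunks_touched * (seek_ms + chunk_cells * read_ms_per_cell)

def best_chunk_size(query_cells, candidates, memory_limit_cells):
    """Cheapest candidate chunk size that still fits in main memory."""
    fits = [c for c in candidates if c <= memory_limit_cells]
    return min(fits, key=lambda c: chunk_cost(c, query_cells))
```

For a 50,000-cell query, 1,000-cell chunks pay for 50 seeks, while 100,000-cell chunks read twice the needed data; an intermediate size wins, which is the balance the slide describes.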

Spatial Partitioning (1/2)
Cost-based, optimal spatial partitioning; efficient, hierarchical partitioning.

Spatial Partitioning (2/2)
Basic idea:
- A cost model for query execution times based on the number of cells accessed
- Optimal quadtree construction based on the cost model, query workload, local density & page size: cellsize_opt(Q, D, pagesize) [CM10a] (TrajStore)
Optimal balance between:
- Oversized cells: potentially retrieve data that is not queried
- Undersized cells: seeks are not amortized if too little data is read; unnecessary seeks when the data is dense and the query relatively large
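The hierarchical construction can be sketched with a density-driven quadtree split: keep dividing a region into four quadrants until each leaf holds few enough points. This sketch splits on point count only; TrajStore's actual construction additionally folds in the query workload and page size via its cost model:

```python
def build_quadtree(points, bbox, capacity):
    """Recursively split a 2-D region (x0, y0, x1, y1) until each leaf
    holds at most `capacity` points -- a sketch of density-driven
    hierarchical partitioning.
    """
    x0, y0, x1, y1 = bbox
    if len(points) <= capacity:
        return {"bbox": bbox, "points": points}          # leaf cell
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2                # split point
    quads = [(x0, y0, mx, my), (mx, y0, x1, my),
             (x0, my, mx, y1), (mx, my, x1, y1)]
    children = []
    for qx0, qy0, qx1, qy1 in quads:
        inside = [(x, y) for x, y in points
                  if qx0 <= x < qx1 and qy0 <= y < qy1]  # half-open quadrant
        children.append(build_quadtree(inside, (qx0, qy0, qx1, qy1), capacity))
    return {"bbox": bbox, "children": children}
```

Skewed data produces deep subtrees only where the density is high, giving small cells in dense regions and large cells in sparse ones.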

Versioning
N-dimensional array objects with frequent insertions of new versions (new observations, new simulations) and arbitrary queries over the version history.
[Figure: queries Q1 and Q2 spanning versions T1, T2, T3]

Problems w/ Versioning Systems
Current versioning systems (e.g., SVN, Git) are:
- Slow for large / numerous binary files
- Catastrophic for spatial queries over series of versions: no spatial chunking, no co-location of versions
[Figure: versions T1, T2, T3 stored without co-location]

AVS Algorithms
Given a workload of frequent / typical queries:
- Optimal co-location of (N+1)-dimensional chunks on disk
- Optimal materialization / Δ-compression of versions, minimizing storage space and query execution time
Orders of magnitude faster than SVN / Git on scientific workloads.
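The materialization / Δ-compression idea can be sketched simply: store some versions in full and the rest as the set of cells that changed relative to a neighbour. This is a minimal illustration over flattened arrays; AVS's actual contribution is choosing which versions to materialize and how to co-locate the chunks:

```python
def make_delta(base, new):
    """Delta between two versions of a flattened array: only the cells
    whose value changed, keyed by cell position (a sketch of version
    delta-compression)."""
    return {i: v for i, (b, v) in enumerate(zip(base, new)) if b != v}

def apply_delta(base, delta):
    """Reconstruct a version from a materialized base plus its delta."""
    out = list(base)
    for i, v in delta.items():
        out[i] = v
    return out
```

When consecutive versions differ in only a few cells, as with incremental scientific observations, the delta is far smaller than a full copy, and reconstruction is a single pass over the changed positions.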

The SS-DB Benchmark
Three main ways of building a scientific DBMS:
- Array simulation layer on top of a relational DBMS (MonetDB)
- Array executor on top of blobs (Rasdaman)
- Native array system (SciDB)
SS-DB [CM10b]:
- Suggests a realistic workload for eScience
- Queries on raw data, derived data, and ensemble data
- Aims to understand the performance differences between the models

Contents
- Receive raw imagery data + metadata (2-D images)
- Load the dataset
- Cook the data to find observations (stars)
- Group observations: follow stars over space & time
- Execute a few queries on raw data as well as derived data

Query Workload
- Q1: Aggregation
- Q2: Recooking
- Q3: Regridding
- Q4: Aggregation
- Q5: Polygons
- Q6: Density
- Q7: Group centroids
- Q8 & Q9: Trajectories
Imagery queries include extracting a user-specified part of an image and detecting observations that match user-specified criteria.

Current Results

SciDB Roadmap
R0.5 available: very early-stage work in progress
- Basic functionality
- iquery interpreter
- Internal query language (AIL)
- Small set of operators: create, aggregate, join, filter, subsample
- Minimal wiki-style documentation
- Breathes, crawls, hiccups
R0.75 targeted for end of year
- AQL, the SQL-like array query language
- Error handling
- Scalable math operations
- Better documentation & stability
R1.0 beta in April 2011
- More functionally complete (UDFs, uncertainty, provenance, etc.)
- Robust, high performance

Research Agenda for XI
Array versioning:
- Tests on very large data sets
- Forks
- Distributed versioning
Data lineage (semantic information):
- Arbitrary information stored in a triple store
- Integration / performance?
- Automatic creation of workflow metadata
- Arbitrary AQL queries using the metadata

References
SciDB website: http://scidb.org/
SS-DB paper: http://people.csail.mit.edu/pcm/ssdb.pdf
[S09] Requirements for Science Data Bases and SciDB. Stonebraker, Becla, DeWitt, Lim, Maier, Ratzesberger, Zdonik. CIDR, 2009.
[CM09] A Demonstration of SciDB: A Science-Oriented DBMS. Cudré-Mauroux, Kimura, Lim, Rogers, Simakov, Soroush, Velikhov, Wang, Balazinska, Becla, DeWitt, Heath, Maier, Madden, Stonebraker, Zdonik. PVLDB, 2009.
[CM10a] TrajStore: An Adaptive Storage System for Very Large Trajectory Data Sets. Cudré-Mauroux, Wu, Madden. ICDE, 2010.
[CM10b] SS-DB: A Standard Science DBMS Benchmark. Cudré-Mauroux, Kimura, Lim, Rogers, Madden, Stonebraker, Zdonik. Submitted for publication.