Cloud DBMS: An Overview
Shan-Hung Wu, NetDB CS, NTHU, Spring 2015

Outline: Definition and requirements; S through partitioning; A through replication; Problems of traditional DDBMS; Usage analysis: operational vs. analytic workloads; The end of the one-size-fits-all era.

Definition: A cloud DBMS is a DBMS designed to run in the cloud; the machines can be either physical or virtual. In particular, some manage the data of a tremendous number of applications (called tenants) and are therefore also known as multi-tenant DBMSs. Is MySQL a cloud database? "I can run MySQL in an Amazon EC2 VM instance." No.

What's the Difference? Ideally, in addition to all features provided by a traditional database, a cloud database should ensure SAE: high Scalability, i.e., high maximum throughput (measured in transactions/queries per second or minute), achieved horizontally using commodity machines; high Availability, i.e., staying on all the time despite machine/network/datacenter failures; and Elasticity, i.e., adding or shutting down machines and re-distributing data on the fly based on the current workload.

What do we have now?


Why So Complicated? Full DB functions + SAE is a goal no one can currently achieve. Even with Oracle 11g on a SPARC SuperCluster (30,249,688 TPC-C transactions per minute, at a cost of $30+ million USD), you lose elasticity!

Wait, what do you mean by "full DB functions + SAE"?

DB Functions: An expressive relational data model (almost all kinds of data can be structured as a collection of tables); complex but fast queries (across multiple tables, grouping, nesting, etc., with query plan optimization, indexing, etc.); and transactions with ACID (your data are always correct and durable).

List of Desired Features

Feature          | Why?
Relational model | Models (almost) all data, flexible queries
Short latency    | 100 ms more response time = significant loss of users (Google)
Tx and ACID      | Keeps data correct and durable
Scalability      | You are (or will be) big
Availability     | No availability, no service!
Elasticity       | Save $

Outline: Definition and requirements; S through partitioning; A through replication; Problems of traditional DDBMS; Usage analysis: operational vs. analytic workloads; The end of the one-size-fits-all era.

Scalability (OLTP Workloads): Max tx throughput ∝ 1 / (contention footprint × tx conflict rate). A tx holding locks for a long time blocks all conflicting txs and reduces throughput, and the contention footprint increases when accessing hot tables (due to the I/O bottleneck). How do we improve the throughput?
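
To make the proportionality concrete, here is a back-of-the-envelope sketch in Python; the constant k and the sample numbers are invented for illustration only.

```python
# Illustrative only: max tx throughput is taken to be inversely proportional to
# (contention footprint x conflict rate); k is an arbitrary proportionality constant.
def max_tx_throughput(contention_footprint_ms: float, conflict_rate: float, k: float = 1000.0) -> float:
    return k / (contention_footprint_ms * conflict_rate)

base = max_tx_throughput(contention_footprint_ms=20.0, conflict_rate=0.5)
shorter_locks = max_tx_throughput(contention_footprint_ms=10.0, conflict_rate=0.5)
print(base, shorter_locks)  # halving the time locks are held doubles the attainable throughput
```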

Scalability and Partitioning: Partition your hot tables, either horizontally or vertically, and distribute the read/write load to different servers. (Figure: a "too hot" table with fields f1 and f2 split into Partition 1 and Partition 2.)
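
As a rough illustration of horizontal partitioning, here is a minimal Python sketch that routes each record's key to one of a fixed set of partition servers by hashing; the names partition_servers and route are assumptions for this example, not part of any real system.

```python
# A hash-based router: each record's primary key determines which server
# owns its horizontal partition, spreading a hot table's load across nodes.
import hashlib

partition_servers = ["node1:5432", "node2:5432"]

def route(primary_key: str) -> str:
    digest = hashlib.md5(primary_key.encode()).hexdigest()
    return partition_servers[int(digest, 16) % len(partition_servers)]

# Reads/writes for different users may land on different servers.
print(route("user:596"), route("user:1"))
```

A real system would use consistent hashing or range partitioning so that adding a node (elasticity) does not reshuffle every record.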

Complications in Distributed DBs: Records are spread among partitions on different servers. But how? (Figure: before partitioning, Node 1 holds users 1~1000 and serves T1: write(user 1), T2: write(user 596), T3: write(user 596, user 4); after partitioning, Node 1 holds users 1~500 and Node 2 holds users 501~1000, so T3 spans both nodes. ACID?)

Complications in Distributed DBs: Records are spread among partitions on different servers, so we need a distributed metadata manager, a distributed query processor (what are the best global plan and its local plans?), and distributed transactions (how do we ensure ACID for a global transaction T and its local transactions {Ti}?).

Isolation Revisited: Requires a distributed concurrency-control (CC) manager. For 2PL: either a dedicated lock server, or a primary server for each lock object. For timestamp-based and optimistic CC: the problem is how to generate globally unique timestamps, e.g., local_counter@server_id; to prevent one server from counting faster than the others, each server advances its own counter upon receiving a timestamp from others.
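
The timestamp scheme above is essentially a Lamport clock. Here is a minimal sketch, assuming timestamps of the form (counter, server_id); the class and method names are illustrative.

```python
class TimestampGenerator:
    def __init__(self, server_id: int):
        self.server_id = server_id
        self.counter = 0

    def next_timestamp(self):
        # Globally unique: ordered first by counter, ties broken by server id.
        self.counter += 1
        return (self.counter, self.server_id)

    def observe(self, remote_timestamp):
        # On receiving a timestamp from another server, catch up so that
        # no server's counter lags (and none effectively "counts faster").
        remote_counter, _ = remote_timestamp
        self.counter = max(self.counter, remote_counter)

# Server 2 sees a timestamp from server 1 and never issues a smaller one afterwards.
s1, s2 = TimestampGenerator(1), TimestampGenerator(2)
t = s1.next_timestamp()
s2.observe(t)
assert s2.next_timestamp() > t
```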

Atomicity Revisited: Committing T means committing all of its local transactions; if any local transaction rolls back, then T should roll back. When will this happen? On an ACID violation (e.g., in OCC) or a node failure (detected by some other node, such as a replica).

One-Phase Commit: (Figure: the coordinator simply sends "Commit T" to local transactions T1, T2, T3.) If T2 rolls back (due to an ACID violation or a failure), then T is partially executed, violating atomicity; the effects of T1 and T3 cannot be erased, due to durability.

Two-Phase Commit: (Figure: in phase 1 the coordinator asks T1, T2, T3 "Commit T?"; in phase 2 it tells them to commit, or to "Rollback T" if any participant says no.) Drawback: the long delay means T blocks all conflicting txs and reduces throughput. Partitioning helps only when the communication overhead (of 2PC) is smaller than the overhead of slow I/Os.
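
The coordinator's decision logic can be sketched as follows; the Participant protocol (prepare/commit/rollback) and the function names are assumptions for this example, and failure handling is heavily simplified.

```python
from typing import List, Protocol

class Participant(Protocol):
    def prepare(self, tx_id: str) -> bool: ...   # phase 1: vote on "Commit T?"
    def commit(self, tx_id: str) -> None: ...    # phase 2: make the local tx durable
    def rollback(self, tx_id: str) -> None: ...  # phase 2: undo the local tx

def two_phase_commit(tx_id: str, participants: List[Participant]) -> bool:
    # Phase 1: every participant must vote yes before anything becomes durable.
    votes = [p.prepare(tx_id) for p in participants]
    if all(votes):
        # Phase 2: commit everywhere; T is atomic across partitions.
        for p in participants:
            p.commit(tx_id)
        return True
    # Any "no" vote (ACID violation or failure): roll T back everywhere.
    for p in participants:
        p.rollback(tx_id)
    return False
```

Both phases require a round trip to every participant, which is where the long delay and blocking come from.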

Outline: Definition and requirements; S through partitioning; A through replication; Problems of traditional DDBMS; Usage analysis: operational vs. analytic workloads; The end of the one-size-fits-all era.

Availability: Replicate all tables across servers, so that if the servers in one region fail, we have spare replicas. Ideally, replicate across geographically separated regions to deal with disasters. (Figure: Partition 1 and Partition 2, each with fields f1 and f2, replicated in both LA and Taiwan.)

Consistency Revisited: Consistency means txs do not violate the rules you set. In distributed environments, consistency also means all replicas remain the same after executing a tx, so changes made by a tx on one replica need to be propagated to the other replicas. A tx reads locally but writes all replicas (R1WA); a side benefit is that a read-only tx can run on any replica. When is propagation done? Eager vs. lazy. By whom? Master/slave vs. multi-master.

Eager Replication: Each write operation must complete on all replicas before a tx commits, so 2PC is required, and a failure on any replica causes the tx to roll back. Slow txs, but strong consistency. (Figure: a rw-tx R(A) W(A) W(B) Commit runs under 2PL against the LA replicas and commits via synchronous 2PC to the Taiwan replicas; a read-only tx R(A) R(B) R(C) reads the Taiwan replicas.)
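
A minimal sketch of an eager write under read-one-write-all: the write must reach every replica synchronously before the tx may commit, and a failure on any replica aborts it. The in-memory Replica class below is a stand-in invented for this example.

```python
class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.up = True

    def apply_write(self, key, value):
        if not self.up:
            raise RuntimeError(f"replica {self.name} unavailable")
        self.data[key] = value

def eager_write(replicas, key, value):
    # The tx may commit only if the write reaches every replica (strong consistency).
    applied = []
    for r in replicas:
        try:
            r.apply_write(key, value)      # synchronous: wait for each replica's ack
            applied.append(r)
        except RuntimeError:
            for done in applied:           # failure anywhere: undo and abort the tx
                done.data.pop(key, None)
            raise

la, taiwan = Replica("LA"), Replica("Taiwan")
eager_write([la, taiwan], "A", "1")        # afterwards, both replicas agree on A
```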

Lazy Replication: Writes complete locally but are propagated to the remote replicas in the background (with a lag), usually by shipping a batch of logs. Fast txs, but only eventual consistency. (Figure: Tx1 R(A) W(A) W(B) Commit completes synchronously against the LA replicas and its logs are shipped asynchronously to the Taiwan replicas, where a read-only tx R(A) R(B) R(C) may see stale data.)
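
By contrast, here is a minimal sketch of lazy replication via log shipping: the primary commits as soon as the local write is done, and a background step ships its accumulated log to the replica later. All names are invented for this example.

```python
class Node:
    def __init__(self):
        self.data = {}
        self.log = []

    def local_write(self, key, value):
        self.data[key] = value          # the tx commits after the local write alone
        self.log.append((key, value))   # the change is queued for later shipping

def ship_logs(primary: Node, replica: Node):
    # Background task: apply the primary's pending log entries to the replica.
    for key, value in primary.log:
        replica.data[key] = value
    primary.log.clear()

la, taiwan = Node(), Node()
la.local_write("A", "2")
print(taiwan.data.get("A"))   # None: the replica lags behind the primary
ship_logs(la, taiwan)
print(taiwan.data.get("A"))   # "2": eventually consistent
```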

Who Writes? Master/slave replication: writes of a record are routed to one specific replica (called the master), reads to the others (slave replicas). Multi-master replication: writes of a record can be routed to any replica, i.e., two writes of the same record may be handled by different replicas.

The Score Sheet

                          | Eager MM   | Lazy M/S  | Lazy MM
Consistency               | Strong     | Eventual  | Eventual
Latency                   | High       | Low       | Low
Throughput                | Low        | High      | High
Availability upon failure | Read/write | Read-only | Read/write
Data loss upon failure    | None       | Some      | Some
Reconciliation            | No need    | No need   | User or rules
Bottleneck, SPF           | None       | Master    | None

Eager M/S and eager MM make little difference under 2PL + 2PC. Lazy MM needs reconciliation of conflicting writes, either by the user or by rules, e.g., last-write-wins.
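
For the lazy multi-master case, last-write-wins reconciliation can be sketched as follows, assuming each version of a record carries a timestamp; the (value, timestamp) layout is an assumption for this example.

```python
def last_write_wins(local_version, remote_version):
    # Each version is (value, timestamp); the later write overrides the earlier one.
    return local_version if local_version[1] >= remote_version[1] else remote_version

# Two masters accepted conflicting writes to the same record:
on_master_1 = ("alice@old-mail.example", 1001)
on_master_2 = ("alice@new-mail.example", 1007)
print(last_write_wins(on_master_1, on_master_2))   # the write stamped 1007 wins
```

The rule is simple but silently discards one of the two writes, which is why user-defined reconciliation is sometimes preferred.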

Outline: Definition and requirements; S through partitioning; A through replication; Problems of traditional DDBMS; Usage analysis: operational vs. analytic workloads; The end of the one-size-fits-all era.

A Real-World Distributed DBMS (DDBMS): Lazy M/S rules the conventional DDBMS! (Figure: in each datacenter, web servers send requests to dispatchers; a dispatcher routes read-only txs synchronously to DB slaves within the same datacenter, and Create/Update/Delete (rw-tx) requests synchronously to the DB masters, each with a hot-standby node monitoring its heartbeat. Data are partitioned across masters for write throughput, and partitioned across datacenters for short delay (locality for single-datacenter queries); each master replicates asynchronously to its slaves for read throughput (scalability), and asynchronously to the other datacenter for availability. A read-only tx can be placed anywhere in a serial execution of the read-write txs.)
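
The dispatcher's routing rule in this lazy M/S architecture can be sketched as below: read-write txs go to the partition's master, read-only txs go to a slave unless the client insists on a consistent read through the master. The class and parameter names are assumptions for this example.

```python
import itertools

class PartitionDispatcher:
    def __init__(self, master: str, slaves: list):
        self.master = master
        self._slave_cycle = itertools.cycle(slaves)   # spread read load round-robin

    def route(self, is_read_only: bool, consistent: bool = False) -> str:
        # Create/Update/Delete (rw-tx) must go to the single master.
        if not is_read_only:
            return self.master
        # Read-only txs normally go to a slave: fast, but possibly stale;
        # a client may insist on the master to get a consistent snapshot.
        return self.master if consistent else next(self._slave_cycle)

d = PartitionDispatcher("db-master-1", ["db-slave-1", "db-slave-2", "db-slave-3"])
print(d.route(is_read_only=True))                    # a slave: cheap, eventually consistent
print(d.route(is_read_only=False))                   # the master: the only writer
print(d.route(is_read_only=True, consistent=True))   # the master: strong read, but more load on it
```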

What's the problem?

S Only with High-End Machines: Data partitioning is too costly, with high latency due to 2PC (even within a datacenter), and every rw-tx is routed to the masters, a potential performance bottleneck. The solution: careful partitioning + super-powerful masters, which reduces distributed txs (especially those across regions) as much as possible and reduces the number of masters. But scaling up (rather than out) means an exponential increase in $.

Without Failure: Trade-Off between C and Latency. Lazy M/S replication trades C for short latency: a read-only tx is routed to the slaves and may read an inconsistent snapshot of data if it spans partitions. To support full C, clients are allowed to require that a read-only tx go through the masters, but then you need more powerful masters (and more $).

With Failure: Trade-Off between C and Av, Plus No D. Machine failure: each master has a few hot-standby servers, placed nearby to reduce overhead. Datacenter failure: due to power shortages, disasters, etc.; the trade-off is to stay read-only during failover (no Av) or to allow inconsistency for read-write txs (no C). Either way there is no D: lazy M/S causes data loss.

How the conventional DDBMS scores (F = upon failure):

                 | DDBMS
Data model       | Relational
Tx boundary      | Unlimited
Consistency      | W strong, R eventual
Latency          | Low (local partition only)
Throughput       | High (scale-up)
Bottleneck/SPF   | Masters
Consistency (F)  | W strong, R eventual
Availability (F) | Read-only
Data loss (F)    | Some
Reconciliation   | Not needed
Elasticity       | No

DDBMS vendors: "If you are willing to pay a lot and sacrifice C and D, I can offer S and limited Av."

Outline: Definition and requirements; S through partitioning; A through replication; Problems of traditional DDBMS; Usage analysis: operational vs. analytic workloads; The end of the one-size-fits-all era.

Do you really need full DB functions + SAE?

Feature          | Why?
Relational model | Models (almost) all data, flexible queries
Short latency    | 100 ms more response time = significant loss of users (Google)
Tx and ACID      | Keeps data correct and durable
Scalability      | You are (or will be) big
Availability     | No availability, no service!
Elasticity       | Save $

DB Workloads (1/3): Operational workloads, for daily business operations, a.k.a. OLTP (On-Line Transaction Processing); and analytic workloads, for data analysis and decision support, either online (OLAP) or offline.

DB Workloads (2/3): (Figure: a chart plotting workloads by operation complexity, from simple to complex, against access pattern, from write-focused to read-focused; operational workloads sit toward simple operations and write focus, analytic workloads toward complex operations and read focus.)

DB Workloads (3/3)

                 | Operational                          | Analytic
Data purpose     | Latest snapshot                      | Historical, or consolidated from multiple operational sources
Data volume      | < a few tens of TBs                  | Hundreds of TBs to hundreds of PBs, or more
R/W ratio        | 60/40 to 98/2                        | Mostly reads
Query complexity | Short reads, or short reads + writes | Complex reads, batch writes
Delay tolerance  | < a few seconds                      | Minutes, hours, days, or more
Tx + ACID        | Required                             | Does not really matter

Outline: Definition and requirements; S through partitioning; A through replication; Problems of traditional DDBMS; Usage analysis: operational vs. analytic workloads; The end of the one-size-fits-all era.

Old Players. OLTP players: Oracle (incl. MySQL), IBM DB2, PostgreSQL, MS SQL Server, etc. OLAP players: IBM InfoSphere, Teradata, Vertica, Oracle, etc., based on star schemas. All from the era of one-size-fits-all.

Today's Challenges: SAE is a must, and E also means automation. Unhappy OLTP users: "I don't want to pay a lot"; data may not partition well, even with a few masters; consistency is desirable in many cases. Unhappy analytic users: "My data is at web scale"; data are ill-formatted and heterogeneous; schemas change over time.
