OLTP Meets Bigdata, Challenges, Options, and Future
Saibabu Devabhaktuni




Agenda
- Database trends for the past 10 years
- Era of Big Data and Cloud
- Challenges and options
- Upcoming database trends
- Q&A

Scope
- OLTP
- Very Large Databases (VLDB)
- Databases running custom applications at web scale
- Oracle Database focused

Database trends for the past 10 years
- Database sizes are now around 1 to 5 TB
- SGA memory sizes are typically 10 GB to 50 GB
- Redo generation is reaching rates of 1 MB to 5 MB per second
- Executions per second trending from 5K to 20K
- DML insert rates per second from 100 to 500

Growth pattern of key DB load profile metrics
- Database size doubling every 2 years
- System memory doubling every 2 to 3 years
- Faster and more cores per processor every year
- Hard disk and flash capacity doubling every 2 years
- Executions per second doubling every 2 years

Era of Big Data and Cloud
- The velocity, volume, and variety of data led to Big Data and distributed database systems (NoSQL and NewSQL)
- Cloud properties such as linear scaling on commodity hardware and 99.99%+ availability with self-service capabilities drove NoSQL and NewSQL innovation
- Simplified, append-only schemas tightly coupled with the application
- Security, data management, and governance also tightly coupled with the application

Challenges - Data size
- Data lifecycle management (DLM) treated as a best-effort option
- Consolidated application use cases need longer data retention, which leads to less accountability
- Trade-off between performance and DLM when defining the table partitioning strategy
- Archiving is not transparent to the application
- Poor compression ratio for row-format data
- Poor DML scalability for column-format compressed data

Options - Data size
- Define and enforce DLM policies
- Isolate use cases for the same data; rely on event-based processing
- Implement batched delete jobs when partitioning is not an option (see the sketch below)
- Use INSTEAD OF trigger-based views and DB links for transparent archiving of selected tables
- With all DDL commands being online, reorganize tables and indexes periodically for better compression
- Transparent compression is also available at the storage layer
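As a rough illustration of the batched delete approach mentioned above, the PL/SQL sketch below purges rows older than a retention window in small chunks with intermediate commits. The table name (order_history), column name (created_at), retention window, and batch size are all hypothetical and would need to match the actual schema and workload.

    -- Minimal sketch of a batched purge job (names and values are illustrative)
    BEGIN
      LOOP
        DELETE FROM order_history                        -- hypothetical table
         WHERE created_at < ADD_MONTHS(SYSDATE, -36)     -- hypothetical 3-year retention
           AND ROWNUM <= 10000;                          -- small batches limit undo/redo per pass
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;                                          -- release locks and undo between batches
      END LOOP;
      COMMIT;
    END;
    /

An index on the retention column (or running the job off-peak) keeps each pass cheap, and the same loop can be wrapped in a DBMS_SCHEDULER job.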

Challenges - Memory
- Most data is not in memory; the memory-to-disk access ratio is over 1,000
- Longer cache warm-up time after an instance restart
- A bigger SGA means longer startup time
- Multiple copies of the same data blocks in the buffer cache lower the hit ratio
- Any full memory scan operation takes longer, e.g. datafile offline, tablespace read only, and table read only operations
- Database system level diagnostic operations take longer, e.g. a system state dump is almost impossible to get on a large-memory, busy database

Options - Memory
- Performance tuning and an effective data model are still important
- Cache at the storage layer to compensate for DB instance cache warm-up
- Avoid transaction commit delays to reduce multiple copies of data blocks
- Test and plan effectively for datafile- and tablespace-level operations
- Limit dump file sizes and use faster diagnostic operations such as short_stack (see the sketch below)
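As a small, hedged illustration of the last two bullets, the commands below cap trace/dump file size and take a lightweight stack sample of a single server process instead of a full system state dump. The size limit and OS process id are placeholders.

    -- Cap the size of trace/dump files (value is illustrative)
    ALTER SYSTEM SET max_dump_file_size = '512M';

    -- Lightweight stack sample of one process via oradebug
    -- (run as SYSDBA in SQL*Plus; the OS pid below is a placeholder)
    ORADEBUG SETOSPID 12345
    ORADEBUG SHORT_STACK

Sampling one suspect process with short_stack is far cheaper than dumping the state of every process on a large, busy instance.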

Challenges - Redo
- A higher redo rate means a higher chance of intermittent slowdowns
- A higher redo rate means longer crash recovery times
- Full supplemental logging for GoldenGate generates more redo
- Online object reorganization generates more redo, interfering with OLTP transactions
- Limited scalability of logical replication
- NOLOGGING operations require significant manual intervention

Options - Redo
- Optimize DMLs and indexes to reduce redo generation
- Define recovery target parameters for predictable crash recovery times (see the sketch below)
- Limit full supplemental logging to the tables that need it
- Determine the redo impact before online reorganization of large objects
- Test and determine the latency impact on logical replication before bulk DML/DDL operations
- Isolate and limit NOLOGGING operations
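A minimal sketch of the recovery-target and supplemental-logging points, assuming an illustrative 60-second MTTR target and a hypothetical table name:

    -- Bound crash recovery work to roughly 60 seconds (value is illustrative)
    ALTER SYSTEM SET fast_start_mttr_target = 60;

    -- Enable key-level supplemental logging only on tables that replication needs,
    -- instead of full database-level supplemental logging (table name is hypothetical)
    ALTER TABLE orders ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

Setting fast_start_mttr_target lets the database pace checkpoints so that crash recovery stays near the target, at the cost of some additional write I/O.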

Challenges - Executions
- Reparse storm after any cursor invalidation due to DDL
- Possible mutex contention due to higher executions per cursor
- Index contention due to the higher DML execution rate
- LOB contention due to the higher DML execution rate
- Reaching the limit on the number of sessions per instance
- Unable to perform DDL on busy tables because of high execution rates

Options - Executions
- Avoid retries and limit the number of sessions to reduce reparse storms
- Create multiple versions of the same SQL (add a /* <n> */ comment) or spread it across the cluster (see the sketch below)
- Rely on hash and other partitioning options to avoid index contention
- Rely on partitioning and optimize LOB DMLs to reduce LOB contention
- Investigate Shared Server (MTS), DRCP, or a custom connection pool to multiplex query executions across processes
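As a rough sketch of the cursor-versioning and index-partitioning ideas (the table, column, and index names, the number of SQL versions, and the partition count are all hypothetical):

    -- The same logical statement issued under a few distinct texts, so executions are
    -- spread across several parent cursors instead of one hot cursor; the application
    -- rotates the tag, e.g. round-robin per session
    SELECT /* 1 */ status FROM orders WHERE order_id = :id;
    SELECT /* 2 */ status FROM orders WHERE order_id = :id;
    SELECT /* 3 */ status FROM orders WHERE order_id = :id;

    -- Hash-partitioned index to spread insert activity across many index structures
    -- rather than a single right-growing edge
    CREATE INDEX orders_id_ix ON orders (order_id)
      GLOBAL PARTITION BY HASH (order_id) PARTITIONS 16;

Each distinct SQL text gets its own cursor and library cache mutex, so contention on any single cursor drops roughly in proportion to the number of versions.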

Challenges - IOPS
- Designing for lower IOPS trades off against DML scalability, e.g. global indexes
- RAID amplifies write overhead, e.g. a small RAID 5 write turns into multiple physical I/Os for data and parity
- Predictable response time
- Optimizing crash recovery time trades off against write IOPS
- A chatty neighbor (e.g. bulk loads on a new DB) can impact a production DB

Options - IOPS
- Limit global indexes and reverse key indexes to lower IOPS (see the sketch below)
- Use RAID 10 for critical databases and RAID 5 for archive and the FRA
- Tune storage features, such as dynamic tiering, for consistent performance
- Test the IOPS impact and arrive at acceptable crash recovery times
- Test and determine the acceptable level of bulk loads when sharing workloads on a SAN
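For example, a locally partitioned index keeps index maintenance inside each table partition and avoids the extra I/O and partition-maintenance cost of a global index; the table and index definitions below are hypothetical.

    -- Range-partitioned table with a local index (names, types, and ranges are illustrative)
    CREATE TABLE order_history (
      order_id    NUMBER,
      customer_id NUMBER,
      created_at  DATE
    )
    PARTITION BY RANGE (created_at) (
      PARTITION p2014 VALUES LESS THAN (DATE '2015-01-01'),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );

    -- Each local index partition is dropped or truncated along with its table partition,
    -- whereas a global index would need maintenance (and extra IOPS) on every such operation
    CREATE INDEX order_hist_cust_ix ON order_history (customer_id) LOCAL;

The trade-off is that queries filtering only on customer_id must probe every index partition, which is the DML-scalability versus IOPS tension noted on the challenges slide.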

Future (5 to 10 years)
- Database sizes will exceed 1 PB
- Flash will replace hard disk for all OLTP workloads
- With Intel 3D XPoint, the memory-to-processor ratio will exceed 10 TB
- Dual format (row and column) for all data in one system
- Transparent archiving of older data with no changes to the application
- Sharding and cloud compatibility for all database systems
- Real-time analytics and transactions in one database system (e.g. the Oracle Database 12c In-Memory option)
- Over 75% convergence across NoSQL, NewSQL, and big RDBMS vendors

Summary
- Brace for continued tremendous innovation in the database space
- Consolidation of NoSQL and NewSQL vendors within 3 years
- Multi-database-vendor environments (RDBMS and NoSQL/NewSQL) are here to stay
- Big RDBMS vendors will continue to lead in security, maturity, and integration
- NoSQL and NewSQL vendors will continue to lead in agility, economics at scale, and tighter integration with the application