Enkitec Exadata Storage Layout




Randy Johnson, Principal Consultant, Enkitec LP. 20 or so years in the IT industry. Began working with Oracle RDBMS in 1992, at the launch of Oracle 7. Main areas of interest are HA and Backup & Recovery. Working exclusively with Exadata for over a year now: numerous Exadata implementations, numerous POCs and workshops.

Agenda
1. What Is Exadata?
2. What Is ASM?
3. Storage Cell Disks
4. Creating and Allocating Storage at the Cell
5. Demo: Creating & Presenting Grid Disks to ASM
6. Q&A

What Is Exadata?
Database Servers: 2-8 servers per rack; 24-96 CPU cores; 96-768 GB memory.
Storage Cells: 3-14 storage cells per rack; 36-168 CPU cores; 72-336 GB memory; 1.2-5.3 TB flash cache; 72-336 TB raw disk storage.
InfiniBand Switches: 2 per rack, plus 1 spine switch for expansion; 40 Gb/s (5 GB/s) per database server / storage cell; expandable up to 8 racks, and more with additional spine switches.

A Data Center in a Can
Database Servers: Sun Fire X4170, 1U; 2 hex-core CPUs at 2.93 GHz; 96 GB memory; OEL 5, 64-bit.
InfiniBand Network: redundant 40 Gb/s switches; a unified internal network used by both the database and storage tiers; can also connect externally.
Storage/Cell Servers: Sun Fire X4270, 2U; 2 hex-core CPUs at 2.93 GHz; 24 GB memory; 384 GB flash cache; OEL 5, 64-bit; 12 drives per storage server (2 TB SAS high capacity, or 600 GB SAS high performance); 5.4 GB/sec transfer rate.

Exadata Configurations
Full Rack: 8 database servers, 14 storage cells
Half Rack: 4 database servers, 7 storage cells
Quarter Rack: 2 database servers, 3 storage cells

How Do the Components Work?
Exadata Database Servers handle compute-intensive processing: complex joins, aggregation functions, and data conversion.
I/O flows over InfiniBand using the iDB protocol.
Exadata Storage Servers handle I/O-intensive processing: index and table scans, HCC decompression, primitive joins, predicate filtering, column projection, and storage index processing.

What Is ASM?
Database volume management: dynamic provisioning, disk redundancy, I/O spread across all disks, clustered storage (used with RAC).
Database file system: file and directory names, file management.

S.A.M.E.
ASM is the implementation of S.A.M.E. (Stripe And Mirror Everything). The goal of S.A.M.E. is to optimize storage I/O utilization across bandwidth, sequential and random access, and workload types.

Implementing Database Storage Without ASM
Design the DB layout strategy: how many file systems and data files, and where the data files reside.
Inventory of files: 2 control files, 2 log files, multiple data files per tablespace, backup files, temp files, etc.
That is hundreds or thousands of files to create, name, and manage without making mistakes, multiplied by n databases.
And what about tuning, or expanding/contracting your DB?

File Layouts Getting a Little Complicated?!?

Provisioning with File Systems
Current data is what end users are usually interested in, so as space is added, I/O tends to follow the new disks, creating hot spots.
[Diagram: data files marketing_01.dbf through marketing_10.dbf added across /u01, /u02, and /u03 from 2009 to 2011; I/O concentrates on the most recently added storage.]

ASM Virtually Eliminates I/O Tuning
Data is automatically striped and mirrored.
I/O is automatically spread across all disks.
I/O is balanced because data is balanced, eliminating hot spots.
When disks are added, data is rebalanced; when disks are removed, data is rebalanced.
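The rebalance behavior described above can be sketched in SQL. This is a hypothetical example: the disk group name DATA and the grid disk path are illustrative assumptions, not taken from this deck.

```sql
-- Adding a disk to an ASM disk group triggers an automatic rebalance;
-- the disk group name and grid disk path here are assumptions.
ALTER DISKGROUP data ADD DISK 'o/192.168.12.3/DATA_CD_02_cell01';

-- Optionally raise the rebalance power to move data faster
-- (at the cost of more I/O overhead while the rebalance runs)
ALTER DISKGROUP data REBALANCE POWER 8;

-- Watch the rebalance progress
SELECT operation, state, power, est_minutes FROM v$asm_operation;
```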

Data Redundancy
ASM provides redundancy using failure groups. Failure groups are user-defined units of failure: a single disk, or a group of disks (a SAN, or an Exadata storage cell). Failure groups ensure that mirror copies of data are not stored on the same disks (or groups of disks).

Exadata Storage Server Configuration
How Exadata Storage Server Redundancy Works

Exadata Storage Configuration: Disk Configuration Overview
The CellCLI program is used on the storage servers to configure disks for presentation to ASM.
Physical - the physical disk
LUN - the OS block device
Cell Disk - used by the cell server to manage disks
Grid Disk - presented to the database as an ASM disk
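A minimal CellCLI session to walk this hierarchy might look like the following sketch (exact output columns vary by Exadata software release):

```
CellCLI> list physicaldisk    -- the physical disks installed in the cell
CellCLI> list lun             -- the OS block devices built on them
CellCLI> list celldisk        -- the cell disks managed by cellsrv
CellCLI> list griddisk        -- the grid disks presented to ASM
```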

Exadata Storage Server Configuration: ASM View of Grid Disks

SYS:+ASM1> select path, total_mb, failgroup
             from v$asm_disk
            order by failgroup, path;

Non-Exadata:
PATH               TOTAL_MB  FAILGROUP
/dev/rdsk/         11444     DATA01
/dev/rdsk/         11444     DATA02

Exadata:
PATH                              TOTAL_MB  FAILGROUP
o/192.168.12.3/data_cd_00_cell01  1313600   CELL01
o/192.168.12.3/data_cd_01_cell01  1313600   CELL01
o/192.168.12.3/data_cd_02_cell01  1313600   CELL01

Exadata Storage Server Configuration: Data Redundancy - Failure Groups / Mirroring
Grid disks are combined into ASM disk groups that span cells. Failure groups ensure that mirrored data is physically separated.
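On the ASM side, that pairing of grid disks and failure groups can be made explicit when the disk group is created. This is an illustrative sketch: the names and IP addresses are assumptions, and on Exadata, ASM defaults each cell to its own failure group, so the FAILGROUP clauses are shown only to make the mirroring boundary explicit.

```sql
-- NORMAL redundancy: each extent is mirrored into a different failure
-- group, i.e. onto a different storage cell. Names/IPs are illustrative.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP cell01 DISK 'o/192.168.12.3/DATA_CD_00_cell01',
                        'o/192.168.12.3/DATA_CD_01_cell01'
  FAILGROUP cell02 DISK 'o/192.168.12.4/DATA_CD_00_cell02',
                        'o/192.168.12.4/DATA_CD_01_cell02';
```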

Exadata Storage Server Configuration: Typical Exadata Storage Configuration
[Diagram: three ASM disk groups (DATA_DG, RECO_DG, SYSTEM_DG), each built from grid disks on cells CELL01, CELL02, and CELL03. On Storage Cell 1 (failure group CELL01), each of cell disks 1-12 contributes a grid disk to SYSTEM_DG, DATA_DG, and RECO_DG.]

Grid Disk Names

PATH                              TOTAL_MB  FAILGROUP
o/192.168.12.3/data_cd_00_cell01  1313600   CELL01
o/192.168.12.3/data_cd_01_cell01  1313600   CELL01
o/192.168.12.3/data_cd_02_cell01  1313600   CELL01

Creating Grid Disks
There are two ways to create grid disks.
1. Create a single grid disk by fully qualifying the name:

CellCLI> create griddisk TEST_DG_CD_00_cell03 celldisk='CD_00_cell03', size=100M
GridDisk TEST_DG_CD_00_cell03 successfully created

CellCLI> list griddisk attributes name, celldisk, size where name='TEST_DG_CD_00_cell03'
TEST_DG_CD_00_cell03  CD_00_cell03  96M

Creating Grid Disks
2. Create a collection of grid disks using the all parameter and a name prefix. Grid disks are named {prefix}_{celldisk_name}:

CellCLI> create griddisk all harddisk prefix='TEST', size=100M
GridDisk TEST_CD_00_cell03 successfully created
GridDisk TEST_CD_01_cell03 successfully created
GridDisk TEST_CD_10_cell03 successfully created
GridDisk TEST_CD_11_cell03 successfully created

CellCLI> list griddisk attributes name, celldisk, disktype, size -
         where name like 'TEST_.*'
TEST_CD_00_cell03  CD_00_cell03  HardDisk  96M
TEST_CD_10_cell03  CD_10_cell03  HardDisk  96M
TEST_CD_11_cell03  CD_11_cell03  HardDisk  96M

Grid Disk Sizing: Dealing with the Short Disks

[enkcel03:root] /root > fdisk -lu /dev/sda

Device Boot    Start         End           Blocks       Id  System
/dev/sda1  *   63            240974        120456       fd  Linux raid autodetect
/dev/sda2      240975        257039        8032+        83  Linux
/dev/sda3      257040        3843486989    1921614975   83  Linux
/dev/sda4      3843486990    3904293014    30403012+    f   W95 Ext'd (LBA)
/dev/sda5      3843487053    3864451814    10482381     fd  Linux raid autodetect
/dev/sda10     3897995598    3899425319    714861       fd  Linux raid autodetect
/dev/sda11     3899425383    3904293014    2433816      fd  Linux raid autodetect

Grid Disk Sizing: Dealing with the Short Disks
The first two cell disks are smaller because the cell's system area occupies partitions on those drives (note /dev/sda3 and /dev/sdb3 below, versus whole devices for the rest):

CellCLI> list celldisk attributes name, devicepartition, size -
         where disktype = 'HardDisk'
CD_00_cell03  /dev/sda3  1832.59375G
CD_01_cell03  /dev/sdb3  1832.59375G
CD_02_cell03  /dev/sdc   1861.703125G
CD_03_cell03  /dev/sdd   1861.703125G
...
CD_07_cell03  /dev/sdh   1861.703125G
CD_08_cell03  /dev/sdi   1861.703125G
CD_09_cell03  /dev/sdj   1861.703125G
CD_10_cell03  /dev/sdk   1861.703125G
CD_11_cell03  /dev/sdl   1861.703125G

Grid Disk Sizing
The size parameter determines the size of each grid disk. If no size is specified, all remaining cell disk space is used for the grid disk.

CellCLI> create griddisk all prefix='RECO_DG'
GridDisk RECO_DG_CD_00_cell03 successfully created
GridDisk RECO_DG_CD_01_cell03 successfully created
GridDisk RECO_DG_CD_10_cell03 successfully created
GridDisk RECO_DG_CD_11_cell03 successfully created

CellCLI> list griddisk attributes name, celldisk, disktype, size where name like 'RECO.*'
RECO_CD_00_cell01  CD_00_cell01  HardDisk  91.265625G
RECO_CD_01_cell01  CD_01_cell01  HardDisk  91.265625G
RECO_CD_02_cell01  CD_02_cell01  HardDisk  120.375G
RECO_CD_03_cell01  CD_03_cell01  HardDisk  120.375G
RECO_CD_04_cell01  CD_04_cell01  HardDisk  120.375G

Note that the grid disks carved from the two short disks come out smaller (91.265625G versus 120.375G) because less cell disk space remained on them.
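Undoing a sizing mistake is straightforward: drop the grid disks by prefix and recreate them with the intended size. A hypothetical cleanup sketch (the 100G size is illustrative, and the grid disks must not be in use by an ASM disk group when dropped):

```
CellCLI> drop griddisk all prefix='RECO_DG'
CellCLI> create griddisk all harddisk prefix='RECO_DG', size=100G
```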

Storage Cell Provisioning
Storage cells may be allocated to specific database servers, creating multiple storage grids: for example, separate production, test, and dev RAC clusters.
Caution: this reduces the I/O bandwidth available to each cluster.
Storage cells need not be carved up this way to support multiple database servers.

The cellinit.ora File
/opt/oracle/cell11.2.2.2.0_linux.x64_101206.2/cellsrv/deploy/config/cellinit.ora

#CELL Initialization Parameters
version=0.0
DEPLOYED=TRUE
SSL_PORT=23943
JMS_PORT=9127
ipaddress1=192.168.12.9/24    <- storage grid address of the storage cell (on the InfiniBand network)
HTTP_PORT=8888
BMC_SNMP_PORT=162
RMI_PORT=23791

The cellip.ora File
/etc/oracle/cell/network-config/cellip.ora

On the production database servers (1-6), cellip.ora lists the production storage cells (1-11):
cell="192.168.12.9"     (dm01cel01)
cell="192.168.12.10"    (dm01cel02)
cell="192.168.12.11"    (dm01cel03)
cell="192.168.12.12"    (dm01cel04)
cell="192.168.12.13"    (dm01cel05)
cell="192.168.12.14"    (dm01cel06)
cell="192.168.12.15"    (dm01cel07)
cell="192.168.12.16"    (dm01cel08)
cell="192.168.12.17"    (dm01cel09)
cell="192.168.12.18"    (dm01cel10)
cell="192.168.12.19"    (dm01cel11)

On the test database servers (7-8), cellip.ora lists the test storage cells (12-14):
cell="192.168.12.20"    (dm01cel12)
cell="192.168.12.21"    (dm01cel13)
cell="192.168.12.22"    (dm01cel14)
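From a database server you can confirm that the cells listed in cellip.ora are the ones ASM actually sees. A quick sketch, run from the ASM instance (on Exadata the failure group defaults to the cell name, so this is effectively a per-cell disk count):

```sql
SELECT failgroup, COUNT(*) AS grid_disks
FROM   v$asm_disk
GROUP  BY failgroup
ORDER  BY failgroup;
```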

Exadata Full Rack: Default Configuration
[Diagram: one Oracle cluster, PROD_CLUSTER, with database servers dbm1-dbm8 sharing the DATA_DG, RECO_DG, and SYSTEM_DG disk groups: 14 storage cells, 168 cell disks, 3 ASM disk groups.]

Exadata Full Rack: 8 Non-RAC Databases

Exadata Full Rack: 3 RAC Clusters

Real-World Example: 2 Exadata Half Racks
[Diagram: Exadata Rack #1 and Exadata Rack #2]


Contact: randy.johnson@enkitec.com