Optimizing Storage for Oracle ASM with Oracle Flash-Optimized SAN Storage
Simon Towers, Architect, Flash Storage Systems
October 2, 2014
Program Agenda
1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best-practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary
Goals of this Session
- Best-practice configuration settings for the Database 12c, ASM, Linux, and FS1 combination
- Details of the environment setup
Oracle's Complete Storage Portfolio
Engineered for data centers. Optimized for Oracle software.
- Engineered Systems: Exadata, Exalogic, SPARC SuperCluster, Big Data Appliance
- NAS Storage: ZFS Storage Appliances
- SAN Storage: Oracle FS1, Pillar Axiom 600
- Tape and Virtual Tape: SL8500, SL3000, SL150, LTO, T9840, T10K, VSM
- Cloud storage: deployment options (Private, Public, Hybrid); services (IaaS, PaaS, SaaS); consumption options (Build, Manage, Subscribe)
- Storage Software:
  - Storage Management: FS MaxMan, OEM, ASM, Storage Analytics, ACSLS
  - Automated Tiering: FS1 QoS Plus, DB Partitions, SAM QFS, VSM
  - Data Reduction: 11g ACO, HCC, RMAN, ZFS Storage Appliance Dedup/Comp
  - Data Protection: FS1 MaxRep, FS1 Data Protection Manager, Data Guard, RMAN, OSB
  - Security/Encryption: ASO, Oracle Key Manager, Disk/Tape Encryption
Cost-Performance of Storage Technology
An order-of-magnitude difference must be exploited to optimize the solution: optimal performance at the lowest possible cost.
Approximate list prices (net values) as of January 2014:
- Capacity HDD: $0.25/GB, $10.00/IOP
- Performance HDD: $1.00/GB, $3.00/IOP
- Capacity SSD: $4.12/GB, $0.31/IOP
- Performance SSD: $7.50/GB, $0.13/IOP
You cannot afford Flash if you don't need the performance; you cannot find a better technology than Flash if you need performance. Auto-tiering exploits the gap between the two.
Oracle FS1: QoS Plus
- Set QoS by business priority: Premium, High, Medium, Low, Archive
- Fine-grain auto-tiering driven by heat maps, across CPU, cache, Performance Flash, Capacity Flash, Performance Disk, and Capacity Disk
- Performance-tuning parameters for the volumes you create: Priority, Access Frequency, Read/Write Bias, Random/Sequential Bias
- Storage Domains: physical data isolation; Storage Domains = secure multi-tenancy
Hardware
- Physical setup: a Sun Server X4-2 workload-generator server connected through a 16 Gb/s FC switch to an FS1-2 with two controllers and Performance SSD, Capacity SSD, Performance HDD, and Capacity HDD drives
- Logical setup: the workload generator drives Database 12c through IP load balancing; the database stores its files in ASM disk groups on the FS1-2
Software
Software: Swingbench
- Load generator designed to stress-test Oracle databases
- Consists of a load generator, a coordinator, and a cluster overview
- Includes four benchmarks: OrderEntry, SalesHistory, CallingCircle, and StressTest
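Purely as an illustration (not part of the original deck), the OrderEntry benchmark is typically driven from Swingbench's charbench command-line front end. The config file name, connect string, and flag values below are assumptions; check the documentation shipped with your Swingbench version for the exact options.

# Hypothetical charbench run of the OrderEntry (SOE) benchmark against a 12c service.
# -c config file, -cs EZConnect string, -uc concurrent users, -rt run time (hh:mm), -r results file.
./charbench -c configs/SOE_Server_Side_V2.xml -cs //dbhost/orcl \
    -u soe -p soe -uc 64 -rt 0:30 -r results_oe.xml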
Software: Orion
- A tool for predicting the performance of an Oracle database without having to install Oracle or create a database
- Designed to simulate Oracle database IO workloads using the same IO software stack as Oracle
- Can also simulate the effect of striping performed by ASM
- Can run tests at different IO loads to measure performance metrics such as MBPS, IOPS, and IO latency
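For comparison, here is a minimal Orion smoke test, again a sketch rather than the deck's exact workload. Orion reads the LUNs to exercise from a <testname>.lun file, one device path per line; the device names below are placeholders for the FS1-2 multipath devices.

# List the LUNs to test, one device path per line.
cat > fs1test.lun <<'EOF'
/dev/mapper/fs1_data_lun1
/dev/mapper/fs1_data_lun2
EOF

# '-run simple' sweeps small random IOs and large sequential IOs separately;
# '-num_disks' scales the load to the number of LUNs/spindles under test.
./orion -run simple -testname fs1test -num_disks 2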
Test configurations
- Swingbench: IP load balancing into Database 12c, whose ASM disk groups sit on an FS1-2 (two controllers; Perf SSD, Cap SSD, Perf HDD, Cap HDD)
- Orion: drives ASM disk groups directly on an identically configured FS1-2
Exec summary: configuring ASM, Linux and FS1 for 12c
- ASM disk groups: three disk groups (+DATA, +REDO, +FRA), two LUNs per disk group
- Storage QoS Plus: +DATA spans multiple storage tiers (OLTP: RAID 10; DSS: RAID 5); +REDO: Performance Disk, RAID 10; +FRA: Capacity Disk, RAID 6
- Linux IO scheduler: enable large IOs and change from the default scheduler for SSDs
- Storage Domains and auto-tiering: isolate ASM disk groups into separate storage domains and let auto-tiering work its magic
ASM Disk Groups: Configuration
- How many disk groups? The standard Oracle recommendation is two: +DATA and +FRA.
- How many disks per disk group? For normal and high redundancy, the standard Oracle recommendation is 4 times the number of active IO paths.
But when using a high-end storage controller with varying storage tiers and QoS settings...
Create disk groups that map to very different IO workloads, to avoid disk contention:
- +DATA: for OLTP this is mainly small random writes; for DSS it is large sequential reads
- +FRA: large sequential reads/writes
- +REDO: small sequential reads/writes

ASM disk group | File types | 12c parameter
+DATA | Data, temp | DB_CREATE_FILE_DEST
+REDO | Redo logs, control files | DB_CREATE_ONLINE_LOG_DEST_1
+FRA | Archive logs & backup sets | DB_RECOVERY_FILE_DEST

Make sure your ASM disk groups are set for external redundancy, since the FS1 provides the RAID protection (see the sketch below).
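To make the mapping concrete, here is a minimal sketch (not from the deck) of creating the three disk groups with external redundancy and pointing the 12c file-destination parameters at them. The multipath device names, LUN count, and FRA size are placeholder assumptions; adapt them to your own layout.

# Run as the Grid Infrastructure owner; LUN paths are placeholders.
sqlplus -s / as sysasm <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/mapper/fs1_data_lun1', '/dev/mapper/fs1_data_lun2';
CREATE DISKGROUP REDO EXTERNAL REDUNDANCY
  DISK '/dev/mapper/fs1_redo_lun1', '/dev/mapper/fs1_redo_lun2';
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
  DISK '/dev/mapper/fs1_fra_lun1', '/dev/mapper/fs1_fra_lun2';
EOF

# Then, as the database owner, point the 12c file destinations at the disk groups.
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET db_create_file_dest         = '+DATA' SCOPE=BOTH;
ALTER SYSTEM SET db_create_online_log_dest_1 = '+REDO' SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest_size  = 2048G   SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest       = '+FRA'  SCOPE=BOTH;
EOF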
But when using a high-end storage controller with varying storage tiers and QoS settings 2 LUNs (or multiples of two) balanced across the two FS1-2 controllers 2, 4, 8 LUNs 1 LUN 19
Storage QoS Plus
Match the storage QoS Plus settings to the ASM disk groups and their IO workloads:

Storage profile name | RAID level | Read ahead | Priority | Stripe width | Writes | Preferred storage classes
ASM_DATA_OLTP | Mirrored | Conservative | High | Auto-select | Back | Perf Disk, Perf SSD
ASM_DATA_DSS | Single parity | Aggressive | High | Auto-select | Back | Perf Disk, Cap SSD
ASM_REDO | Single parity | Normal | Premium | All | Back | Perf Disk
ASM_FRA | Double parity | Aggressive | Archive | Auto-select | Back | Cap Disk
Linux IO Scheduler
Linux uses IO scheduling to control the order in which block IOs are submitted to storage: applications feed IOs into the OS IO queue, and the scheduler drains that queue toward the storage. Its goals:
- Reorder IOs to minimize disk seek times
- Balance IO bandwidth amongst processes
- Ensure IOs meet deadlines
- Keep the HBA IO queues full
BUT for SAN controllers with large caches and SSD drives, the goal is instead to push IOs to storage as quickly as possible. Linux changes, per LUN, under /sys/block/dm-*/queue:
- Change the scheduler: echo noop > scheduler
- Enable large IOs: echo 4096 > max_sectors_kb
Make these changes permanent, for example via the kernel boot line in /etc/grub.conf (see the sketch below).
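A minimal sketch of how these settings might be applied and persisted. Looping over every dm-* (device-mapper) device is an assumption for a host where all multipath devices are FS1 ASM LUNs; in production you would normally restrict the loop to the ASM LUNs only.

#!/bin/bash
# Apply the IO settings at runtime to every device-mapper (multipath) block device.
for q in /sys/block/dm-*/queue; do
    echo noop > "$q/scheduler"        # stop the elevator reordering IOs destined for the array
    echo 4096 > "$q/max_sectors_kb"   # let large (up to 4 MB) IOs reach the array intact
done

# One way to persist the scheduler choice across reboots is the kernel boot line
# in /etc/grub.conf (legacy GRUB), for example:
#   kernel /vmlinuz-<version> ro root=<root-device> elevator=noop
# max_sectors_kb still has to be reapplied per device after boot, e.g. from rc.local
# or a udev rule.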
Storage Domains and Auto-Tiering
- Storage Domains: FS1 software that isolates data in storage containers
- Domains are composed of RAID groups within the drive enclosures; RAID groups can be Performance SSD, Capacity SSD, Performance HDD (10K rpm), Capacity HDD (7.2K rpm), or any combination thereof
- Domains physically segregate data, avoiding data co-mingling
- QoS Plus and all major FS1 software operate uniquely on each storage domain; neither data nor data services can cross a domain boundary
- Up to 64 storage domains per FS1
- Online reallocation of physical storage to domains
Separate your ASM disk groups into different storage domains: for example, a +DATA domain built from Performance SSDs, Capacity SSDs, and Performance HDDs; a +REDO domain on Performance HDDs; and a +FRA domain on Capacity HDDs.
Diminishing Returns: when to stop buying flash
A rule of thumb: keep adding flash until the percentage of IOPS it serves plus the percentage of capacity it holds adds up to roughly 100% (for example, flash holding 20% of the capacity should be serving about 80% of the IOPS). Up to that point flash has high marginal value; beyond it the marginal value diminishes. A little flash goes a long way!
Conclusions/summary: configuring ASM, Linux and FS1 for 12c
- ASM disk groups: three disk groups (+DATA, +REDO, +FRA), two LUNs per disk group
- Storage QoS Plus: +DATA spans multiple storage tiers (OLTP: RAID 10; DSS: RAID 5); +REDO: Performance Disk, RAID 10; +FRA: Capacity Disk, RAID 6
- Linux IO scheduler: enable large IOs and change from the default scheduler for SSDs
- Storage Domains and auto-tiering: isolate ASM disk groups into separate storage domains and let auto-tiering work its magic
Oracle OpenWorld 2014 FS1 Sessions
- CON7789: Optimizing Oracle Data Stores in Virtualized Environments. 9/30/14, 10:45-11:30, Intercontinental - Intercontinental C
- CON7830: Solving Data Skew in Oracle Business Applications with Oracle's Flash-Optimized SAN Storage. 9/30/14, 15:45-16:30, Intercontinental - Intercontinental C
- CON7792: Optimizing Oracle Data Stores with Oracle Flash-Optimized SAN Storage. 9/30/14, 17:00-17:45, Intercontinental - Intercontinental C
- CON7832: Leveraging Oracle's Flash-Optimized SAN Storage in a Cloud Deployment. 10/1/14, 12:45-13:30, Intercontinental - Intercontinental C
- CON7841: Maximizing Oracle Database 12c with Oracle's Flash-Optimized SAN Storage. 10/2/14, 12:00-12:45, Intercontinental - Union Square
- CON7831: Optimizing Storage for Oracle ASM with an Oracle Flash-Optimized SAN. 10/2/14, 14:30-15:15, Intercontinental - Union Square
Oracle OpenWorld 2014 FS1 DemoPods and Hands-On Lab
DemoPods:
- Demo 3691: Leveraging Flash to Improve Latency of Multiple Database Instances. Location: SC-117
- Demo 3713: Quality of Service-Driven Autotiering. Location: SC-132
- Demo 3711: Maximizing Database Performance: Data Tiering vs Oracle HCC vs Deduplication. Location: SC-161
- Demo 3695: Simplifying Storage Management with Oracle Enterprise Manager. Location: SC-162
- Demo 4766: Hardware Showcase: Oracle FS1 Flash Storage System. Location: SC-133
Hands-On Lab (HOL):
- HOL8687: Oracle Storage System GUI: Faster Database Performance with QoS Enhancements. 9/30/14, 18:45-19:45, Hotel Nikko - Nikko Ballroom I