BEST PRACTICES GUIDE

Nimble Storage for Oracle Database on OL6 & RHEL6 with Fibre Channel or iSCSI

Document Revision

Table 1.

Date        Revision  Description
1/9/2012    1.0       Initial draft
7/2/2013    1.1       Revised
3/12/2014   1.2       Revised iSCSI settings
5/8/2014    1.3       Added ASM AU size
11/10/2014  1.4       Updated iSCSI and multipath
1/30/2015   1.5       Added Fibre Channel

THIS TECHNICAL TIP IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

Nimble Storage: All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Nimble is strictly prohibited.

Table of Contents

Introduction
Audience
Scope
Nimble Storage Features
Oracle Database on Oracle Linux with Nimble Storage
Performance Settings
  Fibre Channel Recommended Settings
  iSCSI Recommended Settings
  Linux Host Recommended Settings for both Fibre Channel and iSCSI
Oracle OLTP with ASM
Oracle OLTP with EXT4 File System
Oracle DSS with ASM
Oracle DSS with EXT4 File System

Introduction

The purpose of this technical white paper is to walk through, step by step, the tuning of Oracle databases on Nimble Storage running on the Oracle Linux or Red Hat Enterprise Linux operating system.

Audience

This guide is intended for Oracle database solution architects, storage engineers, system administrators, and IT managers who analyze, design, and maintain robust database environments on Nimble Storage. It is assumed that the reader has a working knowledge of iSCSI/FC SAN network design and basic Nimble Storage operations. Knowledge of the Oracle Linux operating system, Oracle Clusterware, and Oracle Database is also required.

Scope

During the design phase of a new Oracle database implementation, DBAs and storage administrators often work together to determine the storage requirements. They must weigh many storage configuration options to achieve high performance and high availability. To protect data against failures of disk drives, host bus adapters (HBAs), and switches, they need to consider different RAID levels and multiple paths. When different RAID levels come into play for performance, TCO tends to increase as well. For example, to sustain a given number of IOPS at low latency for an OLTP workload, DBAs would need a certain number of 15K disk drives in RAID 10, and the higher the required IOPS, the more 15K drives are needed. Because mechanical disk drives are limited by seek time and transfer rate, more of them are required to deliver the target IOPS at acceptable latency, which drives up TCO considerably over time. Moreover, if the database is small in capacity but its IOPS requirement is high, you end up with a great deal of wasted space in the SAN.

This white paper explains Nimble Storage technology and how it can lower the TCO of your Oracle environment while still achieving the required performance. It also discusses the best practices for implementing Oracle databases on Nimble Storage.

Nimble Storage Features

Cache Accelerated Sequential Layout (CASL)
Nimble Storage solutions are built on the patented Cache Accelerated Sequential Layout (CASL) architecture. CASL leverages the unique properties of flash and disk to deliver high performance and capacity in a dramatically small footprint. CASL and InfoSight form the foundation of the Adaptive Flash platform, which allows storage resources to be deployed dynamically and intelligently to meet the growing demands of business-critical applications.

Dynamic Flash-Based Read Caching
CASL caches "hot" (active) data onto SSD in real time, without the need to set complex policies. It can therefore respond to read requests as much as 10x faster than traditional bolt-on or tiered approaches to flash.

Write-Optimized Data Layout
CASL collects (coalesces) random writes, compresses them, and writes them sequentially to disk. The resulting write operations are as much as 100x faster than on traditional disk-based storage.

Inline Compression
CASL compresses data as it is written to the array, with no performance impact, taking advantage of efficient variable-block compression and multicore processors. A recent measurement of the installed base shows average compression rates of 30 to 75 percent across a variety of workloads.

Scale-to-Fit Flexibility
CASL allows performance and capacity to be scaled non-disruptively and independently: upgrade the storage controller (compute) for higher throughput, move to larger flash SSDs (cache) to accommodate more active data, or add storage shelves to boost capacity. This flexible scaling eliminates the need for disruptive forklift upgrades.

Snapshots and Integrated Data Protection
CASL can take thousands of point-in-time instant snapshots of volumes by creating a copy of the volumes' indices. Any updates to existing data, or new data written to a volume, are redirected to free space (optimized by CASL's unique data layout). Snapshots therefore have no performance impact and take little incremental space, since only changes are maintained. This also simplifies restoring from snapshots, as no data needs to be copied.

Efficient Integrated Replication
Nimble Storage efficiently replicates data to another array by transferring only compressed, block-level changes. These remote copies can be made active if the primary array becomes unavailable. This makes deploying

disaster recovery easy and affordable, especially over a WAN to a remote array where bandwidth is limited.

Zero-Copy Clones
Nimble Storage arrays can instantly create snapshot-based, readable and writable clones of existing volumes. These clones benefit from fast read and write performance, making them ideal for demanding applications such as VDI or database test/development.

InfoSight
InfoSight leverages the power of deep-data analytics and cloud-based management to deliver true operational efficiency across all storage activities. It ensures the peak health of the storage infrastructure by identifying problems, and offering solutions, in real time. InfoSight provides expert guidance for deploying the right balance of storage resources, dynamically and intelligently, to satisfy the changing demands of business-critical applications.

Oracle Database on Oracle Linux with Nimble Storage

The reference layout in this paper pairs an Oracle RAC database with a Nimble Storage CS-Series array:

- Boot volumes (1 per server)
- Oracle and Grid Infrastructure software (1 per server)
- Oracle ASM disk groups (number of volumes): OCR/voting disk group (1), data disk group (8), flash recovery area disk group (4)

When considering best practices for running Oracle databases, including RAC, on Oracle Linux, the areas to consider are performance, data protection, and efficiency, especially as it relates to test and development. This document covers these best practices, including performance settings and volume setup with Oracle ASM.

Performance Settings

When running an Oracle database on Linux, many operating system settings need to be tuned to get the best performance and uptime. However, not all settings will make the Oracle database perform better. For an optimally performing database, many factors need to be examined, including but not limited to:

- How was the application written to access the database data? Are the queries optimal?
- Is the logical database structure laid out optimally for the workload (e.g., indexes, table partitioning)?
- What is the server's CPU and memory profile?
- Which I/O scheduler is used in Linux?
- What is the queue depth setting?
- Which file system is used, and what I/O size was chosen?
- How many volumes/LUNs are created on storage, and how many I/O paths lead to them?

Fibre Channel Recommended Settings

- Nimble OS 2.2.3 or later
- 8Gb or 16Gb Brocade or Cisco MDS switches
- Dual fabric for HA
- Multipathing (dm-multipath)
- 8Gb or 16Gb QLogic or Emulex HBAs
- QLogic HBA settings:
  qlport_down_retry = "0"
  ql2xmaxqdepth = "32"

Figure 1: Example of a Dual Fabric
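The two QLogic settings above are qla2xxx driver module parameters. One way to persist them is a modprobe.d fragment; the sketch below stages the file locally (the staging file name is illustrative, and the dracut step assumes the driver is loaded from the initramfs at boot):

```shell
# Stage a modprobe.d fragment with the recommended qla2xxx HBA settings.
# Copy it to /etc/modprobe.d/ as root, then rebuild the initramfs so the
# options also apply when the driver loads at boot.
cat > qla2xxx.conf <<'EOF'
options qla2xxx qlport_down_retry=0 ql2xmaxqdepth=32
EOF
# cp qla2xxx.conf /etc/modprobe.d/ && dracut -f   # run as root
```

After a reboot, the active values can be checked under /sys/module/qla2xxx/parameters/.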

Multipath Settings for Fibre Channel

The multipath parameters in the /etc/multipath.conf file should be set as follows in order to sustain failover while maintaining performance. Nimble recommends the use of aliases for mapped LUNs.

defaults {
    user_friendly_names yes
    find_multipaths yes
}
devices {
    device {
        vendor "Nimble"
        product "Server"
        prio "alua"
        path_grouping_policy group_by_prio
        path_checker tur
        features "1 queue_if_no_path"
        rr_weight priorities
        rr_min_io_rq 1
        failback immediate
        path_selector "round-robin 0"
        dev_loss_tmo infinity
        fast_io_fail_tmo 1
    }
}
multipaths {
    multipath {
        wwid 20694551e4841f4386c9ce900dcc2bd34
        alias ocr
    }
}

iSCSI Recommended Settings

- Nimble OS 2.1.4 or later
- Dual 10GbE iSCSI data network subnets

iSCSI Timeout and Performance Settings

Understanding the meaning of these iSCSI timeouts allows administrators to set them appropriately. The following parameters in the /etc/iscsi/iscsid.conf file should be set:

node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 10
node.session.nr_sessions = 4
node.session.cmds_max = 2048
node.session.queue_depth = 1024

NOP-Out Interval/Timeout

node.conn[0].timeo.noop_out_timeout = [ value ]
The iSCSI layer sends a NOP-Out request to each target. If a NOP-Out request times out (default: 10 seconds), the iSCSI layer responds by failing any running commands and instructing the SCSI layer to requeue those commands when possible. If dm-multipath is being used, the SCSI layer fails those running commands and defers them to the multipath layer, which then retries them on another path. If dm-multipath is not being used, those commands are retried five times (node.conn[0].timeo.noop_out_interval) before failing altogether.

node.conn[0].timeo.noop_out_interval = [ value ]
Once set, the iSCSI layer sends a NOP-Out request to each target every [ value ] seconds.

SCSI Error Handler

If the SCSI error handler is running, commands running on a path are not failed immediately when a NOP-Out request times out on that path. Instead, they are failed after replacement_timeout seconds.

node.session.timeo.replacement_timeout = [ value ]
Important: Controls how long the iSCSI layer should wait for a timed-out path/session to reestablish itself before failing any commands on it. The recommended setting of 120 seconds (also the default) allows ample time for controller failover. Note: if set to 120 seconds, I/O will be queued for 2 minutes before it can resume.

The 1 queue_if_no_path option in /etc/multipath.conf sets the iSCSI timers to immediately defer commands to the multipath layer. This setting prevents I/O errors from propagating to the application; because of this, you can set replacement_timeout to 60-120 seconds.

Note: Nimble Storage strongly recommends using dm-multipath for all volumes.

Multipath Settings for iSCSI

The multipath parameters in the /etc/multipath.conf file should be set as follows in order to sustain failover while maintaining performance. Nimble recommends the use of aliases for mapped LUNs.

defaults {
    user_friendly_names yes
    find_multipaths yes
}
devices {
    device {
        vendor "Nimble"
        product "Server"
        path_grouping_policy group_by_serial
        path_selector "round-robin 0"
        features "1 queue_if_no_path"
        path_checker tur
        rr_min_io_rq 10
        rr_weight priorities
        failback immediate
    }
}
multipaths {
    multipath {
        wwid 20694551e4841f4386c9ce900dcc2bd34
        alias ocr
    }
}
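The recommended timeout values can be applied with a small idempotent helper rather than by hand-editing each line. A sketch, operating on a local copy of the file (the set_opt helper is illustrative; on a real host, point it at /etc/iscsi/iscsid.conf and restart iscsid afterwards):

```shell
# Work on a local stand-in for /etc/iscsi/iscsid.conf with two pre-existing
# settings, so both the replace and the append paths are exercised.
CONF=iscsid.conf.copy
printf '%s\n' \
  'node.session.timeo.replacement_timeout = 30' \
  'node.conn[0].timeo.noop_out_interval = 10' > "$CONF"

set_opt() {  # set_opt KEY VALUE FILE: rewrite the key's line if present, else append
  awk -v k="$1" -v v="$2" '
    index($0, k) == 1 { print k " = " v; seen = 1; next }
    { print }
    END { if (!seen) print k " = " v }
  ' "$3" > "$3.tmp" && mv "$3.tmp" "$3"
}

set_opt node.session.timeo.replacement_timeout  120  "$CONF"
set_opt 'node.conn[0].timeo.noop_out_interval'  5    "$CONF"
set_opt 'node.conn[0].timeo.noop_out_timeout'   10   "$CONF"
set_opt node.session.nr_sessions                4    "$CONF"
set_opt node.session.cmds_max                   2048 "$CONF"
set_opt node.session.queue_depth                1024 "$CONF"
cat "$CONF"
```

The result is one "key = value" line per setting, with the two pre-existing values rewritten in place.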

iSCSI Data Network

Nimble recommends using 10GbE iSCSI for all databases.

- 2 separate subnets
- 2 x 10GbE iSCSI NICs
- Use jumbo frames (MTU 9000) on iSCSI networks (strongly recommended)

Example of the MTU setting for eth1:

DEVICE=eth1
HWADDR=00:25:B5:00:00:BE
TYPE=Ethernet
UUID=31bf296f-5d6a-4caf-8858-88887e883edc
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=172.18.127.134
NETMASK=255.255.255.0
MTU=9000

To change the MTU on an already-running interface:

[root@bigdata1 ~]# ifconfig eth1 mtu 9000

Add the following to /etc/sysctl.conf:

net.core.wmem_max = 16780000
net.core.rmem_max = 16780000
net.ipv4.tcp_rmem = 10240 87380 16780000
net.ipv4.tcp_wmem = 10240 87380 16780000

Run the sysctl -p command after editing the /etc/sysctl.conf file.

Linux Host Recommended Settings for both Fibre Channel and iSCSI

max_sectors_kb

Change max_sectors_kb on all volumes to 1024 (default: 512).

To change max_sectors_kb to 1024 for a single volume:

[root@bigdata1 ~]# echo 1024 > /sys/block/sd?/queue/max_sectors_kb

To change all volumes:

multipath -ll | grep sd | awk -F":" '{print $4}' | awk '{print $2}' | while read LUN
do
  echo 1024 > /sys/block/${LUN}/queue/max_sectors_kb
done

Note: To make this change persistent across reboots, add the commands to the /etc/rc.local file.

VM dirty writeback and expire

Change vm dirty_writeback and dirty_expire to 100 centiseconds (defaults: 500 and 3000, respectively):

[root@bigdata1 ~]# echo 100 > /proc/sys/vm/dirty_writeback_centisecs
[root@bigdata1 ~]# echo 100 > /proc/sys/vm/dirty_expire_centisecs

Note: To make this change persistent across reboots, add the commands to the /etc/rc.local file.

CPU Scaling Governor

The CPU scaling governor needs to be set to performance. To set it, run the command below:

[root@mktg04 ~]# for a in $(ls -ld /sys/devices/system/cpu/cpu[0-9]* | awk '{print $NF}') ; do echo performance > $a/cpufreq/scaling_governor ; done

Note: This setting does not persist across reboots, so the command must be executed whenever the server comes back online. To avoid running it manually after each reboot, place the command in the /etc/rc.local file.

Disk IO Scheduler

The I/O scheduler needs to be set to noop. To set the I/O scheduler for all LUNs online, run the command below. Note: multipath must be set up before running this command. Newly added LUNs and server reboots do not pick up this parameter automatically; run the same command again after adding LUNs or rebooting.

[root@mktg04 ~]# multipath -ll | grep sd | awk -F":" '{print $4}' | awk '{print $2}' | while read LUN; do echo noop > /sys/block/${LUN}/queue/scheduler ; done

To set this parameter automatically at boot, append the syntax below to the kernel line in the /etc/grub.conf file:

elevator=noop
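As an alternative to /etc/rc.local, a udev rule can apply the noop scheduler and the max_sectors_kb value automatically as LUNs are discovered, including LUNs added after boot. The sketch below stages such a rule locally (the rule file name and the "Nimble*" vendor match are illustrative; verify the vendor attribute on your system with udevadm info before deploying):

```shell
# Stage a udev rule that tunes Nimble block devices as they appear.
# Copy to /etc/udev/rules.d/ as root and reload udev rules to activate.
cat > 99-nimble-tuning.rules <<'EOF'
ACTION=="add|change", SUBSYSTEM=="block", ATTRS{vendor}=="Nimble*", \
  ATTR{queue/scheduler}="noop", ATTR{queue/max_sectors_kb}="1024"
EOF
# cp 99-nimble-tuning.rules /etc/udev/rules.d/ && udevadm control --reload-rules
```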

Oracle OLTP with ASM

Recommended Nimble Volumes for Oracle ASM

Table 2.

Nimble Volume Role | Recommended Number of Volumes | Volume Block Size | Caching Policy
DATADG | 4 (database server with 8 cores or less) / 8 (database server with more than 16 cores) | 8K | Yes (Normal)
LOGDG | 4 | 4K | Yes (Normal)
FRADG | 4 | 32K | No

Oracle Recommended Settings

Table 3.

Setting | Value
DB block size | 8 KB
ASM allocation unit (AU) for disk groups | 64 MB
ASM disk group redundancy | External
db_writer_processes | 4 (DB server with 4 ASM disks for DATADG) / 8 (DB server with 8 ASM disks for DATADG)
log_buffer size | ~1.6 MB
_disk_sector_size_override | TRUE
Online redo log block size | 4 KB
filesystemio_options | setall
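The 64 MB allocation unit recommended above is set when the disk group is created. A sketch of the DDL, staged as a script file (the disk paths, disk group name, and SQL*Plus invocation are illustrative):

```shell
# Stage the DDL that creates a DATA disk group with external redundancy and
# a 64 MB AU size, matching the recommended settings. Run it via SQL*Plus
# as SYSASM on the ASM instance.
cat > create_datadg.sql <<'EOF'
CREATE DISKGROUP DATADG EXTERNAL REDUNDANCY
  DISK '/dev/mapper/data1', '/dev/mapper/data2',
       '/dev/mapper/data3', '/dev/mapper/data4'
  ATTRIBUTE 'au_size' = '64M';
EOF
# sqlplus / as sysasm @create_datadg.sql
```

The au_size attribute can only be set at creation time; it cannot be changed on an existing disk group.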

Example of creating new log files:

ALTER DATABASE ADD LOGFILE GROUP 5 ('+LOGDG') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 6 ('+LOGDG') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 7 ('+LOGDG') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 8 ('+LOGDG') SIZE 4096M BLOCKSIZE 4K;

Oracle OLTP with EXT4 File System

Recommended Nimble Volumes for Oracle with EXT4

Table 4.

Nimble Volume Role | Recommended Number of Volumes | Volume Block Size | Caching Policy
DATA LVM VG | 4 (database server with 8 cores or less) / 8 (database server with more than 16 cores) | 8K | Yes (Normal)
LOG LVM VG | 4 | 4K | Yes (Normal)
FRA LVM VG | 4 | 32K | No

When creating an EXT file system on a logical volume, the stride and stripe-width options must be used. For example:

stride=2,stripe-width=16 (for Nimble performance policy 8KB block size with 8 volumes)
stride=4,stripe-width=32 (for Nimble performance policy 16KB block size with 8 volumes)
stride=8,stripe-width=64 (for Nimble performance policy 32KB block size with 8 volumes)

Note: The stripe-width value depends on the number of volumes and the stride size. A calculator can be found at http://busybox.net/~aldot/mkfs_stride.html
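The stride and stripe-width values above follow directly from two inputs, so they can be computed rather than looked up: stride = Nimble volume block size / EXT4 block size (4 KB here), and stripe-width = stride x the number of volumes in the LVM volume group. A sketch (the ext4_stride_opts helper is illustrative):

```shell
# Sketch of the stride/stripe-width arithmetic used above:
#   stride       = Nimble volume block size / EXT4 block size (4 KB)
#   stripe-width = stride * number of volumes in the LVM volume group
ext4_stride_opts() {  # ext4_stride_opts <nimble_block_kb> <num_volumes>
  stride=$(( $1 / 4 ))
  width=$(( stride * $2 ))
  echo "-E stride=${stride},stripe-width=${width}"
}

ext4_stride_opts 8 8    # 8K policy, 8 volumes  -> -E stride=2,stripe-width=16
ext4_stride_opts 32 4   # 32K policy, 4 volumes -> -E stride=8,stripe-width=32
```

The output string can be passed straight to mkfs.ext4 along with -b 4096.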

Examples of LVM & EXT Setup

Create volume groups:

[root@mktg04 ~]# vgcreate vgextdata /dev/mapper/extdata[1-8]
[root@mktg04 ~]# vgcreate vgextlog /dev/mapper/extlog[1-4]
[root@mktg04 ~]# vgcreate vgextarch /dev/mapper/extarch[1-4]

Create logical volumes:

[root@mktg04 ~]# lvcreate -l <# of extents> -i 8 -I 4096 -n vol1 vgextdata
[root@mktg04 ~]# lvcreate -l <# of extents> -i 4 -I 4096 -n vol1 vgextlog
[root@mktg04 ~]# lvcreate -l <# of extents> -i 4 -I 4096 -n vol1 vgextarch

Create EXT file systems:

[root@mktg04 ~]# mkfs.ext4 /dev/vgextdata/vol1 -b 4096 -E stride=2,stripe-width=16
[root@mktg04 ~]# mkfs.ext4 /dev/vgextlog/vol1 -b 4096
[root@mktg04 ~]# mkfs.ext4 /dev/vgextarch/vol1 -b 4096 -E stride=8,stripe-width=32

Mount options in the /etc/fstab file for iSCSI:

/dev/vgextdata/vol1 /u01/app/extdata ext4 _netdev,noatime,nodiratime,discard,barrier=0 0 0
/dev/vgextlog/vol1 /u01/app/extlog ext4 _netdev,noatime,nodiratime,discard,barrier=0 0 0
/dev/vgextarch/vol1 /u01/app/extarch ext4 _netdev,noatime,nodiratime,discard,barrier=0 0 0

Mount options in the /etc/fstab file for Fibre Channel:

/dev/vgextdata/vol1 /u01/app/extdata ext4 noatime,nodiratime,discard,barrier=0 0 0
/dev/vgextlog/vol1 /u01/app/extlog ext4 noatime,nodiratime,discard,barrier=0 0 0
/dev/vgextarch/vol1 /u01/app/extarch ext4 noatime,nodiratime,discard,barrier=0 0 0

Oracle Recommended Settings

Table 5.

Setting | Value
DB block size | 8 KB

db_writer_processes | 4 (DB server with 4 ASM disks for DATADG) / 8 (DB server with 8 ASM disks for DATADG)
log_buffer size | ~1.6 MB
_disk_sector_size_override | TRUE
Online redo log block size | 4 KB
filesystemio_options | setall

Example of creating new log files:

ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/app/extlog/log5') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/app/extlog/log6') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 7 ('/u01/app/extlog/log7') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 8 ('/u01/app/extlog/log8') SIZE 4096M BLOCKSIZE 4K;

Oracle DSS with ASM

Recommended Nimble Volumes for Oracle ASM

Table 6.

Nimble Volume Role | Recommended Number of Volumes | Volume Block Size | Caching Policy
DATADG | 4 (database server with 8 cores or less) / 8 (database server with more than 16 cores) | 32K | Yes (Normal)
LOGDG | 4 | 4K | Yes (Normal)
FRADG | 4 | 32K | No

Oracle Recommended Settings

Table 7.

Setting | Value
DB block size | 32 KB
ASM allocation unit (AU) for disk groups | 64 MB
ASM disk group redundancy | External
db_writer_processes | 4 (DB server with 4 ASM disks for DATADG) / 8 (DB server with 8 ASM disks for DATADG)
log_buffer size | ~1.6 MB
_disk_sector_size_override | TRUE
Online redo log block size | 4 KB
filesystemio_options | setall

Example of creating new log files:

ALTER DATABASE ADD LOGFILE GROUP 5 ('+LOGDG') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 6 ('+LOGDG') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 7 ('+LOGDG') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 8 ('+LOGDG') SIZE 4096M BLOCKSIZE 4K;

Oracle DSS with EXT4 File System

Recommended Nimble Volumes for Oracle with EXT4

Table 8.

Nimble Volume Role | Recommended Number of Volumes | Volume Block Size | Caching Policy

DATA LVM VG | 4 (database server with 8 cores or less) / 8 (database server with more than 16 cores) | 32K | Yes (Normal)
LOG LVM VG | 4 | 4K | Yes (Normal)
FRA LVM VG | 4 | 32K | No

When creating an EXT file system on a logical volume, the stride and stripe-width options must be used. For example:

stride=2,stripe-width=16 (for Nimble performance policy 8KB block size with 8 volumes)
stride=4,stripe-width=32 (for Nimble performance policy 16KB block size with 8 volumes)
stride=8,stripe-width=64 (for Nimble performance policy 32KB block size with 8 volumes)

Note: The stripe-width value depends on the number of volumes and the stride size. A calculator can be found at http://busybox.net/~aldot/mkfs_stride.html

Examples of LVM & EXT Setup

Create volume groups:

[root@mktg04 ~]# vgcreate vgextdata /dev/mapper/extdata[1-8]
[root@mktg04 ~]# vgcreate vgextlog /dev/mapper/extlog[1-4]
[root@mktg04 ~]# vgcreate vgextarch /dev/mapper/extarch[1-4]

Create logical volumes:

[root@mktg04 ~]# lvcreate -l <# of extents> -i 8 -I 4096 -n vol1 vgextdata
[root@mktg04 ~]# lvcreate -l <# of extents> -i 4 -I 4096 -n vol1 vgextlog
[root@mktg04 ~]# lvcreate -l <# of extents> -i 4 -I 4096 -n vol1 vgextarch

Create EXT file systems:

[root@mktg04 ~]# mkfs.ext4 /dev/vgextdata/vol1 -b 4096 -E stride=2,stripe-width=16
[root@mktg04 ~]# mkfs.ext4 /dev/vgextlog/vol1 -b 4096
[root@mktg04 ~]# mkfs.ext4 /dev/vgextarch/vol1 -b 4096 -E stride=8,stripe-width=32

Mount options in the /etc/fstab file for iSCSI:

/dev/vgextdata/vol1 /u01/app/extdata ext4 _netdev,noatime,nodiratime,discard,barrier=0 0 0
/dev/vgextlog/vol1 /u01/app/extlog ext4 _netdev,noatime,nodiratime,discard,barrier=0 0 0
/dev/vgextarch/vol1 /u01/app/extarch ext4 _netdev,noatime,nodiratime,discard,barrier=0 0 0

Mount options in the /etc/fstab file for Fibre Channel:

/dev/vgextdata/vol1 /u01/app/extdata ext4 noatime,nodiratime,discard,barrier=0 0 0
/dev/vgextlog/vol1 /u01/app/extlog ext4 noatime,nodiratime,discard,barrier=0 0 0
/dev/vgextarch/vol1 /u01/app/extarch ext4 noatime,nodiratime,discard,barrier=0 0 0

Oracle Recommended Settings

Table 9.

Setting | Value
DB block size | 32 KB
db_writer_processes | 4 (DB server with 4 ASM disks for DATADG) / 8 (DB server with 8 ASM disks for DATADG)
log_buffer size | ~1.6 MB
_disk_sector_size_override | TRUE
Online redo log block size | 4 KB
filesystemio_options | setall

Example of creating new log files:

ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/app/extlog/log5') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/app/extlog/log6') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 7 ('/u01/app/extlog/log7') SIZE 4096M BLOCKSIZE 4K;
ALTER DATABASE ADD LOGFILE GROUP 8 ('/u01/app/extlog/log8') SIZE 4096M BLOCKSIZE 4K;

Nimble Storage, Inc.
211 River Oaks Parkway, San Jose, CA 95134
Tel: 877-364-6253
www.nimblestorage.com / info@nimblestorage.com

© 2015 Nimble Storage, Inc. Nimble Storage, InfoSight, SmartStack, NimbleConnect, and CASL are trademarks or registered trademarks of Nimble Storage, Inc. All other trademarks are the property of their respective owners. BPG-ORACLE-0215