Best Practices for Oracle on Pure Storage. The First All-Flash Enterprise Storage Array
Overview

The principal difference between configuring database storage on a Pure Storage FlashArray instead of spinning disks is that virtually all of your architecture choices are centered around manageability, not performance. Specifically, none of the following factors are relevant on a Pure Storage array:

- Stripe width and depth
- RAID level (mirroring)
- Intelligent Data Placement (short stroking)
- O/S and database block size
- ASM vs. File System

Striping refers to distributing files across multiple hard drives to enable parallelized access and to maximize IOPS. A Pure Storage array consists of 22 solid-state disks per shelf, and the Purity Operating Environment automatically distributes data across all drives in the array using an algorithm designed to optimize performance and provide redundancy. In other words, striping is automatic.

The Pure Storage redundancy technology is called RAID-3D, and it is designed specifically to protect against the three failure modes specific to flash storage: device failure, bit errors, and performance variability. No other form of RAID protection is needed, so you don't need to compromise capacity or performance for data protection. RAID is automatic.

Just as striping and mirroring are irrelevant on a Pure Storage array, so is block size. Pure Storage is based on a fine-grained 512-byte geometry, so there are no block alignment issues as you might encounter in arrays designed with, for example, a 4KB geometry. Another benefit is a substantially higher deduplication rate than seen on other arrays offering data reduction. Other flash vendors have architected their solutions around the new Advanced Format (AF) technology, which allows for 4KB physical sector sizes instead of the traditional 512B sector size. But since solid-state disks don't have sectors, cylinders, or spindles, the Purity Operating Environment is designed from the ground up to take advantage of flash's unique capabilities. The benefit to the user is that you can realize the benefits of flash performance without being shackled to any of the constraints of the disk paradigm.

In this document we provide information to help you optimize the Pure Storage FlashArray for your Oracle database workload. Please note that these are general guidelines that are appropriate for many workloads, but as with all guidelines you should verify that they are appropriate for your specific environment.
Operating System Recommendations

Pure Storage has operating system recommendations that apply to all deployments: databases, VDI, etc. These recommendations apply whether you are using Oracle Automatic Storage Management (ASM), raw devices, or a file system for your database storage.

Multipath Configuration

Always use multipathing on a Pure Storage FlashArray. Our all-purpose configuration (tested on RHEL 6.3 and Ubuntu 11.04) for /etc/multipath.conf is as follows:

defaults {
    polling_interval 1
}

devices {
    device {
        vendor               "PURE"
        path_selector        "round-robin 0"
        path_grouping_policy multibus
        rr_min_io            1
        path_checker         tur
        fast_io_fail_tmo     10
        dev_loss_tmo         30
    }
}

The settings assigned to PURE devices are explained here:

polling_interval 1
    Specifies the frequency (in seconds) to check that each path is alive.

vendor "PURE"
    Applies these settings to PURE devices only.

path_selector "round-robin 0"
    Loops through every viable path. This setting does not consider path queue length or service time.

path_grouping_policy multibus
    Places all paths in a single priority group. This setting ensures that all paths are in use at all times, preventing a latent path problem from going unnoticed.

rr_min_io 1
    Sends 1 I/O down each path before switching to the next path. Using higher numbers can result in inferior performance, as I/O bursts are sent down each path instead of spreading the load evenly.

path_checker tur
    Uses the SCSI command Test Unit Ready to determine whether a path is working. This differs from the default RHEL setting of readsector0/directio. When a Pure Storage array is failing over, read operations will not be serviced, but the array will continue to respond to Test Unit Ready; this keeps multipath from propagating an I/O error up to the application, even beyond the SCSI device timeout.

fast_io_fail_tmo 10
    Specifies the number of seconds the SCSI layer will wait after a problem has been detected on a fibre channel port before failing I/O to devices on that remote port. This setting should be less than dev_loss_tmo.

dev_loss_tmo 30
    Specifies the number of seconds the SCSI layer will wait after a problem has been detected on a fibre channel remote port before removing it from the system.
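Once this file is in place, reload the multipath service and confirm that PURE devices picked up the policy. A minimal check, assuming a volume is already mapped; the device names and WWID in the output are illustrative:

# service multipathd reload
# multipath -ll
3624a9370bb22e766dd... dm-4 PURE,FlashArray
size=250G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:0:1 sdb 8:16 active ready running
  `- 2:0:0:1 sdc 8:32 active ready running

All paths should appear in a single priority group using the round-robin 0 selector.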
SCSI Device Settings

For optimum performance, we also recommend the following device settings:

1. Set the block device scheduler to [noop]. This setting avoids wasting CPU resources on I/O scheduling. The default value is [cfq] (completely fair queuing).

2. Set rq_affinity to 2. This setting avoids interrupt contention on a particular CPU by scheduling I/O on the core that originated the process. The default behavior is to schedule processes on the first core in a package unit, which tends to lock cross calls onto 1 or 2 cores (equal to the number of sockets in your system).

3. Set add_random to 0. This setting reduces CPU overhead due to entropy collection (for activities such as generating ssh keys).

You can apply these settings either manually (not recommended, but acceptable if LUNs are already in use with different settings) or automatically.

Manually set the scheduler:

for disk in `lsscsi | grep PURE | awk '{ print $6 }'`
do
    echo noop > /sys/block/${disk##/dev/}/queue/scheduler
done

Manually set rq_affinity:

for disk in `lsscsi | grep PURE | awk '{ print $6 }'`
do
    echo 2 > /sys/block/${disk##/dev/}/queue/rq_affinity
done

Manually set add_random:

for disk in `lsscsi | grep PURE | awk '{ print $6 }'`
do
    echo 0 > /sys/block/${disk##/dev/}/queue/add_random
done

We recommend using udev to make these settings persist across reboots and any other udev triggering event, such as a drive replacement. The recommended rules file is as follows:

# Recommended settings for Pure Storage FlashArray.

# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"
# Schedule I/O on the core that initiated the process
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

Save this file as:

/etc/udev/rules.d/99-pure-storage.rules [RHEL 6.x]
/lib/udev/rules.d/99-pure-storage.rules [Ubuntu]

You can activate the udev rules with:

udevadm trigger

Process Prioritization and Pinning

The log writer (ora_lgwr_{oracle_sid}) is often a bottleneck for extremely heavy OLTP workloads, since it is a single process and it must persist every transaction. A typical AWR Top 5 Timed Foreground Events report might look like this:

[Screenshot: AWR Top 5 Timed Foreground Events report]

If your AWR report shows high waits on log file sync or log file parallel write, you can consider making the adjustments below (a combined sketch follows at the end of this section). However, do not do so unless your system has eight or more cores.

- To increase log writer process priority, use renice. E.g., if the log writer process id is 27250: renice -n <priority> -p 27250. Probably not advisable if your system has fewer than eight cores.
- To pin the log writer to a given core, use taskset. E.g., if the log writer process id is 27250: taskset -pc <core> 27250. Probably not advisable if you have fewer than 12 cores.

We have found that the Pure Storage FlashArray can sustain redo log write rates over 100MB/s.
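To apply the renice and taskset adjustments above without hard-coding a process id, you can look up the log writer by name. A minimal sketch, run as root; the priority of -10 and core 3 are illustrative values, not Pure Storage recommendations:

# Find the log writer PID for this instance
LGWR_PID=$(pgrep -f "ora_lgwr_${ORACLE_SID}")

# Raise its scheduling priority (illustrative value)
renice -n -10 -p ${LGWR_PID}

# Pin it to a single core (illustrative core number)
taskset -pc 3 ${LGWR_PID}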
Provisioning Storage on the Pure Storage FlashArray

You can provision storage from either the Pure Storage GUI management tool or the command line, as illustrated here.

1. Create a volume using either the GUI or CLI. Command line equivalent:

purevol create --size 250G oravol10
2. Connect the volume to the host. Command line equivalent:

purevol connect --host warthog oradata10

3. Scan for the new volume on the database server (as root):

# rescan-scsi-bus.sh -i -r

4. Flush any unused multipath device maps (as root):

# multipath -F
5. Detect and map the new volume with multipath (as root):

# multipath -v2

Note the new volume's unique device identifier (UUID), which is the same as the serial number seen in the Details section of the GUI. In this case it is 3624a9370bb22e766dd...a.

At this point, the newly provisioned storage is ready to use. You can either create a file system on it or use it as an Automatic Storage Management (ASM) disk.
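Steps 3 through 5 can be combined into a short host-side script so that newly connected volumes appear without a reboot. A sketch, assuming root privileges and the sg3_utils package that provides rescan-scsi-bus.sh:

#!/bin/bash
# Scan for new LUNs, flush stale multipath maps, then rebuild the maps
rescan-scsi-bus.sh -i -r
multipath -F
multipath -v2

# Show the resulting PURE multipath devices and their WWIDs
multipath -ll | grep -i pure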
ASM versus File System

On a Pure Storage FlashArray there is no significant performance benefit to using ASM over a traditional file system, so the decision can be driven by your operational policies and guidelines. Whichever storage mechanism you choose will perform well. From a DBA's perspective, ASM does offer additional flexibility not found with file systems, such as the ability to move ASM disks from one disk group to another, resize disks, and add volumes dynamically.

Recommendations Common to ASM and File System

Unlike traditional storage, IOPS are not a function of LUN count. In other words, you get the same IOPS capacity with 1 LUN as you do with 100. However, since it is often convenient to monitor database performance by I/O type (for example, LGWR, DBWR, TEMP), we recommend creating ASM disk groups or file systems dedicated to these individual workloads. This strategy allows you to observe the characteristics of each I/O type either at the command line with tools like iostat and pureadm, or with the Pure Storage GUI. In addition to isolating I/O types to individual disk groups, you should also isolate the flash recovery area (FRA). We also recommend opting for a few large LUNs per disk group.

If performance is a critical concern, we also recommend that you do not multiplex redo logs. It is not necessary for redundancy, since RAID-3D provides protection against media failures, and multiplexing redo logs introduces a performance impact of up to approximately 10% for a heavy OLTP workload. If your operations policy requires you to multiplex redo logs, we recommend placing the group members in
separate disk groups or file systems. For example, you can create 2 disk groups, REDOSSD1 and REDOSSD2, and multiplex across them:

alter database add logfile group 1 ('+REDOSSD1', '+REDOSSD2') size 2g;

SELECT   g.name groupname,
         d.path,
         g.sector_size,
         g.block_size,
         g.state
FROM     v$asm_diskgroup g,
         v$asm_disk d
WHERE    d.group_number = g.group_number
AND      g.name LIKE '%REDOSSD%'
ORDER BY g.name;

                          Sector   Block
Group Name   Path         Size     Size    STATE
REDOSSD1     /dev/dm-...  ...      ...     CONNECTED
REDOSSD2     /dev/dm-...  ...      ...     CONNECTED

2 rows selected.

Finally, while some flash storage vendors recommend a 4K block size for both redo logs and the database itself (to avoid block misalignment issues), Pure Storage does not. Since the Pure Storage FlashArray is designed on a 512-byte geometry, we never have block alignment issues. Performance is completely independent of block size.

ASM-Specific Recommendations

In Oracle 11gR3 the default striping for the ONLINELOG template changed from FINE to COARSE. In OLTP workload testing we found that the COARSE setting for redo logs performs about 20% better. Since the Pure Storage FlashArray includes RAID-3D protection, you can safely use External Redundancy for ASM disk groups. Other factors such as sector size and AU size do not have a significant bearing on performance.

ASM Disk Group Recommendations

Disk Group   Sector Size   Striping   AU Size   Redundancy   Notes
ORACRS       512           COARSE     ...       External     Small disk group for CRS
ORADATA      512           COARSE     ...       External     Database segments
ORAREDO      512           COARSE     ...       External     Redo logs
ORAFRA       512           COARSE     ...       External     Flash Recovery Area
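Creating one disk group per I/O type is a one-time operation. Below is a minimal sketch of creating the ORAREDO group from the grid account with the External Redundancy setting recommended above; the device path assumes the multipath aliasing described later in this document, and the AU size shown is an illustrative default, not a Pure Storage requirement:

sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP ORAREDO EXTERNAL REDUNDANCY
  DISK '/dev/mapper/ASMDISK01'
  ATTRIBUTE 'au_size' = '1M';
EOF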
ASM Space Reclamation

As you drop, truncate, or resize database objects in an ASM environment, the space metrics reported by the data dictionary (DBA_FREE_SPACE, V$ASM_DISKGROUP, V$DATAFILE, etc.) will reflect your changes as expected. However, these actions may not always trim (free) space on the array immediately. Oracle provides a utility called the ASM Storage Reclamation Utility (ASRU) which expedites the trim operation. For example, after dropping 1.4TB of tablespaces and data files, Oracle reports the newly available space in V$ASM_DISKGROUP, but puredb list space still considers the space to be allocated.

Consider the case when we drop the 190GB tablespace ASRUDEMO, which is in the ORADATA disk group. Before dropping the tablespace:

SELECT name,
       total_mb/(1024) total_gig,
       free_mb/(1024) free_gig,
       (total_mb-free_mb)/1024 used_gig
FROM   v$asm_diskgroup;

             Group Total   Group Free   Group Used
NAME         GB            GB           GB
ORADATA      2,...         ...          ...
ORAFRA       1,...         ...          ...
ORAGRID      ...           ...          ...
REDOSSD1     ...           ...          ...
REDOSSD2     ...           ...          ...

5 rows selected.

And on the storage array:

[Screenshot: array GUI capacity view]
After we drop the ASRUDEMO tablespace, v$asm_diskgroup updates the available space as expected:

drop tablespace tpchtab including contents and datafiles;

Tablespace dropped.

SELECT name,
       total_mb/(1024) total_gig,
       free_mb/(1024) free_gig,
       (total_mb-free_mb)/1024 used_gig
FROM   v$asm_diskgroup;

             Group Total   Group Free   Group Used
NAME         GB            GB           GB
ORADATA      2,...         ...          ...
ORAFRA       1,...         ...          ...
ORAGRID      ...           ...          ...
REDOSSD1     ...           ...          ...
REDOSSD2     ...           ...          ...

5 rows selected.

However, we don't see the space recovered on the storage array:

[Screenshot: array GUI capacity view, still showing the space as allocated]
Although the array's garbage collection will free the space eventually, we can use the ASRU utility (under the grid O/S account on the database server) to trim the space immediately:

[Screenshot: ASRU run against the ORADATA disk group]

After the ASRU run, the recovered space is visible in the Physical Space column of the puredb list space report:

[Screenshot: puredb list space report]
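ASRU is distributed by Oracle as a standalone script; a typical invocation passes the disk group name and runs as the grid user with the ASM environment set. A minimal sketch with illustrative paths:

# As the grid user, with ORACLE_SID pointing at the ASM instance
export ORACLE_SID=+ASM
cd /home/grid/asru
./ASRU ORADATA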
ASMLib and Alternatives

ASMLib is an Oracle-provided utility that allows you to configure block devices for use with ASM. Specifically, it marks devices as ASM disks and sets their permissions so that the O/S account that runs ASM (typically either grid or oracle) can manipulate these devices. For example, to create an ASM disk named MYASMDISK backed by /dev/dm-2 you would issue the command:

/etc/init.d/oracleasm createdisk MYASMDISK /dev/dm-2

Afterwards, /dev/dm-2 would still appear to have the same ownership and permissions, but ASMLib will create a file /dev/oracleasm/disks/MYASMDISK owned by the O/S user and group that is identified in /etc/sysconfig/oracleasm. You tell the ASM instance to look for potential disks in this directory through the asm_diskstring initialization parameter.

The problem with ASMLib is that Oracle stopped providing it with RHEL 6.3 and beyond, although it is still provided for Oracle Unbreakable Linux. Fortunately, there are simpler alternatives that work on RHEL (as well as Oracle Unbreakable Linux).

ASMLib Alternative 1: udev

On RHEL 6.x you can use udev to present devices to ASM. Consider the device mpathbk created above. The device is created as /dev/mapper/mpathbk, linked to /dev/dm-4, and owned by root. Perform the following steps to change the device ownership to grid:asmadmin (a sketch of both steps follows this list).

1. Create an entry for the device in the udev rules file /etc/udev/rules.d/12-dm-permissions.rules.
2. Use udevadm to trigger udev events and confirm the change in ownership for the block device.
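A minimal sketch of such a rules entry and the follow-up commands, assuming the dm-4 device and the grid:asmadmin ownership used in this example; matching on the kernel name is illustrative, and a stable key such as the device WWID is more robust in practice:

# /etc/udev/rules.d/12-dm-permissions.rules (illustrative entry)
KERNEL=="dm-4", OWNER="grid", GROUP="asmadmin", MODE="0660"

# Trigger udev events, then confirm the new ownership
udevadm trigger
ls -l /dev/dm-4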
Note the change in ownership to grid:asmadmin.

3. Use sqlplus or asmca to create a new disk group or to put the new device in an existing disk group. Since the Purity Operating Environment provides RAID-3D, you can safely use External Redundancy for the disk group. Note that your asm_diskstring (discovery path) should be /dev/dm*.

After clicking OK, the device will be added to the disk group and an ASM rebalance operation will execute automatically. We recommend using the same size disk for all members of a disk group for ease of rebalancing operations.
ASMLib Alternative 2: multipath

The udev rules described above do not work on Linux 5.7, but you can effect the ownerships required for ASM through the multipaths stanza of the /etc/multipath.conf file:

defaults {
    polling_interval 1
}

devices {
    device {
        vendor               "PURE"
        path_selector        "round-robin 0"
        path_grouping_policy multibus
        rr_min_io            1
        path_checker         tur
        fast_io_fail_tmo     10
        dev_loss_tmo         30
    }
}

multipaths {
    multipath {
        wwid  3624a937034fedb8251d5dcae...e
        alias ASMDISK01
        uid   grid
        gid   asmadmin
        mode  660
    }
    multipath {
        wwid  3624a937034fedb8251d5dcae...f
        alias ASMDISK02
        uid   grid
        gid   asmadmin
        mode  660
    }
    multipath {
        wwid  3624a937034fedb8251d5dcae...
        alias ASMDISK03
        uid   grid
        gid   asmadmin
        mode  660
    }
}

These entries will create devices with the proper ownership and permissions in /dev/mapper. In order for ASM to recognize these devices, the asm_diskstring should be set to /dev/mapper.
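After adding the multipaths stanza, reload the maps and confirm that the aliases exist with the expected ownership before pointing asm_diskstring at them:

# service multipathd reload
# multipath -ll | grep ASMDISK
# ls -l /dev/mapper/ASMDISK*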
File System Recommendations

There is no significant performance penalty for using a file system instead of ASM. As with ASM, we recommend placing data, redo, and the flash recovery area (FRA) on separate volumes for ease of administration. We also recommend using the ext4 file system and mounting it with the discard and noatime options. A sample /etc/fstab file would show mount points /u01 (for Oracle binaries, trace files, etc.), /oradata (for data files), and /oraredo (for online redo logs); a sketch of such a file follows at the end of this section.

The man page for mount describes the discard flag as follows:

discard/nodiscard
    Controls whether ext4 should issue discard/trim commands to the underlying
    block device when blocks are freed. This is useful for SSD devices and
    sparse/thinly-provisioned LUNs, but it is off by default until sufficient
    testing has been done.

Mounting the ext4 file system with the discard flag causes freed space to be trimmed immediately, just as the ASRU utility trims the storage behind ASM disk groups.
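The sketch below shows such an /etc/fstab; the device aliases are illustrative, so substitute the multipath aliases or volume names in use on your system:

# device                    mount point  fs    options          dump  pass
/dev/mapper/oravol-u01      /u01         ext4  noatime          0     0
/dev/mapper/oravol-oradata  /oradata     ext4  discard,noatime  0     0
/dev/mapper/oravol-oraredo  /oraredo     ext4  discard,noatime  0     0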
Oracle Settings

For the most part, you don't need to make changes to your Oracle configuration in order to realize immediate performance benefits on a Pure Storage FlashArray. However, if you have an extremely heavy OLTP workload, there are a few tweaks you can make that will help you squeeze the most I/O out of your system. In our testing, we found that the following init.ora settings increased performance by about 5% (a sketch showing how to apply them appears at the end of this section):

- _high_priority_processes='lms*|LGWR|PMON'
  Sets process scheduling priority to RR and minimizes the need to wake LGWR. This is an underscore parameter: consult Oracle Support.
- filesystemio_options = SETALL
  Allows asynchronous I/O.
- log_buffer = {at least 15MB}
  Values over 100MB are not uncommon.

Use the CALIBRATE_IO Utility

Oracle provides a built-in package, dbms_resource_manager.calibrate_io, which, like the ORION tool, generates workload on the I/O subsystem. However, unlike ORION, it works with your running Oracle database, and it generates statistics for the optimizer. Therefore, you should run calibrate_io and gather statistics for your application schema at least once before launching your application. The calibrate_io script as provided in the Oracle documentation is presented here using our recommended values for <DISKS> and <MAX_LATENCY>:

SET SERVEROUTPUT ON
DECLARE
    lat  INTEGER;
    iops INTEGER;
    mbps INTEGER;
BEGIN
    -- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (<DISKS>, <MAX_LATENCY>, iops, mbps, lat);
    DBMS_RESOURCE_MANAGER.CALIBRATE_IO (1000, 10, iops, mbps, lat);
    DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
    DBMS_OUTPUT.PUT_LINE ('latency  = ' || lat);
    dbms_output.put_line ('max_mbps = ' || mbps);
END;
/

Typically you will see output similar to:

max_iops = ...
latency  = 0
max_mbps = 1516
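The init.ora settings above can be applied with ALTER SYSTEM and take effect after an instance restart. A minimal sketch from the oracle account; the 128MB log_buffer is an illustrative value within the range discussed above, and the underscore parameter should only be set after consulting Oracle Support:

sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET "_high_priority_processes"='lms*|LGWR|PMON' SCOPE=SPFILE;
ALTER SYSTEM SET filesystemio_options='SETALL' SCOPE=SPFILE;
ALTER SYSTEM SET log_buffer=134217728 SCOPE=SPFILE; -- 128MB, illustrative
EOF
# Restart the instance so the SPFILE-scoped changes take effect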
Conclusion

Many of the architecture decisions and compromises you have had to make with traditional storage are not relevant on a Pure Storage FlashArray. You do not need to sacrifice performance to gain resiliency, nor do you need to change existing policies that you may already have in place. In other words, there is no wrong way to deploy Oracle on Pure Storage; you can expect performance benefits out of the box. That said, there are some configuration choices you can make to increase flexibility and maximize performance:

- Use the Pure Storage recommended multipath.conf settings
- Set the scheduler, rq_affinity, and add_random settings for Pure Storage devices
- Separate different I/O workloads onto dedicated LUNs for enhanced visibility
- If you use a file system for data files, use ext4
- Always run calibrate_io

Feel free to contact Chas. Dye, Database Solutions Architect ([email protected]), if you have questions or if you would like to discuss the suitability of Pure Storage in your environment.