EMC Solutions for Oracle RAC 11g on Windows Direct NFS
EMC Celerra Unified Storage Platforms

Revision 1.0

EMC NAS Validation Group
Corporate Headquarters
Hopkinton, MA 01748-9103
1-508-435-1000
www.emc.com

Copyright © 2009 EMC Corporation. All rights reserved.
Published February 2009

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Oracle RAC 11g on Windows Direct NFS for EMC Celerra Unified Storage Platforms
P/N h6022

Contents

About this Document

Chapter 1  Solution Overview
  Business challenge
  Technology solution
  Solution advantages
  Terminology

Chapter 2  Solution Reference Architecture
  Overall architecture
    General characteristics
  Network architecture
    Virtual local area networks
    EMC NS40 Data Mover ports
  Storage architecture
    High availability and failover
    RAID type and RAID group configuration
    Disk volume setup
    MVM compared with AVM
    File systems, exports, and mount points
  Database server architecture
    Oracle RAC 11g Server network architecture
    iSCSI client
    High availability and failover
  Application software
  Hardware and software resources
    Hardware resources
    Software resources

Chapter 3  Solution Best Practices
  Solution architecture best practices
    Solution validation and performance testing overview
  Storage setup and configuration
    Disk drive recommendations
    RAID groups
    iSCSI LUN
    Volume management
    Stripe size
    Data Mover parameter setup
    Load distribution
    High availability
  Network setup and configuration
    Gigabit connection
    Virtual local area networks
    Network port configuration
    Network security
    Jumbo frames
  Database server setup and configuration
    Server BIOS
    Hyperthreading
    Processor
    Memory
  Windows setup and configuration
    Background application affinity
    System cache
    Server role
    Virtual memory
    Oracle memory structure
    Windows application and services
    Windows patches
    Static IP address
    iSCSI connection
  Oracle Direct NFS client
  Oracle database setup and configuration
    Oracle Database file placement
    Initialization parameters
    Recommendation for Oracle files

Chapter 4  Solution Applied Technologies
  Physical backup and recovery using RMAN
    Oracle Recovery Manager (RMAN)

Chapter 5  Conclusion
  Conclusion

Figures

Figure 1  Oracle RAC 11g on Windows using Direct NFS system architecture
Figure 2  EMC NS40 Data Mover ports and traffic types
Figure 3  Oracle RAC 11g three FC shelves and single SATA shelf RAID configuration
Figure 4  User load scaling
Figure 5  CPU utilization chart
Figure 6  Direct NFS path scaling
Figure 7  Database throughput scaling with NICs
Figure 8  Database throughput scaling with FTS sessions
Figure 9  Oracle file placement on Celerra disk volumes
Figure 10  iSCSI connections between each Oracle 11g database server and Celerra
Figure 11  Microsoft iSCSI initiator target logon configuration
Figure 12  RMAN full backup
Figure 13  RMAN incremental backup

Tables

Table 1  Solution advantages
Table 2  Oracle database 11g solution terminology
Table 3  File system layout
Table 4  Application server network interface configuration
Table 5  Application types and locations
Table 6  Hardware specifications
Table 7  Software specifications
Table 8  Recommendations for RAID types corresponding to Oracle file types
Table 9  Storage pools
Table 10  Oracle RAC 11g solution VLANs
Table 11  ORANFSTAB parameters
Table 12  Initialization options

About this Document

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with insight into the challenges currently facing its customers.

This solution guide provides an overview of the architecture, best practices, and implementation strategies of EMC Solutions for Oracle RAC 11g Windows Direct NFS on EMC Celerra NS40.

Purpose

This guide provides an overview of an Oracle RAC 11g implementation on Windows Server 2003 using the Oracle Direct NFS protocol and Celerra NFS as the back-end storage. Information in this document can be used as the basis for a solution build, white paper, best practices document, or training. It can also be used by other EMC organizations.

Audience

This document is intended for personnel, partners, and customers looking for a cost-effective way to implement production Oracle databases. Readers should already be familiar with the installation and administration of their server operating system and the Oracle RAC 11g software.

Related documents

The following documents provide additional information and are located on Powerlink. Access to these documents is based on your login credentials. If you do not have access to the content listed below, contact your EMC representative:
- Celerra Network Server user documentation
- EMC Solutions for Oracle Database 10g/11g for Midsize Enterprises
- EMC Celerra NS Series Multi-Protocol Storage System Best Practice Planning

The following third-party documents, available at http://www.oracle.com, provide additional information:
- Oracle Database Installation Guide 11g Release 1 (11.1) for Microsoft Windows (x64)
- Oracle RAC: Overview of Real Application Clustering Features and Functionality
- Oracle Database Performance Tuning Guide 11g Release 1 (11.1)
- Oracle Database Backup and Recovery Basics
- Oracle Database 11g Architecture on Windows
- Oracle Database 11g Direct NFS Client white paper

Chapter 1  Solution Overview

This chapter presents these topics:
- Business challenge
- Technology solution
- Solution advantages
- Terminology

Business challenge

Small-to-medium size businesses (SMBs) need a cost-effective way to implement production Oracle databases. Customers with this business profile often need to deploy database management systems (DBMS) within a limited budget. While limited in working capital, these businesses require the same features and functionality associated with enterprise-level solutions. The SMB customer needs a DBMS solution that can start small and grow as the business grows.

Technology solution

Oracle Database 11g Direct NFS Client integrates NFS client functionality directly into the Oracle software and provides a standard NFS client implementation across all hardware and operating system platforms. The Oracle RAC 11g on Windows using Direct NFS solution is deployed on an EMC Celerra NS40 multi-protocol storage system.

On a Windows system, the Oracle Direct NFS protocol can be used to access only certain types of files. The block storage requirement of this solution therefore uses the iSCSI protocol, while the file-level access requirement uses NFS; the storage system is accessed through both NFS and iSCSI. With NFS (or iSCSI), the storage deployment follows the methods used with direct-attached storage (DAS) or storage area networks (SAN). The NFS and iSCSI volumes on the Celerra are presented to the Oracle database server as though they were internal physical disks.

The Oracle Database RAC 11g on Celerra NFS solution consists of three individual solution components: Consolidation, Backup/Recovery, and Resiliency.

The Consolidation component details how the Celerra storage is provisioned and presented to the Windows server. Its performance is tested using an industry-standard online transaction processing (OLTP) database benchmark, with real-world tuning on a reasonably priced and configured platform. The Decision Support System (DSS) workload is generated and tested using simulation scripts provided by Oracle Corporation. These scripts initiate full table scan operations on large tables and are run in parallel on a single node in multiple SQL*Plus sessions.

The Backup/Recovery component verifies basic backup and restore capabilities using Oracle Recovery Manager (RMAN), and the resulting performance characteristics. This backup uses only the functionality provided by the database server and operating system software to perform backup and recovery, and it uses the database server's CPU, memory, and I/O channels for all backup, restore, and recovery operations.

The Resiliency component tests the availability of the Oracle RAC database in various failure scenarios and measures the resulting impact.

Solution advantages

Table 1 lists the solution advantages that address the business challenges described above.

Table 1  Solution advantages
- Lower total cost of ownership (TCO): Reduces acquisition, administration, and maintenance costs compared to equivalent DAS or SAN deployments.
- Greater manageability: Eases implementation, provisioning, and volume management.
- High availability: Implements a clustering architecture that provides very high levels of data availability.
- Flexibility: Makes databases, or copies of databases, easily available (through remounts) to other servers.
- Protection: Integrates both availability and backup.
- Multi-protocol consolidation: Uses FC, iSCSI, NFS, and CIFS within a single storage platform.
- EMC Information Lifecycle Management (ILM): Implements tiered storage.

Terminology

As you use this solution guide, it is important to understand certain key terms related to the components of the solution. Table 2 defines the terms used in this document. Refer to the respective vendor documentation for more information.

Table 2  Oracle database 11g solution terminology
- Link aggregation: A high-availability feature based on the IEEE 802.3ad Link Aggregation Control Protocol (LACP) standard. It allows Ethernet ports with similar characteristics connected to the same switch to be combined into a single virtual device with a single MAC address.
- VLAN: Logical networks that function independently of the physical network configuration.
- OLTP: Online transaction processing; a real-time, high-performing relational database workload supporting thousands of concurrent users performing a common set of transactions.
- DSS: Decision Support System; a workload of large sequential-read queries typically generated by data warehousing and business intelligence applications.
- DOP: The degree of parallelism, a metric that indicates how many operations can be executed simultaneously on a table in the context of an Oracle database.
- Full backup: A non-incremental RMAN backup. Note that "full" does not refer to how much of the database is backed up, but to the fact that the backup is not incremental. Consequently, you can make a full backup of a single data file.
- Incremental backup: A backup in which only modified blocks are backed up.
- Oracle Recovery Manager (RMAN): The Oracle-preferred method for Oracle database backup and recovery. It provides block-level corruption detection during backup and restore, and uses file multiplexing to optimize performance and space consumption during backup.
- Physical storage backup: A full and complete copy of the database to different physical media.
- Point-in-time recovery: The incomplete recovery of database files to a non-current time.


Chapter 2  Solution Reference Architecture

This chapter presents these topics:
- Overall architecture
- Network architecture
- Database server architecture
- Application software
- Hardware and software resources

Overall architecture

This SMB implementation of Oracle RAC 11g can support hundreds of concurrent user connections. Typically, dual-processor Intel servers with 16 GB of RAM can be used as the Oracle database servers. The EMC Celerra NS40 is used as the storage device for the Oracle database and cluster files. The database servers access all database files over Ethernet using the TCP/IP communication protocol. To isolate database network traffic from public network traffic, a dedicated virtual LAN is used; a physically separate LAN can also be used for this purpose.

General characteristics

The Oracle RAC 11g Windows Direct NFS solution configuration has the following general characteristics (a hypothetical parameter sketch follows the list):
- Stores Oracle datafiles, online redo log files, archived log files, and flash recovery files on dedicated NFS file systems
- Mirrors the online redo log files across two different file systems using Oracle software multiplexing
- Mirrors the control files across the online redo log file systems
- Mirrors the Oracle Cluster Registry (OCR) files across different dedicated iSCSI LUNs
- Uses RAID-protected iSCSI and NFS file systems to satisfy the I/O demands of individual database objects
- Stores all database files on the EMC Celerra NS Series storage system, making server replacement relatively simple
- Runs Oracle RAC 11g Enterprise Edition x86-64 on Windows 2003 Enterprise Edition servers with 16 GB of physical RAM
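The file-placement scheme above translates directly into Oracle initialization parameters. The following is a minimal, hypothetical init.ora fragment, assuming the NFS exports are mapped to the Windows drive paths shown; all paths and sizes are illustrative only and are not part of the validated configuration:

    # Hypothetical fragment; drive letters and sizes are examples only
    control_files='E:\log1fs\control01.ctl','F:\log2fs\control02.ctl'
    db_create_file_dest='D:\datafs'
    db_create_online_log_dest_1='E:\log1fs'
    db_create_online_log_dest_2='F:\log2fs'
    db_recovery_file_dest='G:\flashfs'
    db_recovery_file_dest_size=100G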

Network architecture

Figure 1 illustrates the network architecture of the validated solution.

Figure 1  Oracle RAC 11g on Windows using Direct NFS system architecture

Virtual local area networks

The validated solution uses three VLANs to segregate network traffic, which improves throughput, manageability, application separation, high availability, and security, as shown in Figure 1. The Oracle RAC 11g Windows Direct NFS system architecture contains the following VLANs:
- Client/Driver VLAN: supports connectivity between the Oracle RAC 11g servers and the client workstations. Control and management of these devices are also provided over the client network.
- RAC Interconnect VLAN: supports connectivity between the Oracle RAC 11g servers for the network I/O required by Oracle Cluster Ready Services (CRS). Two network interface cards (NICs) on each Oracle RAC 11g server are configured for the RAC interconnect network. Link aggregation is configured on the servers to provide load balancing and port failover between the two ports of this network.
- Storage VLAN: uses both the iSCSI and NFS protocols to provide connectivity between the servers and the storage. Each database server connected to the storage VLAN has four NICs dedicated to it. The Microsoft iSCSI Initiator's multiple connections per session (MCS) feature is used between the database servers and the storage for path redundancy and iSCSI load balancing. The Oracle Direct NFS client also provides path redundancy and load-balances NFS traffic.

Switches

For the IP switches on which the Client and Storage VLANs are configured, EMC recommends switches that support Gigabit Ethernet (GbE) connections, jumbo frames, and port channeling.

EMC NS40 Data Mover ports

The network ports used on the EMC NS40 Data Mover for accessing storage from the database server hosts are shown in Figure 2.

Figure 2  EMC NS40 Data Mover ports and traffic types

Ports cge0 (the last character is a zero) through cge3 are used for both the iSCSI and NFS storage networks. They handle all I/O required by the database servers to the datafiles, online redo log files, archived log files, control files, and flash recovery files.

Storage architecture

To set up the storage configuration, complete the following tasks (an illustrative command sketch follows the list):
1. Establish the RAID levels.
2. Allocate hot spares.
3. Create disk volumes.
4. Assign disk volumes to the user-defined storage pool.
5. Create file systems from a user-defined storage pool that holds iSCSI LUNs.
6. Export the file systems to be used by NFS clients.
7. Create iSCSI targets.
8. Create iSCSI LUNs.
9. Mask the iSCSI LUNs to their associated iSCSI initiators on the Oracle RAC 11g servers.
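As an illustration of steps 6 through 8, the following Celerra CLI sketch assumes a Data Mover named server_2, a file system named datafs, an iSCSI target alias ocr_target, and example IP addresses and sizes. Exact server_export and server_iscsi syntax varies by DART release, so verify against the Celerra man pages before use:

    # Step 6: export the NFS file system to a database host (hypothetical host name)
    server_export server_2 -Protocol nfs -option root=dbhost1 /datafs
    # Step 7: create an iSCSI target with one network portal (illustrative portal group and IP)
    server_iscsi server_2 -target -alias ocr_target -create 1:np=10.0.3.10
    # Step 8: create a 2 GB LUN on the ocr1fs file system (illustrative size)
    server_iscsi server_2 -lun -number 1 -create ocr_target -size 2048 -fs ocr1fs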

High availability and failover

The EMC Celerra NS Series has built-in high-availability (HA) features. These HA features allow the Celerra NS Series to survive various failures without losing access to the Oracle RAC 11g database. They protect against the following:
- Data Mover failure
- Network port failure
- Power loss affecting a single circuit connected to the storage array
- Storage processor failure
- Disk failure

The EMC Celerra NS40 is configurable with either a single or a dual Data Mover. If the single Data Mover option is chosen, the Data Mover failover feature of the NS40 is not available.

RAID type and RAID group configuration

The solution's RAID configuration uses mixed RAID 1 and RAID 5 on one shelf of Fibre Channel drives and RAID 5 on the remaining three shelves, as illustrated in Figure 3.

Figure 3  Oracle RAC 11g three FC shelves and single SATA shelf RAID configuration

Disk volume setup

The Fibre Channel drives are used to store the Oracle RAC 11g datafiles, online redo log files, OCR files, control files, and temp files. The SATA drives are used to store the archived log files and flash recovery files. After the RAID groups are created and the LUNs are bound, the EMC Celerra NS Series automatically maps disk volumes to the physical LUNs, and the disk volumes become accessible to the Data Movers.

MVM compared with AVM

Manual Volume Management (MVM) gives the user the greatest flexibility to configure storage in a way that achieves the highest performance; however, it is a more complex method of configuring storage. Automatic Volume Management (AVM), on the other hand, is a simpler, more automated way of configuring storage, but it is not as flexible as MVM in its configuration options.

For MVM, after the disk volumes are created, metavolumes are created from sliced or striped volumes. Each metavolume is set up to stripe across multiple disk volumes so that multiple disks can work concurrently. For disks managed using AVM, system-defined or user-defined storage pools can be used. User-defined storage pools are the better option for those who want more control over their storage allocation while still utilizing the more automated management provided by AVM.

The RAID group layout for this solution was created using three standard Celerra storage templates. The following RAID configurations are used for this database solution:
- FC_RAID5_4+1_HS_R1_R1_4+1 (shelf 0_0)
- FC_RAID5_4+1_R5_4+1_R5_4+1 (shelf 1_0)
- FC_RAID5_4+1_R5_4+1_R5_4+1 (shelf 2_0)
- ATA_RAID5_HS_6+1_6+1 (shelf 3_0)

File systems, exports, and mount points

In the next step, the database file systems are created and exported to the database servers; file systems are also created to host the iSCSI LUNs. For this solution testing, the file systems, and the storage volumes used to create the iSCSI LUNs and NFS exports, are shown in Table 3.

Table 3  File system layout
- /datafs (NFS): Striped LUNs from all available RAID 5 groups on the FC shelves
- /log1fs (NFS): Striped LUN from half of the available RAID 1 group on an FC shelf
- /log2fs (NFS): Striped LUN from half of the available RAID 1 group on an FC shelf
- /flashfs (NFS): LUN from a RAID 5 group on the SATA shelf
- /archfs (NFS): LUN from a RAID 5 group on the SATA shelf
- /ocr1fs (iSCSI): LUN from a RAID 1 group on an FC shelf
- /ocr2fs (iSCSI): LUN from a RAID 1 group on an FC shelf
- /ocr3fs (iSCSI): LUN from a RAID 1 group on an FC shelf

Database server architecture

This section describes the database server architecture.

Oracle RAC 11g Server network architecture

Each Oracle RAC 11g server has seven network interfaces. Four interfaces connect to the iSCSI and NFS storage network. Two interfaces connect the server to the RAC interconnect, enabling the heartbeat and other network I/O required by Oracle Cluster Ready Services. One interface connects to the client network.

Table 4 lists the interfaces used.

Table 4  Application server network interface configuration
- eth0: Client network
- eth1: Cluster interconnect network
- eth2: Cluster interconnect network
- eth3: iSCSI and NFS storage network
- eth4: iSCSI and NFS storage network
- eth5: iSCSI and NFS storage network
- eth6: iSCSI and NFS storage network

iSCSI client

Each Oracle RAC 11g database server uses the iSCSI protocol, which runs on top of TCP/IP, to connect to the Celerra NS Series network file server for access to the OCR and voting disk files. While the Celerra NS40 has a native iSCSI target, each Oracle RAC 11g server requires the installation of the Microsoft iSCSI initiator. The Microsoft iSCSI initiator configuration uses the MCS functionality to enable multipath features such as load balancing and high availability.

Oracle Direct NFS client

With Oracle Database 11g, you can configure Oracle Database to access NFS v3 NAS devices directly using the Direct NFS client. The Direct NFS client includes two fundamental I/O optimizations to increase throughput and overall performance. First, it is capable of performing concurrent direct I/O, which bypasses any operating system-level caches. Second, it delivers optimized performance by automatically load-balancing requests across all specified paths. The Oracle Direct NFS client currently supports up to four parallel network paths to the storage, providing scalability and high availability. (An example oranfstab configuration appears at the end of this section.)

This Direct NFS solution is based on validation of Oracle 11g Direct NFS on the Windows operating system. Oracle 11g R1 has a known bug with regard to Direct NFS resiliency; the bug is resolved in the 11.1.0.7 patch. If the appropriate 11.1.0.7 patch for the Windows OS is not applied, there can be serious implications for the stability and continuity of a running database configured to use Direct NFS. EMC does not recommend implementing Direct NFS solutions on a NAS/IP storage architecture unless the appropriate Oracle 11.1.0.7 patch has been installed and configured. See Oracle MetaLink for more information on downloading and installing the Oracle 11.1.0.7 patch for your Windows operating system.

High availability and failover

TCP/IP provides the ability to establish redundant paths for sending I/O from one networked computer to another. This approach uses the Microsoft initiator's MCS with a round-robin load-balancing policy; it supports redundant paths that facilitate high availability and load balancing for the networked connection. The Oracle Direct NFS client is also capable of load-balancing I/O across all available paths, thus providing high availability.
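As an illustration of the Direct NFS client configuration described above, a hypothetical oranfstab (placed under %ORACLE_HOME%\dbs on each node) might map the four storage paths and the NFS exports as follows. The server name, IP addresses, and mount paths are examples only; Table 11 describes the ORANFSTAB parameters in detail:

    # Hypothetical oranfstab; one server entry with four load-balanced paths
    server: ns40_dm2
    local: 10.0.3.101
    path: 10.0.3.11
    path: 10.0.3.12
    path: 10.0.3.13
    path: 10.0.3.14
    export: /datafs mount: D:\oradata\datafs
    export: /log1fs mount: E:\oradata\log1fs
    export: /log2fs mount: F:\oradata\log2fs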

Application software

The Oracle RAC 11g binary files were installed on the database servers' local disks. The Oracle datafiles, online redo log files, archived log files, flashback recovery files, cluster files, and temp files reside on the EMC Celerra NS Series. The file systems were designed (in terms of the RAID level and number of disks used) to be appropriate for each file type. Table 5 lists each file type and its location.

Table 5  Application types and locations
- Database binary files: database server's local disk
- Datafiles, tempfiles: /datafs
- Online redo log files: mirrored across /log1fs and /log2fs
- Archived log files: /archfs
- Control files: mirrored across /log1fs and /log2fs
- Flash recovery area: /flashfs
- SP file: local file system on all nodes, with an identical path
- OCR files: mirrored across /ocr1fs and /ocr2fs
- Voting disks: mirrored across /ocr1fs, /ocr2fs, and /ocr3fs

Hardware and software resources

This section lists the hardware and software resources.

Hardware resources

Table 6 lists the hardware resources for the Oracle RAC 11g Windows Direct NFS solution.

Table 6  Hardware specifications
- Dell 2950 (EM64T) server; quantity: four. Configuration: two 3.0-GHz Intel Pentium 4 dual-core processors; 16 GB of memory; one 73 GB 10k rpm internal SCSI disk; two onboard 10/100/1000 Mb Ethernet NICs; two additional dual-port 10/100/1000 Mb Ethernet NICs; one additional quad-port 10/100/1000 Mb Ethernet NIC.
- Gigabit Ethernet switch; quantity: one for each VLAN (one for client, one for RAC interconnect, and one for storage). Configuration: VLAN support; optional jumbo frame support; LACP and/or EtherChannel support.
- Celerra NS40 networked server; quantity: one. Configuration: two Data Movers; four GbE network connections per Data Mover; three FC shelves (15 x 146 GB FC disks); one SATA shelf (15 x 750 GB SATA disks); one Control Station.

Software resources

Table 7 lists the software resources for the Oracle RAC 11g Windows Direct NFS solution.

Table 7  Software specifications
- Microsoft Windows 2003 Enterprise Edition SP2 (64-bit): one license per database server
- Oracle RAC 11g Enterprise Edition: one license per database server
- Celerra Manager Advanced Edition: one license per Celerra NS server
- Microsoft iSCSI Software Initiator 2.04 or later: free license
- DART version 5.6.40.3 or later: one license per Data Mover

Chapter 3  Solution Best Practices

This chapter presents these topics:
- Solution architecture best practices
- Storage setup and configuration
- Network setup and configuration
- Database server setup and configuration
- Windows setup and configuration
- Oracle Direct NFS client
- Oracle database setup and configuration

Solution architecture best practices

This section discusses the best practices for running Oracle RAC 11g Enterprise Edition x86-64 on Windows Server 2003 Enterprise Edition SP2 with an EMC Celerra Network Server. The topics covered include the setup and configuration of:
- Storage
- Network
- Database server hardware and BIOS
- Windows Server 2003 operating system
- Oracle software installation
- Database parameters and settings
- Backup and recovery

Solution validation and performance testing overview

The Oracle RAC 11g Windows Direct NFS Celerra NS Series solution was validated by EMC NAS Solutions Engineering. The reference architecture was tested for functionality and iteratively tuned for performance. The OLTP benchmark performance tests were run using Quest Benchmark Factory 5.0, whereas the DSS benchmark performance tests were run using Full Table Scan (FTS) scripts provided by Oracle Corporation.

Functional testing of the solution validated the following:
- Proper database operation
- Database integrity
- Database protection
- Database recoverability

The TPC-C performance tuning tested database transaction response time under an increasing TPC-C user load. The gating metric for the test was an average response time of 2 seconds; the test was considered to fail once the TPC-C transaction response time exceeded this limit. Iterations of this test procedure were performed to isolate and tune various system and Oracle settings; refer to Table 12 for more information about these settings.

With a four-node RAC configuration, the performance optimization tests yielded a maximum user load of approximately 6,100 TPC-C users and 303 transactions per second (TPS) in a tuned environment.

Figure 4 shows the user load scaling.

Figure 4  User load scaling

The conclusions drawn from the graph in Figure 4 are:
- The Oracle RAC 11g servers could accommodate a modest load on the CPU, while the Data Mover CPU on the NS40 showed only minor inflection.
- The Data Mover CPU shows plenty of headroom for scalability and the ability to service other processes, such as file serving.

Figure 5 shows the CPU utilization.

Figure 5  CPU utilization chart

Figure 6 shows the Direct NFS path scaling.

Figure 6  Direct NFS path scaling

As the graph in Figure 6 shows, OLTP database performance showed only minor inflection when moving from a configuration with one Direct NFS path to one with four Direct NFS paths.

The DSS scripts create a 30 GB database consisting of the TPC-H Lineitem table, which contains 180 million rows. A test was executed to measure I/O performance using the DSS scripts, which contain large, complex queries that result in intensive sequential data reads and table join operations. The DSS scripts can create only a single-instance (SI) database, so only one server in the network architecture was used. Because the DSS workload is I/O-intensive, all four network ports (Direct NFS paths) were configured to achieve the maximum possible throughput during performance testing. The DSS database used the /datafs file system, as described in the RAID type and RAID group configuration section, for placing the online redo logs, control files, and datafiles.

Figure 7 shows the achieved database throughput when the number of NICs was varied from one to four ports, with the DOP held at 32 and the number of FTS sessions held at 4. The database throughput scaled up linearly as the number of paths increased from 1 to 3.

Figure 7  Database throughput scaling with NICs

Figure 8 shows the achieved database throughput when the number of FTS query sessions was varied from one to four and the degree of parallelism (DOP) was configured to 8, 16, and 32. The database throughput increased with an increase in DOP.

Figure 8  Database throughput scaling with FTS sessions

Storage setup and configuration

This section discusses the best practices for storage configuration.

Disk drive recommendations

The following are general recommendations for disk drive settings:
- Drives with higher revolutions per minute (rpm) provide higher overall random-access throughput and shorter response times than drives with slower rpm. For better performance, higher-rpm drives are recommended.
- Because of their significantly better performance, Fibre Channel drives are always recommended for storing datafiles and online redo log files.
- Serial Advanced Technology-Attached (SATA) drives have slower rotational speed and moderate performance with random I/O. However, they are less expensive than Fibre Channel drives of the same or similar capacity. SATA drives are therefore the best option for storing archived redo logs and backup files. In this solution example, a second shelf of SATA drives would be required to provide disk capacity for archived redo logs and flashback recovery files, as FC and SATA drives cannot coexist on the same shelf.

RAID groups

Table 8 summarizes general recommendations for RAID types corresponding to different Oracle file types.

Note: Celerra RAID 1 with striping is essentially RAID 1+0, that is, a stripe of mirrors. Celerra provides striping on RAID 1 volumes.

Table 8  Recommendations for RAID types corresponding to Oracle file types

- Datafiles: RAID 1/FC: OK; RAID 5/FC: Recommended; RAID 5/SATA: Avoid.
  Note (RAID 1/FC): In some cases, if an application creates a large amount of temp activity, place the temporary tablespace datafiles on RAID 1 devices instead of RAID 5 devices. This provides a performance benefit because RAID 1 has superior sequential I/O performance. The same is true for undo tablespaces when an application creates a lot of undo activity. Further, an application that performs a large number of full table scans or index scans may benefit from datafiles placed on a RAID 1 device.
  Note (RAID 5/FC): RAID 5 is generally recommended for database files, due to storage efficiency. However, if the write I/O is greater than 30 percent of the total I/O, RAID 1 (with Celerra striping) can provide better performance, as it avoids hot spots and gives the best possible performance during a disk failure. Random write performance on RAID 1 can be 20 percent higher than on RAID 5.
- Control files: RAID 1/FC: OK; RAID 5/FC: OK; RAID 5/SATA: OK.
- Online redo logs: RAID 1/FC: Recommended; RAID 5/FC: OK; RAID 5/SATA: Avoid.
  Note (RAID 1/FC): Online redo log files should be put on RAID 1 devices. You should not use RAID 5 because the sequential write performance of distributed parity (RAID 5) is not as high as that of simple mirroring (RAID 1). Further, RAID 1 provides the best data protection, and protection of online redo log files is critical for Oracle recoverability.
- Archived logs: RAID 1/FC: OK; RAID 5/FC: OK; RAID 5/SATA: Recommended.
  Note (RAID 5/SATA): In some cases, placing archived redo log files on RAID 1 may be appropriate. RAID 1 for archived redo logs provides better mean time to recovery (MTTR), as its sequential read performance is superior to RAID 5. However, due to storage efficiency, RAID 5 can be chosen. This is a tradeoff, and must be determined on a case-by-case basis.
- Flashback recovery area: RAID 1/FC: OK; RAID 5/FC: OK; RAID 5/SATA: Recommended.
- OCR file / Voting disk: RAID 1/FC: OK; RAID 5/FC: OK; RAID 5/SATA: Avoid.
  Note (RAID 1/FC): We placed one copy on each of the online redo log volumes (RAID 1). You should use FC disks for these files, as unavailability of these files for any significant period of time (possibly due to disk I/O performance issues) can cause one or more of the RAC nodes to reboot and fence itself from the cluster.

Best practices for file system design dictate that a file system should consist entirely of volumes that are all of the same RAID type and that consist of the same number and type of component spindles. Therefore, EMC does not recommend mixing any of the following within a single database file system:
- RAID levels
- Disk types
- Disk rotational speeds

iSCSI LUN

In Oracle RAC 11g, the main file components of Oracle Clusterware are the Oracle Cluster Registry (OCR) file and the voting disks. Due to the nature of these files, they must be placed on shared raw devices; the Direct NFS protocol cannot be used to access them. EMC therefore recommends placing these files on iSCSI LUNs, and recommends creating a minimum of three iSCSI LUNs on different Celerra file systems. Three LUNs created on FC drives are used to hold multiple copies of the OCR and voting disks.

The Celerra iSCSI LUNs must be masked so that only the appropriate hosts can access them. Create appropriate iSCSI LUN mask entries, using Celerra Manager, to provide access only to the database servers. (An illustrative masking command appears at the end of this section.)

Volume management

EMC recommends Automatic Volume Management (AVM) for high-performance Oracle databases on a Celerra Network Server. By utilizing user-defined storage pools, AVM provides the flexibility of precise placement of file systems on particular disks, or on particular locations on specific disks. It also allows the workload to be striped across the storage processors of the back-end storage array, typically an EMC CLARiiON. While using AVM, select LUNs that alternate ownership between the two storage processors of the back-end storage array.

Storage template

Celerra enables the use of predetermined storage templates. These storage templates optimally configure the back-end disk storage into the most efficient combination of RAID groups on a per-shelf basis. Initializing the back-end RAID configuration is the first step in creating storage pools on the Celerra, and it is recommended to create the back-end disk RAID configuration using a standard storage template. The specific storage templates used for this Oracle RAC 11g solution are:
- FC_RAID5_4+1_HS_R1_1+1_R1_1+1_R1_1+1_R1_1+1_HS
- FC_RAID5_4+1_R5_4+1_R5+4+1
- RAID5_HS_6+1_6+1

File systems

The disk volumes created by the configuration of the back-end storage are used for Celerra file systems. For better performance and ease of management, EMC does not recommend mixing iSCSI I/O and NFS I/O on the same file system.
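As noted under "iSCSI LUN" above, each LUN must be masked to the initiators of the database servers. The following is a hypothetical masking command for one node; the initiator name and LUN number are examples only, and the server_iscsi syntax should be verified for your DART release:

    # Grant LUN 1 on target ocr_target to one database server's initiator (example IQN)
    server_iscsi server_2 -mask -set ocr_target -initiator iqn.1991-05.com.microsoft:dbhost1.example.com -grant 1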

The user-defined pools shown in Table 9 were used for creating the Celerra file systems.

Table 9  Storage pools
- datafs: pool datafs_pool; Celerra disks d8,d13,d27,d15,d29,d17,d31 (stripe)
- log1fs: pool log1fs_pool; Celerra disks d9,d23 (stripe)
- log2fs: pool log2fs_pool; Celerra disks d24,d12 (stripe)
- archfs: pool archfs_pool; Celerra disk d19
- flashfs: pool flashfs_pool; Celerra disk d32
- ocrfs1: pool ocrfs1_pool; Celerra disk d22
- ocrfs2: pool ocrfs2_pool; Celerra disk d10
- ocrfs3: pool ocrfs3_pool; Celerra disk d11

Figure 9 shows the file system layout for this solution.

Figure 9  Oracle file placement on Celerra disk volumes
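The pools in Table 9 can be built from the disk volumes with the Celerra CLI. The following is a minimal, hypothetical sketch for datafs_pool; the stripe volume name, file system size, and exact nas_volume/nas_pool/nas_fs syntax are assumptions that should be verified against the man pages for your DART release:

    # Stripe the seven datafs disk volumes at 32 KB (stripe size per the next section)
    nas_volume -name datavol1 -create -Stripe 32768 d8,d13,d27,d15,d29,d17,d31
    # Create the user-defined pool and a file system from it (illustrative size)
    nas_pool -create -name datafs_pool -volumes datavol1
    nas_fs -name datafs -create size=200G pool=datafs_pool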

Stripe size

EMC recommends a stripe size of 32 KB for all types of database workloads. The stripe size for the file systems on the FC shelves (datafiles, redo log files, and temp files) should be 32 KB; similarly, the recommended stripe size for the file systems on the SATA II shelves (archived logs and flash recovery area) is 256 KB.

Data Mover parameter setup

This section explains the Data Mover parameters.

NFS exports

The Celerra file systems must be exported to the database servers with root access privileges. To export a file system, type:

    $server_export <movername> -Protocol nfs -name <fs_name> -option <options> <mount_point>

transchecksum

EMC recommends enabling transchecksum on a Data Mover that serves Oracle Direct NFS clients. This avoids the likelihood of TCP port and XID (transaction identifier) reuse by two or more databases running on the same physical server, which could possibly cause data corruption. To enable transchecksum, type:

    $server_param <movername> -facility nfs -modify transchecksum -value 1

Note: This applies to NFSv3 only. Refer to the NAS Support Matrix available on Powerlink to determine which Celerra versions support this parameter.

Prefetch

EMC recommends turning off file system read prefetching for an online transaction processing (OLTP) workload, and leaving it on for a DSS workload. Prefetch wastes I/Os in an OLTP environment if any sequential I/Os are performed; in a DSS workload environment, the opposite is true. This setting is not applicable to a file system that contains iSCSI LUNs, so leave it at the default there. To turn off the read prefetch mechanism for a file system, type:

    $server_mount <movername> -option <options>,noprefetch <fs_name> <mount_point>

Uncached

The uncached option allows well-formed writes (that is, writes that are a multiple of the disk block size and disk-block aligned) to be sent directly to disk without being cached on the server. EMC testing shows a significant performance improvement from turning off file system write caching for an OLTP workload; retain the default state in a DSS workload environment. This setting is not applicable to a file system that contains iSCSI LUNs, so retain the default there. To turn off the write cache mechanism for a file system, type:

    $server_mount <movername> -option <options>,uncached <fs_name> <mount_point>
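Putting the preceding parameters together, the following is a hypothetical sequence for an OLTP datafile file system. The Data Mover, host, and file system names are examples, and option ordering may vary by DART release:

    # Enable transchecksum for Direct NFS clients on this Data Mover
    $server_param server_2 -facility nfs -modify transchecksum -value 1
    # Mount with read prefetch and write caching disabled (OLTP workload)
    $server_mount server_2 -option rw,noprefetch,uncached datafs /datafs
    # Export with root access for the database hosts (colon-separated host list)
    $server_export server_2 -Protocol nfs -option root=dbhost1:dbhost2 /datafs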

NFS threads

EMC recommends using the default NFS thread count of 256 for optimal performance. Do not set this value lower than 32 or higher than 512. For more information, refer to the Celerra Network Server Parameters Guide available on Powerlink.

file.asyncthreshold

EMC recommends using the default value of 32 for the parameter file.asyncthreshold. This provides optimum performance for databases. For more information, refer to the Celerra Network Server Parameters Guide available on Powerlink.

Load distribution

For tablespaces with heavy I/O workloads consisting of concurrent reads and writes, EMC recommends spreading the I/O load across multiple spindles. Figure 9 shows the seven Celerra disk volumes used for the Oracle database; this effectively distributes the I/O load across 35 spindles.

High availability

This section explains the high-availability features.

Data Mover

The Data Mover failover capability is a key feature unique to the Celerra Network Server. This feature offers redundancy at the file-server level, allowing continuous data access, and helps build a fault-resilient RAC architecture. EMC recommends setting the Data Mover failover policy to auto, which allows the Control Station to immediately fail the Data Mover over to its standby in the event of a hardware or software failure.

The standby Data Mover assumes the faulted Data Mover's:
- Network identity: the IP and MAC addresses of all its NICs
- Storage identity: the file systems that the faulted Data Mover controlled
- Service identity: the shares and exports that the faulted Data Mover controlled

This ensures continuous access to the Celerra file systems and iSCSI LUNs for database transactions. The iSCSI or Direct NFS clients do not see any significant interruption in I/O.

Data Mover failover occurs if any of these conditions exists:
- Failure (operation below the configured threshold) of both internal network interfaces, indicated by the lack of a "heartbeat" (Data Mover timeout)
- Power failure within the Data Mover (unlikely, as the Data Mover is typically wired into the same power supply as the entire array)
- Software panic due to an exception or memory error
- Data Mover hang

Data Mover failover does not occur under these conditions:
- The Data Mover is removed from its slot
- The Data Mover is rebooted manually

Because manually rebooting a Data Mover does not initiate a failover, EMC recommends initiating a manual failover before taking down a Data Mover for maintenance.

Oracle cluster synchronization service

The cluster synchronization services component (CSS) of Oracle Clusterware maintains two heartbeat mechanisms:
- A disk heartbeat to the voting disk
- A network heartbeat across the RAC interconnects that establishes and confirms valid node membership in the cluster

Both of these heartbeat mechanisms have an associated timeout value. For more information on the Oracle Clusterware MissCount and DiskTimeout parameters, refer to Oracle MetaLink Note 2994430.1.

EMC recommends setting the disk heartbeat parameter disktimeout to 200 seconds, and leaving the network heartbeat parameter misscount at its default of 60 seconds. These settings ensure that the RAC nodes are not evicted when the active Data Mover fails over to its standby. The command to configure this option is:

    crsctl set css disktimeout 200
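The current values can be checked before and after the change. A minimal sketch, run from the Clusterware home on one node:

    # Inspect the current CSS timeouts
    crsctl get css disktimeout
    crsctl get css misscount
    # Apply the recommended disk heartbeat timeout
    crsctl set css disktimeout 200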

Network setup and configuration

This section contains EMC recommendations for setting up and configuring the network for optimal performance.

Gigabit connection

Use Gigabit Ethernet for the network connections between the database servers and the Celerra Network Servers, and avoid the use of 100BaseT. Also use Gigabit Ethernet for the RAC interconnects.

Virtual local area networks

Use virtual local area networks (VLANs) to provide better throughput, manageability, application separation, high availability, and security. Table 10 describes the three VLANs used in this solution.

Table 10  Oracle RAC 11g solution VLANs
- VLAN 1: Client network (CRS setting: Public)
- VLAN 2: RAC interconnect (CRS setting: Private)
- VLAN 3: Storage (CRS setting: Do not use)

Refer to the Network architecture section for more information.

Network port configuration

Spread the NFS I/O and iSCSI I/O over all four Celerra network ports; refer to the Network architecture section for the recommended configuration. In addition, set the speed and duplex settings to auto on all ports; incorrect speed and duplex settings are among the most common (and most easily resolved) performance issues observed today. Network port bonding and load balancing are managed by the Oracle Direct NFS client within the database; therefore, leave all four interfaces non-trunked by the operating system (OS).

Network security

Place the Celerra Network Server and the Oracle RAC 11g servers in private, segregated storage networks to isolate the traffic between them. The network in which the external interface of the Control Station resides should also be a segregated subnet secured by a firewall, as the Control Station is the most important administrative interface to the Celerra Network Server. Any other security policies implemented in your organization should also be employed to secure the environment.

Jumbo frames

Maximum Transmission Unit (MTU) sizes of greater than 1,500 bytes are referred to as jumbo frames. Jumbo frames require Gigabit Ethernet across the entire network infrastructure: server, switches, and database servers. Whenever possible, EMC recommends the use of jumbo frames on all segments of the storage network. For Oracle RAC 11g installations, jumbo frames are also recommended for the private RAC interconnects, to boost throughput and possibly lower the CPU utilization caused by the software overhead of the bonding devices.

Jumbo frames increase the device MTU size to a larger value (typically 9,000 bytes). Celerra Data Movers support MTU sizes of up to 9,000 bytes. Typical Oracle database environments transfer data in 8 KB and 32 KB block sizes, which, with an MTU size of 1,500 bytes, require multiple 1,500-byte frames per database I/O. Using jumbo frames reduces the number of frames needed for every large I/O request and, in turn, reduces the host CPU time spent generating a large number of interrupts for each application I/O. The benefit of jumbo frames is primarily a complex function of the workload I/O sizes, network utilization, and Celerra Data Mover CPU utilization, so it is not easy to predict. Enabling jumbo frames in a DSS-type workload environment can significantly improve network throughput.
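As an illustration, jumbo frames might be enabled on a Data Mover port and then verified end-to-end from a Windows host with a do-not-fragment ping. The interface name and IP address are examples; the 8972-byte payload allows for 28 bytes of IP and ICMP headers within a 9,000-byte MTU:

    # On the Celerra: raise the MTU on a storage port (example interface cge0)
    server_ifconfig server_2 cge0 mtu=9000
    # On a Windows database server: verify that a 9000-byte frame passes unfragmented
    ping -f -l 8972 10.0.3.10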

Use the BIOS configuration menu to disable hyperthreading. Refer to your server vendor's documentation for instructions.

Processor
Intel Extended Memory 64 Technology (EM64T) delivers the flexibility needed for operating systems and applications that support 64-bit computing. You must enable this technology from the BIOS configuration menu.

Memory
As Oracle workloads are memory-intensive, EMC recommends configuring the system with the maximum amount of memory feasible to meet your scalability and performance needs. Refer to your database server hardware documentation to determine the total number of memory slots in your database server, and the number and density of the memory modules that you can install.

Windows setup and configuration
Before starting the Oracle software installation, configure the Windows operating system to maximize performance for the database servers. To set the background application affinity, system cache, and server role properties, navigate to Start > Settings > Control Panel > System > Advanced > Performance Settings > Advanced.

Background application affinity
It is important that background processes be favored over foreground processes on the server console. All Oracle processes are considered background processes.

System cache
The system cache should be minimized. The system cache is used for file-serving activities; Oracle uses its own buffer cache instead of the system cache.

Server role
The Windows Server memory manager tries to balance each application's use of memory by dynamically paging memory between physical RAM and a virtual memory paging file. If an application is particularly memory-intensive (like Oracle), or if a large number of applications run concurrently, the combined memory requirements of the applications can exceed the physical memory capacity. The large proportion of memory reserved for file caching (41 percent) can be quite beneficial to file and print servers, but it is not advantageous to application servers that run memory-intensive network applications. The file cache of a Windows Server is not necessary for Oracle databases, which perform their own caching through the System Global Area. Reset the Windows Server memory model from the default file and print server model, with its large file cache, to a network applications model, with a reduced file cache and more physical memory available for Oracle Database. Set the server role to Application Server using the following path: Start > Settings > Control Panel > Administrative Tools > Configure Your Server Wizard.

Virtual memory
Oracle uses virtual memory. Although the paging files should not be in constant use, running short of virtual memory when you need it on a temporary basis is far worse.

Set the virtual memory to between one and four times the physical RAM installed in the server. Try to place the page file on a physical internal disk other than the disk where Windows is installed. If possible, split the paging file into multiple files on multiple physical devices; this enables parallel access to virtual memory and increases performance.

Oracle memory structure
In Oracle 11g, it is strongly recommended to use automatic memory management to manage the memory on your system. Automatic memory management enables Oracle Database to automatically manage and tune the instance memory. It is configured using a target memory size initialization parameter (MEMORY_TARGET) and a maximum memory size initialization parameter (MEMORY_MAX_TARGET). Set these sizes to values suitable for your environment. Oracle Database then tunes memory usage to the target size, redistributing memory as needed between the system global area (SGA) and the instance program global area (instance PGA).

Windows applications and services
Disable all Windows services that are not required for the database server. Remove all applications from the Startup folders of Windows Server console operators. Disable screen savers, which can saturate the CPU; if a screen saver is necessary, select Blank Screen.

Windows patches
Apply the latest reliable service packs. However, because a new service pack can interfere with Oracle performance, it is usually best to wait a few weeks until a service pack is known to be effective and problem-free. Always install with an Uninstall folder, so that the service pack can be removed later if necessary.

Static IP address
Configure all network interfaces to use static IP addresses instead of DHCP-assigned addresses.

iSCSI connection
To provide redundancy and load balancing, EMC recommends using two Ethernet links on each side of the iSCSI storage network (the Data Mover and each database server), and at least two different network interface ports on the Celerra Data Mover for the iSCSI target address. Configure the MCS load-balance policy as Weighted Paths.

Figure 10 iSCSI connections between each Oracle 11g database server and Celerra

By default, the Microsoft iSCSI initiator timeout is set to 60 seconds. This timeout defines how long the initiator holds a request before reporting an iSCSI connection error. The value can be increased to accommodate longer outages, such as a Data Mover failover.
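As a hedged sketch, the timeout change described next can also be scripted. The initiator parameters live under an instance-specific subkey of the SCSI adapter class key, so the exact path varies by host; the 0000 instance shown here is an assumption for illustration:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters" /v MaxRequestHoldTime /t REG_DWORD /d 600 /f

A reboot, or a restart of the iSCSI initiator service, is typically required for the new value to take effect.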

To change the time-out value manually, search the Windows Registry for the MaxRequestHoldTime entry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet, and change the value to 600.

When configuring the Microsoft iSCSI initiator, select Automatically restore this session when the system boots in the Log On to Target dialog box, as shown in Figure 11. Setting this option ensures that the iSCSI sessions are reconnected at system boot and that iSCSI LUN connectivity remains intact.

Figure 11 Microsoft iSCSI initiator target logon configuration

Oracle Direct NFS client
Oracle Database 11g can be configured to access Network File System (NFS) version 3 servers directly, using the internal Oracle Direct NFS client. This applies to datafiles, redo log files, control files, archived log files, and flash recovery area files; an implementation on the Windows platform still requires block storage access for the cluster registry files, voting disks, and server parameter files. This setup requires specific configuration settings to ensure efficient and correct usage with Oracle. The following example shows the typical configuration of one node in the cluster; it must be repeated on all other nodes.

Oracle Database uses an ODM library, oranfsodm11.dll, to enable Direct NFS. To replace the standard ODM library oraodm11.dll with the ODM NFS library oranfsodm11.dll, perform the following steps:

1. Shut down the Oracle database instances.
2. Type the following commands:

cd %ORACLE_HOME%\bin
copy oraodm11.dll oraodm11.dll.stub
copy /Y oranfsodm11.dll oraodm11.dll

3. To enable the Direct NFS client, create an Oracle-specific file named ORANFSTAB in ORACLE_BASE\ORACLE_HOME\dbs. When ORANFSTAB is placed in ORACLE_BASE\ORACLE_HOME\dbs, its entries apply to all databases using that ORACLE_HOME. The Direct NFS client searches ORANFSTAB for mount point entries and uses the first matching entry as the mount point.

The following is an example of an ORANFSTAB file:

server: rtpsol17
local: 10.6.26.61 path: 10.6.26.62
local: 10.6.27.63 path: 10.6.27.64
local: 10.6.28.250 path: 10.6.28.251
local: 10.6.29.252 path: 10.6.29.253
export: /datafs mount: m:\datafs
export: /log1fs mount: m:\log1fs
export: /log2fs mount: m:\log2fs
export: /flashfs mount: m:\flashfs
dontroute

Table 11 describes the parameters in the ORANFSTAB file.

Table 11 ORANFSTAB parameters

Parameter   Description
server      The Celerra NFS server name.
path        Up to four network paths to the NFS server, specified either by IP address or by name, as displayed by the server_ifconfig command on the Celerra.
export      The exported path from the Celerra NFS server.
mount       The corresponding local mount point for the exported path.
dontroute   Specifies that outgoing messages should not be routed by the operating system, but sent using the IP address they are bound to. If dontroute is not specified, all paths to the Celerra must be in separate subnets.
drive       In this example, the drive letter m: is the shared OCFS-formatted iSCSI drive, common to all the RAC nodes, that serves the local mount points for the file systems exported from Celerra rtpsol17.

If the database creation process using Database Configuration Assistant (DBCA) fails, you can manually create a top-level directory named after the ORACLE_SID within the exported file systems before invoking DBCA again. This can be done by mounting the NFS file system on the Celerra Control Station, or on another host using NFS, and then creating the required directory tree.

By default, DBCA creates a server parameter (SP) file on a file system mounted through Direct NFS. You should move the SP file out of the Direct NFS mount location to another location, so that the database startup process can access it.

Database creation using DBCA requires you to specify a path for the Oracle files that is shared across the RAC nodes. Hence, the local mount point specified in the ORANFSTAB file must be a directory shared across the RAC nodes; you may use an OCFS-formatted drive for this purpose.
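After restarting the instances with the ODM NFS library in place and ORANFSTAB configured, you can confirm that the Direct NFS client is actually being used. A minimal sketch using the v$dnfs_servers and v$dnfs_files dynamic performance views provided by Oracle Database 11g (run from SQL*Plus as a privileged user):

SQL> SELECT svrname, dirname FROM v$dnfs_servers;
SQL> SELECT filename, filesize FROM v$dnfs_files;

If these views return no rows while the database is open and active, the database is not using Direct NFS and has fallen back to the operating system NFS path or block access; recheck the ORANFSTAB entries and the ODM library replacement.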

Oracle database setup and configuration
This section explains the Oracle database setup and configuration.

Oracle Database file placement
EMC does not recommend placing the Oracle files for different instances (multiple databases running on the same server) on the same set of Celerra file systems. Instead, create a separate set of file systems for each database. This provides flexibility in using Celerra advanced features such as SnapSure™ and Celerra Replicator™.

Initialization parameters
To configure the Oracle instance for optimal performance with the Celerra Network Server, we recommend the initialization options in Table 12, set in the spfile or init.ora file for the Oracle instance.

Table 12 Initialization options

Database block size
DB_BLOCK_SIZE=n
For best database performance, DB_BLOCK_SIZE should be a multiple of the OS block size. For example, if the Windows page size is 4,096 bytes, use DB_BLOCK_SIZE = 4096 * n.

File system I/O
FILESYSTEMIO_OPTIONS=setall
directio enables direct I/O, a feature of modern file systems that delivers data directly to the application without caching it in the file system buffer cache. Direct I/O preserves file system semantics and reduces CPU overhead by shortening the kernel code path: I/O requests are passed directly to the network stack, bypassing some code layers. Direct I/O is very beneficial to Oracle's log writer, in terms of both throughput and latency.
asynch optimizes the concurrency of queuing multiple I/O requests to the storage device, allowing the application code to continue processing until the point where it must wait for the I/O requests to complete.
setall turns on both direct I/O and asynchronous I/O, and is the recommended setting.

Disk asynchronous I/O
DISK_ASYNCH_IO=true
This parameter controls whether I/O to datafiles, control files, and log files is asynchronous. Asynchronous I/O is now recommended on all storage protocols.

Multiple database writer processes
DB_WRITER_PROCESSES=1
Set this parameter to 1. EMC's testing with the configuration specified in this document showed that database performance degraded marginally when multiple database writer processes existed.

Shared server
SHARED_SERVERS=m
DISPATCHERS=(PROTOCOL=TCP)(DISPATCHERS=n)
Use shared server mode when a large number of users need to connect to the database. It is also useful when database memory is limited or when better performance is needed. The values of m and n vary depending on your environment.

Multi-block read count
DB_FILE_MULTIBLOCK_READ_COUNT=n
DB_FILE_MULTIBLOCK_READ_COUNT determines the maximum number of database blocks read in one I/O during a full table scan. The number of database bytes read is calculated by multiplying DB_BLOCK_SIZE by DB_FILE_MULTIBLOCK_READ_COUNT. This parameter can reduce the number of I/O calls required for a full table scan, thus improving performance. Increasing this value can improve performance for databases that perform many full table scans, but can degrade performance for OLTP databases, where full table scans are seldom (if ever) performed. Setting this value to a multiple of the file system block size limits the amount of fragmentation that occurs in the I/O subsystem. Note that this parameter is specified in database blocks, while the file system block size settings are in KB, so adjust as required. EMC recommends setting DB_FILE_MULTIBLOCK_READ_COUNT between 1 and 4 for an OLTP database, and between 16 and 32 for a Decision Support System (DSS).

Recommendation for Oracle files
This section details the recommendations for Oracle files.

Control files
When you create the control file, allow for growth by setting MAXINSTANCES, MAXDATAFILES, MAXLOGFILES, and MAXLOGMEMBERS to high values. EMC recommends that a database have a minimum of two control files, located on separate physical disks. If the redo log files are multiplexed, one way to multiplex your control files is to store a control file copy on every disk drive that stores members of the redo log groups.

Online redo log files
EMC recommends running a mission-critical, production database in ARCHIVELOG mode and multiplexing the redo log files, because the loss of the online redo log files could leave the database unable to recover. The best practice for multiplexing online redo log files is to place the members of each redo log group on different disks. To understand how redo log and archive log files can be placed, refer to Figure 9.

Server parameter file
EMC recommends placing the server parameter (SP) file in a common location instead of manually maintaining a synchronized copy on each RAC node. The SP file path should be outside the Direct NFS path defined in the ORANFSTAB file.
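Bringing together the settings in Table 12 and the recommendations above, the following is a minimal, hedged init.ora sketch for an OLTP instance. All sizes and paths are illustrative assumptions, not values validated in this solution:

# Illustrative init.ora fragment; adjust all values for your environment
db_block_size=8192                   # a multiple of the 4 KB Windows page size
filesystemio_options=setall          # direct plus asynchronous I/O
disk_asynch_io=TRUE
db_writer_processes=1                # per the EMC testing cited in Table 12
db_file_multiblock_read_count=4      # 1-4 for OLTP; 16-32 for DSS
memory_max_target=8G                 # automatic memory management (assumed size)
memory_target=6G
control_files=('m:\datafs\control01.ctl','m:\log1fs\control02.ctl')  # multiplexed copies on separate file systems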


Chapter 4 Solution Applied Technologies

This chapter presents the following topic: Physical backup and recovery using RMAN.

Physical backup and recovery using RMAN
A complete high availability and disaster recovery strategy requires dependable data backup, restore, and recovery procedures to protect the database against double-disk failures and other hardware failures at the storage layer.

Oracle Recovery Manager (RMAN)
Oracle Recovery Manager (RMAN), a command-line and Enterprise Manager-based tool, is the Oracle-preferred method for efficiently backing up and recovering your Oracle database. Tests were performed to study the impact of RMAN backups on TPC-C user loads. This section explains the results.

Full backup
Figure 12 shows the Oracle TPS and response time during the RMAN backup window.

Figure 12 RMAN full backup

The full database backup, performed at peak user load, did not impact the performance of the database.

Incremental backup
The incremental backup strategy involves two steps: first perform a level 0 backup, and then take incremental backups at regular intervals (see the sketch below). In the testing environment, we observed that the level 0 backup did not impact database performance; however, the incremental backups did.
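A minimal sketch of this two-step strategy, as issued from the RMAN command line while connected to the target database (illustrative commands, not the actual test scripts used in this solution):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

followed at regular intervals by:

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;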

Figure 13 shows the Oracle TPS and response time during the RMAN backup window.

Figure 13 RMAN incremental backup

Restore and recover database
A complete recovery of the database can be performed only while the database is offline. In the testing environment, database recovery from the full backup and from the incremental backup took almost the same time.

RMAN can be tuned to optimize backup and restore performance; however, this should be done carefully, while monitoring the database's I/O response requirements. By increasing the RMAN tunable parameters for parallelism and channels, the backup time can be reduced drastically (a sketch follows).
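As a hedged illustration of that tuning, the degree of parallelism below is an assumption to validate against your database's I/O response requirements, not a value tested in this solution:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;

With this setting, RMAN automatically allocates four disk channels for subsequent backup and restore operations. An offline restore and complete recovery then follow the standard sequence:

RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;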