EMC Unified Storage for Oracle Database 11g

EMC Unified Storage for Oracle Database 11g
Enabled by EMC CLARiiON and EMC Celerra Using FCP and NFS
Proven Solution Guide

Copyright 2010 EMC Corporation. All rights reserved.

Published June 2010

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated.

All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly. EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute.

No warranty of system performance or price/performance is expressed or implied in this document. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part number: H6948

Table of Contents

Chapter 1: About this Document
  Overview
  Audience and purpose
  Scope
  Business challenge
  Technology solution
  Reference Architecture
  Validated environment profile
  Hardware and software resources
  Prerequisites and supporting documentation
  Terminology
  Typographic conventions
Chapter 2: Storage Design
  Overview
  Concepts
  Best practices
  CX4 cache configuration for SnapView snapshot
  Storage processor failover
  LUN/RAID group layout
  Storage design layout
Chapter 3: File System
  File system layout
Chapter 4: Application Design
  Considerations
  Application design layout
  Memory configuration for Oracle 11g
  HugePages
Chapter 5: Network Design

  Concepts
  Best practices
  SAN network layout
  IP network layout
  Virtual LANs
  Jumbo frames
  Ethernet trunking and link aggregation
  Public and private networks
  Oracle RAC 11g/10g server network architecture
Chapter 6: Installation and Configuration
  Overview
  Task 1: Build the network infrastructure
  Task 2: Set up and configure ASM for CLARiiON
  Task 3: Set up and configure database servers
  Task 4: Configure NFS client options
  Task 5: Install Oracle Database 11g/10g
  Task 6: Configure database server memory options
  Task 7: Tune HugePages
  Task 8: Set database initialization parameters
  Task 9: Configure Oracle Database control files and logfiles
  Task 10: Enable passwordless authentication using SSH
  Task 11: Set up and configure CLARiiON storage for Replication Manager and SnapView
  Task 12: Install and configure EMC RecoverPoint
  Task 13: Set up the virtualized utility servers
  Task 14: Configure and connect EMC RecoverPoint appliances (RPAs)
  Task 15: Install and configure EMC MirrorView/A
  Task 16: Install and configure EMC CLARiiON (CX) splitters
Chapter 7: Testing and Validation
  Overview
  Section A: Store solution component
  Section B: Basic Backup solution component
  Section C: Advanced Backup solution component

  Section D: Basic Protect solution component
  Section E: Advanced Protect solution component using EMC MirrorView and Oracle Data Guard
  Section F: Advanced Protect solution component using EMC RecoverPoint
  Section G: Test/Dev solution component using EMC SnapView clone
  Section H: Backup Server solution component
  Section I: Migration solution component
Chapter 8: Virtualization
  Overview
  Advantages of virtualization
  Considerations
  VMware infrastructure
  Virtualization best practices
  VMware ESX server
  VMware and NFS
Chapter 9: Backup and Restore
  Overview
  Section A: Backup and restore concepts
  Section B: Backup and recovery strategy
  Section C: Physical backup and restore
  Section D: Replication Manager in Test/Dev and Advanced Backup solution components
Chapter 10: Data Protection and Replication
  Overview
  Section A: Basic Protect using Oracle Data Guard
  Section B: Advanced Protect using EMC MirrorView and Oracle Data Guard
  Section C: Advanced Protect using EMC RecoverPoint
Chapter 11: Test/Dev Solution Using EMC SnapView Clone
  Overview
  CLARiiON SnapView clone
  Best practices
  Mount and recovery of a target clone database using Replication Manager
  Database cloning

Chapter 12: Migration
Chapter 13: Conclusion

Chapter 1: About this Document

Overview

Introduction

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with insight into the challenges currently facing its customers.

The introduction of the EMC Celerra unified storage platform prompted the creation of a solution that showcases the new capability of this unit: to expose the back-end EMC CLARiiON array to hosts for Fibre Channel Protocol (FCP) access, in addition to the Network File System (NFS) access previously provided by the Celerra NS Series. In this solution, all Oracle objects that require higher-performance, lower-latency I/O are placed over an FCP connection, and all other objects are placed over an NFS connection. This may sound counterintuitive, as it requires the management of two separate storage protocols. However, the blended solution is far simpler to manage and configure than a solution using FCP alone, and it provides identical performance. Thus, the blended FCP/NFS solution improves manageability and simplicity while performance is unaffected.

This document summarizes a series of implementation procedures and best practices that were discovered, validated, or otherwise encountered during the validation of a solution for Oracle Database 11g/10g using the EMC Celerra unified storage platform and Oracle RAC 11g and 10g on Linux over FCP and NFS.

Audience and purpose

Audience

The intended audience for this Proven Solution Guide is:
- Internal EMC personnel
- EMC partners
- Customers

Purpose

The purpose of this solution is to:
- Improve the performance, scalability, flexibility, and resiliency of an Oracle software stack that is physically booted on standard hardware by connecting multiple protocols to one storage platform, as follows:
  - Fibre Channel Protocol (FCP) and Oracle ASM to access high-demand, low-latency storage elements
  - NFS to access all other storage elements
- Facilitate, and reduce the risk of, migrating an existing Oracle Database 10g installation to 11g by providing documentation of best practices.
- Reduce cost by migrating an online production Oracle Database mounted over FCP to a target database mounted over NFS with no downtime and minimal performance impact.
- Improve the performance of an Oracle 11g or 10g production database by offloading the performance impact of database operations, such as backup and recovery, using:
  - EMC Replication Manager
  - EMC SnapView
  These demonstrate significant performance and manageability benefits in comparison to standard Oracle Recovery Manager (RMAN) backup and recovery.
- Provide disaster recovery capability using:
  - EMC RecoverPoint with CLARiiON splitters
  - EMC MirrorView/Asynchronous over iSCSI
  These demonstrate significant performance and manageability benefits in comparison to standard Oracle Data Guard disaster recovery.
- Provide the capability to clone a running production database with minimal performance impact and no downtime using SnapView clones and Replication Manager.

Scope

Overview

This section describes the components of the solution.

Core solution components

The following table describes the core solution components that are included in this solution:

Component: Scale-up OLTP
Description: Using an industry-standard OLTP benchmark against a single database instance, comprehensive performance testing is performed to validate the maximum achievable performance using the solution stack of hardware and software.

Component: Resiliency
Description: The purpose of resiliency testing is to validate the fault-tolerance and high-availability features of the hardware and software stack. Faults are inserted into the configuration at various layers in the solution stack. Some of the layers where fault tolerance is tested include: Oracle RAC node, Oracle RAC node interconnect port, storage processors, and Data Movers.

Functionality solution components

The following table describes the functionality solution components that are included in this solution:

Component: Basic Backup
Description: Backup and recovery using Oracle RMAN, the built-in backup and recovery tool provided by Oracle.

Component: Advanced Backup
Description: Backup and recovery using EMC value-added software or hardware. In this solution, the following are used to provide Advanced Backup functionality: EMC Replication Manager and EMC SnapView snapshot.

Component: Basic Protect
Description: Disaster recovery using Oracle Data Guard, Oracle's built-in remote replication tool.

Component: Advanced Protect
Description: Disaster recovery using EMC value-added software and hardware. In this solution, the following are used to provide Advanced Protect functionality: EMC RecoverPoint with CLARiiON splitters and EMC MirrorView/A over iSCSI.

Component: Test/dev
Description: A running production OLTP database is cloned with minimal, if any, performance impact on the production server, and with no downtime. The resulting dataset is provisioned on another server for use in testing and development. EMC Replication Manager is used to automate the test/dev process.

Component: Migration
Description: An online production Oracle database that is mounted over FCP/ASM is migrated to a target database mounted using NFS, with no downtime and minimal performance impact on the production database.

Business challenge

Business challenges for midsize enterprises

Midsize enterprises face the same challenges as their larger counterparts when it comes to managing database environments. These challenges include:
- Rising costs
- Control over resource utilization and scaling
- Lack of sufficient IT resources to deploy, manage, and maintain complex environments at the departmental level
- The need to reduce power, cooling, and space requirements

Unlike large enterprises, midsize enterprises are constrained by smaller budgets and cannot afford a custom, one-off solution. This makes the process of creating a database solution for midsize enterprises even more challenging than for large enterprises.

Technology solution

Blended solution for midsize enterprises

This solution demonstrates how organizations can:
- Deploy a solution using a combination of the NFS and FCP protocols on the Celerra. FCP is used for high-I/O, low-latency database objects (notably the datafiles, tempfiles, online redo logfiles, and controlfiles). NFS is used for all other database objects (consisting basically of the flashback recovery area, disk-based backups, archive logs, and CRS files). Manageability advantages are obtained by using a combination of FCP and NFS. Specifically, archived logs and backups can be accessed through a normal file system interface rather than ASM. Further, another clustered file system is not required for the CRS files. This simplifies the software installation and configuration on the database servers.
- Avoid investing in additional FC infrastructure by implementing a blended solution that uses both FCP and NFS to access storage elements.
- Work with different protocols in the blended solution to migrate an online production Oracle Database mounted over FCP to a target database mounted over NFS, with no downtime and minimal performance impact on the production database.
- Maximize the use of the database-server CPU, memory, and I/O channels by offloading performance impacts from the production server during:
  - Backup and recovery, by using Replication Manager or SnapView
  - Disaster recovery operations, by using RecoverPoint or MirrorView/A
- Reduce the complexity of backup operations and eliminate the need to implement scripted solutions by using Replication Manager.
- Save time and maximize system uptime when migrating existing Oracle Database 10g systems to Oracle Database 11g.
- Implement a disaster recovery solution with MirrorView/A over iSCSI that reduces costs and complexity by using IP as the network protocol.
- Use EMC SnapView to free up the database server's CPU, memory, and I/O channels from the effects of operations relating to backup, restore, and recovery. SnapView clones also help in creating test/development systems without any impact on the production environment.

Blended FCP/NFS solution

This is a blended FCP/NFS solution. Depending on the nature of the database object, either FCP or NFS is used to access it. The following table shows which protocol is used to access each database object.

Database objects: Datafiles, online redo logfiles, controlfiles, tempfiles
  Type: High demand, low latency
  Accessed using: FCP

Database objects: Flashback recovery area, archive logs, disk-based backups, CRS files
  Type: Low demand, high latency
  Accessed using: NFS

Two sites connected by a WAN

Two sites connected by a WAN are used in the solution: one site is used for production; the other site is used as a disaster recovery target. A Celerra is present at each site. Oracle RAC 11g or 10g for x86-64 runs on Red Hat Enterprise Linux or on Oracle Enterprise Linux. FCP storage networks consisting of dedicated, redundant FCP switches are present at both sites. An EMC RecoverPoint cluster is also included at each site.

The solution includes virtualized servers for use as Test/dev, Basic Protect, and Advanced Protect targets. Virtualization of the test/dev and disaster recovery (DR) target servers is supported using VMware ESX Server.

Production site

The following components are present at the production site and are connected to the production FCP storage network and to the WAN:
- A Celerra (specifically, its CLARiiON back-end array)
- A physically booted four-node Oracle RAC 11g or 10g cluster
- A RecoverPoint cluster connected to the FCP storage network and the WAN

The Oracle RAC 11g or 10g servers are also connected to the client and RAC interconnect networks.

Disaster recovery target site

The disaster recovery target site consists of:
- A target Celerra (specifically, its CLARiiON back-end array) connected to the target FCP storage network
- A RecoverPoint cluster connected to the FCP storage network and the WAN

Connected to both sites

The following are present at both sites:
- A VMware ESX server connected to both the production and target FCP storage networks.
- A virtualized single-instance Oracle 11g or 10g server used as:
  - The disaster recovery target for Basic Protect and Advanced Protect (DR site)
  - The target for Test/Dev (production site)

The virtualized single-instance Oracle 11g or 10g target server accesses both the production and target FCP storage networks and is connected to the client WAN through virtualized connections on the virtualization server.

A virtualized Replication Manager server is responsible for handling replication tasks through the Replication Manager agent, which is installed on the production database servers. The LUNs on the Celerra are discovered using Raw Device Mapping (RDM) on the target VMs.

Storage layout

The following table describes how each Oracle file type and database object is stored and accessed in this solution:

What: Oracle datafiles, Oracle tempfiles, Oracle online redo logfiles, Oracle controlfiles
  Protocol: FCP
  Stored on: FC disk (LUNs)
  File-system type: RAID-protected ASM diskgroup

What: Voting disk, OCR files, archived logfiles, flashback recovery area, backup target
  Protocol: NFS
  Stored on: FC disk or SATA II
  File-system type: RAID-protected NFS

High-performance database objects are accessed over an FCP network using redundant network switches.

ASM and ASMLib

Oracle ASM is used as the file system/volume manager. Oracle ASMLib is used to virtualize the LUNs on the database server. Oracle datafiles, tempfiles, and online redo logfiles are stored on separate LUNs that are mounted on the database server using ASM over FCP.

Three ASM diskgroups are used: one diskgroup for datafiles and tempfiles, and two diskgroups for online redo logfiles. The online redo logfiles are mirrored across the two ASM diskgroups using Oracle software multiplexing. The controlfiles are mirrored across the online redo log ASM diskgroups. Each ASM diskgroup and its underlying LUNs are designed to satisfy the I/O demands of individual database objects. For example, RAID 5 is used for the datafiles and the tempfiles, but RAID 1 is used for the online redo logfiles. All of these diskgroups are stored on FC disks.
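As an illustration of the software multiplexing described above, the following is a minimal SQL*Plus sketch (not taken from the validated build) that adds a redo log group with one member in each log diskgroup and then verifies the result. The group number and size are assumptions; at database creation time the same effect is normally achieved by setting db_create_online_log_dest_1='+LOG1' and db_create_online_log_dest_2='+LOG2', so that both redo log members and control file copies are created in each diskgroup.

SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('+LOG1', '+LOG2') SIZE 512M;
SQL> SELECT group#, member FROM v$logfile ORDER BY group#;
SQL> SHOW PARAMETER control_files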

Network architecture

TCP/IP and NFS provide network connectivity and file system semantics for NFS file systems on Oracle RAC 11g or 10g. Client virtual machines run on the VMware ESX server and are connected to a client network. The client, RAC interconnect, and redundant TCP/IP storage networks consist of dedicated network switches and virtual local area networks (VLANs). The RAC interconnect and storage networks consist of trunked IP connections to balance and distribute network I/O. Jumbo frames are enabled on these networks.

Reference Architecture

Corresponding Reference Architecture

This solution has a corresponding Reference Architecture document that is available on Powerlink, EMC.com, and EMC KB.WIKI. Refer to EMC Unified Storage for Oracle Database 11g/10g - Enabled by EMC CLARiiON and EMC Celerra Using FCP and NFS Reference Architecture for details. If you do not have access to this content, contact your EMC representative.

Reference Architecture diagram

The following diagram depicts the overall physical architecture of the solution.

Validated environment profile

Environment profile and test results

For information on the performance results, refer to the testing summary results contained in Chapter 7: Testing and Validation.

Hardware and software resources

Hardware

The hardware used to validate the solution is listed below.

Equipment: EMC Celerra unified storage platforms (each includes an EMC CLARiiON CX4 back-end storage array)
  Quantity: 2
  Configuration: 2 Data Movers; 4 GbE network connections per Data Mover; 2 or 3 FC shelves; 1 SATA shelf; 30 or GB FC disks (depending on configuration); GB SATA disks; 1 Control Station; 2 storage processors; DART version

Equipment: Dell PowerConnect Gigabit Ethernet switches
  Configuration: ports per switch

Equipment: QLogic FCP switches
  Quantity: 2
  Configuration: 16 ports; 4 Gb throughput

Equipment: Database servers, Dell PowerEdge 2900 (Oracle RAC 11g/10g servers)
  Configuration: GHz Intel Pentium 4 quad-core processors; 24 GB of RAM; GB 15k internal SCSI disks; 2 onboard GbE Ethernet NICs; 2 additional Intel PRO/1000 PT quad-port GbE Ethernet NICs; 2 SANblade QLE2462-E-SP 4 Gb/s dual-port FC HBAs (4 ports in total)

Equipment: EMC RecoverPoint appliances (RPA)
  Quantity: 4
  Configuration: 2 Dell 2950 servers per site; QLA2432 HBA cards

Equipment: Virtualization server, Dell PowerEdge 6450 (VMware ESX server)
  Configuration: GHz AMD Opteron quad-core processors; 32 GB of RAM; GB 15k internal SCSI disks; 2 onboard GbE Ethernet NICs; 3 additional Intel PRO/1000 PT quad-port GbE Ethernet NICs; 2 SANblade QLE2462-E-SP 4 Gb/s dual-port FC HBAs (4 ports in total)

Software

The software used to validate the solution is listed below.

- Oracle Enterprise Linux 4.7
- VMware ESX Server/vSphere 4.0
- Oracle VM
- Microsoft Windows Server 2003 Standard Edition
- Oracle RAC Enterprise Edition, 11g or 10g
- Oracle Database Standard Edition, 11g or 10g
- Quest Benchmark Factory for Databases
- EMC Celerra Manager Advanced Edition
- EMC Navisphere Agent
- EMC PowerPath (build 157)
- EMC FLARE
- EMC DART 5.6
- EMC Navisphere Management 6.28
- EMC RecoverPoint 3.0 SP1
- EMC Replication Manager
- EMC MirrorView 6.7
- EMC CLARiiON splitter driver

Prerequisites and supporting documentation

Technology

It is assumed that the reader has a general knowledge of:
- EMC Celerra
- EMC CLARiiON CX4
- Oracle Database (including RMAN and Data Guard)
- EMC SnapView
- EMC Replication Manager
- EMC RecoverPoint
- EMC MirrorView
- VMware ESX Server
- VMware vSphere

Supporting documents

The following documents, located on Powerlink.com, provide additional, relevant information. Access to these documents is based on your login credentials. If you do not have access to the following content, contact your EMC representative.
- CLARiiON CX4 series documentation
- EMC Unified Storage for Oracle Database 11g/10g - Physically Booted Solution Enabled by EMC Celerra and Linux using FCP and NFS Reference Architecture

Third-party documents

The following resources have more information about Oracle:
- Oracle Technology Network
- MetaLink (Oracle support)

Terminology

Terms and definitions

This section defines the terms used in this document.

Solution: A solution is a complete stack of hardware and software upon which a customer would choose to run their entire business or business function. A solution includes database server hardware and software, IP networks, storage networks, and storage array hardware and software, among other components.

Solution attribute: A solution attribute addresses the entire solution stack, but does so in a way relating to a discrete area of testing. For example, performance testing is a solution attribute.

Solution component: A solution component addresses a subset of the solution stack that consists of a discrete set of hardware or software, and focuses on a single IT function. For example, backup and recovery, and disaster recovery are solution components. A solution component can be either basic or advanced.

Basic solution component: A basic solution component uses only the features and functionality provided by the Oracle stack. For example, RMAN is used for backup and recovery, and Data Guard for disaster recovery.

Advanced solution component: An advanced solution component uses the features and functionality of EMC hardware or software. For example, EMC SnapView is used for backup and recovery, and EMC MirrorView for disaster recovery.

Core solution component: A core solution component addresses the entire solution stack, but does so in a way relating to a discrete area of testing. For example, performance testing is a core solution component.

Functionality solution component: A functionality solution component addresses a subset of the solution stack that consists of a discrete set of hardware or software, and focuses on a single IT function. For example, backup and recovery, and disaster recovery are both functionality solution components. A functionality solution component can be either basic or advanced.

Physically-booted solution: A configuration in which the production database servers are directly booted off a locally attached hard disk without the use of a hypervisor such as VMware or Oracle VM. Utility servers (such as a test/dev target or disaster recovery target) may still be virtualized in a physically-booted solution.

Virtualized solution: A configuration in which the production database servers are virtualized using a hypervisor technology such as VMware or Oracle VM.

Scale-up: The use of a clustered or single-image database server configuration. Scaling is provided by increasing the number of CPUs in the database server (in the case of a single-instance configuration) or by adding nodes to the cluster (in the case of a clustered configuration). Scale-up assumes that all customers of the database will be able to access all database data.

Resiliency: Testing that is designed to validate the ability of a configuration to withstand faults at various layers. The layers that are tested include: network switch, database server storage network port, storage array network port, database server cluster node, and storage processor.

Test/dev: The use of storage layer replication (such as snapshots and clones) to provide an instantaneous, writeable copy of a running production database with no downtime on the production database server and with minimal, if any, performance impact on the production server.

Basic Backup: Backup and recovery using Oracle RMAN, the built-in backup and recovery tool provided by Oracle.

Advanced Backup: Backup and recovery using EMC value-added software or hardware. In this solution, the following are used to provide Advanced Backup functionality: EMC Replication Manager and EMC SnapView.

Advanced Backup and Recovery: A solution component that provides backup and recovery functionality through the storage layer using specialized hardware or software. Advanced Backup and Recovery has the following benefits:
- Offloads the database server's CPUs from the I/O and processing requirements of backup and recovery operations
- Superior Mean Time to Recovery (MTTR) through the use of virtual storage layer replication (commonly referred to as snapshots)

Basic Backup and Recovery: A solution component that provides backup and recovery functionality through the operating system and database server software stack. Basic Backup and Recovery uses the database server's CPUs for all I/O and processing of backup and recovery operations.

Advanced Protect: A solution component that provides disaster recovery functionality through the storage layer using specialized hardware or software. Advanced Protect has the following benefits:
- Offloads the database server's CPUs from the I/O and processing requirements of disaster recovery operations
- Superior failover and failback capabilities
- Reduces the software required to be installed at the disaster recovery target because of the use of consistency technology

Basic Protect: A solution component that provides disaster recovery functionality through the operating system and database server software stack. Basic Protect uses the database server's CPUs for all I/O and processing of disaster recovery operations.

Kernel NFS (KNFS): A network storage protocol in which the NFS client is embedded in the operating system kernel.

High availability: The use of specialized hardware or software technology to reduce both planned and unplanned downtime.

Fault tolerance: The use of specialized hardware or software technology to eliminate both planned and unplanned downtime.

Enterprise Flash Drive (EFD): A drive that stores data using Flash memory and contains no moving parts.

Serial Advanced Technology Attachment (SATA) drive: SATA is a newer standard for connecting hard drives to computer systems. SATA is based on serial signaling technology, while Integrated Drive Electronics (IDE) hard drives use parallel signaling.

Migration: An online production Oracle Database that is mounted over FCP/ASM is migrated to a target database mounted using NFS, with no downtime and minimal performance impact on the production database.

Ceil: In Oracle PL/SQL, the ceil function returns the smallest integer value that is greater than or equal to a number. The syntax for the ceil function is ceil(number), where number is the value used to find the smallest integer value.

Typographic conventions

In this document, many steps are listed in the form of terminal output. This is referred to as a code listing; an example appears below. Note the following about code listings:
- Commands you type are shown in bold.
- For lengthy commands, the backslash (\) character is used to show line continuation. While this is a common UNIX convention, it may not work in all cases. You should enter the command on one line.
- The use of ellipses (...) in the output indicates that lengthy output was deleted for brevity.

If a Celerra or Linux command is referred to in text, it is indicated in bold and lowercase, like this: the fs_copy command. If a SQL or RMAN command is referred to in text, it is indicated in uppercase, like this: the ALTER DATABASE RENAME FILE command. A special font is not used in either case.
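The following is an illustrative code listing. The commands shown are a representative NFS mount check; the Celerra server name celerra_dm2 is a hypothetical example, while the mount point /u02 matches the file system layout used in this solution.

[root@mteoradb55 ~]# mount -t nfs -o hard,vers=3 \
> celerra_dm2:/datafs /u02
[root@mteoradb55 ~]# df -h /u02
Filesystem            Size  Used Avail Use% Mounted on
...
[root@mteoradb55 ~]#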

Migration

Customers often request the ability to migrate a virtualized Oracle Database across storage protocols. In response to this, the Oracle Consulting (CSV) group has validated that customers who have an Oracle Database can migrate data from:
- An FCP/ASM file system to an NFS-mounted file system
- An NFS-mounted file system to an FCP/ASM file system

Detailed information regarding migration is found in the Oracle RAC/Database 11g Cross-Protocol Migration - A Detailed Review white paper. See that paper for more information.

Chapter 2: Storage Design

Overview

Introduction to Storage Design

The storage design layout instructions presented in this chapter apply to the specific components used during the development of this solution.

Concepts

Setting up CX storage

To set up CLARiiON (CX) storage, the following steps must be carried out:
1. Configure zoning.
2. Configure RAID groups and bind LUNs.
3. Allocate hot spares.
4. Create storage groups.
5. Discover FCP LUNs from the database servers.

High availability and failover

EMC Celerra has built-in high-availability (HA) features. These HA features allow the Celerra to survive various failures without a loss of access to the Oracle database. They protect against the following:
- Power loss affecting a single circuit connected to the storage array
- Storage processor failure
- Storage processor reboot
- Disk failure

Best practices

Disk drives

The following are general recommendations for disk drives:
- Drives with higher revolutions per minute (rpm) provide higher overall random-access throughput and shorter response times than drives with slower rpm. For optimum performance, higher-rpm drives are recommended for datafiles and tempfiles as well as online redo logfiles.
- Because of their significantly better performance, Fibre Channel drives are always recommended for storing datafiles, tempfiles, and online redo logfiles.
- Serial Advanced Technology-Attached (SATA II) drives have slower response and rotational speeds, and moderate performance with random I/O. However, they are less expensive than Fibre Channel drives of the same or similar capacity.

SATA II drives are frequently the best option for storing archived redo logs and the flashback recovery area. If there are high performance requirements for backup and recovery, Fibre Channel drives can also be used for this purpose.

Enterprise Flash Drives (EFDs)

Enterprise Flash Drives (EFDs) can be used to dramatically improve the cost, performance, efficiency, power, space, and cooling requirements of Oracle databases stored on EMC Celerra.

To know whether EFDs will fit your situation, determine whether a set of datafiles is being accessed more heavily than the other datafiles. This is an extremely common condition in Oracle databases. If so, migrate this set of datafiles to EFDs. The candidate datafiles may change over time, requiring the application of an Information Lifecycle Management (ILM) strategy.

To determine whether EFDs will provide improved performance in your environment, identify the set of datafiles that have low cache read-hit rates and exhibit random I/O patterns. These random workloads are usually the best candidates for migration to EFDs and tend to exhibit the highest performance gains. Pure sequential workloads, such as online redo logs, will benefit as well, although to a lesser degree than random workloads.
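One hedged way to identify such candidate datafiles (an illustrative sketch, not part of the validated procedure) is to compare per-file physical read activity using the standard V$FILESTAT and V$DATAFILE views; AWR or Statspack file I/O statistics provide the same information with latency detail.

SQL> SELECT d.name, f.phyrds, f.phywrts
  2  FROM   v$filestat f, v$datafile d
  3  WHERE  f.file# = d.file#
  4  ORDER BY f.phyrds DESC;

Files that dominate this ranking while showing poor cache read-hit rates are the most likely to benefit from EFDs.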

RAID types and file types

The following table describes the recommendations for RAID types corresponding to Oracle file types:

Datafiles/tempfiles: RAID 5/EFD - Possible (apply tuning) (1); RAID 10/FC - Recommended; RAID 5/FC - Recommended; RAID 5/SATA II - Avoid
Control files: RAID 5/EFD - Possible (apply tuning); RAID 10/FC - Recommended; RAID 5/FC - Recommended; RAID 5/SATA II - Avoid
Online redo logs: RAID 5/EFD - Unlikely to be cost justifiable; RAID 10/FC - Recommended; RAID 5/FC - Avoid; RAID 5/SATA II - Avoid
Archived logs: RAID 5/EFD - Avoid; RAID 10/FC - Possible (apply tuning) (2); RAID 5/FC - Possible (apply tuning) (2); RAID 5/SATA II - Recommended
Flashback recovery area: RAID 5/EFD - Avoid; RAID 10/FC - OK; RAID 5/FC - OK; RAID 5/SATA II - Recommended
OCR file/voting disk: RAID 5/EFD - Avoid; RAID 10/FC - OK; RAID 5/FC - OK; RAID 5/SATA II - Avoid

(1) The decision to use EFDs for datafiles and tempfiles should be driven by the I/O requirements for specific datafiles.
(2) The use of FC disks for archived logs is fairly rare. However, if many archived logs are being created, and the I/O requirements for archived logs exceed a reasonable number of SATA II disks, this may be a more cost-effective solution.

Tempfiles, undo, and sequential table or index scans

In some cases, if an application creates a large amount of temp activity, placing your tempfiles on RAID 10 devices may be faster because of RAID 10's superior sequential I/O performance. The same is true for undo. Further, an application that performs many full table scans or index scans may benefit from having those datafiles placed on a separate RAID 10 device.

Online redo logfiles

Online redo logfiles should be placed on RAID 1 or RAID 10 devices. You should not use RAID 5, because the sequential write performance of distributed parity (RAID 5) is not as high as that of mirroring (RAID 1). RAID 1 or RAID 10 provides the best data protection, and protection of online redo logfiles is critical for Oracle recoverability.

OCR files and voting disk files

You should use FC disks for OCR files and voting disk files; unavailability of these files for any significant period of time (due to disk I/O performance issues) may cause one or more of the RAC nodes to reboot and fence itself off from the cluster.

The LUN/RAID group layout images in Chapter 2: Storage Design > LUN/RAID group layout show two different storage configurations that can be used for Oracle RAC 11g/10g databases on a Celerra. That section can help you determine the best configuration to meet your performance needs.

Stripe size

EMC recommends a stripe size of 32 KB for all types of database workloads. The default stripe size for all file systems on FC shelves (redo logs and data) should be 32 KB. Similarly, the recommended stripe size for file systems on SATA II shelves (archive and flash) should be 256 KB.

Shelf configuration

The most common error when planning storage is designing for capacity rather than for performance. The single most important storage parameter for performance is disk latency. High disk latency is synonymous with slower performance; low disk counts lead to increased disk latency. The recommendation is a configuration that produces an average database I/O latency (the Oracle measurement db file sequential read) of less than or equal to 20 ms.

In today's disk technology, the increase in storage capacity of a disk drive has outpaced the increase in performance. Therefore, performance capacity, not disk storage capacity, must be the standard used when planning an Oracle database's storage configuration.

The number of disks to use is determined first by the I/O requirements and then by capacity. This is especially true for datafiles and tempfiles. EFDs can dramatically reduce the number of disks required to perform the I/O required by the workload.

Consult with your EMC sales representative for specific sizing recommendations for your workload.

Reserved LUN pool

There is no benefit in assigning LUNs with a high capacity, such as 25 GB to 30 GB, to the reserved LUN pool. It is better to configure the reserved LUN pool with a higher number of lower-capacity LUNs (around 5 GB to 8 GB) than with a lower number of higher-capacity LUNs. Approximately 20 to 25 small LUNs are sufficient for most purposes.

CX4 cache configuration for SnapView snapshot

Recommended cache settings

Poor performance was observed using SnapView snapshot with the default settings. This was in the context of a scale-up OLTP workload using a TPC-C-like benchmark. If you have a similar workload and wish to use SnapView snapshot on a CX4 CLARiiON array, you will experience better performance by configuring the cache settings described in the following table.

Low watermark: 10 percent
High watermark: 30 percent
SP A read cache memory: 200 MB
SP B read cache memory: 200 MB
Write cache memory: 1061 MB

Storage processor failover

High availability

The storage processor (SP) failover capability is a key feature that offers redundancy at the storage processor level, allowing continuous data access. It also helps to build a fault-resilient RAC architecture.

LUN/RAID group layout

LUN/RAID group layout design

A LUN/RAID group configuration consisting of three Fibre Channel shelves with RAID 10 and RAID 1 was tested and found to provide good performance for Oracle RAC 11g databases on Celerra. Two RAID and disk configurations were tested over the FCP protocol. These are described below.

RAID group layout: 3 FC shelf RAID 5

The RAID group layout for the three-FC-shelf RAID 5/RAID 1 configuration is shown in the RAID group layout diagram.

RAID group layout: 3 FC shelf RAID 10

The RAID group layout for the three-FC-shelf RAID 10/RAID 1 configuration is shown in the RAID group layout diagram.

Storage design layout

ASM diskgroup guidelines

Automatic Storage Management (ASM) is used to store the database objects requiring high performance. The Oracle Cluster Registry (OCR) file and the voting disk must be stored on shared storage; therefore, NFS is used to store these files. NFS provides a very convenient, low-management-overhead shared storage environment. In addition, files not requiring high performance are stored on NFS. These NFS file systems are in turn stored on low-cost SATA II drives. This lowers the cost of storage while improving manageability.

The following table contains a detailed description of all the database objects and where they should be stored.

File system/mount point: +DATA; Type: ASM; LUNs stored on: LUNs 5 through 10; Contents: Oracle datafiles
File system/mount point: +LOG1 and +LOG2; Type: ASM; LUNs stored on: LUNs 1 through 4; Contents: Online redo logs and control file (mirrored copies)
File system/mount point: /u02; Type: NFS; LUNs stored on: LUN 0; Contents: Oracle Cluster Registry file and voting disk
File system/mount point: /u03; Type: NFS; LUNs stored on: LUN 9; Contents: Flashback recovery area (all backups stored here)
File system/mount point: /u04; Type: NFS; LUNs stored on: LUN 10; Contents: Archived log dump destination

ASM diskgroup design best practice

A diskgroup should consist entirely of LUNs that are all of the same RAID type and that consist of the same number and type of component spindles. EMC does not recommend mixing any of the following within a single ASM diskgroup:
- RAID levels
- Disk types
- Disk rotational speeds
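As a hedged illustration of this layout, the following SQL*Plus sketch creates the three diskgroups with external redundancy (data protection is provided by the underlying RAID groups). The ASMLib labels LUN1 through LUN10 follow the naming used in the ASM setup task in Chapter 6, and the exact LUN-to-diskgroup assignment shown is an assumption based on the table above.

SQL> -- Run on the ASM instance as SYSASM (11g) or SYSDBA (10g)
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  2    DISK 'ORCL:LUN5', 'ORCL:LUN6', 'ORCL:LUN7', 'ORCL:LUN8',
  3         'ORCL:LUN9', 'ORCL:LUN10';
SQL> CREATE DISKGROUP LOG1 EXTERNAL REDUNDANCY DISK 'ORCL:LUN1', 'ORCL:LUN2';
SQL> CREATE DISKGROUP LOG2 EXTERNAL REDUNDANCY DISK 'ORCL:LUN3', 'ORCL:LUN4';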

PowerPath/ASM workaround

The validation was performed using Oracle Enterprise Linux 5.1. We observed the following issue while creating ASM disks:

[root@mteoradb55 ~]# service oracleasm createdisk LUN1 /dev/emcpowera1
Marking disk "/dev/emcpowera1" as an ASM disk: asmtool: Device "/dev/emcpowera1" is not a partition [FAILED]

That is, the partition was not recognized by the oracleasm service as being a partition. We used the following workaround to create the ASM disks:

[root@mteoradb55 ~]# /usr/sbin/asmtool -C -l /dev/oracleasm -n LUN1 \
> -s /dev/emcpowera1 -a force=yes
asmtool: Device "/dev/emcpowera1" is not a partition
asmtool: Continuing anyway
[root@mteoradb55 ~]# service oracleasm scandisks
Scanning system for ASM disks: [ OK ]
[root@mteoradb55 ~]# service oracleasm listdisks
LUN1
[root@mteoradb55 ~]#

Note: PowerPath 5.1 was required for this version of Linux, but it was not yet generally available (GA) at the time the validation was started. Therefore, a GA release candidate was used for these tests.

Chapter 3: File System

File system layout

ASM diskgroup guidelines

Automatic Storage Management (ASM) is used to store the database objects requiring high performance. The Oracle Cluster Registry (OCR) file and the voting disk have very modest I/O requirements, but these two files must be stored on shared storage. NFS provides a convenient, low-management-overhead shared storage environment; therefore, NFS is used to store these files. In addition, other files not requiring high performance are stored on NFS. These NFS file systems are in turn stored on low-cost SATA II drives. This drives down the cost of storage while improving manageability.

File system layout

The following table contains a detailed description of all the database objects and where they are stored.

File system/mount point: /u02; Type: NFS; LUNs stored on: LUN 0; Contents: Oracle Cluster Registry file and voting disk
File system/mount point: +DATA; Type: ASM; LUNs stored on: LUNs 5 through 10; Contents: Oracle datafiles
File system/mount point: +LOG1 and +LOG2; Type: ASM; LUNs stored on: LUNs 1 through 4; Contents: Online redo logs and control file (mirrored copies)
File system/mount point: /u03; Type: NFS; LUNs stored on: LUN 9; Contents: Flashback recovery area (all backups stored here)
File system/mount point: /u04; Type: NFS; LUNs stored on: LUN 10; Contents: Archived log dump destination

Chapter 4: Application Design

Considerations

Heartbeat mechanisms

The Cluster Synchronization Services (CSS) component of Oracle Clusterware maintains two heartbeat mechanisms:
- The disk heartbeat to the voting disk
- The network heartbeat across the RAC interconnects, which establishes and confirms valid node membership in the cluster

Both of these heartbeat mechanisms have an associated time-out value. For more information on the Oracle Clusterware MissCount and DiskTimeout parameters, see the relevant MetaLink note.

EMC recommends setting the disk heartbeat parameter disktimeout to 160 seconds. You should leave the network heartbeat parameter misscount at the default of 60 seconds. These settings ensure that the RAC nodes are not evicted when the active Data Mover fails over to its partner. The command to configure this option is:

$ORA_CRS_HOME/bin/crsctl set css disktimeout 160
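To confirm the values after setting them, a hedged check such as the following can be run from any cluster node (the exact output format varies by Clusterware release; the values shown are illustrative):

[root@mteoradb55 ~]# $ORA_CRS_HOME/bin/crsctl get css disktimeout
160
[root@mteoradb55 ~]# $ORA_CRS_HOME/bin/crsctl get css misscount
60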

Application design layout

Oracle Cluster Ready Services

Oracle Cluster Ready Services (CRS) is enabled on each of the Oracle RAC 11g/10g servers. The servers operate in active/active mode to provide local protection against a server failure and to provide load balancing. CRS-required files (including the voting disk and the OCR file) can reside on NFS volumes provided that the required mount-point parameters are used. For more information on the mount-point parameters required for the Oracle Clusterware files, see Chapter 6: Installation and Configuration > Task 4: Configure NFS client options.

NFS client

Each Oracle RAC 11g/10g server uses the kernel NFS (KNFS) client, which is hosted in the operating system (OS), to connect to the Celerra storage array. KNFS runs over TCP/IP.

Oracle binary files

The Oracle RAC 11g/10g binary files, including Oracle CRS, are all installed on the database servers' local disks.

Stored on Celerra

Datafiles, tempfiles, online redo logfiles, and controlfiles reside on the FCP file systems. The flashback recovery area, disk-based backups, archive logs, and CRS files reside on Celerra NFS file systems. These file systems are designed (in terms of the RAID level and number of disks used) to be appropriate for each type of file. A separate clustered file system is not required for the CRS files.

The following table lists each file or activity type and indicates where it resides.

Database binary files: Database server's local disk (or vmdk file for virtualized servers)
Datafiles, tempfiles: +DATA
Online redo log files: Mirrored across +LOG1 and +LOG2
Archived log files: /archfs
Flashback recovery area: /flashfs
Control files: Mirrored across +LOG1 and +LOG2
CRS, OCR, and voting disk files: /datafs

Memory configuration for Oracle 11g

Memory configuration and performance

Memory configuration in Oracle 11g is one of the most challenging aspects of configuring the database server. If the memory is not configured properly, the performance of the database server will be very poor: the database server will be unstable, the database may not open at all, and if it does open, you may experience errors due to lack of shared pool space. In an OLTP context, the size of the shared pool is frequently the limiting factor for database performance.

Automatic Memory Management

A new feature called Automatic Memory Management was introduced in Oracle 11g 64-bit (Release 1). The purpose of Automatic Memory Management is to simplify the memory configuration process for Oracle 11g. For example, in Oracle 10g, the user is required to set two parameters, SGA_TARGET and PGA_AGGREGATE_TARGET, so that Oracle can manage other memory-related configurations such as the buffer cache and shared pool. When using Oracle 11g-style Automatic Memory Management, the user does not set these System Global Area (SGA) and Program Global Area (PGA) parameters. Instead, the following parameters are set:
- MEMORY_TARGET
- MEMORY_MAX_TARGET

Once these parameters are set, Oracle 11g can, in theory, handle all memory management issues, including both SGA and PGA memory. However, the Automatic Memory Management model in Oracle 11g 64-bit (Release 1) requires configuration of shared memory as a file system mounted under /dev/shm. This adds an additional management burden for the DBA/system administrator.

Effects of Automatic Memory Management on performance

Decreased database performance: We observed a significant decrease in performance when we enabled the Oracle 11g Automatic Memory Management feature.

Linux HugePages are not supported: Linux HugePages are not supported when the Automatic Memory Management feature is implemented. When Automatic Memory Management is enabled, the entire SGA memory must fit under /dev/shm and, as a result, HugePages are not used. On both Oracle 11g and Oracle 10g, tuning HugePages increases the performance of the database significantly.

It is EMC's opinion that the performance improvements of HugePages, plus the lack of a requirement for a /dev/shm file system, make the Oracle 11g automatic memory model a poor trade-off.

EMC recommendations

To achieve optimal performance on Oracle 11g, EMC recommends the following:

- Disable the Automatic Memory Management feature.
- Use the 10g style of memory management on Oracle 11g.

The memory management configuration procedure is described in the previous section. This provides optimal performance and manageability per our testing.

HugePages

The Linux 2.6 kernel includes a feature called HugePages. This feature allows you to specify the number of physically contiguous large memory pages that are allocated and pinned in RAM for shared memory segments such as the Oracle System Global Area (SGA). The pre-allocated memory pages can only be used for shared memory and must be large enough to accommodate the entire SGA. HugePages can provide a very significant performance improvement for Oracle RAC 11g/10g database servers.

Warning: HugePages must be tuned carefully and set correctly. Unused HugePages can only be used for shared memory allocations, even if the system runs out of memory and starts swapping. Incorrectly configured HugePages settings may result in poor performance and may even make the machine unusable.

HugePages parameters

The HugePages parameters are stored in /etc/sysctl.conf. You can change the value of the HugePages parameters by editing the sysctl.conf file and rebooting the server. The following table describes the HugePages parameters:

HugePages_Total: Total number of HugePages that are allocated for shared memory segments. (This is a tunable value; you must determine how to set it.)
HugePages_Free: Number of HugePages that are not being used.
Hugepagesize: Size of each huge page.

Optimum values for HugePages parameters

The amount of memory allocated to HugePages must be large enough to accommodate the entire SGA:

HugePages_Total x Hugepagesize = amount of memory allocated to HugePages

To avoid wasting memory resources, the value of HugePages_Free should be zero.

Note: The value of vm.nr_hugepages should be set to a value that is at least equal to kernel.shmmax/2048. When the database is started, HugePages_Free should show a value close to zero to reflect that memory is tuned.

For more information on tuning HugePages, see Chapter 6: Installation and Configuration > Task 7: Tune HugePages.
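A minimal sketch of the recommended combination (10g-style memory parameters with HugePages) is shown below. The 16 GB SGA target, 4 GB PGA target, and the resulting huge page count are assumptions used only to illustrate the arithmetic (16 GB / 2 MB = 8192 pages, plus a small margin); size these values for your own server, and note that a reboot is typically needed for the full huge page allocation to succeed.

SQL> ALTER SYSTEM SET memory_target=0 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET memory_max_target=0 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET sga_target=16G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET pga_aggregate_target=4G SCOPE=SPFILE;

[root@mteoradb55 ~]# echo "vm.nr_hugepages = 8300" >> /etc/sysctl.conf
[root@mteoradb55 ~]# sysctl -p
...
[root@mteoradb55 ~]# grep Huge /proc/meminfo
HugePages_Total:    8300
HugePages_Free:     8300
Hugepagesize:       2048 kB

After the instance is restarted, HugePages_Free should drop to a value close to zero, indicating that the SGA is using the reserved pages.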

Chapter 5: Network Design

Concepts

Jumbo frames

Maximum Transmission Unit (MTU) sizes of greater than 1,500 bytes are referred to as jumbo frames. Jumbo frames require Gigabit Ethernet across the entire network infrastructure: server, switches, and database servers.

VLAN

Virtual local area networks (VLANs) logically group devices that are on different network segments or sub-networks.

Trunking

TCP/IP provides the ability to establish redundant paths for sending I/O from one networked computer to another. This approach uses the link aggregation protocol, commonly referred to as trunking. Redundant paths facilitate high availability and load balancing for the networked connection.

Trunking device

A trunking device is a virtual device created from two or more network devices to achieve higher performance with load-balancing capability, and high availability with failover capability. With Ethernet trunking/link aggregation, packets traveling through the virtual device are distributed among the underlying devices, based on the source MAC address, to achieve higher aggregated bandwidth.

Best practices

Gigabit Ethernet

EMC recommends that you use Gigabit Ethernet for the RAC interconnects if RAC is used. If 10 GbE is available, that is even better.

Jumbo frames and the RAC interconnect

For Oracle RAC 11g/10g installations, jumbo frames are recommended for the private RAC interconnect. This boosts throughput and can also lower CPU utilization caused by the software overhead of the bonding devices. Jumbo frames increase the device MTU size to a larger value (typically 9,000 bytes).

VLANs

EMC recommends that you use VLANs to segment different types of traffic to specific subnets. This provides better throughput, manageability, application separation, high availability, and security.

SAN network layout

SAN network layout for validated scenario

The SAN network layout is configured as follows:
- Two QLogic FC switches are used for the test bed.
- Two connections from each database server are connected to the QLogic switches.
- One FC port from SP A and one FC port from SP B are connected to each of the two FC switches.

Zoning

Each FC port on the database servers is zoned to both SP ports.

IP network layout

IP network design for validated scenario

The IP network layout is configured as follows:
- TCP/IP and NFS provide network connectivity.
- Client virtual machines run on a VMware ESX server and are connected to a client network.
- The client, RAC interconnect, and redundant TCP/IP storage networks consist of dedicated network switches and virtual local area networks (VLANs).
- The RAC interconnect and storage networks consist of trunked IP connections to balance and distribute network I/O. Jumbo frames are enabled on these networks.
- The Oracle RAC 11g or 10g servers are connected to the client, RAC interconnect, WAN, and production storage networks.

Virtual LANs

This solution uses three VLANs to segregate network traffic of different types. This improves throughput, manageability, application separation, high availability, and security. The following table describes the database server network port setup:

VLAN ID 1: Client network; CRS setting: Public
VLAN ID 2: RAC interconnect; CRS setting: Private
VLAN ID 3: Storage; CRS setting: None (not used)

Client VLAN

The client VLAN supports connectivity between the physically booted Oracle RAC 11g/10g servers, the virtualized Oracle Database 11g/10g server, and the client workstations. The client VLAN also supports connectivity between the Celerra and the client workstations to provide network file services to the clients. Control and management of these devices are also provided through the client network.

RAC interconnect VLAN

The RAC interconnect VLAN supports connectivity between the Oracle RAC 11g/10g servers for the network I/O required by Oracle CRS. Three network interface cards (NICs) are configured on each Oracle RAC 10g server for the RAC interconnect network. Link aggregation is configured on the servers to provide load balancing and port failover between the two ports for this network.

Redundant switches

In addition to VLANs, separate redundant storage switches are used. The RAC interconnect connections are also on a dedicated switch. For real-world solution builds, it is recommended that these switches support Gigabit Ethernet (GbE) connections, jumbo frames, and port channeling.

Jumbo frames

Overview

Jumbo frames are configured for the following layers:
- Celerra Data Mover
- Oracle RAC 11g/10g servers
- Switch

Note: Configuration steps for the switch are not covered here, as they are vendor-specific. Check your switch documentation for details.

Linux servers

To configure jumbo frames on a Linux server, execute the following command:

ifconfig eth0 mtu 9000

Alternatively, place the following statement in the network scripts in /etc/sysconfig/network-scripts:

MTU=9000

RAC interconnect

Jumbo frames should be configured for the storage and RAC interconnect networks of this solution to boost throughput, and they can also lower CPU utilization caused by the software overhead of the bonding devices. Jumbo frames increase the device MTU size to a larger value (typically 9,000 bytes). Typical Oracle database environments transfer data in 8 KB and 32 KB block sizes, which require multiple 1,500-byte frames per database I/O when using an MTU size of 1,500. Using jumbo frames, the number of frames needed for every large I/O request is reduced, which reduces the host CPU cycles spent generating a large number of interrupts for each application I/O. The benefit of jumbo frames is primarily a complex function of the workload I/O sizes, network utilization, and Oracle database server CPU utilization, and so is not easy to predict. For information on using jumbo frames with the RAC interconnect, see support.oracle.com.
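For example, a hedged sketch of making the setting persistent on one of the storage-network interfaces (eth4 in the interface table later in this chapter); the file shown is abbreviated, and for interfaces that are bonding slaves the MTU should also be set on the bond device, as in the bond0 example in the next section.

[root@mteoradb55 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth4
DEVICE=eth4
ONBOOT=yes
BOOTPROTO=none
MTU=9000
...

The new MTU takes effect when the interface (or its bond device) is restarted; the ping test in the next section can then be used to confirm end-to-end jumbo frame support.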

Verifying that jumbo frames are enabled

To test whether jumbo frames are enabled, use the following command:

ping -M do -s 8192 <target>

where target is the interface to be tested. Jumbo frames must be enabled on all layers of the network for this command to succeed.

Ethernet trunking and link aggregation

Trunking and link aggregation

Two NICs on each Oracle RAC 11g/10g server are used for the NFS connection, referred to previously as the storage network. The RAC interconnect network is trunked in a similar manner using three NICs. EMC recommends that you configure an Ethernet trunking interface with two Gigabit Ethernet ports to the same switch.

Enabling trunking on a Linux database server

On the database servers, network redundancy is achieved by using Linux kernel bonding. This is accomplished using the scripts contained in /etc/sysconfig/network-scripts. A typical bonded connection is as follows:

DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
IPADDR=
NETMASK=
MTU=9000

This device (bond0) consists of two Ethernet ports, whose scripts are similar to the following:

DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:04:23:B9:66:F3

The result is that the Ethernet ports that show their master as bond0 are joined to the bonded connection.

Modify the /etc/modprobe.conf file. The following is an example of the lines that must be added:

options bonding max_bonds=2 mode=4
alias bond0 bonding
alias bond1 bonding

Either reboot the Linux server, or bring the interfaces down and up, to enable the trunk.

Public and private networks

Each node should have:
- One static IP address for the public network
- One static IP address for the private cluster interconnect

The private interconnect should only be used by Oracle to transfer cluster manager and cache fusion-related data. Although it is possible to use the public network for the RAC interconnect, this is not recommended, as it may degrade database performance (reducing the amount of bandwidth available for cache fusion and cluster manager traffic).

Configuring virtual IP addresses

The virtual IP addresses must be defined in either the /etc/hosts file or DNS for all RAC nodes and client nodes. The public virtual IP addresses are configured automatically by Oracle when the Oracle Universal Installer is run, which starts Oracle's Virtual Internet Protocol Configuration Assistant (vipca). All virtual IP addresses are activated when the following command is run:

srvctl start nodeapps -n <node_name>

where node_name is the hostname/IP address that will be configured in the client's tnsnames.ora file.
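A hedged way to confirm that the bond described above came up with the expected mode and slave interfaces is to restart it and inspect the bonding driver status (device names follow the examples above; output is abbreviated):

[root@mteoradb55 ~]# ifdown bond0; ifup bond0
[root@mteoradb55 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: ...
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
...
Slave Interface: eth1
...
[root@mteoradb55 ~]#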

Oracle RAC 11g/10g server network architecture
Oracle RAC 11g/10g server network interfaces - NFS
The following list shows each interface and describes its use in the Oracle 11g/10g NFS configuration.
eth0: Client network
eth1: Unused
eth2: Unused
eth3: Unused
eth4: Storage network (trunked)
eth5: Storage network (trunked)
eth6: Unused
eth7: RAC interconnect (trunked)
eth8: RAC interconnect (trunked)
eth9: RAC interconnect (trunked)
Oracle RAC 11g/10g server network interfaces - FCP
The following lists show each Fibre Channel connection and describe its use in the Oracle 11g/10g FCP configuration. There are two dual-port FC host bus adapters on each of the database servers. Two FC ports from each database server are connected to one of the QLogic FC switches; the other two FC ports are connected to a different switch for high availability. One port from SPA and one port from SPB are connected to each of the two FC switches.
Database server connections:
HBA Port 0: QLogic FC Switch-1
HBA Port 1: QLogic FC Switch-1
HBA Port 2: QLogic FC Switch-2
HBA Port 3: QLogic FC Switch-2
CLARiiON connections:
SPA Port 0: QLogic FC Switch-1
SPB Port 1: QLogic FC Switch-1
SPA Port 1: QLogic FC Switch-2
SPB Port 0: QLogic FC Switch-2
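Because each database server has two HBA ports zoned to each fabric, every CLARiiON LUN should be reachable through four paths once zoning and storage group configuration are complete. One way to confirm this is with PowerPath on the database server; the device name below is an assumption, and the expected output is only summarized in the comments:

[root@mteoradb55 ~]# powermt display dev=emcpowerab
# The output should show the owning storage processor and four live paths
# (two HBAs, each reaching both fabrics) in the alive state.
[root@mteoradb55 ~]# powermt display
# Summarizes path counts for each storage array and HBA.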

45 Chapter 6: Installation and Configuration Chapter 6: Installation and Configuration Overview Introduction This chapter provides procedures and guidelines for installing and configuring the components that make up the validated solution scenario. Scope The installation and configuration instructions presented in this chapter apply to the specific revision levels of components used during the development of this solution. Before attempting to implement any real-world solution based on this validated scenario, gather the appropriate installation and configuration documentation for the revision levels of the hardware and software components as planned in the solution. Version-specific release notes are especially important. 45

Task 1: Build the network infrastructure
Network infrastructure
For details on building a network infrastructure, see Chapter 5: Network Design > IP network layout > IP network design for validated scenario.
Task 2: Set up and configure ASM for CLARiiON
Configure ASM and manage CLARiiON
For details on configuring ASM and managing the CLARiiON, follow the steps below.
Step 1: Find the operating system (OS) version.
[root@mteoradb55 ~]# cat /etc/redhat-release
Enterprise Linux Enterprise Linux Server release 5.1 (Carthage)
Step 2: Check the PowerPath installation.
[root@mteoradb55 ~]# rpm -qa EMC*
EMCpower.LINUX
Step 3: Check the ASM rpms applied on the OS.
[root@mteoradb55 ~]# rpm -qa | grep oracleasm
oracleasm-support
oracleasmlib
oracleasm
Step 4: Configure ASM.
[root@mteoradb55 ~]# /etc/init.d/oracleasm configure
Step 5: Check the status of ASM.
[root@mteoradb55 ~]# /etc/init.d/oracleasm status
Step 6: Create the ASM disk.
[root@mteoradb55 ~]# /etc/init.d/oracleasm createdisk LUN1 /dev/emcpowerab
Step 7: Scan the ASM disks.
[root@mteoradb55 ~]# /etc/init.d/oracleasm scandisks
Step 8: List the ASM disks.
[root@mteoradb55 ~]# /etc/init.d/oracleasm listdisks
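After the disks are labeled and scanned on all nodes, they can be collected into ASM disk groups from the ASM instance. The following sketch assumes external redundancy (the CLARiiON RAID groups already protect the LUNs) and reuses the SRCDATA disk group name that appears later in this guide; the disk labels are assumptions:

SQL> -- connect to the ASM instance as sysdba
SQL> CREATE DISKGROUP SRCDATA EXTERNAL REDUNDANCY
  2  DISK 'ORCL:LUN1', 'ORCL:LUN2';
SQL> -- confirm the disk group is mounted and sized as expected
SQL> SELECT name, state, total_mb FROM v$asm_diskgroup;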

47 Chapter 6: Installation and Configuration Task 3: Set up and configure database servers Check BIOS version Dell PowerEdge 2900 servers were used in our testing. These servers were preconfigured with the A06 BIOS. Upgrading the BIOS to the latest version (2.2.6 as of the time of this publication) resolved a range of issues, including hanging reboot problems and networking issues. Regardless of the server vendor and architecture, you should monitor the BIOS version shipped with the system and determine if it is the latest production version supported by the vendor. If it is not the latest production version supported by the vendor, then flashing the BIOS is recommended. Disable Hyper- Threading Intel Hyper-Threading Technology allows multi-threaded operating systems to view a single physical processor as if it were two logical processors. A processor that incorporates this technology shares CPU resources among multiple threads. In theory, this enables faster enterprise-server response times and provides additional CPU processing power to handle larger workloads. As a result, server performance will supposedly improve. In EMC s testing, however, performance with Hyper-Threading was poorer than performance without it. For this reason, EMC recommends disabling Hyper-Threading. There are two ways to disable Hyper-Threading: in the kernel or through the BIOS. Intel recommends disabling Hyper-Threading in the BIOS because it is cleaner than doing so in the kernel. Refer to your server vendor s documentation for instructions. Task 4: Configure NFS client options NFS client options For optimal reliability and performance, EMC recommends the NFS client options listed in the table below. The mount options are listed in the /etc/fstab file. Option Syntax Recommended Description Hard mount hard Always The NFS file handles are kept intact when the NFS server does not respond. When the NFS server responds, all the open file handles resume, and do not need to be closed and reopened by restarting the application. This option is required for Data Mover failover to occur transparently without having to restart the Oracle instance. NFS protocol vers= 3 Always Sets the NFS version to be used. Version 3 is recommended. 47

TCP (proto=tcp). Recommended: Always. All NFS and RPC requests are transferred over a connection-oriented protocol. This is required for reliable network transport.
Background (bg). Recommended: Always. Enables client attempts to connect in the background if the connection fails.
No interrupt (nointr). Recommended: Always. This toggle allows or disallows client keyboard interruptions to kill a hung or failed process on a failed hard-mounted file system.
Read size and write size (rsize=32768, wsize=32768). Recommended: Always. Sets the number of bytes NFS uses when reading or writing files from an NFS server. The default value is dependent on the kernel. However, throughput can be improved greatly by setting rsize/wsize=32768.
No auto (noauto). Recommended: Only for backup/utility file systems. Disables automatic mounting of the file system on boot-up. This is useful for file systems that are infrequently used (for example, stage file systems).
Timeout (timeo=600). Recommended: Always. Sets the time (in tenths of a second) the NFS client waits for a request to complete.
sunrpc.tcp_slot_table_entries
The sunrpc module parameter tcp_slot_table_entries controls the number of concurrent I/Os to the storage system. The default value of this parameter is 16. The parameter should be set to the maximum value (128) for enhanced I/O performance. To configure this option, type the following command:
[root@mteoraesx2-vm3 ~]# sysctl -w sunrpc.tcp_slot_table_entries=128
sunrpc.tcp_slot_table_entries = 128
Important
To make this setting permanent, also add the entry to /etc/sysctl.conf and then run sysctl -p. This reparses the file, and the resulting values are output.
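Putting the options above together, a typical /etc/fstab entry for an NFS file system used for archived logs, together with the persistent sysctl setting, might look like the following sketch. The Data Mover hostname, export path, and mount point are assumptions for illustration:

# /etc/fstab (one line per file system)
celerra-dm2:/oraarch  /u03/oraarch  nfs  hard,bg,nointr,proto=tcp,vers=3,rsize=32768,wsize=32768,timeo=600  0 0

# /etc/sysctl.conf
sunrpc.tcp_slot_table_entries = 128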

49 Chapter 6: Installation and Configuration Task 5: Install Oracle Database 11g/10g Install Oracle Database 11g for Linux See Oracle s installation guide: Oracle Database Installation Guide 11g Release 1 (11.1) for Linux Install Oracle Database 10g for Linux See Oracle s installation guide: Oracle Database Client Installation Guide 10g Release 1 ( ) for Linux x86-64 Task 6: Configure database server memory options Database server memory Refer to your database server documentation to determine the total number of memory slots your database server has, and the number and density of memory modules that you can install. EMC recommends that you configure the system with the maximum amount of memory feasible to meet the scalability and performance needs. Compared to the cost of the remaining components in an Oracle database server configuration, the cost of memory is minor. Configuring an Oracle database server with the maximum amount of memory is entirely appropriate. Shared memory Oracle uses shared memory segments for the Shared Global Area (SGA), which is an area of memory that is shared by Oracle processes. The size of the SGA has a significant impact on the database performance, and there is a direct correlation between SGA size and disk I/O. EMC s Oracle RAC 11g/10g testing was done with servers using 20 GB of SGA. Memory configuration files The following table describes the files that must be configured for memory management: File Created by Function /etc/sysctl.conf Linux installer Contains the shared memory parameters for the Linux operating system. This file must be configured in order for Oracle to create the SGA with shared memory. /etc/security/limit s.conf Oracle parameter file Linux installer Oracle installer, dbca, or DBA who Contains the limits imposed by Linux on users use of resources. This file must be configured correctly in order for Oracle to use shared memory for the SGA. Contains the parameters used by Oracle to start an instance. This file must contain the correct parameters in order for Oracle to 49

50 Chapter 6: Installation and Configuration creates the database start an instance using shared memory. Configuring /etc/sysctl.conf Configure the etc/sysctl.conf file as follows: # Oracle parameters kernel.shmall = kernel.shmmax = kernel.shmmni = 4096 kernel.sem = fs.file-max = net.ipv4.ip_local_port_range = net.core.rmem_default = net.core.rmem_max = net.core.wmem_default = net.core.wmem_max = vm.nr_hugepages = sunrpc.tcp_slot_table_entries = 128 Recommended parameter values The following table describes recommended values for kernel parameters. Kernel parameter kernel.shmmax kernel.shmini kernel.shmall Parameter function Defines the maximum size in bytes of a single shared memory segment that a Linux process can allocate in its virtual address space. Since the SGA is comprised of shared memory, SHMMAX can potentially limit the size of the SGA. Sets the system-wide maximum number of shared memory segments. The value should be at least ceil(shmmax/page_size). The PAGE_SIZE on our Linux systems was Recommended value (Slightly larger than the SGA size)
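The exact values depend on the SGA and page size in use. As a worked example only, assuming the 20 GB SGA used in this testing, a 4096-byte page size, and a shared memory ceiling of roughly 22 GB, the sizing works out as follows; these numbers are illustrative assumptions rather than validated settings:

# SHMMAX slightly larger than the 20 GB SGA: 22 * 1024^3 bytes
kernel.shmmax = 23622320128
# SHMALL is expressed in pages: 23622320128 / 4096
kernel.shmall = 5767168
kernel.shmmni = 4096
# HugePages pool of 21 GB at a 2 MB huge page size: 21 * 1024 / 2 pages
vm.nr_hugepages = 10752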

51 Chapter 6: Installation and Configuration Configuring /etc/security/lim its.conf The section of the /etc/security/limits.conf file relevant to Oracle should be configured as follows: # Oracle parameters oracle soft nproc 2047 oracle hard nproc oracle soft nofile 1024 oracle hard nofile oracle soft memlock oracle hard memlock Important Ensure that the memlock parameter has been configured. This is required for the shared memory file system. This is not covered in the Oracle Database 11g Installation Guide, so be sure to set this parameter. Task 7: Tune HugePages Tuning HugePages The following table describes how to tune HugePages parameters to ensure optimum performance. Step Action 1 Ensure that the machine you are using has adequate memory. For example, our test system had 24 GB of RAM and a 20 GB SGA. 2 Set the HugePages parameters in /etc/sysctl.conf to a size into which the SGA will fit comfortably. For example, to create a HugePages pool of 21 GB, which would be large enough to accommodate the SGA, set the following parameter values: HugePages_Total: Hugepagesize: 2048 KB 3 Reboot the instance. 4 Check the values of the HugePages parameters by typing the following command: [root@mteoradb51 ~]# grep Huge /proc/meminfo On our test system, this command produced the following output: HugePages_Total: HugePages_Free: 1000 Hugepagesize: 2048 KB 5 If the value of HugePages_Free is equal to zero, the tuning is complete: 51

52 Chapter 6: Installation and Configuration If the value of HugePages_Free is greater than zero: a) Subtract the value of HugePages_Free from HugePages_Total. Make note of the answer. b) Open /etc/sysctl.conf and change the value of HugePages_Total to the answer you calculated in step a). c) Repeat steps 3, 4, and 5. Tuning HugePages on RHEL 5/OEL 5 On Red Hat Enterprise Linux 5 and on Oracle Enterprise Linux 5 systems, HugePages cannot be configured using the steps mentioned above. We used a shell script called hugepage_settings.sh to configure HugePages on these systems. This script is available on Oracle MetaLink Note The hugepage_settings.sh script configures HugePages as follows: HugePages_Total: HugePages_Free: 2244 HugePages_Rsvd: 2240 Hugepagesize: 2048 kb More information about HugePages For more information on enabling and tuning HugePages, refer to: Oracle MetaLink Note Tuning and Optimizing Red Hat Enterprise Linux for Oracle 9i and 10g Databases Task 8: Set database initialization parameters Overview This section describes the initialization parameters that should be set in order to configure the Oracle instance for optimal performance on the CLARiiON CX4 series. These parameters are stored in the spfile or init.ora file for the Oracle instance. Database block size Parameter Syntax Description Database block size DB_BLOCK_SIZE=n For best database performance, DB_BLOCK_SIZE should be a multiple of the OS block size. For example, if the Linux page size is 4096, DB_BLOCK_SIZE =4096 *n. 52

53 Chapter 6: Installation and Configuration Direct I/O Parameter Direct I/O Syntax Description FILESYSTEM_IO_OPTIONS=setall This setting enables direct I/O and async I/O. Direct I/O is a feature available in modern file systems that delivers data directly to the application without caching in the file system buffer cache. Direct I/O preserves file system semantics and reduces the CPU overhead by decreasing the kernel code path execution. I/O requests are directly passed to network stack, bypassing some code layers. Direct I/O is a very beneficial feature to Oracle s log writer, both in terms of throughput and latency. Async I/O is beneficial for datafile I/O. Multiple database writer processes Parameter Syntax Description Multiple database writer processes DB_WRITER_PROCESSES=2*n The recommended value for db_writer_processes is that it at least matches the number of CPUs. During testing, we observed very good performance by just setting db_writer_processes to 1. Multi Block Read Count Parameter Syntax Description Multi Block Read Count DB_FILE_MULTIBLOCK_READ_COUNT= n DB_FILE_MULTIBLOCK_READ_COUNT determines the maximum number of database blocks read in one I/O during a full table scan. The number of database bytes read is calculated by multiplying the DB_BLOCK_SIZE by the DB_FILE_MULTIBLOCK_READ_COUNT. The setting of this parameter can reduce the number of I/O calls required for a full table scan, thus improving performance. Increasing this value may improve performance for databases that perform many full table scans, but degrade performance for OLTP databases where full table scans are seldom (if ever) performed. Setting this value to a multiple of the NFS READ/WRITE size specified in the mount limits the amount of fragmentation that occurs in the I/O subsystem. This parameter is specified in DB Blocks and NFS settings are in bytes - adjust as required. EMC recommends that DB_FILE_MULTIBLOCK_READ_COUNT be set to between 1 and 4 for an OLTP database and to between 16 and 32 for DSS. 53

54 Chapter 6: Installation and Configuration Disk Async I/O Parameter Disk Async I/O Syntax Description DISK_ASYNCH_IO=true RHEL 4 update 3 and later support async I/O with direct I/O on NFS. Async I/O is now recommended on all the storage protocols. Use Indirect Memory Buffers Parameter Syntax Description Use Indirect Memory Buffers USE_INDIRECT_DATA_BUFFERS=true Required to support the use of the /dev/shm inmemory file system for storing the SGA shared memory structures. Task 9: Configure Oracle Database control files and logfiles Control files EMC recommends that when you create the control file, allow for growth by setting MAXINSTANCES, MAXDATAFILES, MAXLOGFILES, and MAXLOGMEMBERS to high values. Your database should have a minimum of two control files located on separate physical ASM diskgroups. One way to multiplex your control files is to store a control file copy on every diskgroup that stores members of the redo log groups. Online and archived redo log files EMC recommends that you: Run a mission-critical, production database in ARCHIVELOG mode. Multiplex your redo log files for these databases. Loss of online redo log files could result in a database recovery failure. The best practice to multiplex your online redo log files is to place members of a redo log group on different ASM diskgroups. To understand how redo log and archive log files can be placed, refer to the Reference Architecture diagram. 54
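As a sketch of the multiplexing recommendations above, the following statements confirm where the control file copies reside and add a redo log group with one member on each of two ASM disk groups. The SRCLOG1 and SRCLOG2 disk group names are taken from the Data Guard configuration later in this guide, and the group number and size are assumptions:

SQL> -- list the current control file copies; each should be on a separate disk group
SQL> SELECT name FROM v$controlfile;
SQL> -- add a multiplexed redo log group for thread 1
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('+SRCLOG1','+SRCLOG2') SIZE 512M;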

55 Chapter 6: Installation and Configuration Task 10: Enable passwordless authentication using SSH Overview The use of passwordless authentication using ssh is a fundamental concept to make successful use of Oracle RAC 10g or 11g with Celerra. SSH files SSH passwordless authentication relies on the three files described in the following table. File Created by Purpose ~/.ssh/id_dsa.pub ssh-keygen Contains the host s dsa key for ssh authentication (functions as the proxy for a password) ~/.ssh/authorized_keys ssh Contains the dsa keys of hosts that are authorized to log in to this server without issuing a password ~/.ssh/known_hosts ssh Contains the dsa key and hostname of all hosts that are allowed to log in to this server using ssh id_dsa.pub The most important ssh file is id_dsa.pub. Important If the id_dsa.pub file is re-created after you have established a passwordless authentication for a host onto another host, the passwordless authentication will cease to work. Therefore, do not accept the option to overwrite id_dsa.pub if ssh-keygen is run and it discovers that id_dsa.pub already exists. Enabling authentication: Single user/single host The following table describes how to enable passwordless authentication using ssh for a single user on a single host: Step Action 1 Create the dsa_id.pub file using ssh-keygen. 2 Copy the key for the host for which authorization is being given to the authorized_keys file of the host that allows the login. 3 Complete a login so that ssh knows about the host that is logging in. That is, record the host s key and hostname in the known_hosts file. 55
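The three steps above reduce to a few commands. The following sketch enables the oracle user on one node to log in to a second node without a password; the hostnames are illustrative, and where the ssh-copy-id utility is available it combines steps 2 and 3:

[oracle@nodeA ~]$ ssh-keygen -t dsa
# Accept the default file name and an empty passphrase; do not overwrite an existing id_dsa.pub
[oracle@nodeA ~]$ cat ~/.ssh/id_dsa.pub | ssh oracle@nodeB 'cat >> ~/.ssh/authorized_keys'
[oracle@nodeA ~]$ ssh oracle@nodeB date
# The first login records nodeB in known_hosts; subsequent logins require no password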

56 Chapter 6: Installation and Configuration Enabling authentication: Single user/multiple hosts Prerequisites To enable authentication for a user on multiple hosts, you must first enable authentication for the user on a single host: Chapter 6: Installation and Configuration > Task 10: Enable passwordless authentication using SSH > Enabling authentication: Single user/single host Procedure summary After you have enabled authentication for a user on a single host, you can then enable authentication for the user on multiple hosts by copying the authorized_keys and known_hosts files to the other hosts. This is a very common task when setting up Oracle RAC 11g//10g prior to installation of Oracle Clusterware. It is possible to automate this task by using the ssh_multi_handler.bash script. ssh_multi_handler.bash #!/bin/bash # # # Script: ssh_multi_handler.bash # # Purpose: Handles creation of authorized_keys # # # ALL_HOSTS="rtpsol347 rtpsol348 rtpsol349 rtpsol350" THE_USER=root mv -f ~/.ssh/authorized_keys ~/.ssh/authorized_keys.bak mv -f ~/.ssh/known_hosts ~/.ssh/known_hosts.bak for i in ${ALL_HOSTS} do ssh ${THE_USER}@${i} "ssh-keygen -t dsa" ssh ${THE_USER}@${i} "cat ~/.ssh/id_dsa.pub" \ >> ~/.ssh/authorized_keys ssh ${THE_USER}@${i} date done for i in $ALL_HOSTS do scp ~/.ssh/authorized_keys ~/.ssh/known_hosts \ ${THE_USER}@${i}:~/.ssh/ done for i in ${ALL_HOSTS} do for j in ${ALL_HOSTS} do ssh ${THE_USER}@${i} "ssh ${THE_USER}@${j} date" done 56

57 Chapter 6: Installation and Configuration done mv -f ~/.ssh/authorized_keys.bak ~/.ssh/authorized_keys mv -f ~/.ssh/known_hosts.bak ~/.ssh/known_hosts exit How to use ssh_multi_handler.bash At the end of the process described below, all of the equivalent users on the set of hosts will be able to log in to all of the other hosts without issuing a password. Step Action 1 Copy and paste the text from ssh_multi_handler.bash into a new file on the Linux server. 2 Edit the variable definitions at the top of the script. 3 Use chmod on the script to allow it to be executed. 4 Run the script. Output on our systems On our systems with the settings noted previously, this script produced the following effect: ssh multi-host output [root@rtpsol347 ~]#./ssh_multi_handler.bash Enter file in which to save the key (/root/.ssh/id_dsa): Generating public/private dsa key pair. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /root/.ssh/id_dsa. Your public key has been saved in /root/.ssh/id_dsa.pub. The key fingerprint is: f8:21:61:55:55:92:15:ed:0a:62:89:c5:ed:93:5f:27 root@rtpsol347.solutions1.rtp.dg.com root@rtpsol347's password: Tue Aug 8 22:21:31 EDT 2006 root@rtpsol348's password:...(additional similar output not shown) authorized_keys 100% KB/s 00:00 known_hosts 100% KB/s 00:00 root@rtpsol348's password:...<repeated 3 times> Tue Aug 8 22:22:05 EDT <repeated 15 times> [root@rtpsol347 ~]# The 16 date outputs, without any requests for passwords, indicate that the 57

58 Chapter 6: Installation and Configuration passwordless authentication files on all root users among these four hosts have been successfully created. Enabling authentication: Single host/different user Another common task is to set up passwordless authentication across two users between two hosts. For example, enable the Oracle user on the database server to run commands as the root or nasadmin user on the Celerra Control Station. You can set this up by using the ssh_single_handler.bash script. This script creates passwordless authentication from the presently logged in user to the root user on the Celerra Control Station. ssh_single_handler.bash #!/bin/bash # # # Script: ssh_single_handler.bash # # Purpose: Handles creation of authorized_keys # # # THE_USER=root THE_HOST=rtpsol33 ssh-keygen -t dsa KEY=`cat ~/.ssh/id_dsa.pub` ssh ${THE_USER}@${THE_HOST} "echo ${KEY} >> \ ~/.ssh/authorized_keys" ssh ${THE_USER}@${THE_HOST} date exit Output on our systems On our systems with the settings noted previously, ssh_single_handler.bash produced the following effect: ssh single host output [oracle@rtpsol347 scripts]$./ssh_single_handler.bash Generating public/private dsa key pair. Enter file in which to save the key (/home/oracle/.ssh/id_dsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/oracle/.ssh/id_dsa. Your public key has been saved in /home/oracle/.ssh/id_dsa.pub. The key fingerprint is: 09:13:4d:7d:20:0c:9a:c4:4e:35:c9:c9:11:9e:30:31 oracle@rtpsol347.solutions1.rtp.dg.com 58

59 Chapter 6: Installation and Configuration Wed Aug 9 09:40:01 EDT 2006 [oracle@rtpsol347 scripts]$ The date output without a password request indicates that the passwordless authentication files have been created. Task 11: Set up and configure CLARiiON storage for Replication Manager and SnapView Setup and configuration of CLARiiON storage Carry out the steps in the following table to configure CLARiiON storage to use Replication Manager (RM) and SnapView to create clones for backup and recovery and test/dev. Note To enable the CLARiiON cloning feature, two LUNs with a minimum capacity of 1 GB (for CX4 series) must be designated as clone private LUNs. These LUNs will be used during cloning. Step Action 1 Create a storage group: a. In Navisphere Manager, click the Storage tab, right-click Storage Groups and select Create Storage Group. b. In the Storage Group Name field, enter EMC Replication Storage. The name EMC Replication Storage must be used in order for Replication Manager to work correctly. The name is case sensitive. c. Click Apply, and then click OK. 2 Add LUNs to the storage group: a. In Navisphere Manager, under Storage Groups, right-click EMC Replication Storage and choose Select LUNs. (The Storage Group Properties tab is displayed.) b. In the LUNs tab, select all the LUNs that are used to store the database, then click Apply. 3 Create a separate storage group for clone target LUNs: a. In Navisphere Manager, click the Storage tab, right-click Storage Groups and select Create Storage Group. b. In the Storage Group Name field, enter Mount Host SG. 4 Add target LUNs to the Mount Host SG storage group: a. Select Mount Host SG and right-click. b. Click Select LUNs. (The Storage Group Properties tab is displayed.) c. Select the target LUNs to be used for cloning, and then click Apply. 5 Add the host to the clone target storage group. 59

60 Chapter 6: Installation and Configuration This allows the clone target LUNs to be mounted on the target storage group to bring up the test/dev copy of the database. Configuring Replication Manager The procedure for preparing and installing Replication Manager (RM) components is outside the scope of this document. After installing Replication Manager Server and Agents, carry out the steps in the following tables to: Add a host Add storage Create a LUNs pool Create an application set Adding a host using RM The following table lists the steps for adding a host using Replication Manager. Step Action 1 In the Replication Manager console, select and right-click Hosts, then select New Host. 2 Type a name for the host (database server name), then click OK. 3 Repeat steps 1 and 2 to add all other hosts (database servers) and the mount host server to Replication Manager. Adding storage using RM After you have added the hosts and the mount host, you must add the storage. The following table lists the steps for adding storage using Replication Manager. Step Action 1 In Replication Manager, select and right-click Storage Services, then select Add Storage. 2 To start the Add Storage Wizard, in the Confirm dialog box, click Yes, and then click Next to go to the next screen. 3 On the Select Storage Services screen, select the appropriate storage service and double-click it. 4 On the Array Connections screen, enter the login credentials for the storage array, and then click OK. 5 When the Discover Storage screen progress bar reaches 100%, click Close. 6 On the Select Target Devices screen, select the snap cache for snapshots and the LUNs, then click Next. 7 To complete the procedure, click Finish. 8 When the Add Storage screen progress bar reaches 100%, click 60

61 Chapter 6: Installation and Configuration Close. The storage array is now visible in the Storage Services list. Creating a LUNS pool A dedicated storage pool for cloning must be created so that the specified LUNs in that pool can be selected as cloning targets. To create a new storage pool, carry out the steps in the following table: Step Action 1 In Replication Manager, select and right-click Storage Pools, then select New Storage Pool. 2 On the New Pool screen, enter a pool name and a description, and then click Add. 3 Select the LUNs that you want to include in the storage pool and click OK. 4 Click Add to add the updated LUNs to the new pool, and then click Close. Creating an application set using RM After you have added the storage to Replication Manager, you must then create an application set using the Replication Manager console. The following table lists the steps for creating an application set: Step Action 1 In Replication Manager, select and right-click Application Sets, then select New Application Set. 2 In the Add Application Set wizard, click Next to go to the next screen. 3 On the Application Set Name and Objects screen, click Add Instance to create a database instance. 4 On the Application Credentials screen, enter the Oracle database credentials or the ASM credentials, and then click OK. The database instance created. 5 Select the newly created instance and type a name for the application set, then click Next. 6 On the Completing the Application Set wizard screen, click Finish. The application set is created. 61
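The storage groups in Task 11 were created through Navisphere Manager. In scripted environments, the same objects can generally be created with the Navisphere Secure CLI; the sketch below is an assumed equivalent, with placeholder SP address, LUN numbers, and host name, and the exact options should be confirmed against the naviseccli documentation for the FLARE release in use:

naviseccli -h <SPA_address> storagegroup -create -gname "EMC Replication Storage"
naviseccli -h <SPA_address> storagegroup -addhlu -gname "EMC Replication Storage" -hlu 0 -alu 20
naviseccli -h <SPA_address> storagegroup -create -gname "Mount Host SG"
naviseccli -h <SPA_address> storagegroup -connecthost -host <mount_host> -gname "Mount Host SG"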

62 Chapter 6: Installation and Configuration Task 12: Install and configure EMC RecoverPoint Installation When performing the following steps, the RecoverPoint GUI provides many options. A detailed explanation of those options is available in the EMC RecoverPoint Administrator s Guide. Refer to the EMC RecoverPoint Installation Guide for full installation instructions. Configuring RecoverPoint Configuration requires one storage group per site, which consists of the repository volume, journal volumes, and all the replication volumes. The repository volume and journal volumes will not be accessible from any hosts other than the RPAs. But the RPAs will need to have access to all the volumes that will take part in replication. Similarly, there must be one more storage group consisting of hosts and the replication volumes to which the hosts will have access. The hosts should not have access to repository and journal volumes. The steps for validating the disaster recovery solution using RecoverPoint are described in the following table. Step Action 1 Zone all the RPA ports with all the CLARiiON array ports. A single zone consisting of all RPA ports and all CLARiiON ports per site should be created. 2 Manually register the RPA initiators as Initiator Type: RecoverPoint Appliance with the failover mode set to 4. 3 Complete the installation of the RPAs. Refer to Chapter 2 of the EMC RecoverPoint Installation Guide for instructions. Start from the step after booting up the RPA with the new software and continue up to, but not including, attaching the splitters. During our testing, the RecoverPoint appliances at both sites were configured as shown below: RPA Setup Details Parameter Site 1: Value Site 2: Value Site Name Source Target Management Default Gateway Management Subnet Mask WAN default gateway WAN subnet mask Site Management IP Time zone America/New_York America/New_York Primary DNS server Secondary DNS server Local domain solutions1.rtp.dg.com solutions1.rtp.dg.com NTP server N/A 62

63 Chapter 6: Installation and Configuration Number of virtual ports N/A N/A Initiator only mode N/A N/A Number of exposed LUNs N/A N/A Box 1 Box Management IP Box WAN IP Remote maintenance port N/A N/A Box 2 Box Management IP Box WAN IP Remote maintenance port N/A N/A 4 Verify the RPA to CX splitter connectivity by logging in to the CLI of the RPA. You can verify the RPA to CX splitter connectivity using the following option: Run: [3]:Diagnostics > [2]:Fibre Channel diagnostics > [4]:Detect Fibre Channel LUNs If connectivity has been established, you will see the FC LUNs. 5 To add new splitters to the RPA environment, right-click on the splitter icon in the RecoverPoint GUI. Splitters can then be selected and added. Splitters must be added for both sites. 6 Create a consistency group: a) In the navigation pane of the RecoverPoint management console, select Consistency Groups. b) In the Consistency Groups pane, click Add new group. A detailed description of the different options that can be specified for a consistency group is available in Chapter 2 of the EMC RecoverPoint Administrator s Guide. 7 Configure the source and target copies: a) In the navigation pane of the RecoverPoint management console, select Consistency Groups > Oracle DB Consistency Group. b) In the Oracle DB Consistency Group pane, select the Status tab. c) Click the Add Copy icon. The New Copy dialog box is displayed. 8 In the New Copy dialog box, select the site of the production source and enter the values for General Settings and Advanced Settings. Refer to the EMC RecoverPoint Administrator s Guide for a detailed explanation of all the available options. 9 Define the replication sets: a) In the navigation pane, select Volumes. b) Click Configuration. 63

64 Chapter 6: Installation and Configuration The Volumes Configuration dialog box is displayed. c) To create a new replication set, click Add New Replication Set. d) Add a volume to both the source-side replica and target-side replica. e) Add at least one journal volume per replication set. 10 Attach volumes to the splitters: a) In the navigation pane, select Splitters. The Splitters tab with the available splitters is displayed. b) Select a splitter and double-click. c) Click Rescan and then click Attach. The replication sets that are discovered by the splitter are displayed. d) Select the replication sets that you want to add to the splitters. Note A volume cannot be replicated unless it is attached to a splitter. Once the replication sets have been added, the Splitter Properties screen will display all the replication sets that are associated with the splitters. 11 Enable a consistency group by selecting its name in the navigation panel and clicking Enable Group. Note The consistency groups should be enabled before starting replication. 12 Start replication from source to target site by clicking Start Transfer. 13 Enable image access. Once the replication is complete, the target image can be accessed by selecting Image Access from the drop-down menu. Task 13: Set up the virtualized utility servers Setting up the virtualized utility servers Virtualized single instance database servers were used as targets for test/dev and disaster recovery solutions. To set up a virtualization configuration, you need to do the steps outlined in the following table: Step Action 1 Deploy a VMware ESX server. 2 Capture the total physical memory and total number of CPUs that are available on the ESX server. 3 Create four virtual machines (VMs) on the ESX server. For the storage network configuration, see Chapter 8: Virtualization > VMware ESX server > Typical storage network configuration. 4 Distribute the memory and CPUs available equally to each of the VMs. 64

65 Chapter 6: Installation and Configuration 5 Assign a VMkernel IP ( ) to each ESX server so that it can be used to mount NFS storage. For the storage configuration, see Chapter 8: Virtualization > VMware ESX server > Storage configuration. Note All the VMs need to be located on common storage. This is mandatory for performing VMotion. 6 Configure four additional NICs on the ESX server; dedicate each NIC to a VM. These additional NICs are used to configure the dedicated private network connection to Celerra where the database files reside. 7 Ensure all necessary software, for example, Oracle, is installed and configured. Note All database objects are stored on an NFS mount. Task 14: Configure and connect EMC RecoverPoint appliances (RPAs) Configuring RecoverPoint appliances (RPAs) Refer to the EMC RecoverPoint Installation Guide, located on Powerlink.com, for full installation and configuration instructions. Access to this document is based on your login credentials. If you do not have access, contact your EMC representative. Task 15: Install and configure EMC MirrorView/A Installing MirrorView/A Refer to the EMC MirrorView Installation Guide, located on Powerlink.com, for full installation instructions. Access to this document is based on your login credentials. If you do not have access, contact your EMC representative. 65
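Step 5 of Task 13 gives each ESX server a VMkernel interface so that it can mount NFS storage. Once that interface is configured, the Celerra file system holding the virtual machines can be presented as an NFS datastore. The following service console sketch is illustrative only; the Data Mover name, export path, and datastore label are assumptions:

# Add an NFS datastore backed by the Celerra Data Mover
esxcfg-nas -a -o celerra-dm2 -s /vmstore vm_datastore
# List the configured NFS datastores to confirm the mount
esxcfg-nas -l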

66 Chapter 6: Installation and Configuration Task 16: Install and configure EMC CLARiiON (CX) splitters Installing CX splitters Refer to the section on installing CLARiiON (CX) splitters in the EMC RecoverPoint Installation Guide, located on Powerlink.com, for full installation instructions. Access to this document is based on your login credentials. If you do not have access, contact your EMC representative. 66

67 Chapter 7: Testing and Validation Chapter 7: Testing and Validation Overview Introduction to testing and validation This chapter provides a detailed summary and description of the tests performed to validate an EMC Proven Solution. The goal of the testing was to record the response of the end-to-end solution and component subsystem under reasonable load. The solution was tested under a load that is representative of the market for Oracle RAC 11g/10g on Linux with an EMC Celerra unified storage platform over Fibre Channel Protocol (FCP) and Network File System (NFS). Objectives The objectives of this testing were to carry out: Performance testing of a blended solution using FCP for datafiles, online redo log files, and controlfiles (all of the files in the Oracle database environment that require high-performance I/O), and NFS for all the other files required to be stored by the Oracle database environment (archived log files, OCR files, flashback recovery area, and database backups). Functionality testing of a test/dev solution using a blended configuration, whereby a cloned version of a physically-booted production Oracle RAC 11g/10g database was replicated and then mounted on a VMware virtual machine running a single-instance Oracle Database 11g/10g. Testing focus This Proven Solution Guide focuses on the Store and Advanced Backup solution components of the objectives. To accomplish these objectives, EMC did the following: Used Quest Benchmark Factory for Databases to run a TPC-C workload against the solution stack described in the system configuration. Scaled the workload iteratively until a breaking point in the test was reached (either an error was returned due to some resource constraint or performance scaling became negative). Gathered the operating system and Oracle performance statistics, and used these as the input to a tuning effort. In this way, the maximum workload reasonably achievable on this solution stack was reached. Performed functional tests, as appropriate, for each solution being validated. Section A: Store solution component Overview of the Store solution component The Store solution component was designed as a set of performance measurements to determine the bounding point of the solution stack in terms of performance. A reasonable amount of fine tuning was performed in order to ensure that the performance measurements achieved were consistent with real-world performance. 67

68 Chapter 7: Testing and Validation Test procedure The following procedure was used to validate the Store solution component: Step Action 1 Close all the Benchmark Factory agents that are running. 2 Restart all the client machines. 3 Stop all the database instances. 4 Initiate the Benchmark Factory console and agents on the client machines. 5 Start the Benchmark Factory job. 6 Monitor the progress of the test. 7 Allow the test to finish. 8 Capture the results. 68

Test results
The Store workload was scaled across one-, two-, three-, and four-node RAC configurations. For each configuration the following metrics were captured: users, TPS, response time, DB CPU, average DB latency, physical reads, physical writes, I/Os per drive, and redo size. The user loads reached were:
1-Node RAC Store results: 3700 users
2-Node RAC Store results: 4800 users
3-Node RAC Store results: 6300 users
4-Node RAC Store results: 9700 users, with a response time of 0.19 seconds

70 Chapter 7: Testing and Validation Section B: Basic Backup solution component Overview of the Basic Backup solution component The Basic Backup solution component demonstrates that the validated configuration is compatible with Oracle Recovery Manager (RMAN) disk-to-disk backup. The backup tests were performance tests, where the performance of each node level was tested while creating an RMAN backup. The restore was a functionality test, but the amount of time required to perform the RMAN restore was also tuned and measured. The transactions that were restored and recovered were examined to ensure that there was no data loss. Test configuration The following tests were executed with 24 GB of memory on each database server. Test procedure The following procedure was used to validate the Basic Backup solution component: Step Action 1 Close all the Benchmark Factory agents that are running. 2 Restart all the client machines and stop all the database instances. 3 Initiate the Benchmark Factory console and agents on the client machines. 4 Start the Benchmark Factory job. 5 Monitor the progress of the test. 6 Initiate RMAN backup at user load 3600 and monitor the progress. 7 Allow the test to finish. 8 Shut down the database and mount the database. 9 Perform the restore operation using RMAN and capture the observations. Test results Summary The RMAN backup operation was performed while the Benchmark factory load was running. The RMAN backup started at user load When RMAN was initiated at user load 3600, there was a moderate increase in response time and transaction throughput. Testbed performance The aggregate performance for the entire testbed was as follows: Machine Users TPS Response Time 70

71 Chapter 7: Testing and Validation (seconds) Mterac Conclusion RMAN provided a reliable high-performance backup solution for Oracle 11g using our configuration. However, the time required to restore the database (44 minutes) was significant. Section C: Advanced Backup solution component Overview of Advanced Backup solution component The Advanced Backup solution component demonstrates that the validated configuration is compatible with CLARiiON SnapView using Replication Manager. The backup tests were performance tests. The performance of each node level was tested while performing hot backup using a SnapView snapshot with Replication Manager. The restore was a functionality test. The amount of time required to perform the SnapView restore was tuned and measured Test Type Performance Functional Test Description 1 hot backup using SnapView snapshot with Replication Manager Restore and recover from a SnapView snapshot using Replication Manager Test configuration The test configuration for the Advanced Backup solution component was identical to the Store solution component. Test procedure The following procedure was used to validate the Advanced Backup solution component: Step Action 1 Configure Replication Manager. 2 Register the production hosts, mount hosts, and storage in Replication Manager. 3 Create the application set in Replication Manager for the database to be replicated. 4 Create a job in the Replication Manager console to take the SnapView snapshot. 5 Close all the Benchmark Factory agents that are running. 71

72 Chapter 7: Testing and Validation 6 Close the Benchmark Factory console. 7 Restart the Benchmark Factory console and agents. 8 Stop and restart the database instances. 9 Start the Benchmark Factory test with the user load ranging from 4000 to When the user load reaches iteration 5500, take a snapshot of the database by running the job in the Replication Manager console. 11 Monitor the performance impact on the production database. 12 When the Benchmark Factory test is complete, capture the results. 13 Shut down the database. 14 Stop and disable the ASM instances. 15 Dismount the data diskgroups. 16 Restore the database using Replication Manager. 17 Recover the database. 18 Capture the time taken to restore the database. Test results Summary The advanced backup operation using EMC SnapView was performed while the OLTP load was running on the database. When the database was taken into hot backup mode at the 5500th iteration to create the snapshot, there was a significant increase in response time and a significant decrease in transaction throughput. Testbed performance The aggregate performance for the entire testbed was as follows: Machine Users TPS Response Time (seconds) Mterac Conclusion The CLARiiON SnapView feature works with Oracle RAC 11g for our configuration and can be performed successfully through RM. In most test runs, very slight performance hit was observed during backup. However, this was temporary. The performance recovered to the expected levels, in spite of not deleting the snapshots after the entire run. The restore of a SnapView hot backup is faster than a RMAN disk-to-disk restore. 72
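During the test above, the database is placed in hot backup mode while the SnapView snapshot is created. Conceptually, the sequence driven by the Replication Manager job is equivalent to the following sketch, which uses the database-wide backup mode available in Oracle Database 10g and later:

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> ALTER DATABASE BEGIN BACKUP;
-- The SnapView snapshot of the database LUNs is taken at this point
SQL> ALTER DATABASE END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;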

73 Chapter 7: Testing and Validation Section D: Basic Protect solution component Overview The Basic Protect solution component was designed to test the disaster recovery functionality already built into the Oracle RAC of the validated configuration. The Basic Protect solution component assumes that there are no additional costs to the customer in terms of software or hardware. Oracle Data Guard was used for the Basic Protect solution. Test configuration The test configuration for the Basic Protect solution component was identical to the Store solution component. Functional validation only Only functional validation was done for the Basic Protect solution component. No tuning or performance measurements were carried out. Test procedure The following procedure was used to validate the use of Data Guard with the validated configuration: Step Action 1 Configure Data Guard for Maximum Performance mode. SQL> select PROTECTION_MODE, PROTECTION_LEVEL, DATABASE_ROLE from v$database; PROTECTION_MODE PROTECTION_LEVEL DATABASE_ROLE MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE PHYSICAL STANDBY 2 Enable automatic archival on the database. SQL> ALTER DATABASE FORCE LOGGING; 3 Place the primary database in Force Logging mode. 4 Set the initialization parameters for the primary database. a) Create a parameter file from the spfile used by the primary database. b) Use the following commands to create the pfile initmterac4.ora for the primary database: sqlplus / as sysdba; Create pfile= /u02/dataguard/initmterac4.ora from spfile; 5 Modify the pfile initmterac4.ora for the Data Guard configuration as shown below: ## *.log_archive_dest_1='location=+srcarch/mterac4/' 73

74 Chapter 7: Testing and Validation **************************************************** ****************** ADD THE FOLLOWING FOR PRIMARY DATABASE DATAGUARD CONFIG **************************************************** ****************** db_unique_name=mterac4 log_archive_config= dg_config=(mterac4,mterac4_sb) log_archive_dest_1= LOCATION=+SRCARCH/mterac4/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=mterac4' log_archive_dest_2= service=mterac4_sb VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=mterac4_sb' log_archive_dest_state_1=enable log_archive_dest_state_2=enable fal_server=mterac4_sb fal_client=mterac4 db_file_name_convert='+srcdata/mterac4/','+srcdata/m terac4/' log_file_name_convert= +srclog1/mterac4/, +srclog1/ mterac4/, +srclog2/mterac4/, +srclog2/mterac4 standby_file_management=auto 6 Modify the spfile parameters using the parameter file initmterac4.ora : SQL>Create spfile= +SRCDATA/mterac4/spfilemterac4.ora from pfile=/u02/dataguard/initmterac4.ora ; 7 Copy a complete set of database datafiles to the destination CLARiiON LUNs using RMAN. 8 Create the standby control file for the standby database, and then open the primary database to user access. SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/u02/dataguard/mterac4_sb.ctl'; SQL> ALTER DATABASE OPEN; 9 Create the parameter file for the standby database. The parameter values for the standby database are the same as the primary database. 10 Create and modify the /u02/dataguard/initmterac4_sb.ora file as follows: db_unique_name=mterac4_sb log_archive_config= dg_config=(mterac4,mterac4_sb) log_archive_dest_1= LOCATION=+SRCARCH/mterac4/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=mterac4_sb' 74

75 Chapter 7: Testing and Validation log_archive_dest_2= service=mterac4 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=mterac4' log_archive_dest_state_1=enable log_archive_dest_state_2=enable fal_server=mterac4 fal_client=mterac4_sb db_file_name_convert='+srcdata/mterac4/','+srcdata/m terac4/' log_file_name_convert= +srclog1/mterac4/, +srclog1/mterac4_sb/, +srclog2/mterac4/, +srclog2/ mterac4_sb/ standby_file_management=auto *.control_files= /u02/dataguard/mterac4_sb.ctl 11 Modify the tnsnames.ora file on all primary database nodes and standby database nodes to include all primary and standby net service names. This allows transparent failover. 12 Create a password file. cd /u01/app/oracle/product/10.2.0/db_1/dbs mv orapwmterac41 orapwmterac41_old mv orapwmterac42 orapwmterac42_old mv orapwmterac43 orapwmterac43_old mv orapwmterac44 orapwmterac44_old orapwd file=orapwmterac41 password=nasadmin orapwd file=orapwmterac42 password=nasadmin orapwd file=orapwmterac43 password=nasadmin orapwd file=orapwmterac44 password=nasadmin orapwd file=orapwmterac4_sb1 password=nasadmin orapwd file=orapwmterac4_sb2 password=nasadmin orapwd file=orapwmterac4_sb3 password=nasadmin orapwd file=orapwmterac4_sb4 password=nasadmin Important The password for the SYS user must be identical on every node (both primary and standby) for successful transmission of the redo logs. 12 Create the following directory in all standby nodes: /u01/app/oracle/product/10.2.0/db_1/admin/mterac4_sb 13 Create the following directories for each /u01/app/oracle/product/10.2.0/db_1/admin/mterac4_sb directory in all 75

76 Chapter 7: Testing and Validation standby nodes: adump bdump cdump udump 14 Register the standby database and database instances with the Oracle Cluster Registry (OCR) using the server control utility. srvctl add database -d mterac4_sb -o /u01/app/oracle/product/10.2.0/db_1 srvctl add instance -d mterac4_sb -i mterac4_sb1 -n mteoradb9 srvctl add instance -d mterac4_sb -i mterac4_sb2 -n mteoradb10 srvctl add instance -d mterac4_sb -i mterac4_sb3 -n mteoradb11 srvctl add instance -d mterac4_sb -i mterac4_sb4 -n mteoradb12 15 Update the value of ORACLE_SID on all standby nodes (mteoradb9, mteoradb10, mteoradb11 and mteoradb12). This setting is contained in the.bash_profile.bash file for all Oracle users. 16 Copy the modified pfile from the standby database (initmterac4_sb.ora) to the standby database nodes. 17 Copy the parameter file and the standby control file to the standby database nodes. Note Because the ASM file system was used to configure the database, no utilities were supported by Oracle to copy the files between ASM disk groups. The control files were kept under the OCFS2 file system (/u02/dataguard/mterac4_sb.ctl). To use Oracle s utility DBMS_FILE_TRANSFER to copy the contents to ASM disk groups, open the DR database in write mode. However, this defeats the purpose of the standby database. The standby database should not be opened in write mode at the standby site during normal operation. See the initialization parameter file in the previous procedure for the control_files parameter. 18 Perform the following steps to bring up the database: SQL> startup nomount; ORACLE instance started. Total System Global Area bytes Fixed Size bytes Variable Size bytes Database Buffers bytes Redo Buffers bytes 76

77 Chapter 7: Testing and Validation SQL> alter database mount; Database altered. SQL> show parameter control_files; NAME TYPE VALUE control_files string /u02/dataguard/mterac4_sb.ctl 19 Create the standby redo log files on the standby database. Important The size of the current standby redo log files must exactly match the size of the current primary database online redo log files. 20 Start the managed recovery and realtime apply on the standby database. The statement includes the DISCONNECT FROM SESSION option so that redo apply ran in a background session. SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION; 21 Verify the status of the existing logs on the primary and standby database. Primary database SQL> select GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS from v$log; GROUP# THREAD# SEQUENCE# ARC STATUS YES INACTIVE NO CURRENT YES INACTIVE NO CURRENT YES INACTIVE NO CURRENT YES INACTIVE NO CURRENT 8 rows selected. Secondary database SQL> select GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS from v$log; GROUP# THREAD# SEQUENCE# ARC STATUS YES CLEARING YES CLEARING_CURRENT YES CLEARING YES CLEARING_CURRENT YES CLEARING 77

78 Chapter 7: Testing and Validation YES CLEARING_CURRENT YES CLEARING YES CLEARING_CURRENT 8 rows selected. 22 Force a log switch to archive the current online redo log file on the primary database. SQL> alter system archive log current; System altered. 23 Archive the new redo data on the standby database. To verify whether the standby database received and archived the redo data, on the standby database, query the V$ARCHIVED_LOG view. SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME 2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#; 24 Apply the new archived redo log files. To verify whether the archived redo log files were applied, on the standby database, query the V$ARCHIVED_LOG view. SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#; SEQUENCE# APP YES 46 YES 46 YES 47 YES 47 YES 47 YES 48 YES 48 YES 48 YES 49 YES 49 YES SEQUENCE# APP YES 50 YES 50 YES 50 YES 51 NO 51 NO 51 YES 79 YES 80 YES 81 YES 82 YES SEQUENCE# APP

79 Chapter 7: Testing and Validation 83 YES 84 YES 85 NO 25 rows selected. Note The last redo log is not cleared. This is normal as another log is being applied by the time the last redo log is cleared. Test results Summary The Basic Protect solution component was used for both sites. No attempt was made to emulate a WAN connection as this round of testing included only functionality tests. The production and disaster recovery (DR) sites are directly connected. Conclusion This solution component uses tools provided by the operating system and database server software to provide disaster recovery. This is a very effective solution component that uses the database server s CPU, memory, and I/O channels for all operations relating to the disaster recovery configuration. Section E: Advanced Protect solution component using EMC MirrorView and Oracle Data Guard Overview of Advanced Protect solution component The purpose of this test series was to test the disaster protection functionality of the validated configuration in conjunction with EMC MirrorView. The Advanced Protect solution component uses MirrorView to enhance the disaster recovery functionality of the blended FCP/NFS solution. The use of Oracle Data Guard to create a disaster recovery configuration is an established best practice. MirrorView/Asynchronous (MirrorView/A) over iscsi is commonly used as a way of seeding the database for the Data Guard configuration. After the production database was copied to the target location, redo log shipping was then established using Data Guard. No attempt was made to emulate a WAN connection in this round of testing. The production site and the disaster recovery (DR) site were directly connected over a LAN. The advantages of MirrorView are: The data can be replicated over a long distance. Local access is not required. No downtime on the source database is required. 79

80 Chapter 7: Testing and Validation Test configuration The test configuration for the Advanced Backup solution component was identical to the Store solution component. Test procedure The following procedure was used to validate the Advanced Protect solution component: Step Action 1 Close all the Benchmark Factory agents that are running. 2 Close the Benchmark Factory console. 3 Restart the Benchmark Factory console and agents. 4 Stop and restart the database instances. 5 Start the Benchmark Factory test with the user load ranging from 4000 to When the user load reaches iteration 5000, run a script to mirror the LUNs. 7 Wait for the Benchmark Factory test to complete and for the mirrors to synchronize and to become consistent. 8 Fracture the mirrors. 9 Copy the standby control file and the parameter file that is created from production to the target host. 10 Update the parameters at the target host. 11 Start the database in mount phase on the target host. 12 Do the remaining tasks to enable Data Guard to carry out redo log shipping. 13 Do the switchover and switchback. Test results Summary Testing showed that a Data Guard configuration could be successfully initialized (or seeded ) using MirrorView. However, performance was limited by Data Guard to only 9800 users and TPS. Data Guard is a viable option for remote replication but clearly creates a performance penalty. Testbed performance The aggregate performance for the entire testbed was as follows: Machine Users TPS Response Time (seconds) Mterac
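Step 13 of the procedure above exercises the switchover and switchback. A minimal switchover sketch for this configuration, using the mterac4 and mterac4_sb unique names from Section D, is shown below; with a RAC primary, all but one instance must be shut down before the switchover is issued. The switchback repeats the same steps in the opposite direction.

On the primary (mterac4):
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
On the standby (mterac4_sb):
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
SQL> ALTER DATABASE OPEN;
On the new standby (the former primary), resume redo apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;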

81 Chapter 7: Testing and Validation Section F: Advanced Protect solution component using EMC RecoverPoint Overview of Advanced Protect solution component The purpose of this solution component was to replicate a small RAC database from the production site and to bring up the replicated database at the target site as a single-instance database on a VMware host. Functional validation only Only functional validation was done for the Advanced Protect solution component using EMC RecoverPoint with CLARiiON (CX) splitters. No tuning or performance measurements were carried out. EMC plans to carry out performance testing during the next test cycle. Test configuration The test configuration as shown in the image below was used to validate the Advanced Protect solution component. 81

Test procedure
The following procedure was used to validate the functionality of RecoverPoint with the Advanced Protect solution component:

Step  Action
1     Create a small database at the production site using the LUNs listed below.
2     Create consistency groups comprising the DATA and REDO LOG LUNs.
3     Establish the replication pairs for these LUNs.
4     Enable the consistency groups.
5     Verify that the replication starts successfully.
6     While the replication is in progress, create a table named foo at the source site and insert a row (see the sketch after this table).
7     At the target site, access the latest system-defined bookmark image and discover the replicated LUNs on the VMware ESX server.
8     At the target site, map the LUNs discovered by the ESX server onto the VMware host using Raw Device Mapping (RDM).
      Note: Detailed instructions for mapping the LUNs from an ESX server to a VMware host using RDM are available in Chapter 8: Virtualization > VMware ESX server > LUN discovery.
9     Once the LUNs are mapped to the VMware host, discover the corresponding PowerPath devices and ASM disks.
10    Mount the ASM disk groups and modify the pfile parameters so that the replicated database can be brought up as a single-instance database.
11    Bring up the single-instance database at the target site and verify that the entries inserted into table foo at the source site have been successfully replicated to the target site.
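Step 6 needs only a trivial amount of SQL to give the replication something verifiable to carry. A minimal sketch, run against the source database, is shown below; the table's column layout is an assumption used purely for illustration.

#!/bin/bash
# Sketch for step 6: create table foo at the source site and insert one row
# that can later be checked at the target site (step 11).
. ~/.bash_profile
sqlplus -s / as sysdba <<EOF
CREATE TABLE foo (id NUMBER, note VARCHAR2(64));
INSERT INTO foo VALUES (1, 'row written before the RecoverPoint bookmark');
COMMIT;
EXIT
EOF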

LUN designation
A single VMware host was used to bring up a single-instance database at the target site. The source LUNs were designated as follows:

Name     Number    Type    RAID    Size
Data     320       FC      R5      2 GB
Log1     321       FC      R5      1 GB
Arch     330       FC      R5      2 GB
Log2     331       FC      R5      1 GB
Flash    340       FC      R5      2 GB

The target LUNs were designated as follows:

Name            Number    Type    RAID    Size
Data replica    320       FC      R10     2 GB
Log1 replica    321       FC      R10     1 GB
Arch replica    330       FC      R10     2 GB
Log2 replica    331       FC      R10     1 GB
Flash replica   340       FC      R10     2 GB

Test results

Summary
The objective was to replicate a small RAC database from the source site and bring up the replicated database at the target site as a single-instance database on a VMware host.

Test results
There was no attempt to do any tuning or performance measurements during this cycle. The performance impact is planned to be validated during the next test cycle.

Conclusion
EMC RecoverPoint with CX splitters can be used to successfully replicate a database in the context of the validated configuration described in the reference architecture for this solution.

EMC RecoverPoint CRR complements EMC's existing portfolio of remote-replication products by adding heterogeneous replication (with bandwidth compression) in asynchronous-replication environments, which lowers multi-year total cost of ownership. RecoverPoint CDP, offered as a stand-alone solution or combined with CRR, enables customers to roll back to any point in time for effective local recovery from events such as database corruption.

With the CLARiiON splitter, host-based splitters and expensive intelligent fabric switches are not required, which results in a significant cost reduction for customers.

Section G: Test/Dev solution component using EMC SnapView clone

Overview of Test/Dev solution component
The Test/Dev solution component provides a rapid, high-performance method for copying a running Oracle 11g/10g database. The copy can then be used for testing and development purposes.

SnapView clones
EMC SnapView clones can be used to clone a running Oracle RAC 11g/10g production database for rapidly creating testing and development copies of this database.

A number of best practices should be followed.

Archived log and flashback recovery area LUNs
While cloning the LUNs, it is better to avoid cloning the archived log and flashback recovery area LUNs. These LUNs are usually large and are typically stored on LCFC or SATA II disks. The combination of these factors means that cloning these LUNs takes much longer than cloning the datafile LUNs. If required, you can configure a separate set of LUNs for the archived logs and flashback recovery area of the testing or development database. Since the test/dev database can be easily refreshed, you may choose to simply skip backing up these databases; in this case, a flashback recovery area is not required.

In order for the test/dev database to perform a successful recovery, the archived logs from the production database should be accessible to the test/dev database. You must always move the archived log destination of the test/dev database to a new set of LUNs before opening this database for transactional use (see the sketch following the test configuration description below). Significant transactional I/O on the test/dev database could create archived logs. If you do not change the archived log destination of the test/dev database, and it uses the same LUNs for storing the archived logs as the production database, the test/dev database could overwrite the archived logs of the production database. This could destroy the recoverability of the production database. This setting is contained in the pfile or spfile of the test/dev database in the parameters LOG_ARCHIVE_DEST_1 to LOG_ARCHIVE_DEST_10.

SyncRate
While initializing the clone, the most important setting is the SyncRate. If you need the test/dev database to be created rapidly, specify the SyncRate option as high. This speeds up the synchronization process, but at the cost of a greater performance impact on the production database. If performance on the production database is your primary concern, specify the SyncRate option as low. Here is an example of the naviseccli command using the SyncRate option:

naviseccli -address <SP IP address> snapview -addclone -name lun0clonegrp -luns 50 -SyncRate <high|medium|low|value>

If you do not specify this option, the default is medium.

Test configuration
The test configuration for the Test/Dev solution component was identical to the Store solution component. A storage group named EMC Replication Storage was created and populated with target LUNs for storing replicas. Replication Manager takes care of copying data from the source LUNs to the target LUNs. The time required to create the clone depends on the size of the source LUN. For this test, only the data and redo log LUNs were cloned. Because of the amount of time required, archived logs and flashback recovery area files were not cloned. Only the source archive and the flash LUNs were used to bring up the cloned database.
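Relocating the archive destination of the test/dev database, as recommended above, is a single initialization-parameter change. The sketch below is illustrative only; +ARCH_TESTDEV is an assumed ASM disk group name, not a value from the validated configuration.

#!/bin/bash
# Sketch: point the test/dev database at its own archived log destination so it
# cannot overwrite the production database's archived logs.
# +ARCH_TESTDEV is an assumed disk group name.
. ~/.bash_profile
sqlplus -s / as sysdba <<EOF
ALTER SYSTEM SET log_archive_dest_1='LOCATION=+ARCH_TESTDEV' SCOPE=BOTH SID='*';
EXIT
EOF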

Testing procedure
The following procedure was used to validate the Test/Dev solution component:

Step  Action
1     Configure Replication Manager.
2     Register the production hosts, mount hosts, and storage in Replication Manager.
3     Create the application set in Replication Manager for the database to be replicated.
4     Create a job in the Replication Manager console to create a SnapView clone.
5     Close all the Benchmark Factory agents that are running.
6     Close the Benchmark Factory console.
7     Restart the Benchmark Factory console and agents.
8     Stop and restart the database instances.
9     Start the Benchmark Factory test, ranging the user load upward from 4000.
10    When the user load reaches iteration 6000, create the replica of the database by running the job in the Replication Manager console. Monitor the performance impact on the production database.
11    Capture the results when the Benchmark Factory test has completed.
12    Discover the PowerPath devices and ASM disks for the mount host (target database server); see the sketch at the end of this section.
13    Perform the mount and recovery of the replica (SnapView clone) on the mount host using Replication Manager.
14    Capture the time taken to recover the database.

Test results

Summary
The SnapView clone was successfully validated. The performance impact of the operation was minor, and performance was very similar to the baseline.

Conclusion
The CLARiiON SnapView clone feature works with the validated configuration and can be performed successfully through Replication Manager. In most test runs, a modest performance hit was observed during hot backup; however, this was temporary. Performance recovered to the expected levels after 10 to 15 minutes. This again depends on the size of the database.
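Step 12 is typically a matter of rescanning devices on the mount host. The sketch below shows one common way to do this with PowerPath and the oracleasm library driver; it assumes both are installed on the mount host and is not the exact command sequence used during validation.

# Sketch for step 12: rescan for the cloned LUNs on the mount host and make the
# ASM disks visible to the oracleasm library driver.
powermt display dev=all | grep -i clariion    # confirm the PowerPath pseudo devices are present
/etc/init.d/oracleasm scandisks               # rescan for ASM disk labels on the new devices
/etc/init.d/oracleasm listdisks               # the cloned disks should now be listed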

Testbed performance
The aggregate performance for the entire testbed was as follows:

Machine    Users    TPS    Response Time (seconds)
Mterac

Section H: Backup Server solution component

Overview
The purpose of the Backup Server solution component is to offload the burden of backup from the production database servers to a utility database server running within a VM. The Backup Server solution component is becoming more popular because of higher-performance disk-to-disk snapshot technology (using Replication Manager).

Oracle Database 11g RMAN provides the CATALOG BACKUPPIECE command. This command adds information about backup pieces of the target database that reside on disk to the production database's RMAN repository. The backup pieces should be in a shared location. As long as the backup pieces are accessible to both the production and target databases, RMAN commands such as RESTORE and RECOVER behave transparently across the different databases.
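Cataloging the backup pieces taken on the utility server requires only a short RMAN session on the production side. The sketch below is illustrative; the shared flashback recovery area path and the backup piece name are assumptions, not the paths used in the validated testbed. Either an individual piece or an entire directory can be cataloged.

#!/bin/bash
# Sketch: register backup pieces written by the utility (backup) server in the
# production database's RMAN repository. The /u06 paths are assumed values.
. ~/.bash_profile
rman target / <<EOF
CATALOG BACKUPPIECE '/u06/mterac16/backupset/o1_mf_example_.bkp';
CATALOG START WITH '/u06/mterac16/backupset/';
EXIT
EOF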

Test configuration
The test configuration for the Backup Server solution component was identical to the Store solution component.

Test procedure
The following procedure was used to validate the Backup Server solution component:

Step  Action
1     Close all the Benchmark Factory agents that are running.
2     Close the Benchmark Factory console.
3     Restart the Benchmark Factory console and agents.
4     Stop and restart the database instances.
5     Start the Benchmark Factory test, ranging the user load upward from 4000.
6     When the user load is at iteration 5600, initiate cloning of the production database with Replication Manager.
7     Start the ASM instance on the target server, mount the database, and run the RMAN backup.
8     Place the RMAN backup pieces in a shared flashback recovery area. The backup pieces should be accessible to both the target and production servers.
9     Catalog the backup pieces using the CATALOG BACKUPPIECE command within RMAN on the production server.
10    Shut down the production server.
11    On the production server, perform restore and recovery of the backup taken on the target server.

Section I: Migration solution component

Overview
The Migration solution component demonstrates that EMC Replication Manager can be used to migrate an Oracle 11g database mounted on NFS to a target database mounted on FCP/ASM with minimal performance impact and no downtime of the production database.

Test objectives
This test was a functionality validation only, covering migration of an Oracle 11g database from a NAS to a SAN configuration and from a SAN to a NAS configuration. The performance impact on the production database during online migration was not validated.

Test configuration
The test configuration for the Migration solution component was identical to the Store solution component.

Test procedures
The following procedures were used to validate the Migration solution component.

Migrating an online Oracle Database from SAN to NAS
These steps were followed to perform the SAN to NAS migration operation:

Step  Action
1     Using EMC Replication Manager, a consistent backup of the running physical production database is performed on the CLARiiON, utilizing a SnapView checkpoint snapshot.
2     This backup is mounted (but not opened) on the migration server, in this case a physically booted server. The NFS target array is also mounted on the migration server.
3     Using Oracle Recovery Manager (RMAN), a backup of this database is taken onto the target location. This backup is performed as a database image, so that the datafiles are written directly to the target NFS mount (see the RMAN sketch after this procedure).
4     The migration server is then switched to the new database, which has been copied by RMAN to the NFS mount.
5     The target database is set in Data Guard continuous recovery mode, and Data Guard log ship/log apply is used to catch the physical target database up to the physical production version.
6     Once the physical target database has caught up with production, Data Guard failover can be used to retarget clients to the physical target database. If the appropriate networking configuration is performed, clients will not see any downtime when this operation occurs.

The result is that the physical production FCP-mounted database can be migrated to NFS with minimal performance impact and no downtime.
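Steps 3 and 4 can be expressed as a short RMAN session on the migration server. The sketch below shows the general shape under an assumed NFS mount point of /u07/migrate: an image copy of the database is written directly to the NFS mount, and the control file is then switched to the copied datafiles. This is illustrative only, not the exact commands used in the validated configuration.

#!/bin/bash
# Sketch for steps 3-4 of the SAN-to-NAS migration: write a database image copy
# to the NFS mount and switch the (mounted, not open) database to that copy.
# /u07/migrate is an assumed NFS mount point.
. ~/.bash_profile
rman target / <<EOF
BACKUP AS COPY DATABASE FORMAT '/u07/migrate/%U';
SWITCH DATABASE TO COPY;
EXIT
EOF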

Test results

Summary
We were able to validate the functionality of migrating an online production Oracle database between NFS and FCP storage. This was accomplished with minimal performance impact and no downtime on the production server. The test showed that the impact on the production database was the same as that observed for the Backup Server solution component.

Conclusion
The ability to migrate a database between NAS and SAN is a frequent customer request. Customers may need to switch from SAN to NAS, or from NAS to SAN, based on ever-changing requirements. This solution proves that EMC Replication Manager can be a very effective application for achieving this goal with ease, and with no impact on or downtime of the production database.

Chapter 8: Virtualization

Overview

Introduction to virtualization
Virtualization lets a customer run multiple virtual machines on a single physical machine, sharing the resources of that single computer across multiple environments. Different virtual machines can run different operating systems and multiple applications on the same physical computer.

The VMware virtualization platform is built on a business-ready architecture. Customers can use software such as VMware vSphere and VMware ESX to transform, or virtualize, the hardware resources of an x86-based computer (including the CPU, RAM, hard disk, and network controller) to create a fully functional virtual machine that can run its own operating system and applications just like a real (or physical) computer. Server virtualization offers energy efficiencies, cost savings, and better management of service level agreements. VMware ESX abstracts server processor, memory, storage, and networking resources into multiple virtual machines. By doing so, it can dramatically improve the utilization of these resources.

Using virtualization
Virtualized Oracle database servers were used as targets for test/dev, backup, and disaster recovery for this solution. These servers are more conveniently managed as virtual machines than as physically booted Oracle database servers. The advantages of consolidation, flexible migration, and so forth, which are the mainstays of virtualization, apply to these servers very well.

A single VMware Linux host was used as the target for test/dev, backup, and disaster recovery. For test/dev, the target database was brought up as a single-instance database on the VMware host. Similarly, the standby database for disaster recovery was a single-instance database running on a VMware host.

This chapter provides procedures and guidelines for installing and configuring the virtualization components that make up the validated solution scenario.

Advantages of virtualization

Advantages
Some advantages of including virtualized test/dev and disaster recovery (DR) target servers in the solution are:
- Consolidation
- Flexible migration
- Cloning
- Reduced costs

Considerations

Virtualized single-instance Oracle only
Due to the requirement for RAC qualification, there is presently no support for Oracle 11g and 10g RAC servers on virtualized devices. For this reason, EMC does not publish such a configuration as a supported and validated solution. However, the use of Oracle Database 11g and 10g (in single-instance mode) presents far fewer support issues.

VMware infrastructure

Setting up the virtualized utility servers
For details on setting up the virtualized utility servers, see Chapter 6.

Virtualization best practices

VMotion storage requirements
You must have a common storage network configured on both the source and target ESX servers to perform VMotion. The network configuration, including the vSwitch names, should be exactly the same. The connectivity to the LUNs on the back-end storage from the ESX servers should also be established in the same way.

ESX servers must have identical configuration
All ESX servers must have an identical configuration, other than the IP address for the VMkernel port.

Dedicated private connection
When NFS connectivity is used, it is a best practice to have a dedicated private connection to the back-end storage from each of the VMs. We did the following:
- Assigned four NICs (one NIC for each VM) on the ESX server
- Assigned private IPs to the NICs
- Set up the connectivity from these four NICs to the Data Movers of the back-end storage using a Dell PowerConnect switch

NFS mount points
If the Oracle database files sit on NFS storage, the NFS share should be mounted as a file system within the Linux guest VM using /etc/fstab. This can deliver vastly superior performance when compared to storing Oracle database files on virtual disks that reside on an NFS share and are mounted as NFS datastores on an ESX server.
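A minimal /etc/fstab entry of this kind is shown below. The Data Mover host name, export path, and mount point are assumptions, and the mount options follow common Oracle-over-NFS practice rather than the exact options used in the validated configuration.

# Example /etc/fstab line inside the Linux guest VM (server, export, and mount point are assumed):
celerra-dm2:/oradata  /u02  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0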

VMware ESX server

Typical storage network configuration
The image below contains some detail from the Configuration > Network tab on the VMware vSphere vCenter server. The storage network configuration is shown. We used two physical NICs to support the storage network. This provides physical port redundancy, as well as link aggregation and load balancing across the NICs. Ports are provided both for the NFS mounts on the VMs and for the NFS mounts on the ESX servers.

92 Chapter 8: Virtualization Storage configuration In the image below, the VMkernel Storage Network is being used to store the files for the VMs (through NFS). The storage pane shows that the NFS-mounted volume vm is where these files are stored. LUN discovery LUNs can be discovered on an ESX server in two ways: The first method uses Virtual Machine File System (VMFS). The second method uses Raw Device Mapping (RDM). EMC recommends using RDM for discovering the LUNs on the ESX server because RDM provides better disk I/O performance while still supporting VMotion. VMware and NFS NFS mounts in ESX NFS is a viable storage option for VMware ESX. It provides a simple and manageable storage networking alternative to FCP or iscsi. NFS does not require the ESX servers to run a clustered file system (in this case VMFS). For the utility servers used in this solution, NFS was used to store the OS images for the VMs. 92

93 Chapter 9: Backup and Restore Chapter 9: Backup and Restore Overview Introduction to backup and restore A thoughtful and complete backup strategy is an essential part of database maintenance in a production environment. Data backups are an essential part of any production environment. Regardless of the RAID protection level, hardware redundancy, and other high-availability features present in EMC Celerra storage arrays, conditions exist where you may need to be able to recover a database to a previous point in time. This solution used EMC SnapView to free up the database server s CPU, memory, and I/O channels from the effects of operations relating to backup, restore, and recovery. Scope This section covers the use of SnapView snapshots to perform backup and restore operations on Oracle RAC database servers. Important note on scripts The scripts provided assume that the passwordless authentication is set up using ssh between the oracle user account and the Celerra Control Station. Passwordless authentication allows the oracle user account to issue commands to the Control Station within a script. Instructions on how to accomplish this can be found in Chapter 6: Installation and Configuration > Task 10: Enable passwordless authentication using SSH. 93

94 Chapter 9: Backup and Restore Section A: Backup and restore concepts Physical storage backup A full and complete copy of the database to a different physical media. Logical backup A backup that is performed using the Oracle import/export utilities. The term logical backup is generally used within the Oracle community. Logical storage backup Creating a backup using a storage replication technology such as snapshots. A snapshot is a copy of the database that does not physically exist. Rather, it consists of the blocks in the active file system, combined with blocks in a SavVol, an area where the original versions of the updated blocks are retained. The effect of a logical storage backup is that a view of the file system as of a certain point in time can be assembled. Often, snapshots are writable, by retaining the updates to blocks in the snapshot in the SavVol. Unlike a physical storage backup, a snapshot-based backup can be taken very rapidly, and requires very little space to store (typically a small fraction of the size of a physical storage backup). Important Taking logical storage backups is not enough to protect the database from all risks. Physical storage backups are also required to protect the database against double disk failures and other hardware failures at the storage layer. Both physical and storage-based backups are recommended. Flashback Database The Oracle Flashback Database command enables you to restore an Oracle database to a recent point in time, without first needing to restore a backup of the database. EMC SnapView snapshot The EMC SnapView snapshot allows a database administrator to create a point-intime copy of the database that can be made accessible to another host or simply held as a point-in-time copy for possible restoration. Advantages of logical storage Recovery from human errors Logical backup protects against logical corruption of the database, as well as accidental file deletion, and other similar human errors. Frequency without performance impact The logical storage operation is very lightweight and, as a result, a logical storage 94

95 Chapter 9: Backup and Restore backup can be taken very frequently. Most customers report that they cannot perceive the performance impact of this operation because it is so slight. Reduced MTTR Depending on the amount of data changes, restoring from a logical storage backup can occur very quickly. This dramatically reduces mean time to recovery (MTTR) compared to what can be achieved restoring from a physical backup. Less archived redo logfiles Due to the high frequency of backups, a small number of archived redo log files need to be applied if a recovery is needed. This further reduces mean time to recovery. Section B: Backup and recovery strategy Logical storage backup using EMC SnapView and EMC Replication Manager SnapView overview A logical storage backup consists of virtual copies of all LUNs being used to store datafiles. This is enabled by EMC CLARiiON SnapView. Using EMC consistency technology, multiple LUNs can be used to store an automated storage management (ASM) disk group and snapshots of all of those LUNs can be created in a consistent manner. SnapView snapshot A SnapView snapshot is a virtual point-in-time copy of a LUN. This virtual copy is assembled by using a combination of data in the source LUN, and the before images of updated blocks that are stored on the CLARiiON target array in the reserved LUN pool (RLP). In Replication Manager, the RLP is referred to as the snap cache. We will adhere to Replication Manager terminology and use the term snap cache. Multiple restore points using EMC SnapView The following image compares backup using SnapView to conventional backup over a typical 24-hour period. 95

Before: conventional backup to tape, with a single restore point at midnight. After: Celerra SnapSure / CLARiiON SnapView, with restore points at midnight, 4:00 a.m., 8:00 a.m., noon, 4:00 p.m., and 8:00 p.m.

Rapid restore and recovery using EMC SnapView
The following image compares restore and recovery using SnapView to conventional backup over a typical 24-hour period.

Before: tape backup, where restore (moving data files) and recovery (applying logs) take multiple hours, with correspondingly greater data loss. After: Celerra SnapSure / CLARiiON SnapView, with backup and recovery times measured in minutes.

Replication Manager
EMC Replication Manager automates the creation and management of EMC disk-based point-in-time replicas. Replication Manager integrates with the Oracle database server and provides an easy interface to create and manage Oracle replicas.

Logical storage process
A typical backup scheme would use six logical storage backups per day, at four-hour intervals, combined with one physical storage backup per day. The procedure described next can be integrated with the Oracle Enterprise Manager job scheduling process or cron, and used to execute a logical storage backup once every four hours.
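A cron schedule for six logical storage backups per day is a one-line entry in the oracle user's crontab. The script and log paths below are assumptions; the entry simply launches whatever wrapper is used to run the Replication Manager snapshot job described next.

# Illustrative crontab entry: run the snapshot job wrapper every four hours.
# Script and log paths are assumed, not validated, values.
0 0,4,8,12,16,20 * * * /home/oracle/scripts/run_rm_snapshot_job.bash >> /home/oracle/logs/rm_snapshot.log 2>&1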

Logical storage backup using Replication Manager
The following table outlines the steps used to perform a backup using SnapView snapshots:

Step  Action
1     Create a Replication Manager job for the SnapView snapshot:
      a. Select Jobs, right-click and select New Job.
      b. To go to the Job Name and Settings screen, click Next.
      c. Enter the job name, replication source, replication technology, and number of replicas to be created, then click Next.
      d. Select the mount options, and then click Next.
         Note: If necessary, you can select the mount options at a later point in time.
      e. At the Completing the Job wizard screen, click Finish.
2     Execute a Replication Manager job:
      a. In Replication Manager, select Jobs from the navigation panel.
      b. Select a job, right-click and select Run to execute the job.
      c. To confirm that you want to run the job, click Yes.
      d. When the Running Job progress bar reaches 100%, click Close. The status of the job displays as Successful.
3     To verify the Snapshot Session information, select the appropriate item under Storage Services.
      Note: You can also use Navisphere to view the snapshot sessions that were created using Replication Manager.

Best practice
During backup, EMC recommends that you:
- Switch the log files
- Archive all the log files
- Back up the control file
The instructions on how to carry out these operations are beyond the scope of this document.
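Although the full procedure is beyond the scope of this document, the three best-practice operations listed above map onto a few SQL*Plus commands. The following is a sketch only; the control file backup location is an assumption.

#!/bin/bash
# Sketch of the backup-time best practices above: archive the current log
# (which also forces a log switch), then back up the control file.
# /u06/backup is an assumed location.
. ~/.bash_profile
sqlplus -s / as sysdba <<EOF
ALTER SYSTEM ARCHIVE LOG CURRENT;
ALTER DATABASE BACKUP CONTROLFILE TO '/u06/backup/control_bkp.ctl' REUSE;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
EXIT
EOF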

Section C: Physical backup and restore

RMAN and Celerra
Physical backup of the production array can be accomplished using Oracle RMAN. The target was an NFS mount on the Celerra. The backup target is typically SATA or LCFC disks on the target array. If tape is used with a product that includes a media management layer, such as EMC NetWorker, Oracle Secure Backup must be used. Normal RMAN semantics apply to this backup method. This is thoroughly covered on the Oracle Technology Network website and is not included in this document.

RMAN backup script: rmanbkp.bash
Run the following script from the database server to carry out a physical backup to a Celerra array using Oracle RMAN:

[oracle@mteoradb55 ~]$ cat rmanbkp.bash
#!/bin/bash
. ~/.bash_profile
. /cygdrive/c/common/initialize.bash
echo "This is rmanbkp.bash"
echo "rman"
echo "connect target /"
echo "backup database plus archivelog;"
echo "exit"
rman <<EOF2
connect target /
backup database plus archivelog;
exit
EOF2
echo "Now exiting rmanbkp.bash"
exit
[oracle@mteoradb55 ~]$

System output of rmanbkp.bash
The output our system produces after running rmanbkp.bash is shown below:

[oracle@mteoradb55 ~]$ . rmanbkp.bash
This is rmanbkp.bash
Starting RMAN Backup

Recovery Manager: Release Production on Fri Mar 7 00:49:

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: MTERAC16 (DBID= )

RMAN> backup database;

Starting backup at 07-MAR-08
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=4512 instance=mterac161 devtype=disk
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00008 name=+data/mterac16/datafile/test
input datafile fno=00009 name=+data/mterac16/datafile/test
input datafile fno=00010 name=+data/mterac16/datafile/test
input datafile fno=00011 name=+data/mterac16/datafile/test
input datafile fno=00012 name=+data/mterac16/datafile/test
input datafile fno=00013 name=+data/mterac16/datafile/test
input datafile fno=00014 name=+data/mterac16/datafile/test
input datafile fno=00015 name=+data/mterac16/datafile/test
input datafile fno=00016 name=+data/mterac16/datafile/test
input datafile fno=00017 name=+data/mterac16/datafile/test
input datafile fno=00018 name=+data/mterac16/datafile/test
input datafile fno=00002 name=+data/mterac16/datafile/undotbs
input datafile fno=00005 name=+data/mterac16/datafile/undotbs
input datafile fno=00003 name=+data/mterac16/datafile/sysaux
input datafile fno=00001 name=+data/mterac16/datafile/system
input datafile fno=00006 name=+data/mterac16/datafile/undotbs
input datafile fno=00007 name=+data/mterac16/datafile/undotbs
input datafile fno=00004 name=+data/mterac16/datafile/users
channel ORA_DISK_1: starting piece 1 at 07-MAR-08
channel ORA_DISK_1: finished piece 1 at 07-MAR-08
piece handle=/u06/mterac16/backupset/2008_03_07/o1_mf_nnndf_tag T005008_3x1oz3t8_.bkp tag=tag t comment=none
channel ORA_DISK_1: backup set complete, elapsed time: 01:21:26
Finished backup at 07-MAR-08

Starting Control File and SPFILE Autobackup at 07-MAR-08
piece handle=/u06/mterac16/autobackup/2008_03_07/o1_mf_s_ _3x1to7tb_.bkp comment=none
Finished Control File and SPFILE Autobackup at 07-MAR-08

RMAN>

End of RMAN backup rmanbkp.bash

Section D: Replication Manager in Test/Dev and Advanced Backup solution components

Overview
The Test/Dev and Advanced Backup solution components are integrated with EMC Replication Manager. This has significant advantages, in that Replication Manager provides a layered GUI application to manage these processes, including a scheduler so that the jobs can be run on a regular basis. Replication Manager, however, introduces a few issues that are covered in this section.

Oracle home location
Currently, Replication Manager does not support ASM and Oracle having separate Oracle homes. This may be confusing, because the Oracle installation guide presents an installation in which ASM is located in its own home directory.

Important: If you choose to use Replication Manager for storage replication management, install Oracle and ASM in the same home directory.

Dedicated server process
Replication Manager cannot create an application set when connected to the target database using SHARED SERVER. Replication Manager requires a dedicated server process. In the TNSNAMES.ORA file, you must modify the value of SERVER as shown below to connect to the target database. This is only needed for the service that is used for the Replication Manager connection.

# tnsnames.ora Network Configuration File:
# /u01/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

MTERAC211 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mteoradb67-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mterac21)
      (INSTANCE_NAME = mterac211)
    )
  )

Chapter 10: Data Protection and Replication

Overview

Introduction
This solution provides options to create local and remote replicas of application data that are suitable for testing, development, reporting, disaster recovery, and many other operations that can be important in your environment.

Section A: Basic Protect using Oracle Data Guard

Overview
Oracle Data Guard is used with the Basic Protect solution component. For best practices on Oracle Data Guard configuration, refer to the Oracle documentation on this subject.

EMC CLARiiON SAN Copy and Oracle Data Guard
The best practice for disaster recovery of an Oracle Database 11g/10g over NFS is to use CLARiiON SAN Copy to seed the disaster recovery copy of the production database, and then to use the Oracle Data Guard log transport and log apply services. The source of the database used for seeding the disaster recovery site can be a hot backup of the production database within a SnapView checkpoint. This avoids any downtime on the production server relative to seeding the disaster recovery database.

The configuration steps for shipping the redo logs and bringing up the standby database are accomplished using Oracle Data Guard. The Data Guard failover operation was performed in MAXIMUM AVAILABILITY mode.

The following image illustrates the setup for disaster recovery using CLARiiON SAN Copy and Oracle Data Guard.

102 Chapter 10: Data Protection and Replication Section B: Advanced Protect using EMC MirrorView and Oracle Data Guard Overview The use of Oracle Data Guard to create a disaster recovery configuration is an established best practice. EMC MirrorView Asynchronous (MirrorView/A) over iscsi is commonly used as a way of seeding the database for the Data Guard configuration. Once the production database has been copied to the target location, then redo log shipping can be established using Data Guard. The use of MirrorView over iscsi requires specific network configuration. Various means can be used to bridge an iscsi network over a WAN connection so that the data on an iscsi network can be transmitted over long distances. The advantages of MirrorView are: The data can be replicated over a long distance. Local access is not required. No downtime on the source database is required. EMC assumes that you have established a mechanism to transmit the data from the source to the target array over an IP network using the iscsi protocol. MirrorView mirroring prerequisites Before starting the mirroring procedure: The source and target arrays must be in the same CLARiiON domain. You must designate the source CLARiiON as the master and the target CLARiiON as a domain member. Once this is done, both the source and target arrays appear in the same Navisphere GUI and can be managed together. This is mandatory for MirrorView mirroring to be established. Reserved LUN pool LUNs are required for MirrorView mirroring. You should configure a large number of small LUNs for best results. We configured 20 LUNs of 10 GB in size. 102

Seeding the Oracle database
Use MirrorView to seed the Oracle database for the Data Guard configuration as described in the following table:

Step  Action
1     Create a consistency group on the source array with the following command:

      [root@mteoradb1 db_root]# naviseccli -h  mirror -async \
      -creategroup -name ConsistentDBGroup -o

2     Create a mirror group for the source LUNs, for example:

      [root@mteoradb59 ~]# naviseccli -h  mirror -async -create \
      -name LUN1_LOG1_mirror -lun 1 -requiredimages 1

      Warning: Make sure you are finished enabling paths among all arrays. If not, exit and do so.

3     Verify that all the mirror groups were created successfully using the following command:

      [root@mteoradb59 ~]# naviseccli -h  mirror -async -list

4     Use the following script to add the target LUNs to the mirror groups and also to the consistency group:

      Code Listing: MirrorView/A Data Guard seeding script

      [root@mteoradb59 db_root]# cat mirror_new.bash
      echo "This is mirror.bash"
      DATA_LUNS=" "
      LOG_LUNS="1 2"
      SPA=
      echo "Add the target LOG LUNs for mirroring"
      for i in ${LOG_LUNS}
      do
        echo "Now adding lun LUN${i} of target Clarion to mirror"
        naviseccli -address ${SPA} mirror -async -addimage -name LUN${i}_LOG${i}_mirror \
          -arrayhost  -lun 5${i} -recoverypolicy auto -syncrate high
      done
      echo "Add the target DATA LUNs for mirroring"
      for i in ${DATA_LUNS}
      do
        echo "Now adding lun LUN${i} of target Clarion to mirror"
        naviseccli -address ${SPA} mirror -async -addimage -name LUN${i}_DATA_mirror \
          -arrayhost  -lun 5${i} -recoverypolicy auto -syncrate high
      done
      echo "Now adding mirror for LOG LUNS to consistent group."
      for i in ${LOG_LUNS}
      do
        naviseccli -address ${SPA} mirror -async -addtogroup -name ConsistentDBGroup \
          -mirrorname LUN${i}_LOG${i}_mirror
      done
      echo "Now adding mirror for DATA LUNS to consistent group."
      for i in ${DATA_LUNS}
      do
        naviseccli -address ${SPA} mirror -async -addtogroup -name ConsistentDBGroup \
          -mirrorname LUN${i}_DATA_mirror
      done
      echo "Now exiting mirror.bash"

      Code Listing: The output from the MirrorView/A script

      [root@mteoradb59 db_root]# ./mirror_new.bash
      This is mirror.bash
      Add the target LOG LUNs for mirroring
      Now adding lun LUN51 of target Clarion to mirror
      Now adding lun LUN52 of target Clarion to mirror
      Add the target DATA LUNs for mirroring
      Now adding lun LUN53 of target Clarion to mirror
      Now adding lun LUN54 of target Clarion to mirror
      Now adding lun LUN55 of target Clarion to mirror
      Now adding lun LUN56 of target Clarion to mirror
      Now adding lun LUN57 of target Clarion to mirror
      Now adding lun LUN58 of target Clarion to mirror
      Now adding mirror for LOG LUNS to consistent group.
      Now adding mirror for DATA LUNS to consistent group.
      Now exiting mirror.bash

5     Verify the mirroring status by executing the following command:

      [root@mteoradb59 ~]# naviseccli -h  mirror -async \
      -list -images | grep Progress
      Synchronizing Progress(%): 100
      Synchronizing Progress(%): 100
      Synchronizing Progress(%): 100
      Synchronizing Progress(%): 100
      Synchronizing Progress(%): 100
      Synchronizing Progress(%): 100
      Synchronizing Progress(%): 100
      Synchronizing Progress(%):

6     Once the mirrors are synchronized, fracture them using the following command:

      [root@mteoradb59 ~]# naviseccli -h  mirror -async \
      -fracturegroup -name ConsistentDBGroup -o

      Note: The mirror LUNs become available for I/O only after they are fractured.

Once the seeding of the database is complete, the remaining tasks for shipping the redo logs can be performed.

Shipping the redo logs using Data Guard
The configuration steps for shipping the redo logs and bringing up the standby database are accomplished using Oracle Data Guard. The semantics are covered thoroughly on the Oracle Technology Network website and are not included here.

Data Guard failover operation
The Data Guard failover operation is performed in MAXIMUM AVAILABILITY mode using the following steps:

Step  Action
1     Shut down all the database instances at the production/primary site.

      MTERAC71> shutdown abort
      ORACLE instance shut down.
      MTERAC72> shutdown abort
      ORACLE instance shut down.
      MTERAC73> shutdown abort
      ORACLE instance shut down.
      MTERAC74> shutdown abort
      ORACLE instance shut down.

2     Issue the following commands to change the standby database to primary:

      [MTERAC7 Production Database & MTERAC7-SB Standby Database]

      MTERAC7_SB> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
      no rows selected

      MTERAC7_SB> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
      Database altered.

      MTERAC7_SB> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
      Database altered.

      MTERAC7_SB> shutdown immediate;
      ORA-01109: database not open
      Database dismounted.
      ORACLE instance shut down.

      mteora7_sb> set sqlprompt "NEW_PRIMARY> "
      NEW_PRIMARY> startup
      ORACLE instance started.

      Total System Global Area   bytes
      Fixed Size                 bytes
      Variable Size              bytes
      Database Buffers           bytes
      Redo Buffers               bytes
      Database mounted.
      Database opened.

      NEW_PRIMARY> select DATABASE_ROLE, SWITCHOVER_STATUS, GUARD_STATUS from v$database;

      DATABASE_ROLE    SWITCHOVER_STATUS    GUARD_S
      ---------------- -------------------- -------
      PRIMARY          NOT ALLOWED          NONE

107 Chapter 10: Data Protection and Replication Section C: Advanced Protect using EMC RecoverPoint Introduction EMC RecoverPoint with the CLARiiON (CX) splitter provides several advantages: You do not need to install a dedicated splitter driver on the database servers. You do not need to install a special hardware driver into an FCP switch. This provides savings on both cost and manageability. All of the splitter driver functionality is incorporated into the array. Supporting documentation The following documents, located on Powerlink.com, provide additional, relevant information. Access to these documents is based on your login credentials. If you do not have access, contact your EMC representative: EMC RecoverPoint Administrator s Guide EMC RecoverPoint Installation Guide EMC RecoverPoint EMC RecoverPoint provides integrated continuous remote replication (CRR) and continuous data protection (CDP). The RecoverPoint family enables data to be replicated and recovered from either the local site (CDP) or at a remote site (CRR). RecoverPoint CRR complements the existing portfolio of EMC remote-replication products by adding heterogeneous replication (with bandwidth compression) in asynchronous-replication environments, which lowers multi-year total cost of ownership. RecoverPoint CDP offered as a stand-alone solution, or combined with CRR, enables you to roll back to any point in time for effective local recovery from events such as database corruption. Release 3.0 and later of RecoverPoint includes support for the CLARiiON (CX) splitter driver. These are array-based splitters that can be directly installed on a CLARiiON. 107

108 Chapter 10: Data Protection and Replication Replication with RecoverPoint In normal replication, the target-site volumes are not accessible from the host. This is to prevent the RecoverPoint appliances and the host servers from trying to write to the volumes at the same time. Target-side processing allows the user to roll back the replication volume to a pointin-time snapshot. Once the target replication volumes have been paused on this snapshot, the target host server is given access to these volumes. While the target replication volumes are paused on this snapshot, replication continues from the source site. Snapshots are still stored in the snapshot portion of the target journal volume, but are not distributed to the replication volume because it is paused on a snapshot. Once the replication is complete, the target image can be accessed by selecting Image Access from the drop-down menu. CLARiiON CX splitters The CX splitter is integrated with each storage processor of the CLARiiON array. This sends one copy of the I/O to the storage array and another copy to the RecoverPoint appliance. The main advantages of the CX array splitters are that they: Reduce the cost associated with the additional host-based splitter agents or specialized fabric. Provide concurrent local and remote (CLR) protection. CLR replication can be performed on the same LUNs. When an application writes to storage, the CX splitter splits the data and sends a copy of the data over Fibre Channel to the source-site RPA, and the other copy to the storage array. These requests are acknowledged by the source RPAs and are then replicated to the remote RPAs over the IP network. The remote RPAs will write this data over to journal volumes at the remote site and the consistent data is distributed to the remote volumes later. Functional validation only Only functional validation was done for the Advanced Protect solution using EMC RecoverPoint. No tuning or performance measurements were carried out. EMC plans to carry out performance testing during the next test cycle. Journal volumes If there are multiple consistency groups, then you need to configure journal volumes for each consistency group on different RAID groups so that the journal of one group will not slow down the other groups. You should configure journal volumes on separate RAID groups from the user volumes. Journal volumes can be corrupted if any host writes to it other than the RecoverPoint appliance (RPA). So, you must ensure that the journal volumes are 108

109 Chapter 10: Data Protection and Replication zoned only with RPAs. RecoverPoint performs striping on journal volumes; using a large number from different RAID groups increases the performance. All journal volume LUNs should be of the same size because RecoverPoint uses the smallest LUN size and it stripes the snapshot across all LUNs. It will not be able to stripe evenly across different sized LUNs. The size of the journal volumes should be at least 20 percent larger than the data being replicated. Journal volumes are required on both local and remote sides for Continuous Remote Replication (CRR) to support failover. Repository volumes Repository volumes should be at least 4 GB with an additional 2 GB per consistency group. In a CRR configuration, there must be one repository volume for the RPA cluster on the local and remote sites. WAN compression guidelines If you set a strong compression, this will cause CPU congestion. Conversely, if you set it to low, it will cause high loads. EMC recommends a 5x-10x compression ratio for Oracle databases. Clusters The RecoverPoint clustering does not support one-to-many RPAs between sites. The configuration should have two RPAs on both sites. Zoning When discovering the CX array splitters, you must ensure that all the RPA ports and all the CX SPA/SPB ports are included in the same zone. You must ensure that this zone is present on both sites. This can be accomplished by using a zone that spans multiple FCP switches, if required. Storage volumes The following types of storage volumes are required for RecoverPoint configuration: Repository volume This volume holds the configuration and marking information during the replication. At least one repository volume is required per site and this should be accessible from all the RPAs at the site. Journal volume This volume is used to store all the modifications. The application-specific bookmarks and timestamp details are written to the journal volume. There should be at least one journal copy per consistency group. Best practices information on the configuration of journal volumes is available in 109

110 Chapter 10: Data Protection and Replication the EMC RecoverPoint Installation Guide. Replication set The association created between the source volume and target volume is called the replication set. The source volume cannot be greater in size than the target volume. Consistency group The logical group of replication sets identified for replication is called a consistency group. Consistency groups ensure that the updates to the associated volumes are always consistent and that they can be used to restore the database at any point of time. Conclusion The primary benefits of deploying an Advanced Protect solution component using RecoverPoint with CLARiiON splitters are: Up to 148 percent total cost savings and up to 238 percent in bandwidth cost savings The only replication product that supports both local and remote replication Local and remote replication with any point in time recovery to meet RPO/RTO requirements Network-based architecture optimized for availability and performance Transparent support for heterogeneous server and storage platforms Deploying the CLARiiON splitter driver results in significant cost reductions for customers because low-cost FCP switches, such as QLogic switches, can be used in the place of high-cost intelligent switches. 110

111 Chapter 11: Test/Dev Solution Using EMC SnapView Clone Chapter 11: Test/Dev Solution Using EMC SnapView Clone Overview Introduction There is strong interest in configuring a writeable test/dev copy solution component that does not impact the production server in terms of downtime or performance. CLARiiON SnapView clone SnapView clones You can use EMC CLARiiON SnapView clone with consistency technology to create a test/dev solution using a restartable copy of a running Oracle RAC 11g/10g production database. Doing so has minimal impact on the production database. You can either use SnapView snapshot or SnapView clone to create the copy. With SnapView snapshot, the I/Os that are performed on the database copy will be handled by the same physical hardware as the production database. This can have an impact on the production database performance. With SnapView clone, the cloned LUNs can be created on a completely different set of RAID groups from the original LUNs. As a result, I/O performed on the database copy is handled by different physical hardware from the production database. The database copy s I/O does not impact the production database in any way. The SnapView clone approach requires a full copy of the production database, whereas, the SnapView snapshot approach does not. The read I/O to the production LUNs that is required to create the clone LUNs does impact the production database. However, the ongoing operation of the copy database then has no appreciable impact on the production database. We assume that the SnapView clone approach is preferable for most customers. 111

112 Chapter 11: Test/Dev Solution Using EMC SnapView Clone Use of EMC SnapView clone A clone is a full binary copy of a source LUN. Each clone LUN must be the same size as the source LUN. Best practices Mounting LUNs EMC recommends using a second set of database servers to mount the LUNs and manage them within the oracleasm kernel module. ASM writes a unique signature to each LUN and will not open a LUN containing the same signature as an existing LUN. Consistency technology Consistency technology allows the customer to create a copy (either virtual or physical) of a set of LUNs on either a single array or multiple arrays with the writes to all of the LUNs being in perfect order. The state of the copy LUNs is identical to the state in which the production LUNs would be if the database server was powered off. This allows you to create a restartable database copy; the database engine will perform crash recovery on this copy using the online redo logs in exactly the same manner as a power loss. Because of this unique functionality, the use of backup mode or RMAN cloning is not required, which is advantageous as both of these approaches use host and array resources and can have an impact on production database performance. Restartable copy versus backup A restartable copy should not be considered a substitute for a backup. Database restart following database server crash is not guaranteed by Oracle. Therefore, a crash-consistent image is not a reliable backup. Recovery cannot be performed on a crash-consistent image. The image can be restarted in the state it was in at the time the copy occurred. It cannot be recovered to the state of a later time. Therefore, EMC recommends that you use normal backup procedures to back up the production Oracle database, using either SnapView snapshot or RMAN. This does not mean, however, that a restartable copy is useless; it is very useful. For example, a restartable copy can be used for testing and development purposes. Since the creation of the restartable copy has low impact on the production database, it can be created fairly often, typically daily. 112

113 Chapter 11: Test/Dev Solution Using EMC SnapView Clone Mount and recovery of a target clone database using Replication Manager Target clone database We cloned a four-node RAC production database and then mounted the cloned database on a VMware host as a single-instance database. The cloning and mounting of the cloned database on the target VM host were performed using Replication Manager (RM). Creating and executing an RM job for SnapView clone The following table describes the main steps that are followed to create and execute an RM job for SnapView clone: Step Action 1 In Replication Manager, select Jobs, then right-click and select New Job. 2 On the Welcome screen of the Job Wizard, choose the application set that you want to replicate, and then click Next. 3 On the Job Name and Settings screen, type the job name and select: Replication source Replication technology name Number of replicas to be created Click Next. 4 On the Replication Storage screen, select the storage pool that you want to use for the replica, and then click Next. 5 On the Mount Options screen, select the mount options, and then click Next. Note You can select the mount options at a later point in time, if required. 6 On the Starting the Job screen, choose how you want to start the job, and then click Next. 7 On the Users to be Notified screen, type the addresses of the users to be notified when a job is completed, then click Next. 8 On the Completing the Job wizard screen, save the job, then click Finish. 9 In Replication Manager, under Jobs, verify that the job has been created without any errors. If errors occur during the creation of the job, this will be indicated in the Latest Status column. 10 After the clone job has been created, select the job, then right-click 113

114 Chapter 11: Test/Dev Solution Using EMC SnapView Clone and select Run to execute the job. The complete logs will be displayed during the execution. 11 In Replication Manager, under Jobs, check the Status column to verify that the job executed successfully. Mount and recovery of a target clone database using RM Replication Manager generates its own init.ora file as part of the replication process. This file is then placed in the directory specified by the ERM_TEMP_BASE variable (/tmp by default). This init.ora file is usually sufficient to start the database, but it does not necessarily contain all the parameters from the original init.ora file. The procedure to customize the init.ora file that is generated by Replication Manager is described in the following table. Step Action 1 On the mount host, change to the RM client install directory: [root@mteoraesx1-vm5 ~]# cd /opt/emc/rm/client/bin/ 2 Create a new directory using the SID name for the mount, then change to that directory: [root@mteoraesx1-vm5 bin]# mkdir mterac211 [root@mteoraesx1-vm5 bin]# cd mterac211/ 3 Create a new init<sid>.ora file using the same SID name that is specified above: [root@mteoraesx1-vm5 mterac211]# vi initmterac211.ora 4 Customize the parameter file as required. Important These parameters will be appended to the init.ora file that was generated by Replication Manager, so they must follow the correct Oracle syntax. For more information on parameter file customization, refer to EMC Solutions for Oracle Database 10g/11g for Midsize Enterprises EMC Celerra Unified Storage Platform Best Practices Planning. 5 In Replication Manager, select the clone replication job, then right-click and select Mount. 6 Specify the correct SID name. The new init<sid>.ora file will be picked up dynamically. Those parameters will be used to start the database on the mount host in conjunction with the parameter file generated by Replication Manager. 7 On the Mount Wizard screen, select the replica to be mounted, and then click Next. 8 On the Mount Options screen: a) Under Path options, select Original path. This ensures that the database can be mounted using the same path on the target host as well. b) Under Oracle, select Recover the database. 114

115 Chapter 11: Test/Dev Solution Using EMC SnapView Clone This ensures that the target clone database will be recovered automatically after mounting by Replication Manager. c) Click Finish. 9 In Replication Manager, verify that the mount/recovery of the clone database is started without any errors. 10 Once the job is completed successfully, verify that the clone database is open in read/write mode: SQL> select name,open_mode from v$database; NAME OPEN_MODE MTERAC21 READ WRITE Replication Manager can be used to automate the complete process of cloning the LUNs, mounting the cloned LUNs on the target host, and restoring and recovering the clone database. Database cloning The importance of cloning The ability to clone a running production Oracle database is a key requirement for many customers. The creation of test and development databases, enabling of datamart and data warehouse staging, and Oracle and OS version migration are just a few applications of this important functionality. Cloning methods Two methods can be used for database cloning: Full clone Full clone involves taking a full copy of the entire database. Full clone is recommended for small databases or for a one-time cloning process. Incremental cloning Incremental cloning is more complex but allows you to create a clone, making a full copy on the first iteration and, thereafter, making an incremental clone for all other iterations, by copying only the changed data in order to update the clone. Incremental cloning is recommended for larger databases and for situations where there is an ongoing or continuous need to clone the production database. 115

Chapter 12: Migration

Migration from FCP/ASM to NFS
The ability to migrate an Oracle database across storage protocols is a frequent customer request. The EMC Oracle CSV group has tested and validated a solution component for migrating an online production Oracle database, mounted over FCP/ASM, to a target database mounted using NFS. This is performed with minimal performance impact on the production database and no downtime.

Migrating an online Oracle database
To see the steps that were followed to perform the migration operation, see Chapter 7: Testing and Validation > Section I: Migration solution component > Test procedure.

Migration diagram
The following diagram is a high-level view of the migration component.
