HP ProLiant DL980 Universal Database Solution

Technical white paper
HP Serviceguard for Linux and 3PAR StoreServ for Oracle Enterprise Database

Table of contents

Executive summary
Overview
  HP Serviceguard for Linux
  HP ProLiant DL980 G7 server
  HP 3PAR StoreServ storage array
HP ProLiant DL980 Universal Database Solution
Building the solution
  Software and hardware components
  Hardware setup
  Install Red Hat Enterprise Linux Server
  Install Oracle GRID Infrastructure
  Install Oracle Enterprise Database
  Install HP Serviceguard for Linux
  Set up HP Serviceguard for Linux cluster
  Set up HP Serviceguard for Linux ASM multinode package
  Set up HP Serviceguard for Linux PROD failover package
  Set up HP Serviceguard for Linux TEST failover package
High-Availability testing
  Test description
  Test results
Conclusion
Appendix A: Red Hat Enterprise Linux Server 6.2 installation
Appendix B: Oracle GRID Infrastructure 11gR2 installation
Appendix C: Oracle Enterprise Database Edition 11gR2 installation
Appendix D: HP Serviceguard for Linux A installation
Appendix E: HP Serviceguard for Linux cluster setup
Appendix F: HP Serviceguard for Linux ASM package setup
Appendix G: HP Serviceguard for Linux PROD package setup
Appendix H: HP Serviceguard for Linux TEST package setup
Appendix I: Bill of Materials
For more information

Executive summary

The HP ProLiant DL980 Universal Database Solution is HP's large-scale, mission-critical, x86 database solution. It provides preconfigured, tested solutions with a choice of operating systems, storage, and databases, and it delivers advanced reliability, availability, and serviceability (RAS) capabilities, enabling the high-end scalability and availability traditionally associated with RISC and mainframe systems at a fraction of the cost.

HP's newest ProLiant DL980 Universal Database Solution for Oracle Databases combines high-performance HP ProLiant DL980 G7 server technology, HP 3PAR StoreServ storage technology, and HP Serviceguard for Linux in a mission-critical solution designed specifically for large transactional (OLTP) and data warehousing (DSS) workloads. Additional information can be found on the HP ProLiant DL980 Universal Database Solution website. When compared to similar highly available configurations, this solution:

- Lowers TCO by reducing power consumption, cooling, and space while improving performance.
- Reduces storage capacity by as much as 50%. 1
- Boosts system availability by as much as 200%. 2
- Provides extreme performance for large-scale applications 3 and virtualization environments.

The heart of this solution is the powerful HP ProLiant DL980 G7 server, the industry's most popular, highly reliable, and comprehensively scalable eight-socket x86 server offering. The DL980 G7 server leverages HP's years of experience designing mission-critical servers in RISC, EPIC, and UNIX environments. This design capitalizes on over a hundred innovative availability features to deliver a portfolio of resilient and highly reliable scale-up x86 servers in a compact 8U rack form factor. Additional information can be found on the HP ProLiant DL980 G7 website. Some of the key scalability and reliability features of the DL980 G7 server are:

- 4 or 8 Intel Xeon processors with up to 10 cores per central processing unit (CPU)
- From 128GB to 4TB of memory
- Up to 8 internal hot-swappable drives with built-in hardware RAID
- Redundant system links to provide resilient data paths

Storage for this solution is delivered with the HP 3PAR StoreServ Series Tier 1 Storage Array, designed to deliver enterprise IT as a utility service simply, efficiently, and flexibly. The arrays feature a tightly coupled clustered architecture, secure multi-tenancy, and mixed workload support for enterprise-class data centers. Additional information can be found on the HP 3PAR StoreServ website. Several key 3PAR StoreServ features are:

- Multi-tenant capabilities for managing unpredictable workloads.
- Saves up to 90% of administrator time. 4
- Silicon-based engine providing on-the-fly storage optimization.
- Full-mesh active-active cluster design for robust performance and high availability.

HP Serviceguard for Linux is the high-availability clustering software used in this solution and is designed to protect applications and services from planned and unplanned downtime. The HP Serviceguard Solutions for Linux portfolio also includes numerous implementation toolkits that enable you to easily integrate various databases and open source applications into a Serviceguard cluster, plus three distinct disaster recovery options. Additional information can be found on the HP Serviceguard for Linux website.
Some of the key features of HP Serviceguard for Linux are:

- Robust monitoring to protect against system, software, network, and storage faults
- Advanced cluster arbitration and fencing mechanisms to prevent data corruption or loss
- GUI and CLI management interfaces
- Quick and accurate cluster package creation

1 Requires the use of HP 3PAR Thin Conversion Software and HP 3PAR Thin Provisioning Software. For details, refer to the Get Thin Guarantee Terms and Conditions. More information is available at: hp.com/storage/getthin.
2 Based on a comparison of system crash rates between ProLiant DL980 G7 servers and ProLiant DL785 G5 servers. System crash rate is determined by availability features such as hot-swap components, resilient paths, ECC, and tolerant links such as QPI.
3 Based on the TPC-H price/performance per query results for non-clustered systems at 3000 GB.
4 Based on HP Storage customer results, available from HP Storage customer success stories.

These components, when combined with Oracle Database Enterprise Edition software, create a multi-node solution that protects against system and database instance failures by eliminating single points of failure.

Many customers today want highly performing, highly available Oracle database solutions but do not want the high cost or the inflexibility of all-Oracle-stack solutions. Indeed, our high-performance Oracle Database solution is much simpler and more flexible than competing Oracle solutions, and because it is based on open standards, the HP solution:

- Supports various database software versions, operating systems, and patch levels.
- Allows you to update, expand, or extend your server environment as your needs change.
- Gives you the choice of support and service offerings.

HP is a leading platform for Oracle solutions and continues to invest heavily in Oracle-focused resources worldwide to provide cost-effective, open, standards-based architecture alternatives using converged infrastructure. Customer performance workload characteristics and requirements vary, and HP has solutions tailored to provide maximum performance for each specific workload without compromising on required availability commitments to the business.

Target audience: This HP white paper was written for IT professionals who use, program, manage, or administer Oracle databases that require high availability, and specifically for those who design, evaluate, or recommend new IT high performance architectures. This white paper describes testing performed between November 2012 and February 2013.

Overview

This paper provides the details of the HP ProLiant DL980 Universal Database Solution for Oracle Enterprise Database Edition 11gR2. We focus primarily on the configuration, build, and high-availability test results to aid in designing and deploying mission-critical, high performance database solutions for Oracle databases.

A brief overview of the major HP components used in this solution:
- HP Serviceguard for Linux
- HP ProLiant DL980 G7 server
- HP 3PAR StoreServ storage array

An overview of the HP ProLiant DL980 Universal Database Solution with details on building the solution:
- HP ProLiant DL980 Universal Database Solution
- Building the solution

A description of the high-availability testing performed, the database and workload used during that testing, results of the tests, and the final conclusion:
- Test description
- Database and workload defined
- Test results
- Conclusion

HP Serviceguard for Linux

HP Serviceguard for Linux is high-availability clustering software that manages the systems, software, network, and storage for a mission-critical solution. The environment is continuously monitored for faults, and when a failure or threshold violation is detected, Serviceguard for Linux can automatically and transparently resume normal operations in mere seconds without compromising data integrity and performance.

HP Serviceguard Solutions for Linux are designed to address the financial, business, and operational impacts of planned and unplanned downtime in demanding mission-critical Linux environments. With HP Serviceguard Solutions for Linux, you can define, implement, and manage solutions that help to secure your infrastructure, protect data and information, and align business continuity objectives to business requirements. See Table 1 for features and benefits of HP Serviceguard for Linux.

Table 1. HP Serviceguard for Linux features and benefits

Feature: High data integrity
Benefit: HP offers Quorum Service, a robust fencing mechanism that allows Serviceguard for Linux to ensure the highest level of data integrity and reliability.

Feature: Zero downtime maintenance
Benefit: A key feature called Live Application Detach (LAD) allows you to perform maintenance on the Serviceguard cluster infrastructure (including the heartbeat network) with zero application downtime. 5

Feature: Shortened cluster setup
Benefit: The Toolkits and Extensions portfolio offers predefined scripts that can cut up to 93% of the setup time it takes to get an application protected. 6

Feature: Disaster recovery capability
Benefit: HP Serviceguard Disaster Recovery Solutions for Linux enable you to remain online even after the loss of a data center, regardless of the distance.

Feature: Simplified cluster management
Benefit: HP Serviceguard Manager for Linux allows users to visually configure, monitor, manage, and administer a cluster and its components. HP Serviceguard Manager is also integrated with HP Systems Insight Manager (HP SIM) to enable management of multiple clusters running on Linux or HP-UX from a single browser.

5 Based on HP Lab analysis while performing maintenance activities on the cluster, including maintenance of the heartbeat network. It used to be common that the application had to be brought down; with the LAD feature, application downtime is reduced to none even when the heartbeat network is maintained and the cluster is brought down.
6 Based on HP Lab analysis showing that the typical manual effort for integrating an Oracle Database into a cluster requires 30 engineering days. With the Oracle toolkit from HP, this integration is achieved in two engineering days or less.

HP ProLiant DL980 G7 server

The HP ProLiant DL980 G7 with HP PREMA Architecture is an eight-socket server built to handle the largest x86 enterprise environments. It is an ideal choice for large-scale databases with extreme workloads such as I/O-intensive online transaction processing (OLTP) or compute-intensive decision support systems (DSS). Database consolidation and server virtualization are also an ideal fit for the DL980's scale-up design.

The DL980 G7 blends industry-standard economies with advanced mission-critical capabilities to deliver the balanced scaling, self-healing resiliency, and breakthrough efficiencies essential for enterprise compute environments. Operating costs are reduced with less overall maintenance, lower energy use, and reduced cooling and floor space requirements. The HP PREMA Architecture features Smart CPU caching and a resilient system fabric to reduce bottlenecks, improve throughput and performance, and deliver enhanced reliability not previously available in an x86 environment.

System management is provided by HP Integrated Lights-Out (iLO), which simplifies server setup and remote system management and integrates with HP Systems Insight Manager (SIM) and HP Insight Control.

HP 3PAR StoreServ storage array

The HP 3PAR StoreServ storage arrays are designed to deliver enterprise IT storage as a utility service simply, efficiently, and flexibly. The arrays feature a tightly coupled clustered architecture, secure multi-tenancy, and mixed workload support for enterprise-class data centers. Use of unique thin technologies reduces acquisition and operational costs by up to 50%, while autonomic management features improve administrative efficiency by up to tenfold when compared with traditional storage solutions.
The HP 3PAR StoreServ Gen4 ASIC in each of the system's controller nodes provides a hyper-efficient, silicon-based engine that drives on-the-fly storage optimization to maximize capacity utilization while delivering high service levels.

Increase business agility
- Autonomic service level optimization through the use of optional HP 3PAR Dynamic Optimization (DO) and Adaptive Optimization (AO) software delivers the agility to react quickly to changing application and infrastructure requirements without the need for active management.
- Optional HP 3PAR Virtual Domains software is the first and only storage hypervisor-like technology on the market to deliver customized, secure, and self-service storage to multiple internal or external customers.

Reduce Total Cost of Ownership (TCO)
- Advanced internal virtualization, wide striping, and mixed workload support reduce physical capacity purchases, storage footprint, power usage, and cooling needs without compromising performance.
- The two HP 3PAR StoreServ Gen4 ASICs with Thin Built In inside every controller node have the ability to drive silicon-based thin conversion to reduce legacy storage capacity requirements and reclaim allocated-but-unused capacity.

- Optional HP 3PAR Thin Provisioning software eliminates capacity waste by allowing clients to purchase only the disk capacity they actually need, only as they actually need it for written data.
- Fast RAID 5 boosts RAID 5 performance to within 10% of RAID 1 but with significantly less capacity overhead; Fast RAID 6 (RAID MP) delivers enhanced protection while maintaining performance levels within 15% of RAID 10 and with capacity overheads comparable to popular RAID 5.

HP ProLiant DL980 Universal Database Solution

The development of the HP ProLiant DL980 Universal Database Solution with Serviceguard for Linux focused on building and testing a large-scale, highly available Oracle Database Enterprise Edition 11gR2 for Red Hat Enterprise Linux 6.2 x86-64 environments. The large-scale platform consisted of HP ProLiant DL980 G7 servers and an HP 3PAR StoreServ storage array. High availability was provided by HP Serviceguard for Linux, redundant 8Gb FC links for SAN connectivity, and two redundant 10GbE links for both private and public networks. To take advantage of the resources of the standby node, steps were included to configure an optional secondary database.

High-availability testing focused on HP Serviceguard's ability to monitor the health of the primary database instance and manage the failover of the instance to a standby node in the event of an active node failure. Failover tests were performed with the database under a heavy OLTP load and in an idle state.

The environment was isolated within a private network with access only through an HP ProLiant DL380 G7 gateway access server running Microsoft Windows Server 2008 R2. This server also provided the management console, DNS, and NTP services for the environment. A second DL380 G7 installed with Red Hat Enterprise Linux Server 6.2 was configured as the client load generator and housed the database workload software.

The networks comprised two redundant 10GbE links for public and private communications and a single 1GbE link for system management. The redundant network links were set up in an active-passive mode using the Red Hat Linux NIC bonding driver. The public network was dedicated to client load and the private network was dedicated to Serviceguard cluster communications. The management LAN connected the servers' iLO system management ports and the management ports for the switches supporting the environment.

The Storage Area Network (SAN) comprised two 8Gb Fibre Channel paths between the DL980 servers, the Fibre Channel switches, and the 3PAR StoreServ array. Zoning was configured on the SAN switches to limit the server-to-storage Fibre Channel connections to two paths. The redundant FC links were set up using the Linux multipath driver in an active-active mode, providing both redundancy and load balancing for the servers. See Figure 1 for the SAN and LAN diagram.

Figure 1. HP Universal Database solution LAN and SAN diagram

The HP ProLiant DL980 G7 servers were installed with Red Hat Enterprise Linux version 6.2 x86-64 and configured as HP Serviceguard cluster nodes. Both servers were equipped with eight Intel Xeon E (2.4GHz/10-core/30MB/130W) processors, 1TB of memory, two dual-port 8Gb FC HBAs, two dual-port 10GbE NICs, and four internal 300GB disk drives configured as two RAID1 sets for the boot disk and the /apps directory.

The HP 3PAR StoreServ storage array provided the shared SAN storage for our solution. A 3PAR management domain was set up with 2TB of storage for the solution. Two host profiles were created using the World Wide Names (WWNs) of the servers' HBAs and the 3PAR controller/ports. A Serviceguard Lock LUN (quorum disk) and the Oracle Automatic Storage Management (ASM) LUNs were set up and presented to both host profiles. See Table 2 for the 3PAR LUN configuration used in this solution.

Table 2. 3PAR LUN configuration

LUN ID              Size   RAID type  Description
1                   256MB  RAID5      Serviceguard Cluster Lock LUN
2, 3, 4, and 5      100GB  RAID5      Primary Database (PROD) Data ASM LUNs
6, 7, 8, and 9      25GB   RAID1      Primary Database (PROD) Log ASM LUNs
10, 11, 12, and 13  110GB  RAID5      Secondary Database (TEST) Data ASM LUNs
14, 15, 16, and 17  30GB   RAID1      Secondary Database (TEST) Log ASM LUNs
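The LUNs in Table 2 were provisioned through the 3PAR Management Console in this solution (see Hardware setup below). For readers who prefer the 3PAR OS CLI, a minimal sketch of creating and exporting one of these LUNs might look like the following. The CPG name FC_r5 and the WWN placeholders are assumptions for illustration, not values from the tested configuration:

# createhost DL980_1 <HBA1-Port1-WWN> <HBA2-Port1-WWN>
# createvv -tpvv FC_r5 Prod_data_ASM0000 100g
# createvlun Prod_data_ASM0000 2 DL980_1

The same pattern repeats for the remaining data and log LUNs and for the second host profile.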

HP Serviceguard for Linux software was used to create the cluster, and the HP Serviceguard Oracle Toolkit for Linux was used to create the three cluster packages for the solution. The ASM multi-node package was created first; it ran on both nodes to manage the Oracle Automatic Storage Management (ASM) instances within the cluster. The primary PROD database instance failover package was then created to manage the PROD database instance in the cluster. The secondary TEST database instance failover package was created last to manage the TEST database instance in the cluster.

Both the PROD and the TEST packages were configured with a dependency on the ASM package and to run exclusively on a node, ensuring that only one database instance is active on a single node at any given time. The PROD package was also configured with a higher priority than the TEST package, ensuring that Serviceguard for Linux will halt the TEST package in favor of the PROD package failover. A sketch of these package parameters follows Figures 2 and 3.

Figure 2 shows a fully operational Serviceguard cluster with Node1 running the PROD database and Node2 running the TEST database. Figure 3 shows what happens to Node2 when Node1 fails.

Figure 2. Fully operational cluster

Figure 3. Results of a node 1 failure
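To make the dependency and priority behavior concrete, below is a minimal sketch of the relevant attributes in a Serviceguard modular package configuration file. The attribute names are standard Serviceguard for Linux parameters, but the package names and values shown here are illustrative assumptions, not the exact files generated by the Oracle Toolkit in this solution:

# Excerpt (sketch) of a PROD failover package configuration
package_name            prod_pkg
package_type            failover
priority                10                # lower number = higher priority than TEST
dependency_name         asm_dep
dependency_condition    asm_pkg = up      # PROD requires the ASM multinode package
dependency_location     same_node
dependency_name         test_excl
dependency_condition    test_pkg != up    # exclusionary: never share a node with TEST
dependency_location     same_node

The TEST package would carry the mirror-image configuration with a numerically higher (lower-priority) priority value, which is what lets Serviceguard halt TEST when PROD needs the node.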

The Oracle Grid Infrastructure 11gR2 software was installed on both DL980 servers to provide the Oracle ASM services that manage the ASM volumes for both the PROD and TEST database instances. The Oracle Enterprise Database 11gR2 software was installed in two separate Oracle Homes on each DL980 server, one for the PROD database and one for the TEST database. The database workload software was used to build the custom databases and to generate the OLTP load in order to stress the system while performing failover testing.

Building the solution

This section provides the steps to build the environment for the HP ProLiant DL980 Universal Database Solution. This information can greatly reduce the time to plan and deploy your own large-scale, highly available Oracle database solution for a Red Hat Enterprise Linux environment. In this solution the build sequence follows these steps:

- Software and hardware components
- Hardware setup
- Install Red Hat Enterprise Linux
- Install Oracle Grid Infrastructure
- Install the Oracle Enterprise Database
- Install HP Serviceguard for Linux
- Set up the HP Serviceguard for Linux cluster
- Set up the HP Serviceguard for Linux ASM multinode package
- Set up the HP Serviceguard for Linux PROD failover package
- Set up the HP Serviceguard for Linux TEST failover package

Software and hardware components

Acquire all the software, licenses, and hardware required to build the solution. A list of the HP equipment used in this solution is provided in Appendix I: Bill of Materials.

Software
- Red Hat Enterprise Linux version 6 Update 2, 64-bit
- HP Serviceguard for Linux
- HP Serviceguard Oracle Toolkit for Linux A
- Oracle Grid Infrastructure 11gR2
- Oracle Enterprise Database 11gR2 ( )

Hardware
- Two HP ProLiant DL980 G7 servers, each with:
  - 8x Intel Xeon E (2.4GHz/10-core/30MB/130W) processors
  - 1TB of memory
  - 2x HP 82Q 8Gb Dual Port PCI-e FC HBAs
  - 2x HP NC552SFP dual-port 10GbE NICs
  - 4x internal 300GB disk drives
- HP 3PAR StoreServ
- Two HP G 1/10GbE switches configured with redundant IRF 10Gb interconnects
- Two HP 8Gb SN6000 FC switches

Hardware setup

There are several options for setting up the hardware: HP factory built, HP Services sets up the equipment at the customer site, or the customer builds. See Figure 4 for the LAN and SAN wiring diagram for the network and Fibre Channel connectivity used in the solution.

Figure 4. LAN and SAN wiring diagram

SAN configuration

The SAN consists of two HP SN6000 8/24 FC switches and an HP 3PAR StoreServ storage array. Redundant FC paths were configured for each DL980 G7 server to provide high availability for shared storage I/O. Two dual-port 8Gb FC HBAs were installed in each server, but only one port per HBA was used in this configuration. Each FC switch was configured with two single initiator (host WWN) to target (3PAR controller-port WWN) zones. The zones provided two single paths per server. This topology provides complete redundancy across both servers, both FC switches, and both 3PAR controllers. See Table 3 for an overview of the FC zoning configuration. For additional information see the HP 3PAR StoreServ Storage best practices guide.

Table 3. FC switch zoning

FC switch  Zone name   Configuration (World Wide Names)
1          Node1Path1  DL980-1 HBA1 Port1; 3PAR Controller 1 Port1
1          Node2Path1  DL980-2 HBA1 Port1; 3PAR Controller 1 Port1
2          Node1Path2  DL980-1 HBA2 Port1; 3PAR Controller 2 Port1
2          Node2Path2  DL980-2 HBA2 Port1; 3PAR Controller 2 Port1

The 3PAR Management Console was used to configure the host profiles and the shared LUNs for the Serviceguard lock LUN and the ASM disks. Each host profile was configured using the WWNs for its HBAs and the 3PAR controller ports. The LUNs were created and presented to both hosts. See Table 4 for the 3PAR storage configuration.

Table 4. 3PAR SAN storage configuration

Host profile: DL980_1
Controller #/Port #: 1/1, 2/1
Host WWNs: DL980-1 HBA1 Port1; DL980-1 HBA2 Port1
LUN ID #/LUN name/size/RAID type:
  1/Prod_lock/256MB/RAID5
  2/Prod_data_ASM0000/100GB/RAID5
  3/Prod_data_ASM0001/100GB/RAID5
  4/Prod_data_ASM0002/100GB/RAID5
  5/Prod_data_ASM0003/100GB/RAID5
  6/Prod_log_ASM0000/25GB/RAID1
  7/Prod_log_ASM0001/25GB/RAID1
  8/Prod_log_ASM0002/25GB/RAID1
  9/Prod_log_ASM0003/25GB/RAID1
  10/Test_data_ASM0000/110GB/RAID5
  11/Test_data_ASM0001/110GB/RAID5
  12/Test_data_ASM0002/110GB/RAID5
  13/Test_data_ASM0003/110GB/RAID5
  14/Test_log_ASM0000/30GB/RAID1
  15/Test_log_ASM0001/30GB/RAID1
  16/Test_log_ASM0002/30GB/RAID1
  17/Test_log_ASM0003/30GB/RAID1

Host profile: DL980_2
Controller #/Port #: 1/1, 2/1
Host WWNs: DL980-2 HBA1 Port1; DL980-2 HBA2 Port1
LUN ID #/LUN name/size/RAID type: Same as above

Install Red Hat Enterprise Linux Server

Red Hat Enterprise Linux Server 6.2 was installed using the iLO remote console. The installation ISO image was mounted as a virtual drive. When the server was started, it treated the ISO file as a bootable DVD and automatically started the installation process. During the installation the 3PAR LUNs were specified for multipathing, which eliminated the need to set up multipath after the installation. This was possible because the zoning for the FC switches and the SAN storage had been set up previously. See Appendix A: Red Hat Enterprise Linux Server 6.2 installation for the steps used in this solution to install the Linux operating system.

Post-installation steps

After the operating system was successfully installed, the following post-installation steps were performed:
- Stage the installation media
- Modify the kernel parameters and set the ulimits
- Create the Oracle user
- Install the 10GbE driver
- Set up NIC bonding
- Set up FC multipath

Best practices for setting up the DL980 G7 and Red Hat for Oracle

Below are generally used best practices for availability and performance in Oracle 11gR2 implementations.

Recommended BIOS settings
- Enable or disable the Hyper-Threading setting under System Options > Processor Options based on performance testing of your applications. Increased performance is dependent on the individual workload types.
- Disable the Intel Virtualization Technology and Intel VT-d settings under System Options > Processor Options.
- Select the Maximum Performance setting under Power Management Options > HP Power Profile.
- Select the No Package State setting under Power Management Options > Advanced Power Management Options > Minimum Processor Idle Power Core State. This prevents processors from powering down components when idle.

Recommended hardware settings
- Distribute the FC HBA and 10GbE NIC cards evenly across both I/O cages using the x8 designated PCIe slots.
- I/O slot 1 is a first-generation PCIe slot and should not be used for any FC HBA cards.
- Install DIMMs evenly across all memory banks for optimum memory interleaving performance.

Red Hat kernel parameters
Set the kernel parameters as recommended in the installation manual for Oracle Grid Infrastructure and Enterprise database.

Install Oracle GRID Infrastructure

Oracle Automatic Storage Management (ASM) 11gR2 is now part of Oracle Grid Infrastructure 11gR2. ASM provides volume and file system management for Oracle databases. A single ASM instance supports one or more database instances. ASM is organized into logical disk groups, where each disk group is a collection of ASM disks. The database files are evenly striped across all the ASM disks presented to the disk group for performance. Oracle ASM disk groups can be configured using normal or external redundancy, which determines how the data will be striped across ASM disks within a disk group. Normal redundancy will stripe and mirror everything (SAME) to protect against a single disk loss and is typically used for storage without hardware RAID capabilities. External redundancy will stripe data across the ASM disks and is typically used for storage arrays that provide their own hardware RAID capabilities. Oracle ASM operates in an instance configuration much the same as a database instance does. For more information regarding Oracle ASM, see the Oracle 11gR2 Database Concepts Guide.

ASM configuration

HP 3PAR storage has its own hardware RAID capabilities, so the ASM disk groups were configured with external redundancy. It is a best practice to create an ASM disk group with at least four ASM disks to improve I/O performance. For additional information please consult the Oracle Automatic Storage Management Administrator's Guide. In this solution four ASM disk groups were created using four ASM disks each: two disk groups for the PROD database instance and two for the TEST database instance. Each database instance used one disk group for data and the other for log files. See Appendix B: Oracle GRID Infrastructure 11gR2 installation for the steps used in this solution to install Oracle Grid 11gR2.

Install Oracle Enterprise Database

Oracle Enterprise Database Edition 11gR2 ( ) is the minimum supported version for Linux 6 and above. Install the Oracle Enterprise Database Edition software twice on each server, creating ORACLE_HOMEs for the primary database PROD and the secondary database TEST. See Appendix C: Oracle Enterprise Database Edition 11gR2 installation for the steps used in this solution to install the Oracle databases.
Install HP Serviceguard for Linux

This section describes the process for installing HP Serviceguard for Linux and the HP Serviceguard Toolkit for Oracle databases used in this solution. Before installing HP Serviceguard for Linux, one must have read and fully understood the HP Serviceguard for Linux Version A Deployment Guide. See Appendix D: HP Serviceguard for Linux A installation for the steps used in this solution to install HP Serviceguard for Linux and the HP Serviceguard Oracle Toolkit.

Pre-installation steps

Prior to installing Serviceguard for Linux, make sure that the following tasks have been completed:
- Networks defined
- JDK 7 installed per the recommendation for HP Serviceguard Manager for Linux
- Supported Linux packages installed

Networks defined

Prior to configuring the Serviceguard cluster, the networks should be defined. See Table 5 for the Serviceguard network configuration used in this solution.

Table 5. Serviceguard network configuration

Node     IP address  Type     Description
DL980-1              Static   Public network for client access
DL980-1              Static   Private network for Serviceguard cluster communications
DL980-2              Static   Public network for client access
DL980-2              Static   Private network for Serviceguard cluster communications
Any                  Virtual  Serviceguard for Linux floating IP address for the PROD database package
Any                  Virtual  Serviceguard for Linux floating IP address for the TEST database package

Install the required Linux packages on both servers

Serviceguard for Linux depends on the following package:
- xinetd

The Serviceguard for Linux SNMP subagent requires the following packages:
- lm_sensors
- net-snmp

Serviceguard Manager for Linux requires the following package:
- libxp

The Serviceguard CIM provider requires the following package:
- tog-pegasus

To verify whether a package is installed, run the following command:

# rpm -qa | fgrep <package>

If it is not installed, locate the package on the DVD under the PACKAGE directory and install it:

# rpm -ivh <package><version>.rpm

A combined check for all of the required packages is sketched below.
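As a convenience, the individual rpm queries above can be rolled into one loop, run as root on both servers. This is a sketch, not part of the original procedure; the package names are taken from the list above (note that the RHEL package for libxp is typically named libXp):

for p in xinetd lm_sensors net-snmp libXp tog-pegasus; do
    rpm -q "$p" >/dev/null 2>&1 || echo "$p is not installed"
done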

Set up HP Serviceguard for Linux cluster

After HP Serviceguard for Linux and the HP Serviceguard Oracle Toolkit have been successfully installed, the next step is to set up the cluster. Two options are available to create the cluster: the Serviceguard for Linux command line interface (CLI) or Serviceguard Manager for Linux, a graphical user interface (GUI). See Appendix E: HP Serviceguard for Linux cluster setup for the steps used in this solution to set up the Serviceguard for Linux cluster. In this solution the following items were specified during the setup of the cluster:

- Cluster name: Prod_cluster
- Cluster nodes: dl980-1, dl980-2
- Subnet type: Heartbeat
- Node network interfaces: dl980-1 bond0, dl980-2 bond0
- Cluster lock type: Lock LUN
- Lock LUN path: /dev/mapper/mpathcp1 on DL980-1, /dev/mapper/mpathcp1 on DL980-2

Set up HP Serviceguard for Linux ASM multinode package

Serviceguard for Linux multinode packages are designed to run on one or more nodes at a time and do not fail over. They simply start or stop the application on the node or the cluster. The ASM instances fit this model perfectly and were configured using the Serviceguard Oracle Toolkit for Linux multinode script to build the configuration file for the ASM packages. The ASM instances were configured to start up without mounting any disk groups. The ASM multinode package was set up to run on both nodes and manage all the disk groups. When a database package is started or halted, it mounts or dismounts the ASM disk groups associated with that particular database instance. See Appendix F: HP Serviceguard for Linux ASM package setup for the steps used in this solution to build the ASM multinode package.

Set up HP Serviceguard for Linux PROD failover package

Serviceguard for Linux failover packages are designed to run on one node at a time; when a node failure occurs, the package is moved over to a surviving node within the cluster. The PROD package configuration file was created using the Serviceguard Oracle Toolkit for Linux. This failover package needs to be customized in order to monitor and manage the PROD database instance. The PROD package was configured to have a dependency on the ASM instance, to run exclusively so that only one database instance can run on a single node at a given time, and to have a higher priority than the TEST database package so that Serviceguard for Linux will shut down the TEST instance prior to starting the PROD instance in the event of a failover. See Appendix G: HP Serviceguard for Linux PROD package setup for the steps used in this solution to build the PROD failover package.

Set up HP Serviceguard for Linux TEST failover package

The TEST package configuration file was created using the Serviceguard Oracle Toolkit for Linux. This failover package needs to be customized in order to monitor and manage the TEST database instance. The TEST package was configured to have a dependency on the ASM instance, to run exclusively so that only one database instance can run on a single node at a given time, and to have a lower priority than the PROD database package so that Serviceguard for Linux will shut down the TEST instance prior to starting the PROD instance in the event of a failover. See Appendix H: HP Serviceguard for Linux TEST package setup for the steps used in this solution to build the TEST failover package. A sketch of the typical Serviceguard CLI flow for these steps is shown below.
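For orientation, here is a minimal sketch of the Serviceguard for Linux CLI flow for creating a cluster and applying packages. The command names are standard Serviceguard commands, but the file names are illustrative assumptions; the appendices contain the exact steps used in this solution.

Generate a cluster configuration template from the live nodes:
# cmquerycl -n dl980-1 -n dl980-2 -C cluster.conf

Edit cluster.conf (cluster name, heartbeat subnet, CLUSTER_LOCK_LUN entries), then verify, apply, and start the cluster:
# cmcheckconf -C cluster.conf
# cmapplyconf -C cluster.conf
# cmruncl

Verify, apply, and run a package configuration, then check cluster status:
# cmcheckconf -P prod_pkg.conf
# cmapplyconf -P prod_pkg.conf
# cmrunpkg prod_pkg
# cmviewcl -v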

High-Availability testing

Test description

Our testing of the HP ProLiant DL980 Universal Database Solution focused on the downtime experienced by the database during a failover. Hours of testing were conducted while specific failure scenarios were administered to get an indication of the different failover times; tests were conducted with the database in both an idle and a loaded state. After each failover test the workload was restarted to verify that the database instance had properly failed over to the other node. The following failure scenarios were tested:

- Halting the active node using the Serviceguard Manager for Linux halt node command
- Soft reboot of the active node using the Linux reboot command
- Hard reset of the active node using the iLO system reset command

The following database states were also tested with each failover scenario:

- Active node database idle / standby node running ASM only
- Active node database running a workload / standby node running ASM only
- Active node database running a workload / standby node with secondary database idle
- Active node database running a workload / standby node with secondary database running a workload

Database and workload defined

The database workload software was used to build the databases and to generate a workload to stress the systems for the failover testing. In our solution we configured the lab software in a client-server mode, with the server-side components installed on DL980-1 and the client-side components installed on a designated client load generator server. With the PROD database running, we rebuilt PROD as a 100GB OLTP database, separating the data files and the logs using the PROD_DATA and PROD_LOG ASM disk groups. The PROD database was then shut down and the TEST database was started before rebuilding the TEST database with the same parameters.

The workload generated 1000 OLTP users for a duration of 10 minutes. This could easily have been 5000 OLTP users for a longer time, but a shorter test was much more efficient and had similar results. There was very little tuning, since our focus was on the database failover. The only tuning was to accommodate the large number of users needed to generate a substantial workload to test failover scenarios of an Oracle database instance with Serviceguard for Linux.

The following system information was captured during the 1000 OLTP user workload using SAR and IOSTAT. The average CPU utilization was around 60% during the run: 14% for system and 45% for user. The workload was fairly even across all 80 cores. The memory consumed grew from 77GB to 190GB, leveling off after 5 minutes.

Figure 5. CPU utilization and memory consumed
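The paper does not list the exact collection commands; invocations like the following sketch (5-second samples over the 10-minute run) would produce the kind of statistics reported in Figures 5 through 7:

# sar -u 5 120      (CPU utilization, user/system breakdown)
# sar -r 5 120      (memory utilization)
# sar -n DEV 5 120  (per-interface network throughput for the public and private bonds)
# iostat -xm 5 120  (per-device disk throughput in megabytes for the ASM LUNs)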

The network utilization was heavy on the public network, with writes averaging 4.2MB/s and reads averaging 1.3MB/s. The private network utilization was very low, with reads and writes averaging around 190B/s.

Figure 6. Private and public networks

The I/O for the data disk group averaged 156MB/s for writes and 48MB/s for reads. The log disk group averaged 19MB/s for writes and 6MB/s for reads.

Figure 7. Data disk and log disk groups

Test results

All the tests verified that the Oracle ASM instance and the Oracle database were protected before, during, and after a failover. Some specific recommendations that resulted from the extensive availability testing are:

- The FC SAN environment should be configured by zoning a single initiator to a single target.
- The udev rules file needs to be created, since ASMLib is not available for Linux 6.
- The ASM PFILE needs to be created and moved to the ASM $ORACLE_HOME/dbs directory, and it should be configured to not mount any disk groups.
- The private network used for cluster communication utilizes less than 0.001% of the available 10GbE bandwidth. A redundant pair of 1GbE links could fully support the private network at a lower cost.

See Table 6 for a list of all the failover tests performed and their results. Outage results or downtime will vary depending on database size and recovery scenarios.

Table 6. Failover test results

Action performed | Primary DB status | Standby node/secondary DB status | Outage (mm:ss) | Comments
Node halted | Primary database idle | ASM only | 01:00 | Serviceguard Manager for Linux was used to halt the active node
Node halted | Primary database idle | Secondary database idle | 01:18 | Additional time for shutting down the 2nd database
Node halted | 1K user workload | ASM only | 02:39 | Additional time for database recovery
Node halted | 1K user workload | Secondary database idle | 03:07 | Additional time for shutting down the 2nd database
Node halted | 1K user workload | 1K user workload | 03:14 | Additional time for shutting down the 2nd database during an active load
Operating system soft reboot | Primary database idle | ASM only | 01:01 | Operating system soft reboot command
Operating system soft reboot | 1K user workload | ASM only | 02:21 | Additional time for database recovery
Operating system soft reboot | 1K user workload | Secondary database idle | 02:13 | Additional time for shutting down the 2nd database
Operating system soft reboot | 1K user workload | 1K user workload | 03:05 | Additional time for shutting down the 2nd database during an active load
Server hard reset | Primary database idle | ASM only | 01:12 | Server hard reset issued through iLO
Server hard reset | 1K user workload | ASM only | 02:13 | Additional time for database recovery
Server hard reset | 1K user workload | Secondary database idle | 02:51 | Additional time for shutting down the 2nd database
Server hard reset | 1K user workload | 1K user workload | 02:44 | Database recovery time will vary

Conclusion

The HP ProLiant DL980 Universal Database Solution is an important part of the overall HP large-scale and highly available reference architecture portfolio. It was developed to provide high performance for I/O-intensive transactional databases and compute-intensive decision support systems, delivering business continuity for heavy loads, faster user response times, and increased throughput over traditional configurations. The solution delivers this at less than half the acquisition cost, takes up less data center footprint, and consumes one-quarter the power and cooling when compared to configurations delivering similar performance.

Key success factors in our extensive testing include:

- Successfully set up a high-availability configuration for an Oracle database environment using HP Serviceguard for Linux, HP DL980 G7 servers, HP 3PAR StoreServ storage, and HP LAN and SAN switches
- Defined the proper setup and validation procedures
- Conducted failover testing and recorded the outage time for the database
- Determined business continuance after each test

The HP ProLiant DL980 Universal Database Solution defined in this document was based on the HP ProLiant DL980 G7 server, HP 3PAR StoreServ storage array, HP Serviceguard for Linux, Red Hat Enterprise Linux, and Oracle Enterprise Database. Steps for building the solution were also included to reduce the time to deploy a similar configuration. The HP ProLiant DL980 Universal Database Solution is also flexible, supporting different operating systems, storage arrays, and databases. This document can be used as a guide to assist in creating a configuration much like the one constructed by HP's Oracle engineering team to evaluate and validate this solution.

Appendix A: Red Hat Enterprise Linux Server 6.2 installation

Install Red Hat Enterprise Linux Server 6.2

Red Hat Enterprise Linux Server 6.2 was installed using the iLO remote console. The installation ISO image was mounted as a virtual drive. When the server was started, it treated the ISO file as a bootable DVD and automatically started the installation process. During the installation the 3PAR LUNs were specified for multipathing, which eliminated the need to set up multipath after the installation. This was possible because the zoning for the FC switches and the SAN storage had been set up previously. In the final installation steps the following server components were specified:

Base system
- Compatibility Libraries
- Networking Tools

Servers
- System admin tools

Web services
- Web server
- Web servlet engine

Desktop
- Desktop
- Graphical admin tools
- X Windows system

Development
- Additional development
- Development tools

Post-installation steps

After the operating system was successfully installed, the following post-installation steps were performed:
- Stage the installation media
- Modify the kernel parameters and set the ulimits
- Create the Oracle user
- Install the 10GbE driver
- Set up NIC bonding
- Set up FC multipath

Download and stage the installation media

Both servers:
- 10GbE device driver
- Java Development Kit (JDK) 7
- Oracle Grid Infrastructure 11gR2
- Oracle Enterprise Database 11gR2 ( )

Primary server:
- HP Serviceguard for Linux
- HP Serviceguard Oracle Toolkit for Linux A

Modify the kernel parameters and set the ulimits

Edit the sysctl.conf file to support up to 5000 user connections.

# vi /etc/sysctl.conf
kernel.shmmni = 4096
kernel.msgmni = 64
kernel.sem =
net.ipv4.ip_local_port_range =
net.core.rmem_default =
net.core.rmem_max =
net.core.wmem_default =
net.core.wmem_max =
fs.file-max =
fs.aio-max-nr =

Edit the rc.local file and add the following command to retain the settings after a system restart.

# vi /etc/rc.local
sysctl -p

Edit the limits.conf file to set the user limits.

# vi /etc/security/limits.conf
* hard nofile
* soft nofile 4096
* hard nproc
* soft nproc 2047

Edit the 90-nproc.conf file to increase the maximum number of processes.

# vi /etc/security/limits.d/90-nproc.conf
* soft nproc 8191

Create the Oracle user

The Oracle user is the owner of the ASM, PROD, and TEST instances and the /apps file system.

# groupadd -g 600 dba
# useradd -u 601 -g dba -m oracle
# passwd oracle
# chown -R oracle:dba /apps

Install the 10GbE device driver

The Red Hat Enterprise Linux Server 6.2 operating system did not recognize the 10GbE NIC hardware, so device drivers were not installed during the initial installation. Download the HP 10GbE driver for Linux on both systems and read the installation instructions before attempting to install the device driver.

Set up NIC bonding

NIC bonding is defined as grouping two or more network cards to form a single connection to achieve link redundancy for network high availability. The public and private networks were configured with NIC bonding utilizing 10GbE links in an active-passive mode. The public network was set up for client access and the private network was set up for cluster communications. Each DL980 G7 server came standard with four 1GbE embedded network interfaces, eth0 through eth3. Two dual-port 10GbE network cards were installed, providing interfaces eth4 and eth5 on one NIC and eth6 and eth7 on the other.

Create the NIC bonding configuration files for the private network and the public network on both servers. When using dual-port network cards, NIC bonding should combine ports from different NICs to remove a single point of failure. It is recommended to make backup copies before modifying the network interface files for eth4 through eth7 to utilize the NIC bonding devices.
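One way to take those backup copies, as a sketch (the .orig suffix is an arbitrary choice, not part of the original procedure):

# cd /etc/sysconfig/network-scripts
# cp -p ifcfg-eth4 ifcfg-eth4.orig

Repeat for ifcfg-eth5 through ifcfg-eth7 on both servers.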

Server1

For the private network:

# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE="bond0"
IPADDR=" "
NETMASK=" "
NETWORK=" "
BROADCAST=" "
ONBOOT="yes"
BOOTPROTO="none"
USERCTL="no"
BONDING_OPTS="miimon=100 mode=1"

For the public network:

# vi /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE="bond1"
IPADDR=" "
NETMASK=" "
NETWORK=" "
BROADCAST=" "
ONBOOT="yes"
BOOTPROTO="none"
USERCTL="no"
BONDING_OPTS="miimon=100 mode=1"

Server2

Identical to the private network configuration for Server1 except for the IP address:
IPADDR=" "

Identical to the public network configuration for Server1 except for the IP address:
IPADDR=" "

Both servers

Modify the network config files. The HWADDR or MAC addresses are uniquely assigned for each interface:

# vi /etc/sysconfig/network-scripts/ifcfg-eth4
DEVICE="eth4"
USERCTL="no"
ONBOOT="yes"
MASTER="bond0"
SLAVE="yes"
BOOTPROTO="none"

# vi /etc/sysconfig/network-scripts/ifcfg-eth5
DEVICE="eth5"
USERCTL="no"
ONBOOT="yes"
MASTER="bond1"
SLAVE="yes"
BOOTPROTO="none"

# vi /etc/sysconfig/network-scripts/ifcfg-eth6
DEVICE="eth6"
USERCTL="no"
ONBOOT="yes"
MASTER="bond0"
SLAVE="yes"
BOOTPROTO="none"

# vi /etc/sysconfig/network-scripts/ifcfg-eth7
DEVICE="eth7"
USERCTL="no"
ONBOOT="yes"
MASTER="bond1"
SLAVE="yes"
BOOTPROTO="none"

Create the bonding conf file to map the network device driver to the network configuration file:

# vi /etc/modprobe.d/bonding.conf
alias eth0 netxen_nic
alias eth4 be2net
alias eth5 be2net
alias eth6 be2net
alias eth7 be2net
alias bond0 bonding
alias bond1 bonding

After making all the necessary modifications, reboot the systems and verify the network bonding driver with the following commands:

# cat /proc/net/bonding/bond0
# cat /proc/net/bonding/bond1

Set up FC multipath

Device Mapper configuration

Device Mapper is a key component for a high-availability SAN configuration. Device Mapper Multipath enables a server to route disk I/O over multiple FC paths to provide redundant links to the shared SAN storage. Device Mapper runs a service called multipathd, which configures and manages the devices by presenting a single multipath device that provides link redundancy. By default, the disk I/Os are also load balanced across both paths to improve bandwidth.

The infrastructure initially came with a base installation of Red Hat 6.2. The solution requires several optional Linux server components to be installed; instead of installing these components manually, it was more efficient to re-install the operating system at this point. Prior to the re-install, the WWNs (World Wide Names) for the FC HBAs were identified and the FC switch zoning and 3PAR storage were configured. This provided a way to select the SAN storage and set up multipathing during the re-installation phase. See Table 7 for the Device Mapper configuration used in this solution. For manually configuring multipath, please refer to the Red Hat Enterprise Linux 6 DM Multipath configuration guide.

Table 7. Device Mapper configuration

Device Mapper name  Multipath disk files  Description
/dev/mapper/mpathc  sdc sdd               Serviceguard Lock Disk 256MB / RAID1
/dev/mapper/mpathe  sdg sdh               PROD_DATA_ASM0000 100GB / RAID5
/dev/mapper/mpathf  sdi sdj               PROD_DATA_ASM0001 100GB / RAID5
/dev/mapper/mpathg  sdk sdl               PROD_DATA_ASM0002 100GB / RAID5
/dev/mapper/mpathh  sdn sdm               PROD_DATA_ASM0003 100GB / RAID5
/dev/mapper/mpathi  sdo sdp               PROD_LOG_ASM0000 25GB / RAID1
/dev/mapper/mpathj  sdq sdr               PROD_LOG_ASM0001 25GB / RAID1
/dev/mapper/mpathk  sds sdt               PROD_LOG_ASM0002 25GB / RAID1
/dev/mapper/mpathl  sdu sdv               PROD_LOG_ASM0003 25GB / RAID1
/dev/mapper/mpathm  sdw sdae              TEST_DATA_ASM0000 110GB / RAID5
/dev/mapper/mpathn  sdx sdaf              TEST_DATA_ASM0001 110GB / RAID5
/dev/mapper/mpatho  sdy sdag              TEST_DATA_ASM0002 110GB / RAID5
/dev/mapper/mpathp  sdz sdah              TEST_DATA_ASM0003 110GB / RAID5
/dev/mapper/mpathq  sdaa sdai             TEST_LOG_ASM0000 30GB / RAID1
/dev/mapper/mpathr  sdab sdaj             TEST_LOG_ASM0001 30GB / RAID1
/dev/mapper/mpaths  sdac sdak             TEST_LOG_ASM0002 30GB / RAID1
/dev/mapper/mpatht  sdad sdal             TEST_LOG_ASM0003 30GB / RAID1

Set up the udev rules file

Oracle does not provide the ASMLib package for Linux 6. ASMLib is a support library that allows Automatic Storage Management to directly access disks. To resolve this, use the Linux udev facility to give Oracle permissions and ownership of the device mapper files. The udev tool is a user-space device manager that provides dynamic mapping of device filenames to hardware devices.

Create the udev permissions file to give the oracle user and dba group ownership of the device mapper files with a permission value of 660. To determine the mpath name, use the command multipath -ll to display the mpath-to-disk mappings. A system reboot will activate this configuration. This must be done prior to installing the Grid Infrastructure software.

Create the 12-dm-permissions.rules file to set the udev permissions:

# vi /etc/udev/rules.d/12-dm-permissions.rules
ENV{DM_NAME}=="mpathe", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathf", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathg", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathh", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathi", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathj", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathk", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathl", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathm", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathn", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpatho", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathp", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathq", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpathr", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpaths", OWNER:="oracle", GROUP:="dba", MODE:="660"
ENV{DM_NAME}=="mpatht", OWNER:="oracle", GROUP:="dba", MODE:="660"
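After the reboot, the ownership can be spot-checked; a sketch (the -L flag follows the /dev/mapper symlink to the underlying dm device, and the listing should show owner oracle, group dba, and mode brw-rw----):

# ls -lL /dev/mapper/mpathe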

Appendix B: Oracle GRID Infrastructure 11gR2 installation

Install Oracle Grid Infrastructure 11gR2 on both DL980 servers. Create the ASM disk groups for the PROD database instance on one server and the ASM disk groups for the TEST database instance on the other server. Oracle Grid Infrastructure only allows a single ASM disk group to be created during the installation; the other disk groups need to be created afterwards. Please refer to the Oracle Grid Infrastructure Installation Guide 11g Release 2 for Linux for further details.

Server1

Install the Oracle Grid Infrastructure software and create the ASM disk groups for the PROD database instance. The following answers were provided to the installation wizard:

- Select Installation Option: Configure Oracle Grid Infrastructure for a Standalone Server
- Disk Group Name: PROD_DATA
- Redundancy: External
- AU Size: 1 MB
- Disk Discovery Path: /dev/mapper/*
- Add Disks: /dev/mapper/mpathe, /dev/mapper/mpathf, /dev/mapper/mpathg, /dev/mapper/mpathh

Create the Oracle ASM environment file. Log in as the oracle user, change to the oracle home directory, and create the asm.env file.

$ cd
$ vi asm.env
export ORACLE_HOME=/apps/oracle/product/11.2.0/grid
export ORACLE_BASE=/apps/oracle
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=+ASM

Source the newly created asm.env file and create the PROD_LOG ASM disk group.

$ . ./asm.env
$ sqlplus "/ as sysasm"
SQL> CREATE DISKGROUP PROD_LOG EXTERNAL REDUNDANCY DISK
  '/dev/mapper/mpathi', '/dev/mapper/mpathj',
  '/dev/mapper/mpathk', '/dev/mapper/mpathl'
  ATTRIBUTE 'au_size' = '1M', 'compatible.asm' = '11.2', 'compatible.rdbms' = '11.2'
/
SQL> alter diskgroup PROD_LOG mount;
SQL> select name, total_mb from v$asm_diskgroup order by name;

NAME       TOTAL_MB
PROD_DATA
PROD_LOG

SQL> select name, path from v$asm_disk order by name;

NAME            PATH
PROD_DATA_0000  /dev/mapper/mpathe
PROD_DATA_0001  /dev/mapper/mpathf
PROD_DATA_0002  /dev/mapper/mpathg
PROD_DATA_0003  /dev/mapper/mpathh
PROD_LOG_0000   /dev/mapper/mpathi
PROD_LOG_0001   /dev/mapper/mpathj
PROD_LOG_0002   /dev/mapper/mpathk
PROD_LOG_0003   /dev/mapper/mpathl

Server2

Install the Oracle Grid Infrastructure software and create the ASM disk groups for the TEST database instance. The following answers were provided to the installation wizard:

- Select Installation Option: Configure Oracle Grid Infrastructure for a Standalone Server
- Disk Group Name: TEST_DATA
- Redundancy: External
- AU Size: 1 MB
- Disk Discovery Path: /dev/mapper/*
- Add Disks: /dev/mapper/mpathm, /dev/mapper/mpathn, /dev/mapper/mpatho, /dev/mapper/mpathp

Create the Oracle ASM environment file. Log in as the oracle user, change to the oracle home directory, and create the asm.env file.

$ cd
$ vi asm.env
export ORACLE_HOME=/apps/oracle/product/11.2.0/grid
export ORACLE_BASE=/apps/oracle
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=+ASM

Source the newly created asm.env file and create the TEST_LOG ASM disk group.

$ . ./asm.env
$ sqlplus "/ as sysasm"
SQL> CREATE DISKGROUP TEST_LOG EXTERNAL REDUNDANCY DISK
  '/dev/mapper/mpathq', '/dev/mapper/mpathr',
  '/dev/mapper/mpaths', '/dev/mapper/mpatht'
  ATTRIBUTE 'au_size' = '1M', 'compatible.asm' = '11.2', 'compatible.rdbms' = '11.2'
/
SQL> alter diskgroup TEST_LOG mount;
SQL> select name, total_mb from v$asm_diskgroup order by name;

NAME       TOTAL_MB
TEST_DATA
TEST_LOG

SQL> select name, path from v$asm_disk order by name;

NAME            PATH
TEST_DATA_0000  /dev/mapper/mpathm
TEST_DATA_0001  /dev/mapper/mpathn
TEST_DATA_0002  /dev/mapper/mpatho
TEST_DATA_0003  /dev/mapper/mpathp
TEST_LOG_0000   /dev/mapper/mpathq
TEST_LOG_0001   /dev/mapper/mpathr
TEST_LOG_0002   /dev/mapper/mpaths
TEST_LOG_0003   /dev/mapper/mpatht

Modify the ASM startup for Serviceguard for Linux

The ASM instances need to be modified so that Serviceguard for Linux can manage the mounting and dismounting of the ASM disk groups. Configure both ASM instances to start without mounting any disk groups. By default ASM creates an SPFILE on the ASM disk group defined during the installation. Create a PFILE from the SPFILE and store it in the $ORACLE_HOME/dbs directory. The ASM PFILE will be explicitly referenced by Serviceguard for Linux to start the ASM instances.

Both servers

Start up the ASM instance and create a PFILE in the ASM $ORACLE_HOME/dbs directory. Log in as the oracle user and source the asm.env file. Log into the ASM instance, create the PFILE, and then shut down the instance.

$ sqlplus / as sysasm
SQL> create pfile='/apps/oracle/product/11.2.0/grid/dbs/init+asm.ora' from spfile;
SQL> shutdown

Edit the PFILE to prevent ASM from mounting disk groups.

$ vi /apps/oracle/product/11.2.0/grid/dbs/init+asm.ora
+ASM.asm_diskgroups= #Manual Mount

Validate the ASM instances

It is recommended at this time to validate that both ASM instances work as planned. This limits the scope of troubleshooting to the Grid Infrastructure installation phase. After successfully completing all the following steps, continue to the next phase, installing the Oracle Enterprise database.

Verify that the ASM instances can start up without mounting any disk groups.

Both servers

Start up the ASM instances using the newly created PFILE and verify that no disk groups were mounted.

$ sqlplus / as sysasm
SQL> startup pfile=/apps/oracle/product/11.2.0/grid/dbs/init+asm.ora;
SQL> select name, state from v$asm_diskgroup order by name;

NAME       STATE
PROD_DATA
PROD_LOG
TEST_DATA
TEST_LOG

Server2

Manually mount and dismount all disk groups.

$ sqlplus / as sysasm
SQL> alter diskgroup PROD_DATA, PROD_LOG, TEST_DATA, TEST_LOG mount;
SQL> select name, state from v$asm_diskgroup order by name;

NAME       STATE
PROD_DATA  MOUNTED
PROD_LOG   MOUNTED
TEST_DATA  MOUNTED
TEST_LOG   MOUNTED

SQL> alter diskgroup PROD_DATA, PROD_LOG, TEST_DATA, TEST_LOG dismount;
SQL> select name, state from v$asm_diskgroup order by name;

NAME       STATE
PROD_DATA
PROD_LOG
TEST_DATA
TEST_LOG

Server1

Mount and dismount all disk groups.

$ sqlplus / as sysasm
SQL> alter diskgroup PROD_DATA, PROD_LOG, TEST_DATA, TEST_LOG mount;
SQL> select name, state from v$asm_diskgroup order by name;
NAME        STATE
PROD_DATA   MOUNTED
PROD_LOG    MOUNTED
TEST_DATA   MOUNTED
TEST_LOG    MOUNTED
SQL> alter diskgroup PROD_DATA, PROD_LOG, TEST_DATA, TEST_LOG dismount;
SQL> select name, state from v$asm_diskgroup order by name;
NAME        STATE
PROD_DATA
PROD_LOG
TEST_DATA
TEST_LOG

Mount the PROD instance disk groups to prepare for the next section, installing Oracle Enterprise database.

SQL> alter diskgroup PROD_DATA, PROD_LOG mount;
SQL> select name, state from v$asm_diskgroup order by name;
NAME        STATE
PROD_DATA   MOUNTED
PROD_LOG    MOUNTED
TEST_DATA
TEST_LOG

Server2

Mount the TEST instance disk groups to prepare for the next section, installing Oracle Enterprise database.

$ sqlplus / as sysasm
SQL> alter diskgroup TEST_DATA, TEST_LOG mount;
SQL> select name, state from v$asm_diskgroup order by name;
NAME        STATE
PROD_DATA
PROD_LOG
TEST_DATA   MOUNTED
TEST_LOG    MOUNTED

Appendix C: Oracle Enterprise Database Edition 11gR2 installation

On server1, install the Oracle Enterprise Database Edition 11gR2 software in two separate ORACLE_HOMEs. Create the temporary PROD database instance in one and install the software only for the TEST database instance. On server2, install the Oracle Enterprise Database Edition 11gR2 software in two separate ORACLE_HOMEs identical to server1. Create the temporary TEST database instance in one and install the software only for the PROD database instance. Copy the Oracle parameter files created during the initial setup of the PROD and TEST database instances over to their corresponding ORACLE_HOMEs on the standby server.

The temporary database instances will be used to validate that the databases can be manually moved between servers prior to installing Serviceguard for Linux. The final PROD and TEST databases will be created after installing the database workload software.

Reference the following Oracle documentation for installing Oracle database: Oracle Database Installation Guide 11g Release 2 (11.2) for Linux, and support note Oracle Database on UNIX AIX, HP-UX, Linux, Mac OS X, Solaris, Tru64 UNIX Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2).

The following steps outline the process for installing the Oracle database instances in our solution.

Server1

Install the Oracle software and create the PROD database. Invoke the runInstaller as the oracle user and select the following options.

Select Installation Option: Create and configure a database
Grid Installation Options: Single instance db installation
Select Install Type: Advanced install
Select DB Edition: Enterprise Edition
Oracle Base: /apps/oracle
Software Location: /apps/oracle/product/11.2.0/prod
Global db name: prod.aps.com
SID: prod
Specify Storage Options: Oracle Automatic Storage Management
Select ASM Disk Group: PROD_DATA

Install Oracle software with the software only option to create the ORACLE_HOME for the TEST database. Invoke the runInstaller as the oracle user and select the following options.

Select Installation Option: Software only
Grid Installation Options: Single instance db installation
Select DB Edition: Enterprise Edition
Oracle Base: /apps/oracle
Software Location: /apps/oracle/product/11.2.0/test

Server2

Install the Oracle software and create the TEST database. This is identical to installing the Oracle software and creating the PROD database on Server1 except for the following:

Software Location: /apps/oracle/product/11.2.0/test
Global db name: test.aps.com
SID: test
Select ASM Disk Group: TEST_DATA
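The paper drives these installs through the graphical wizard; a software-only install can also be scripted with a response file. The sketch below is an addition under stated assumptions: the parameter names follow the db_install.rsp template shipped with 11gR2 media, and the group names (oinstall, dba) and inventory path are illustrative, so verify all of them against your environment before use.

$ cat > /tmp/test_swonly.rsp <<'EOF'
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/apps/oracle/oraInventory
ORACLE_BASE=/apps/oracle
ORACLE_HOME=/apps/oracle/product/11.2.0/test
oracle.install.db.InstallEdition=EE
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=dba
SELECTED_LANGUAGES=en
DECLINE_SECURITY_UPDATES=true
EOF
$ ./runInstaller -silent -waitforcompletion -responseFile /tmp/test_swonly.rsp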

Install Oracle software with the software only option to create the ORACLE_HOME for the PROD database. This is identical to installing the Oracle software and creating the TEST ORACLE_HOME on Server1 except for the following:

Software Location: /apps/oracle/product/11.2.0/prod

Both servers

Copy the parameter files <ORACLE_HOME>/dbs/init<SID>.ora to the corresponding <ORACLE_HOME>/dbs located on the other server. Create the PROD and TEST environment files in the Oracle user's home directory on both servers.

$ vi /home/oracle/prod.env
export ORACLE_HOME=/apps/oracle/product/11.2.0/prod
export ORACLE_BASE=/apps/oracle
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=prod

$ vi /home/oracle/test.env
export ORACLE_HOME=/apps/oracle/product/11.2.0/test
export ORACLE_BASE=/apps/oracle
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=test

Validate the database instances

It is recommended at this time to validate that both database instances work as planned. This limits the scope of troubleshooting to the Oracle Enterprise database installation phase. After successfully completing all of the following steps, continue to Appendix D: HP Serviceguard for Linux A installation.

The following steps were used to verify that the PROD and TEST database instances will run on either server.

Server1

Log in as the oracle user and source the newly created prod.env file.

$ cd
$ . ./prod.env

Log in to the PROD database instance, verify that it is up, and then shut down the database.

$ sqlplus / as sysdba
SQL> show sga
SQL> shutdown

Source the ASM environment file and dismount the PROD_DATA and PROD_LOG ASM disk groups.

$ . ./asm.env
$ sqlplus / as sysasm
SQL> alter diskgroup PROD_DATA, PROD_LOG dismount;
SQL> select name, state from v$asm_diskgroup order by name;
NAME        STATE
PROD_DATA
PROD_LOG
TEST_DATA
TEST_LOG

Server2

Log in as the oracle user and source the newly created test.env file.

$ . ./test.env

Log in to the TEST database instance, verify that it is up, and then shut down the database.

$ sqlplus / as sysdba
SQL> show sga
SQL> shutdown
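One way to script the parameter-file copy is shown below. This is an illustrative addition, assuming passwordless ssh as the oracle user between the nodes and the hostnames used elsewhere in this paper.

# On server1, push the PROD PFILE to the standby ORACLE_HOME on server2:
$ scp /apps/oracle/product/11.2.0/prod/dbs/initprod.ora \
      dl980-2:/apps/oracle/product/11.2.0/prod/dbs/
# On server2, push the TEST PFILE back to server1:
$ scp /apps/oracle/product/11.2.0/test/dbs/inittest.ora \
      dl980-1:/apps/oracle/product/11.2.0/test/dbs/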

Source the ASM environment file and dismount the TEST_DATA and TEST_LOG ASM disk groups.

$ . ./asm.env
$ sqlplus / as sysasm
SQL> alter diskgroup TEST_DATA, TEST_LOG dismount;
SQL> select name, state from v$asm_diskgroup order by name;
NAME        STATE
PROD_DATA
PROD_LOG
TEST_DATA
TEST_LOG

Source the ASM environment file and mount the PROD database disk groups.

$ . ./asm.env
$ sqlplus / as sysasm
SQL> alter diskgroup PROD_DATA, PROD_LOG mount;
SQL> select name, state from v$asm_diskgroup order by name;
NAME        STATE
PROD_DATA   MOUNTED
PROD_LOG    MOUNTED
TEST_DATA
TEST_LOG

Source the prod.env file, start up the PROD database instance, and then shut down the database.

$ . ./prod.env
$ sqlplus / as sysdba
SQL> startup
SQL> shutdown

Source the ASM environment file, dismount the PROD database disk groups, and then shut down the instance.

$ . ./asm.env
$ sqlplus / as sysasm
SQL> alter diskgroup PROD_DATA, PROD_LOG dismount;
SQL> select name, state from v$asm_diskgroup order by name;
NAME        STATE
PROD_DATA
PROD_LOG
TEST_DATA
TEST_LOG
SQL> shutdown

Server1

Source the ASM environment file and mount the TEST database disk groups.

$ . ./asm.env
$ sqlplus / as sysasm
SQL> alter diskgroup TEST_DATA, TEST_LOG mount;
SQL> select name, state from v$asm_diskgroup order by name;
NAME        STATE
PROD_DATA
PROD_LOG
TEST_DATA   MOUNTED
TEST_LOG    MOUNTED

Source the test.env file, start up the TEST database instance, and then shut down the database.

$ . ./test.env
$ sqlplus / as sysdba
SQL> startup
SQL> shutdown
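The sequence above is exactly what Serviceguard will later automate: dismount the disk groups on one node, mount them on the other, and start the instance there. A condensed sketch of one PROD relocation follows; it is not from the paper and assumes bash on both nodes, passwordless ssh as the oracle user, and the env files created earlier.

#!/bin/bash
# Hypothetical helper: move the PROD database from the local node to $TARGET.
TARGET=dl980-2    # assumed peer hostname

# 1. Stop the instance and release its disk groups locally.
. ~/prod.env && sqlplus -S / as sysdba <<< "shutdown immediate"
. ~/asm.env  && sqlplus -S "/ as sysasm" <<< "alter diskgroup PROD_DATA, PROD_LOG dismount;"

# 2. Mount the disk groups and start the instance on the peer node.
ssh "$TARGET" '. ~/asm.env  && sqlplus -S "/ as sysasm" <<< "alter diskgroup PROD_DATA, PROD_LOG mount;"'
ssh "$TARGET" '. ~/prod.env && sqlplus -S / as sysdba <<< "startup"'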

Source the ASM environment file, dismount the TEST database disk groups, and then shut down the ASM instance.

$ . ./asm.env
$ sqlplus / as sysasm
SQL> alter diskgroup TEST_DATA, TEST_LOG dismount;
SQL> shutdown

Appendix D: HP Serviceguard for Linux A installation

Install Serviceguard for Linux

Server1

Mount the ISO file and run the installation script.

# mkdir /dvd
# mount -o loop /stage/sglx1120/sglx1120_bb iso /dvd
# cd /
# ./dvd/cmeasyinstall

The Serviceguard for Linux installation wizard first checks for all the prerequisite files and packages required for installation. If any are missing, install the missing files and re-run the installation script. After a successful installation, the logfile can be found at /tmp/cmeasyinstall.log.

Append the root user's profile to include the Serviceguard for Linux environment.

# vi .bash_profile
PATH=$PATH:/usr/local/cmcluster/bin
export PATH
. /etc/cmcluster.conf

Modify the MAN pages.

# vi /etc/man.config
MANPATH /usr/local/cmcluster/doc/man

Create the cmclnodelist file for root-level access between nodes.

# vi $SGCONF/cmclnodelist
dl980-1 root #PROD_CLUSTER, node1
dl980-2 root #PROD_CLUSTER, node2

Update the hosts file. It is recommended to set up the hosts file for the cluster as a backup for the DNS server.

# vi /etc/hosts
dl980-1.aps.com dl980-1
dl980-2.aps.com dl980-2
prod.aps.com prod
test.aps.com test

Turn off IPTABLES. By default, IPTABLES blocks access to the Serviceguard Manager for Linux. Stop IPTABLES prior to accessing the Serviceguard Manager for Linux.

# service iptables save
# service iptables stop
# chkconfig iptables off

Install HP Serviceguard Oracle Toolkit for Linux

Mount the ISO file and install the toolkit RPM.

# mount -o loop /stage/sglx1120/oracle_toolkit_bb iso /dvd
# cd /dvd/redhat/oracletoolkit/x86_64
# rpm -ivh serviceguard-oracle-toolkit-a redhat.noarch.rpm
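After the installer finishes, a quick sanity check that the Serviceguard environment is in place can save time later. This is an added sketch; the package-name pattern in the rpm query is an assumption and may differ by release.

# rpm -qa | grep -i serviceguard       # confirm the RPMs landed (names vary by release)
# . /etc/cmcluster.conf
# echo $SGCONF                         # should point at the cluster conf directory
# ls $SGCONF/cmclnodelist              # node access file created above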

Appendix E: HP Serviceguard for Linux cluster setup

In the next steps, you will create the cluster, define the node membership, and configure the cluster heartbeat and cluster lock LUN device.

From an Internet browser, invoke the HP System Management Homepage and log in as the root user. Go to the Tools tab. Click the Serviceguard Manager link to launch the Serviceguard Manager for Linux. Click the Create Cluster button on the right.

Enter the Cluster Name, for example Prod_cluster, and enter checkmarks in the boxes for both nodes, for example: dl980-1, dl980-2.

Go to the Network tab. Enter the heartbeat subnet in the Subnets section (Type: Heartbeat), then enter the node network interfaces and addresses in the Select Subnet Configuration section, for example:

Node     Network  Address
dl980-1  bond
dl980-2  bond

Go to the Lock tab. For the Cluster Lock Type, select Lock LUN, and enter the Lock LUN Path for each node, for example:

dl980-1  /dev/mapper/mpathcp1
dl980-2  /dev/mapper/mpathcp1

Select Finish.

Note
When using Device Mapper Multipath, the path to the cluster Lock LUN, for example /dev/mapper/mpathcp1, must be the same on each node.

Select Check Configuration. Look for any errors and resolve them before going on to Appendix F: HP Serviceguard for Linux ASM package setup.

Note
You may get a warning about the default NODE_TIMEOUT value. This warning can be ignored here, but refer to the documentation when finalizing your cluster.

Select Apply Configuration and select OK in the pop-up dialog box.

Note
During the creation of the cluster, the Serviceguard for Linux files will be copied over to Server2. You must modify Server2's root user's profile, MAN pages, cmclnodelist, hosts, and IPTABLES settings to match Server1.
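The same cluster can be created from the command line instead of Serviceguard Manager. The following is a sketch using standard Serviceguard commands; the conf file path is illustrative. cmquerycl generates a template, which is then edited to set the cluster name, heartbeat subnet, and lock LUN before being validated and applied.

# cmquerycl -v -C $SGCONF/cluster.conf -n dl980-1 -n dl980-2
# vi $SGCONF/cluster.conf      # set CLUSTER_NAME, heartbeat, and CLUSTER_LOCK_LUN
# cmcheckconf -v -C $SGCONF/cluster.conf
# cmapplyconf -v -C $SGCONF/cluster.conf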

To verify the cluster configuration, run the following options from the Administration menu of the HP Serviceguard Manager for Linux Summary page to test that each node can run the cluster in the event that the other node fails:
a. Administration -> Run Cluster
b. Administration -> Halt Node
c. Administration -> Run Node

See Figure 8 for the Serviceguard Manager for Linux screens showing the cluster configuration.

Figure 8. Serviceguard Manager for Linux screens showing the cluster configuration
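For reference, the equivalent verification can be run from a shell using standard Serviceguard commands, though this paper performs the steps through the GUI:

# cmruncl -v              # start the cluster on all configured nodes
# cmviewcl -v             # confirm both nodes are up
# cmhaltnode dl980-2      # halt cluster services on one node
# cmrunnode dl980-2       # bring the node back into the cluster
# cmviewcl -v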

Appendix F: HP Serviceguard for Linux ASM package setup

In this solution, the following steps were used to create the HP Serviceguard for Linux ASM multinode package to monitor and manage the Oracle ASM instances on both nodes.

Create the package configuration file.

# mkdir $SGCONF/asm_pkg
# cd $SGCONF/asm_pkg
# cmmakepkg -m sg/multi_node -m tkit/oracle/oracle asm_pkg.conf

Edit the multinode package configuration file prior to building the package.

# vi asm_pkg.conf
package_name                                asm_pkg
package_type                                multi_node
node_name                                   *
auto_run                                    yes
node_fail_fast_enabled                      no
tkit/oracle/oracle/tkit_dir                 ${SGCONF}/asm_pkg
tkit/oracle/oracle/instance_type            ASM
tkit/oracle/oracle/oracle_admin             oracle
tkit/oracle/oracle/asm                      yes
tkit/oracle/oracle/asm_diskgroup            PROD_DATA
tkit/oracle/oracle/asm_diskgroup            PROD_LOG
tkit/oracle/oracle/asm_diskgroup            TEST_DATA
tkit/oracle/oracle/asm_diskgroup            TEST_LOG
tkit/oracle/oracle/asm_home                 /apps/oracle/product/11.2.0/grid
tkit/oracle/oracle/asm_user                 oracle
tkit/oracle/oracle/asm_sid                  +ASM
tkit/oracle/oracle/listener                 no
tkit/oracle/oracle/pfile                    ${ASM_HOME}/dbs/init${ASM_SID}.ora
tkit/oracle/oracle/monitor_processes        asm_pmon_${ASM_SID}
tkit/oracle/oracle/monitor_processes        asm_dbw0_${ASM_SID}
tkit/oracle/oracle/monitor_processes        asm_ckpt_${ASM_SID}
tkit/oracle/oracle/monitor_processes        asm_smon_${ASM_SID}
tkit/oracle/oracle/monitor_processes        asm_lgwr_${ASM_SID}
tkit/oracle/oracle/monitor_processes        asm_gmon_${ASM_SID}
tkit/oracle/oracle/monitor_processes        asm_rbal_${ASM_SID}
tkit/oracle/oracle/maintenance_flag         yes
tkit/oracle/oracle/monitor_interval         30
tkit/oracle/oracle/time_out                 30
tkit/oracle/oracle/parent_environment       no
tkit/oracle/oracle/cleanup_before_startup   no
tkit/oracle/oracle/user_shutdown_mode       abort
tkit/oracle/oracle/kill_asm_foregrounds     yes
service_name                                oracle_asm_service
service_cmd                                 "$SGCONF/scripts/tkit/oracle/tkit_module.sh oracle_monitor"
service_restart                             none
service_fail_fast_enabled                   no
service_halt_timeout                        300
service_name                                oracle_asm_hang_service
service_cmd                                 "$SGCONF/scripts/tkit/oracle/tkit_module.sh oracle_hang_monitor 30 failure"
service_restart                             none
service_fail_fast_enabled                   no
service_halt_timeout                        300

Build the ASM multinode package.

# cmapplyconf -P asm_pkg.conf
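Although not shown in the paper, cmcheckconf can validate the package file before it is applied, which catches attribute typos early:

# cmcheckconf -v -P asm_pkg.conf     # validate the package file before cmapplyconf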

Start the ASM multinode package to complete the configuration: From your browser, on the HP Serviceguard Manager Summary page, select the ASM package. From the Administration menu, select Run Package. See Figure 9 for the Serviceguard Manager for Linux screens displaying the ASM package configuration.

Note
You may need to start the cluster if it is not already running. To run the cluster, go to the Administration menu and select Run Cluster.

Validate the ASM package. It is recommended to stop and start the ASM package to verify that it works as planned. From your browser, on the HP Serviceguard Manager Summary page, select the ASM package. From the Administration menu, select Halt Package and then Start Package.

Figure 9. Configuration of ASM package on Serviceguard Manager for Linux screens
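The equivalent package operations are available from the command line; these are standard Serviceguard commands, shown here as an aside to the GUI steps above:

# cmrunpkg asm_pkg                        # start the multinode package
# cmviewcl -v | grep -A4 asm_pkg          # verify the package and its services
# cmhaltpkg asm_pkg && cmrunpkg asm_pkg   # stop/start validation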

Appendix G: HP Serviceguard for Linux PROD package setup

In this solution, the following steps were used to create the HP Serviceguard for Linux PROD failover package to monitor and manage the Oracle PROD database instance within the cluster.

Set up and configure the database failover package.

# mkdir $SGCONF/prod_pkg
# cd $SGCONF/prod_pkg
# cmmakepkg -m tkit/oracle/oracle prod_pkg.conf

The prod_pkg.conf file must be edited before it can be used to build the PROD package.

# vi prod_pkg.conf
package_name                                prod_pkg
package_type                                failover
node_name                                   *
auto_run                                    yes
failback_policy                             manual
tkit/oracle/oracle/tkit_dir                 ${SGCONF}/prod_pkg
tkit/oracle/oracle/instance_type            database
tkit/oracle/oracle/oracle_home              /apps/oracle/product/11.2.0/prod
tkit/oracle/oracle/oracle_admin             oracle
tkit/oracle/oracle/sid_name                 prod
tkit/oracle/oracle/start_mode               open
tkit/oracle/oracle/asm                      yes
tkit/oracle/oracle/asm_diskgroup            PROD_DATA
tkit/oracle/oracle/asm_diskgroup            PROD_LOG
tkit/oracle/oracle/asm_home                 /apps/oracle/product/11.2.0/grid
tkit/oracle/oracle/asm_user                 oracle
tkit/oracle/oracle/asm_sid                  +ASM
tkit/oracle/oracle/listener                 yes
tkit/oracle/oracle/listener_name            LISTENER
tkit/oracle/oracle/listener_restart         2
tkit/oracle/oracle/pfile                    ${ORACLE_HOME}/dbs/init${SID_NAME}.ora
tkit/oracle/oracle/monitor_processes        ora_pmon_${SID_NAME}
tkit/oracle/oracle/monitor_processes        ora_dbw0_${SID_NAME}
tkit/oracle/oracle/monitor_processes        ora_ckpt_${SID_NAME}
tkit/oracle/oracle/monitor_processes        ora_smon_${SID_NAME}
tkit/oracle/oracle/monitor_processes        ora_lgwr_${SID_NAME}
tkit/oracle/oracle/monitor_processes        ora_reco_${SID_NAME}
tkit/oracle/oracle/monitor_processes        ora_mman_${SID_NAME}
tkit/oracle/oracle/monitor_processes        ora_psp0_${SID_NAME}
tkit/oracle/oracle/monitor_processes        ora_dbrm_${SID_NAME}
tkit/oracle/oracle/monitor_processes        ora_vktm_${SID_NAME}
tkit/oracle/oracle/monitor_processes        ora_rbal_${SID_NAME}
tkit/oracle/oracle/monitor_processes        ora_asmb_${SID_NAME}
tkit/oracle/oracle/maintenance_flag         yes
tkit/oracle/oracle/monitor_interval         30
tkit/oracle/oracle/time_out                 30
tkit/oracle/oracle/parent_environment       no
tkit/oracle/oracle/cleanup_before_startup   no
tkit/oracle/oracle/user_shutdown_mode       abort
tkit/oracle/oracle/kill_asm_foregrounds     yes
tkit/oracle/oracle/db_service               all
service_name                                oracle_proddb_service
service_cmd                                 "$SGCONF/scripts/tkit/oracle/tkit_module.sh oracle_monitor"
service_restart                             none
service_fail_fast_enabled                   no
service_halt_timeout                        300
service_name                                oracle_proddb_listener_service
service_cmd                                 "$SGCONF/scripts/tkit/oracle/tkit_module.sh oracle_monitor_listener"
service_restart                             none
service_fail_fast_enabled                   no
service_halt_timeout                        300
service_name                                oracle_proddb_hang_service
service_cmd                                 "$SGCONF/scripts/tkit/oracle/tkit_module.sh oracle_hang_monitor 30 failure"
service_restart                             none
service_fail_fast_enabled                   no
service_halt_timeout                        300
priority                                    10
dependency_name                             asm_dep
dependency_condition                        asm_pkg = up
dependency_location                         same_node
dependency_name                             prod_dep
dependency_condition                        test_pkg = down
dependency_location                         same_node
monitored_subnet
monitored_subnet_access                     full
ip_subnet
ip_subnet_node                              dl980-1
ip_subnet_node                              dl980-2
ip_address

Build the PROD package.

# cmapplyconf -P prod_pkg.conf

Start the PROD package to complete the configuration. From your browser, on the HP Serviceguard Manager Summary page, select the PROD package. From the Administration menu, select Run Package.

Note
The PROD package depends on the ASM multinode package, which must be running on a node before the PROD package can be started on that node.

See Figure 10 to view the Serviceguard for Linux PROD package configuration.

Validate the PROD package. It is recommended to run the PROD package on both nodes to verify that it works as planned. From your browser, on the HP Serviceguard Manager Summary page, select the PROD package. From the Administration menu, select Move Package and then select Node2.

Halt the PROD package to prepare for building and verifying the TEST package. From your browser, on the HP Serviceguard Manager Summary page, select the PROD package. From the Administration menu, select Halt Package.
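From the command line, the same validation can be scripted using standard Serviceguard commands, with the node and package names from this configuration:

# cmrunpkg -n dl980-1 prod_pkg     # start PROD on node 1
# cmhaltpkg prod_pkg               # halt it...
# cmrunpkg -n dl980-2 prod_pkg     # ...and restart it on node 2 (the GUI "Move Package")
# cmhaltpkg prod_pkg               # leave it halted before building the TEST package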

Figure 10. Configuration of the PROD package shown in Serviceguard Manager for Linux screens

Appendix H: HP Serviceguard for Linux TEST package setup

In this solution, the following steps were used to create the HP Serviceguard for Linux TEST failover package to monitor and manage the Oracle TEST database instance within the cluster. The process is the same as creating the PROD package; only the unique steps and settings are listed below.

# mkdir $SGCONF/test_pkg
# cd $SGCONF/test_pkg
# cmmakepkg -m tkit/oracle/oracle test_pkg.conf

The test_pkg.conf file must be edited before it can be used to build the TEST package.

# vi test_pkg.conf
package_name                       test_pkg
auto_run                           no
failback_policy                    manual
failover_policy                    manual
tkit/oracle/oracle/tkit_dir        ${SGCONF}/test_pkg
tkit/oracle/oracle/oracle_home     /apps/oracle/product/11.2.0/test
tkit/oracle/oracle/sid_name        test
tkit/oracle/oracle/asm_diskgroup   TEST_DATA
tkit/oracle/oracle/asm_diskgroup   TEST_LOG
service_name                       oracle_testdb_service
service_name                       oracle_testdb_listener_service
service_name                       oracle_testdb_hang_service
priority                           no_priority
dependency_name                    test_dep
dependency_condition               prod_pkg = down
ip_address

Build the TEST package.

# cmapplyconf -P test_pkg.conf

Start the TEST package to complete the configuration. From your browser, on the HP Serviceguard Manager Summary page, select the TEST package. From the Administration menu, select Run Package.

Note
The TEST package depends on the ASM multinode package, which must be running on a node before the TEST package can be started there. It also requires that the PROD package is not running on that node.
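Because test_pkg is configured with auto_run no, it will not start when the cluster comes up. The sketch below shows one way to start and check it from the CLI; cmmodpkg and cmviewcl are standard Serviceguard commands, but the exact sequence is an illustrative addition:

# cmrunpkg -n dl980-2 test_pkg            # start TEST manually on its preferred node
# cmmodpkg -e test_pkg                    # optionally re-enable package switching
# cmviewcl -v | grep -A6 test_pkg         # confirm services and dependencies are met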

Figure 11 shows the Serviceguard for Linux TEST package configuration.

Validate the TEST package. It is recommended to run the TEST package on both nodes to verify that it works as planned. From your browser, on the HP Serviceguard Manager Summary page, select the TEST package. From the Administration menu, select Move Package and then select Node2.

Figure 11. Configuration of the TEST package on Serviceguard Manager for Linux screens

Appendix I: Bill of Materials

Table 8 shows the equipment and components used in the HP ProLiant DL980 Universal Database Solution. Some quantities and part numbers did not survive transcription and are left blank.

Table 8. Bill of materials for the HP ProLiant DL980 Universal Database Solution

Quantity  Product   Description

HP rack and accessories
1         AF002A    HP Universal Rack G2 Shock Rack
1         AF090A    HP 10K Rack Airflow Optimization Kit
1         AF054A    HP G2 Sidepanel Kit
          B24       HP 16A High Voltage Modular PDU
2         AF593A    HP 3.6m C19 Nema L6-20P NA/JP Pwr Crd

HP DL980 G7
2         AM451A    HP ProLiant DL980 G7 CTO system-e7 proc
2         AM450A    HP DL980 CPU Installation Assembly for E
          L21       HP DL980 G7 E FIO 4-processor Kit
          B21       HP DL980 G7 E processor Kit
16        A0R60A    HP DL980G7 (E7) Memory Cartridge
256       A0R58A    HP DL980 8GB 2Rx4 PC3L-10600R-9 Kit
2         AM434A    HP DL980 LP PCIe I/O Expansion Module
4         AJ764A    HP 82Q 8Gb Dual Port PCI-e FC HBA
          B21       HP NC552SFP 10GbE 2P Svr Adapter
          B21       HP 10Gb Short Range SFP Option
          B21       HP DL GB 6G SAS 10K 2.5 DP ENT HDD
          B21       HP Slim 12.7mm SATA DVDRW Optical Kit
          B21       HP 1G Flash Backed Cache
8         AM470A    HP DL W CS Plat Ht Plg Pwr Supply
          B21       HP DL580/DL585/DL980 G7 Power Cable Kit

HP 3PAR StoreServ
          QR585C    HP 3PAR StoreServ single phase Rack Config Base
2         QR638C    HP 3PAR StoreServ GHz Controller Node
12        QR591A    HP 3PAR StoreServ Port 8Gb/s Fibre Channel Host/Disk Adapter
2         QR592C    HP 3PAR StoreServ disk Drive Chassis
2         QR598A    HP 3PAR StoreServ Rackmount Kit for 40-disk Drive Chassis
20        QW902A    HP 3PAR StoreServ x300GB 6Gb/s SFF 15K SAS Drive Mag
4         QL266B    HP 3PAR 10M 50/125 (LC-LC) Fiber Cable

Quantity  Product   Description

Fibre Channel Switches
2         AW575B    HP SN6000 Stackable 8Gb 24-port FC Switch
6         QK734A    5m PremierFlex OM4 LC/LC Multi-Mode Optical Cable
40        AJ718A    HP 8 Gbps Short Wave FC SFP+

Ethernet Switches
2         JC100A    HP G Switch
2         JD362A    HP 5800/A W AC Power Supply
2         JC092B    HP port 10GbE SFP+ Module
4         BK837A    HP 0.5 m PremierFlex OM3+ LC/LC Optical Cable

For more information

HP ProLiant DL980 Universal Database Solution website: hp.com/go/udb
HP Serviceguard for Linux website: hp.com/go/sglx
HP Serviceguard for Linux Deployment Guide
HP Serviceguard Disaster Recovery Solutions Brochure
HP ProLiant DL980 G7 website: hp.com/go/dl980
ProLiant DL980 G7 server QuickSpecs
HP ProLiant DL980 G7 server with HP PREMA Architecture
HP 3PAR StoreServ website: hp.com/go/3par
HP 3PAR StoreServ storage brochure
HP 3PAR Thin Technologies Solution Brief
HP 5800 Switch Series
HP SN6000 series FC switches
HP and Oracle alliance: hp.com/go/oracle
HP portal for information on Oracle solutions: hporacle.com
Open Source and Linux from HP: hp.com/go/linux
Oracle 11gR2 Grid and Database documentation: oracle.com/pls/db112/homepage

To help us improve our documents, please provide feedback at hp.com/solutions/feedback.

Sign up for updates: hp.com/go/getupdated

Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.

4AA4-6631ENW, May 2013


More information