Oracle Database 11g Release 2 Performance: Protocol Comparison Using Clustered Data ONTAP 8.1.1
Technical Report
Saad Jafri, NetApp
November 2012
TR-4109

Abstract
This technical report compares the performance of Oracle 11g R2 Real Application Clusters (RAC) under an OLTP workload running in NetApp clustered Data ONTAP and 7-Mode environments over Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), software iSCSI, and Oracle Direct NFS (DNFS). It demonstrates that clustered Data ONTAP environments using these protocols are ready to provide a high-performance platform for Oracle Databases.
TABLE OF CONTENTS
1 Executive Summary
  1.1 Introduction
  1.2 Audience
2 Test Configurations
3 Summary of Test Results
4 Results and Analysis
  4.1 OLTP Workload
  4.2 Detailed Test Results
5 Configuration Details
  5.1 Generic Configuration
  5.2 Oracle RAC Configuration Using Data ONTAP Operating in 7-Mode
  5.3 Oracle RAC Configuration Using Clustered Data ONTAP
6 Conclusion
Appendixes
  Best Practice Summary for Data ONTAP Operating in 7-Mode
  Hardware
  Storage Layouts for All Testing Configurations
  Oracle Initialization Parameters for RAC Configuration Testing
  Linux Kernel Parameters for RAC Configuration Testing
  Other Linux OS Settings
  References

LIST OF TABLES
Table 1) Hardware for the Oracle RAC configuration using Data ONTAP operating in 7-Mode with either FC over 8Gb or FCoE, software iSCSI, and DNFS over 10GbE
Table 2) Hardware for the Oracle RAC configuration using clustered Data ONTAP with either FC over 8Gb or FCoE, software iSCSI, and DNFS over 10GbE
Table 3) Best practice summary for Data ONTAP operating in 7-Mode
Table 4) Database server hardware specifications for Oracle RAC testing
Table 5) NetApp FAS6240 storage system specifications
Table 6) Oracle initialization parameters for RAC configuration testing
Table 7) ASM instance initialization parameters
Table 8) Linux nondefault kernel parameters for RAC configuration testing
Table 9) Linux shell limits for Oracle

LIST OF FIGURES
Figure 1) Protocol comparison using clustered Data ONTAP
Figure 2) Oracle RAC configuration using Data ONTAP operating in 7-Mode with 8Gb FC network connections for FC tests
Figure 3) Oracle RAC configuration using Data ONTAP operating in 7-Mode with 10GbE network connections for FCoE, software iSCSI, and Oracle DNFS tests
Figure 4) Oracle RAC configuration using clustered Data ONTAP with 8Gb FC network connections for FC tests
Figure 5) Oracle RAC configuration using clustered Data ONTAP with 10GbE network connections for FCoE, software iSCSI, and Oracle DNFS tests
Figure 6) Volume layout for Oracle RAC testing using FC, FCoE, and software iSCSI and Data ONTAP operating in 7-Mode
Figure 7) Volume layout for Oracle RAC testing using DNFS and Data ONTAP operating in 7-Mode
Figure 8) Volume layout for Oracle RAC testing using FC and clustered Data ONTAP
Figure 9) Volume layout for Oracle RAC testing using FCoE, software iSCSI, and clustered Data ONTAP
Figure 10) Volume layout for Oracle RAC testing using DNFS and clustered Data ONTAP
1 Executive Summary

1.1 Introduction
NetApp provides up-to-date documentation describing its products to help its customers select the most appropriate solutions for their IT infrastructure. This technical report contains performance information and tuning recommendations for Oracle Database 11g Release 2 (Oracle 11g R2) running in a clustered Data ONTAP environment. NetApp customers can use this updated information to make informed decisions about the optimal storage protocol for their Oracle Database deployments.
The test configuration environments described in this report consist of the following elements:
- Oracle 11g R2 RAC on Red Hat Enterprise Linux 5 U6
- NetApp storage systems running clustered Data ONTAP and Data ONTAP operating in 7-Mode
The protocols used in this technical report are Fibre Channel (FC) using 8Gb connections; Fibre Channel over Ethernet (FCoE); software iSCSI; and Oracle Direct Network File System (DNFS) running over 10GbE connections. DNFS is provided by Oracle 11g R2. It runs as part of the Oracle Database and is optimized specifically for database workloads; therefore, DNFS can be used only to access Oracle Database files.
This report demonstrates that NetApp storage systems provide a solid environment for running the Oracle RAC database regardless of the protocol used. It also demonstrates that a clustered Data ONTAP environment delivers performance highly comparable to what NetApp customers expect from 7-Mode, with all the additional benefits of a clustered storage environment.

1.2 Audience
This report is intended for the following audiences:
- NetApp customers and employees who are investigating deploying their Oracle Databases on NetApp storage running in clustered Data ONTAP environments
- NetApp customers and employees who are investigating the appropriate storage protocol for deploying their Oracle Databases on NetApp storage running in clustered Data ONTAP environments
This report can also be used as a performance-tuning reference.

2 Test Configurations
These tests focused on measuring and comparing the performance of Oracle 11g R2 RAC databases driven by an online transaction processing (OLTP) workload and connected to NetApp FAS6240 storage controllers running either Data ONTAP operating in 7-Mode or clustered Data ONTAP. All test configurations used one of the following protocols between the database servers and the FAS6240 storage controllers: FC using 8Gb connections; FCoE using 10GbE connections; software iSCSI using 10GbE connections; or Oracle DNFS using 10GbE connections.
Oracle DNFS is an optimized NFS client that provides faster and more scalable access to NFS storage located on network-attached storage (NAS) devices and is accessible over TCP/IP. Because DNFS is built directly into the database kernel, it provides better performance than the NFS driver in the host operating system by bypassing the OS and generating only the requests required to complete the tasks at hand.
The FCoE, software iSCSI, and Oracle DNFS configurations used QLogic 10GbE converged network adapters (CNAs) in the database servers connected to NetApp 10GbE unified target adapters
(UTAs) installed in the FAS6240 storage controllers. The FC configurations used the same QLogic 10GbE CNAs in the database servers connected to onboard 8Gb FC ports in the FAS6240 controllers.
The tests evaluated the following configurations:
- FC using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240 storage controllers configured in 7-Mode
- FC using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240 storage controllers configured in clustered Data ONTAP
- FCoE using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240 storage controllers configured in 7-Mode
- FCoE using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240 storage controllers configured in clustered Data ONTAP
- Software iSCSI using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240 storage controllers configured in 7-Mode
- Software iSCSI using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240 storage controllers configured in clustered Data ONTAP
- DNFS using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240 storage controllers configured in 7-Mode
- DNFS using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240 storage controllers configured in clustered Data ONTAP
The NetApp Data ONTAP operating system was used in conjunction with 64-bit aggregates to demonstrate the performance capabilities offered to NetApp customers. The tests that used 10GbE network connections were conducted using jumbo frames.
An OLTP workload was used for the performance tests. This workload simulated 750 users interacting with 16,000 product warehouses in an order-processing application. The client processes for the OLTP application were executed on a separate application server (client-server mode). The number of order entry transactions (OETs) completed per minute was used as the primary metric to measure application throughput. The input/output (I/O) mix for the OLTP workload was approximately 60% reads and 40% writes.

3 Summary of Test Results
As clustered Data ONTAP configurations become available, the Oracle user base is likely to have questions about the performance implications of moving to clustered Data ONTAP. The test results in this report help answer the following questions about transitioning Oracle Databases from 7-Mode to clustered Data ONTAP.
What is the performance impact of migrating a four-node RAC implementation from 7-Mode to clustered Data ONTAP, regardless of the protocol used?
Answer: After installing and configuring clustered Data ONTAP 8.1.1 on the same FAS6240A that previously supported the four-node RAC implementation in 7-Mode, performance in clustered Data ONTAP was found to be comparable to performance in 7-Mode.
What is the performance impact when comparing FC using 8Gb connections for migrating a four-node RAC implementation from 7-Mode to clustered Data ONTAP?
Answer: When running the four-node RAC implementation using FC and 8Gb connections on clustered Data ONTAP, the performance in terms of OETs processed was within 6% of 7-Mode.
What is the performance impact when comparing FCoE using 10GbE connections for migrating a four-node RAC implementation from 7-Mode to clustered Data ONTAP?
Answer: When running the four-node RAC implementation using FCoE and 10GbE connections on clustered Data ONTAP, the performance in terms of OETs processed was within 7% of 7-Mode.
What is the performance impact when comparing software iSCSI using 10GbE connections for migrating a four-node RAC implementation from 7-Mode to clustered Data ONTAP?
Answer: When running the four-node RAC implementation using software iSCSI and 10GbE connections on clustered Data ONTAP 8.1.1, the performance in terms of OETs processed was within 1% of 7-Mode.
What is the performance impact when comparing Oracle DNFS using 10GbE connections for migrating a four-node RAC implementation from 7-Mode to clustered Data ONTAP?
Answer: When running the four-node RAC implementation using Oracle DNFS and 10GbE connections on clustered Data ONTAP 8.1.1, the performance in terms of OETs processed was within 3% of 7-Mode.
What is the performance impact of the FC, FCoE, software iSCSI, and Oracle DNFS protocols when running a four-node RAC implementation in clustered Data ONTAP?
Answer: When running the four-node RAC implementation on clustered Data ONTAP 8.1.1, the performance in terms of OETs processed was within 1% between FC and FCoE, and Oracle DNFS was within 3% of FC and FCoE. Software iSCSI performance in terms of OETs processed was within 9% of FC and FCoE, and within 11% of Oracle DNFS.
To summarize, the test results indicate that clustered Data ONTAP is an excellent option for providing a high-performance storage environment to support Oracle Databases, regardless of the protocol used. The remainder of this report presents the details of the test configurations and the full set of test results and analysis.

4 Results and Analysis
Before analyzing the test results, it is important to understand the testing methodology and the workload employed. A consistent testing methodology was used for all test cases: an OLTP workload was run to demonstrate the capabilities of the configurations using clustered Data ONTAP and 7-Mode.

4.1 OLTP Workload
The database created for the OLTP workload uses a data model designed for OET processing. The OLTP database was approximately 3.5TB in size and contained 16,000 warehouses. The workload simulated 750 users and 16,000 active warehouses, and the client processes for the OLTP application were executed on a separate application server (client-server mode).
To determine the workload, a series of tests was run with increasing numbers of users, iteratively raising the load against the database until the read latencies reported by the database exceeded 10ms, which was chosen as the upper limit of acceptable performance. Based on these tests, a load of 750 users was selected and held constant for all test cases to allow a uniform comparison of application throughput. No bottlenecks were observed on the hosts, the network, or the NetApp storage platform during these tests.
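The 10ms single-block read latency ceiling used to size the workload can be observed directly from the database. The following is a minimal sketch of one way to check it (standard Oracle dynamic performance views, run as SYSDBA on one RAC node; shown only as an illustration, not as the exact monitoring method used in these tests):
sqlplus -s "/ as sysdba" <<'EOF'
-- Average single-block read latency in milliseconds since instance startup
SELECT event,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_wait_ms
FROM   v$system_event
WHERE  event = 'db file sequential read';
EOF
In practice, per-interval latency from AWR or Statspack snapshots gives a better picture than cumulative averages, but the same wait event is the one to watch against the 10ms limit.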
A mix of different transaction types was used during each OLTP test run, including order entries, payments, order status, delivery, and stock level. The number of OETs completed per minute was the primary metric used to measure application throughput. The I/O mix for the OLTP workload was approximately 60% reads and 40% writes.

4.2 Detailed Test Results
Figure 1 shows the total number of OETs reported by the Oracle Database for each protocol. The following results were observed:
- For the FC protocol, OETs processed in clustered Data ONTAP were within 6% of OETs processed in 7-Mode.
- For the FCoE protocol, OETs processed in clustered Data ONTAP were within 7% of OETs processed in 7-Mode.
- For software iSCSI, OETs processed in clustered Data ONTAP were within 1% of OETs processed in 7-Mode.
- For Oracle DNFS, OETs processed in clustered Data ONTAP were within 3% of OETs processed in 7-Mode.
- When comparing protocols running only in clustered Data ONTAP, OETs processed were within 1% between FC and FCoE, and Oracle DNFS was within 3% of FC and FCoE. OETs processed using software iSCSI were within 9% of FC and FCoE, and within 11% of Oracle DNFS.
The results of these tests show that the performance of the four-node Oracle RAC environment with clustered Data ONTAP was comparable to its performance in 7-Mode. These results indicate that clustered Data ONTAP 8.1.1 is a capable storage platform for supporting this Oracle environment. For all tests, the maximum acceptable single-block read latency was defined as 10ms for both clustered Data ONTAP and 7-Mode.
Figure 1) Protocol comparison using clustered Data ONTAP.
5 Configuration Details

5.1 Generic Configuration
Wherever possible, similar configurations were used for the test environments, including:
- Identical hardware
- Similar storage layouts
- Similar Oracle initialization parameters
- Similar Linux OS settings
This section explains the test configurations in detail. The configurations followed the best practices recommended by NetApp and Oracle.

5.2 Oracle RAC Configuration Using Data ONTAP Operating in 7-Mode
The tests for this configuration used standard 10GbE for FCoE, software iSCSI, and Oracle DNFS, with a QLogic 10GbE CNA in each Oracle RAC server connected to NetApp 10GbE UTAs installed in the FAS6240A storage controllers through two Cisco Nexus 5020 switches. For the FC tests, the same QLogic 10GbE CNAs in the Oracle RAC servers were connected to onboard 8Gb FC ports in the FAS6240A storage controllers, also through two Cisco Nexus 5020 switches.
For the tests run on Data ONTAP operating in 7-Mode, the following configurations were used:
- FC using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240A storage controllers over 8Gb FC connections
- FCoE using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240A storage controllers over 10GbE connections
- Software iSCSI using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240A storage controllers over 10GbE connections
- DNFS using a four-node Oracle RAC implementation configured on four physical servers accessing the NetApp FAS6240A storage controllers over 10GbE connections
To drive the workload, the client processes for the OLTP application were executed on a separate application server (client-server mode) from the Oracle RAC database servers.
Network Configurations
Figure 2 and Figure 3 show the network configurations used for the tests that involved a four-node RAC configuration deployed on four physical servers in a Data ONTAP 7-Mode environment using 8Gb FC and 10GbE network connections, respectively. Table 1 describes the storage network hardware shown in the figures. The only physical difference between the configurations is the storage interconnect used on the FAS6240A storage controllers.
Figure 2) Oracle RAC configuration using Data ONTAP operating in 7-Mode with 8Gb FC network connections for FC tests.
Figure 3) Oracle RAC configuration using Data ONTAP operating in 7-Mode with 10GbE network connections for FCoE, software iSCSI, and Oracle DNFS tests.
Table 1) Hardware for the Oracle RAC configuration using Data ONTAP operating in 7-Mode with either FC over 8Gb or FCoE, software iSCSI, and DNFS over 10GbE.
Database server:
- Four Fujitsu Primergy RX300 S6 servers, each with two six-core 2.4GHz processors, 48GB RAM, and RHEL 5 U6
- One dual-port 10GbE QLogic 8152 CNA per server for Oracle data traffic
- One dual-port 10GbE Intel 82599EB NIC per server for Oracle interconnect traffic
Switch infrastructure for data and Oracle RAC interconnect traffic:
- Two Cisco Nexus 5020 switches for Oracle RAC data traffic
- One Cisco Nexus switch for Oracle RAC interconnect traffic
NetApp controllers:
- FAS6240 controllers in an active-active configuration using multipath high availability (MPHA)
- One NetApp dual-port 10GbE UTA per storage controller for the FCoE, software iSCSI, and Oracle DNFS tests
- Two onboard 8Gb FC ports per controller for the FC tests
Storage shelves:
- Four DS4243 SAS shelves per controller with 450GB 15k RPM disks
Storage Network Configuration
Jumbo frames were used in this network configuration for all 10GbE connections. During these tests, an MTU size of 9,000 was set for all storage interfaces on the hosts, for all interfaces on the NetApp controllers, and for the ports involved on the switch. A single private subnet was used to segment Ethernet traffic between the hosts and the storage controllers, and a second private subnet was used for Oracle RAC interconnect communications between the database servers.
Oracle ASM Configuration for FC, FCoE, and Software iSCSI
For all block-based protocol testing, including FC, FCoE, and software iSCSI, Oracle ASM was configured to house the Oracle Database data files, redo logs, and temporary files. For each block-based protocol, a disk group was created consisting of LUNs residing on both FAS6240A storage controllers to balance the traffic across the two controllers. The default Oracle ASM settings were used when these disk groups were created. The Oracle binary files for each RAC server were placed on their own LUN, formatted with the ext3 file system. All LUNs presented to Oracle ASM were configured using RHEL 5.6 native multipathing, according to the NetApp best practices outlined in Linux Host Utilities 6.1, available on the NetApp Support site.
NFS Mounts and DNFS Configuration
With Oracle DNFS, the NFS mount points and network paths are specified in a new configuration file, oranfstab. However, the NFS mounts must still be specified in the /etc/fstab file because Oracle cross-checks the entries in this file with the oranfstab file. If any of the NFS mounts between /etc/fstab and oranfstab do not match, DNFS does not use those NFS mount points.
For the tests using Oracle RAC and Data ONTAP operating in 7-Mode, a set of NFS mount points was created on the RAC database nodes that allowed the RAC nodes to access the database files uniformly across the FAS6240A storage controllers. The following mount points were defined in the /etc/fstab file on the Linux hosts supporting RAC nodes 1 through 4.
RAC node 1
megaops-21-e3a:/vol/nfs_orabin1 /u01/app nfs
megaops-21-e3a:/vol/nfs_ocr_vote1 /u02/ocr_vote1 nfs
megaops-22-e3a:/vol/nfs_ocr_vote2 /u03/ocr_vote2 nfs
megaops-21-e3a:/vol/nfs_ocr_vote3 /u04/ocr_vote3 nfs
megaops-21-e3a:/vol/nfs_oradata1 /u05/oradata1 nfs
megaops-22-e3a:/vol/nfs_oradata2 /u05/oradata2 nfs
megaops-21-e3a:/vol/nfs_oralog1 /u05/oralog1 nfs
megaops-22-e3a:/vol/nfs_oralog2 /u05/oralog2 nfs
megaops-21-e3a:/vol/nfs_oratemp1 /u05/oratemp1 nfs
megaops-22-e3a:/vol/nfs_oratemp2 /u05/oratemp2 nfs
RAC node 2
megaops-22-e3a:/vol/nfs_orabin2 /u01/app nfs
megaops-21-e3a:/vol/nfs_ocr_vote1 /u02/ocr_vote1 nfs
megaops-22-e3a:/vol/nfs_ocr_vote2 /u03/ocr_vote2 nfs
megaops-21-e3a:/vol/nfs_ocr_vote3 /u04/ocr_vote3 nfs
megaops-21-e3a:/vol/nfs_oradata1 /u05/oradata1 nfs
megaops-22-e3a:/vol/nfs_oradata2 /u05/oradata2 nfs
megaops-21-e3a:/vol/nfs_oralog1 /u05/oralog1 nfs
megaops-22-e3a:/vol/nfs_oralog2 /u05/oralog2 nfs
megaops-21-e3a:/vol/nfs_oratemp1 /u05/oratemp1 nfs
megaops-22-e3a:/vol/nfs_oratemp2 /u05/oratemp2 nfs
RAC node 3
megaops-21-e3a:/vol/nfs_orabin3 /u01/app nfs
megaops-21-e3a:/vol/nfs_ocr_vote1 /u02/ocr_vote1 nfs
megaops-22-e3a:/vol/nfs_ocr_vote2 /u03/ocr_vote2 nfs
megaops-21-e3a:/vol/nfs_ocr_vote3 /u04/ocr_vote3 nfs
megaops-21-e3a:/vol/nfs_oradata1 /u05/oradata1 nfs
megaops-22-e3a:/vol/nfs_oradata2 /u05/oradata2 nfs
megaops-21-e3a:/vol/nfs_oralog1 /u05/oralog1 nfs
megaops-22-e3a:/vol/nfs_oralog2 /u05/oralog2 nfs
megaops-21-e3a:/vol/nfs_oratemp1 /u05/oratemp1 nfs
megaops-22-e3a:/vol/nfs_oratemp2 /u05/oratemp2 nfs
RAC node 4
megaops-22-e3a:/vol/nfs_orabin4 /u01/app nfs
megaops-21-e3a:/vol/nfs_ocr_vote1 /u02/ocr_vote1 nfs
megaops-22-e3a:/vol/nfs_ocr_vote2 /u03/ocr_vote2 nfs
megaops-21-e3a:/vol/nfs_ocr_vote3 /u04/ocr_vote3 nfs
megaops-21-e3a:/vol/nfs_oradata1 /u05/oradata1 nfs
megaops-22-e3a:/vol/nfs_oradata2 /u05/oradata2 nfs
megaops-21-e3a:/vol/nfs_oralog1 /u05/oralog1 nfs
megaops-22-e3a:/vol/nfs_oralog2 /u05/oralog2 nfs
megaops-21-e3a:/vol/nfs_oratemp1 /u05/oratemp1 nfs
megaops-22-e3a:/vol/nfs_oratemp2 /u05/oratemp2 nfs
The following excerpts are from the oranfstab file that was used during the RAC performance testing to define the mount points to be used by the Oracle DNFS client. The oranfstab excerpts that follow were used by RAC nodes 1 through 4.
RAC node 1
server: megaops-21-e3a
path: local:
path: local:
export: /vol/nfs_oralog1 mount: /u05/oralog1
export: /vol/nfs_oradata1 mount: /u05/oradata1
export: /vol/nfs_oratemp1 mount: /u05/oratemp1
server: megaops-22-e3a
path: local:
path: local:
export: /vol/nfs_oralog2 mount: /u05/oralog2
export: /vol/nfs_oradata2 mount: /u05/oradata2
export: /vol/nfs_oratemp2 mount: /u05/oratemp2
RAC node 2
server: megaops-21-e3a
path: local:
path: local:
export: /vol/nfs_oralog1 mount: /u05/oralog1
export: /vol/nfs_oradata1 mount: /u05/oradata1
export: /vol/nfs_oratemp1 mount: /u05/oratemp1
server: megaops-22-e3a
path: local:
path: local:
export: /vol/nfs_oralog2 mount: /u05/oralog2
export: /vol/nfs_oradata2 mount: /u05/oradata2
export: /vol/nfs_oratemp2 mount: /u05/oratemp2
RAC node 3
server: megaops-21-e3a
path: local:
path: local:
export: /vol/nfs_oralog1 mount: /u05/oralog1
export: /vol/nfs_oradata1 mount: /u05/oradata1
export: /vol/nfs_oratemp1 mount: /u05/oratemp1
server: megaops-22-e3a
path: local:
path: local:
export: /vol/nfs_oralog2 mount: /u05/oralog2
export: /vol/nfs_oradata2 mount: /u05/oradata2
export: /vol/nfs_oratemp2 mount: /u05/oratemp2
RAC node 4
server: megaops-21-e3a
path: local:
path: local:
export: /vol/nfs_oralog1 mount: /u05/oralog1
export: /vol/nfs_oradata1 mount: /u05/oradata1
export: /vol/nfs_oratemp1 mount: /u05/oratemp1
server: megaops-22-e3a
path: local:
path: local:
export: /vol/nfs_oralog2 mount: /u05/oralog2
export: /vol/nfs_oradata2 mount: /u05/oradata2
export: /vol/nfs_oratemp2 mount: /u05/oratemp2
For more information about DNFS configuration, refer to the Oracle Database Installation Guide for Oracle 11g R2.
Storage Configuration
For these tests, Data ONTAP operating in 7-Mode was used for the storage controllers. For details on the storage layout, see the figures in the appendix section Storage Layouts for All Testing Configurations.

5.3 Oracle RAC Configuration Using Clustered Data ONTAP
The tests for this configuration used standard 10GbE for FCoE, software iSCSI, and Oracle DNFS, with a QLogic 10GbE CNA in each Oracle RAC server connected to NetApp 10GbE UTAs installed in the FAS6240A storage controllers through a Cisco Nexus 5020 switch. For the FC tests, the same QLogic 10GbE CNAs in the Oracle RAC servers were connected to onboard 8Gb FC ports in the FAS6240A storage controllers through a Nexus 5020 switch.
For the tests using clustered Data ONTAP 8.1.1, the following configurations were used:
- FC using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240A storage controllers over 8Gb FC connections
- FCoE using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240A storage controllers over 10GbE connections
- Software iSCSI using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240A storage controllers over 10GbE connections
- DNFS using a four-node Oracle RAC implementation configured on four physical servers accessing the FAS6240A storage controllers over 10GbE connections
To drive the workload, the client processes for the OLTP application were executed on a separate application server (client-server mode) from the Oracle RAC database servers.
Network Configurations
Figure 4 and Figure 5 show the network configurations used for the tests involving a four-node RAC configuration deployed on four physical servers in a clustered Data ONTAP environment using 8Gb FC and 10GbE network connections, respectively. Table 2 describes the storage network hardware shown in the diagrams. The only physical difference between the configurations is the storage interconnect used on the FAS6240A storage controllers.
Figure 4) Oracle RAC configuration using clustered Data ONTAP with 8Gb FC network connections for FC tests.
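As described in the Storage Network Configuration sections, all 10GbE paths used jumbo frames with an MTU of 9,000 on the hosts, the NetApp controllers, and the switch ports. A minimal sketch of setting and verifying the host side on RHEL 5 follows; the interface name and storage IP address are hypothetical:
# Persist MTU 9000 on the storage-facing interface (hypothetical interface eth2)
echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-eth2
ifdown eth2 && ifup eth2
# Verify that a full jumbo frame crosses the path without fragmentation
# (8972 bytes of payload + 20-byte IP header + 8-byte ICMP header = 9000)
ping -M do -s 8972 -c 3 192.168.10.21
If the ping fails with a "message too long" error, at least one hop in the path is not configured for jumbo frames.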
Figure 5) Oracle RAC configuration using clustered Data ONTAP with 10GbE network connections for FCoE, software iSCSI, and Oracle DNFS tests.
Table 2) Hardware for the Oracle RAC configuration using clustered Data ONTAP with either FC over 8Gb or FCoE, software iSCSI, and DNFS over 10GbE.
Database server:
- Four Fujitsu Primergy RX300 S6 servers, each with two six-core 2.4GHz processors, 48GB RAM, and RHEL 5 U6
- One dual-port 10GbE QLogic 8152 CNA per server for Oracle data traffic
- One dual-port 10GbE Intel 82599EB NIC per server for Oracle interconnect traffic
Switch infrastructure for data and Oracle RAC interconnect traffic:
- Two Cisco Nexus 5020 switches for Oracle RAC data traffic
- One Cisco Nexus switch for Oracle RAC interconnect traffic
Switch infrastructure for clustered Data ONTAP traffic:
- Two Cisco Nexus 10GbE switches
NetApp controllers:
- FAS6240 controllers in a clustered Data ONTAP configuration using multipath high availability
- One NetApp 10GbE UTA per storage controller for the FCoE, software iSCSI, and Oracle DNFS tests
- Two onboard 8Gb FC ports per controller for the FC tests
Storage shelves:
- Four DS4243 SAS shelves per controller with 450GB 15k RPM disks
Storage Network Configuration
Jumbo frames were used in this network configuration for all 10GbE connections. During these tests, an MTU size of 9,000 was set for all storage interfaces on the hosts, for all interfaces on the NetApp controllers, and for the ports involved on the switch. A single private subnet was used to segment Ethernet traffic between the hosts and the storage controllers, and a second private subnet was used for Oracle RAC interconnect communications between the database servers.
Oracle ASM Configuration for FC, FCoE, and Software iSCSI
For all block-based protocol testing, including FC, FCoE, and software iSCSI, Oracle ASM was configured to house the Oracle Database data files, redo logs, and temporary files. A disk group was created for each block-based protocol, consisting of LUNs residing on both FAS6240A storage controllers to balance the traffic across the two controllers. The default Oracle ASM settings were used when these disk groups were created. The Oracle binary files for each RAC server were placed on their own LUN, formatted with the ext3 file system. All LUNs presented to Oracle ASM were configured using RHEL 5.6 native multipathing in accordance with the NetApp best practices outlined in Linux Host Utilities 6.1, available on the NetApp Support site.
NFS Mounts and DNFS Configuration
With Oracle DNFS, the NFS mount points and network paths are specified in a new configuration file, oranfstab. However, the NFS mounts must still be specified in the /etc/fstab file, because Oracle cross-checks the entries in this file with the oranfstab file. If any of the NFS mounts between /etc/fstab and oranfstab do not match, DNFS does not use those NFS mount points.
For the tests that used Oracle RAC and clustered Data ONTAP 8.1.1, a set of NFS mount points was created on the RAC database nodes that allowed the RAC nodes to access the database files uniformly across the FAS6240A storage controllers. In clustered Data ONTAP 8.1.1, the NFS mount points on the RAC nodes were specified using a combination of the logical network interface and the clustered Data ONTAP junction path that provides access to the different Oracle configuration and database files. The following mount points were defined in the /etc/fstab file on the Linux hosts supporting RAC nodes 1 through 4.
RAC node 1
e3a_n1:/nfs_orabin1 /u01/app nfs
e3a_n1:/nfs_ocr_vote1 /u02/ocr_vote1 nfs
e3a_n2:/nfs_ocr_vote2 /u03/ocr_vote2 nfs
e3a_n1:/nfs_ocr_vote3 /u04/ocr_vote3 nfs
e3a_n1:/nfs_oradata1 /u05/oradata1 nfs
e3a_n2:/nfs_oradata2 /u05/oradata2 nfs
e3a_n1:/nfs_oralog1 /u05/oralog1 nfs
e3a_n2:/nfs_oralog2 /u05/oralog2 nfs
e3a_n1:/nfs_oratemp1 /u05/oratemp1 nfs
e3a_n2:/nfs_oratemp2 /u05/oratemp2 nfs
RAC node 2
e3a_n2:/nfs_orabin2 /u01/app nfs
e3a_n1:/nfs_ocr_vote1 /u02/ocr_vote1 nfs
e3a_n2:/nfs_ocr_vote2 /u03/ocr_vote2 nfs
e3a_n1:/nfs_ocr_vote3 /u04/ocr_vote3 nfs
e3a_n1:/nfs_oradata1 /u05/oradata1 nfs
e3a_n2:/nfs_oradata2 /u05/oradata2 nfs
e3a_n1:/nfs_oralog1 /u05/oralog1 nfs
e3a_n2:/nfs_oralog2 /u05/oralog2 nfs
e3a_n1:/nfs_oratemp1 /u05/oratemp1 nfs
e3a_n2:/nfs_oratemp2 /u05/oratemp2 nfs
RAC node 3
e3a_n1:/nfs_orabin3 /u01/app nfs
e3a_n1:/nfs_ocr_vote1 /u02/ocr_vote1 nfs
e3a_n2:/nfs_ocr_vote2 /u03/ocr_vote2 nfs
e3a_n1:/nfs_ocr_vote3 /u04/ocr_vote3 nfs
e3a_n1:/nfs_oradata1 /u05/oradata1 nfs
e3a_n2:/nfs_oradata2 /u05/oradata2 nfs
e3a_n1:/nfs_oralog1 /u05/oralog1 nfs
e3a_n2:/nfs_oralog2 /u05/oralog2 nfs
e3a_n1:/nfs_oratemp1 /u05/oratemp1 nfs
e3a_n2:/nfs_oratemp2 /u05/oratemp2 nfs
RAC node 4
e3a_n2:/nfs_orabin4 /u01/app nfs
e3a_n1:/nfs_ocr_vote1 /u02/ocr_vote1 nfs
e3a_n2:/nfs_ocr_vote2 /u03/ocr_vote2 nfs
e3a_n1:/nfs_ocr_vote3 /u04/ocr_vote3 nfs
e3a_n1:/nfs_oradata1 /u05/oradata1 nfs
e3a_n2:/nfs_oradata2 /u05/oradata2 nfs
e3a_n1:/nfs_oralog1 /u05/oralog1 nfs
e3a_n2:/nfs_oralog2 /u05/oralog2 nfs
e3a_n1:/nfs_oratemp1 /u05/oratemp1 nfs
e3a_n2:/nfs_oratemp2 /u05/oratemp2 nfs
The following excerpts are from the oranfstab file that was used during the RAC performance testing to define the mount points for use by the Oracle DNFS client. The oranfstab excerpts that follow were used by RAC nodes 1 through 4.
RAC node 1
server: e3a_n1
path: local:
path: local:
export: /nfs_oralog1 mount: /u05/oralog1
export: /nfs_oradata1 mount: /u05/oradata1
export: /nfs_oratemp1 mount: /u05/oratemp1
server: e3a_n2
path: local:
path: local:
export: /nfs_oralog2 mount: /u05/oralog2
export: /nfs_oradata2 mount: /u05/oradata2
export: /nfs_oratemp2 mount: /u05/oratemp2
RAC node 2
server: e3a_n1
path: local:
path: local:
export: /nfs_oralog1 mount: /u05/oralog1
export: /nfs_oradata1 mount: /u05/oradata1
export: /nfs_oratemp1 mount: /u05/oratemp1
server: e3a_n2
path: local:
path: local:
export: /nfs_oralog2 mount: /u05/oralog2
export: /nfs_oradata2 mount: /u05/oradata2
export: /nfs_oratemp2 mount: /u05/oratemp2
RAC node 3
server: e3a_n1
path: local:
path: local:
export: /nfs_oralog1 mount: /u05/oralog1
export: /nfs_oradata1 mount: /u05/oradata1
export: /nfs_oratemp1 mount: /u05/oratemp1
server: e3a_n2
path: local:
path: local:
export: /nfs_oralog2 mount: /u05/oralog2
export: /nfs_oradata2 mount: /u05/oradata2
export: /nfs_oratemp2 mount: /u05/oratemp2
RAC node 4
server: e3a_n1
path: local:
path: local:
export: /nfs_oralog1 mount: /u05/oralog1
export: /nfs_oradata1 mount: /u05/oradata1
export: /nfs_oratemp1 mount: /u05/oratemp1
server: e3a_n2
path: local:
path: local:
export: /nfs_oralog2 mount: /u05/oralog2
export: /nfs_oradata2 mount: /u05/oradata2
export: /nfs_oratemp2 mount: /u05/oratemp2
For more information about DNFS configuration, refer to the Oracle Database Installation Guide for Oracle 11g R2.
Storage Configuration
For these tests, clustered Data ONTAP 8.1.1 was used for the storage controllers. For details on the storage layout, see the figures in the appendix section Storage Layouts for All Testing Configurations.

6 Conclusion
NetApp has a long history of providing high-performance storage systems for Oracle Database environments. With the advent of clustered Data ONTAP, NetApp continues to develop leading-edge storage systems that can scale out both capacity and performance in support of NetApp customers' current and future Oracle Database environments. The test results presented in this report show that clustered Data ONTAP provides excellent performance in Oracle environments regardless of the protocol used.
Appendixes

Best Practice Summary for Data ONTAP Operating in 7-Mode
This section summarizes the best practices listed in this report for each of the configurations that were tested. Table 3 contains the parameter name, value, description, and the protocol for which it was used.
Table 3) Best practice summary for Data ONTAP operating in 7-Mode.
- rsize (DNFS): NFS mount option
- wsize (DNFS): NFS mount option
- nfs.tcp.recvwindowsize (DNFS): storage controller TCP receive window size
- nfs.tcp.xfersize (DNFS): TCP transfer window size

Hardware
Table 4 describes the database server hardware specifications for Oracle RAC testing.
Table 4) Database server hardware specifications for Oracle RAC testing.
- System type: Fujitsu Primergy RX300 S6
- Operating system: RHEL 5 U6
- Processor: two six-core 2.4GHz
- Memory: 48GB
- Oracle Database version: Oracle 11g R2
- Network connectivity: one dual-port 10GbE QLogic 8152 CNA (for data traffic); one dual-port 10GbE Intel 82599EB NIC (for Oracle RAC interconnect traffic)
Table 5 describes the FAS6240 storage system specifications.
Table 5) NetApp FAS6240 storage system specifications (in each storage controller).
- System type: NetApp FAS6240 HA pair
- Operating system: clustered Data ONTAP and Data ONTAP operating in 7-Mode
- Processor: dual quad-core Intel 2.53GHz
- Memory: 48GB
- NVRAM: 4GB
- Disks: four DS4243 SAS shelves per controller with 450GB 15k RPM disks
- Network devices: 10GbE UTA

Storage Layouts for All Testing Configurations
FC, FCoE, and Software iSCSI Layout for 7-Mode Testing Using Oracle RAC
Figure 6 shows the layout used for FC, FCoE, and software iSCSI. At the database level, there were four Oracle ASM disk groups:
- DATA. Includes two LUNs, one located in the blk_oradata1 volume on storage controller 1 and the other in the blk_oradata2 volume on storage controller 2, containing Oracle data files.
- LOG. Includes two LUNs, one located in the blk_oralog1 volume on storage controller 1 and the other in the blk_oralog2 volume on storage controller 2, containing Oracle online redo logs.
- TEMP. Includes two LUNs, one located in the blk_oratemp1 volume on storage controller 1 and the other in the blk_oratemp2 volume on storage controller 2, containing Oracle temporary files.
- OCRVOTE. Includes three LUNs, two located in the blk_ocrvote1 volume on storage controller 1 and the other in the blk_ocrvote2 volume on storage controller 2, containing the Oracle registry files and voting disks.
The Oracle binary files for each Oracle RAC node resided in dedicated LUNs formatted with the ext3 file system. To evenly balance the Oracle binary files across both NetApp storage controllers, the LUNs containing the binary files for RAC nodes 1 and 3 were on controller 1, and the LUNs containing the binary files for RAC nodes 2 and 4 were on controller 2.
All LUNs were presented to Oracle as block devices; that is, a single partition was created on each LUN. Each LUN was partitioned so that the starting sector of the partition began on a 4K boundary to provide proper LUN alignment. Refer to the NetApp knowledge base article "How to create aligned partitions in Linux for use with NetApp LUNs, VMDKs, VHDs and other virtual disk containers," available on the NetApp Support site.
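The 4K alignment requirement described above can be met by starting the single partition on a sector that is a multiple of eight 512-byte sectors. The following is a minimal sketch using parted against a hypothetical multipath device name; a start of sector 2,048 (a 1MiB offset) satisfies the 4K boundary, and the KB article cited above remains the authoritative procedure:
# Hypothetical multipath device backing one ASM LUN
DEV=/dev/mapper/oradata1_lun
# Label the LUN and create one partition starting at sector 2048 (1MiB, 4K-aligned)
parted -s "$DEV" mklabel msdos
parted -s "$DEV" mkpart primary 2048s 100%
# Confirm the partition start sector
parted -s "$DEV" unit s print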
Figure 6) Volume layout for Oracle RAC testing using FC, FCoE, and software iSCSI and Data ONTAP operating in 7-Mode.
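The LUNs shown in Figure 6 were consumed through RHEL 5.6 native device-mapper multipathing, as noted in section 5.2. The fragment below is only an illustrative sketch of what an /etc/multipath.conf device stanza for NetApp LUNs of that era might look like; the authoritative settings are those published in Linux Host Utilities 6.1, and the values here should not be taken as NetApp's recommendation:
# Append an example NetApp device stanza (illustrative values only)
cat >> /etc/multipath.conf <<'EOF'
devices {
    device {
        vendor               "NETAPP"
        product              "LUN"
        path_grouping_policy group_by_prio
        features             "1 queue_if_no_path"
        prio_callout         "/sbin/mpath_prio_ontap /dev/%n"
        failback             immediate
        rr_min_io            128
    }
}
EOF
# Reload the configuration and list the resulting multipath maps
service multipathd reload
multipath -ll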
DNFS Layout for 7-Mode Testing Using Oracle RAC
Figure 7 shows the layout used for DNFS. The Oracle data files are distributed evenly across both NetApp storage controllers in the nfs_oradata1 and nfs_oradata2 volumes. The Oracle online redo logs, temporary files, binaries for each RAC node, OCR, and vote disks are also balanced across the two controllers.
Figure 7) Volume layout for Oracle RAC testing using DNFS and Data ONTAP operating in 7-Mode.
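In addition to the oranfstab entries shown in section 5.2, the Oracle Direct NFS client must be enabled by linking the NFS ODM library into the Oracle home. A minimal sketch of the standard Oracle 11g R2 procedure follows (run as the oracle software owner with the database instances shut down):
# Enable Direct NFS by relinking the NFS ODM library
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on
# After restarting the instance, confirm that DNFS is serving I/O
sqlplus -s "/ as sysdba" <<'EOF'
SELECT svrname, dirname, mntport, nfsport FROM v$dnfs_servers;
EOF
If the view returns no rows, the database is still using kernel NFS, and the /etc/fstab and oranfstab entries should be rechecked for mismatches.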
FC, FCoE, and Software iSCSI Layout for Clustered Data ONTAP Testing Using Oracle RAC
Figure 8 shows the layout used for FC, FCoE, and software iSCSI. At the database level, there were four ASM disk groups:
- DATA. Includes two LUNs, one located in the blk_oradata1 volume on storage controller 1 and the other in the blk_oradata2 volume on storage controller 2, containing Oracle data files.
- LOG. Includes two LUNs, one located in the blk_oralog1 volume on storage controller 1 and the other in the blk_oralog2 volume on storage controller 2, containing Oracle online redo logs.
- TEMP. Includes two LUNs, one located in the blk_oratemp1 volume on storage controller 1 and the other in the blk_oratemp2 volume on storage controller 2, containing Oracle temporary files.
- OCRVOTE. Includes three LUNs, two located in the blk_ocrvote1 volume on storage controller 1 and the other in the blk_ocrvote2 volume on storage controller 2, containing the Oracle registry files and voting disks.
The Oracle binary files for each Oracle RAC node resided in dedicated LUNs formatted with the ext3 file system. To evenly balance the Oracle binary files across both NetApp storage controllers, the LUNs containing the binary files for RAC nodes 1 and 3 were on controller 1, and the LUNs containing the binary files for RAC nodes 2 and 4 were on controller 2.
All LUNs were presented to Oracle as block devices; that is, a single partition was created on each LUN. Each LUN was partitioned so that the starting sector of the partition began on a 4K boundary to provide proper LUN alignment. Refer to the NetApp knowledge base article "How to create aligned partitions in Linux for use with NetApp LUNs, VMDKs, VHDs and other virtual disk containers," available on the NetApp Support site.
Additionally, to present the LUNs and volumes to the Oracle RAC database servers, a Vserver and logical interfaces (LIFs) were created in clustered Data ONTAP on each storage controller, as shown in Figure 8 through Figure 10. A Vserver serves as a logical entity that combines volumes or LUNs across multiple storage controllers and presents the storage as a single namespace externally to the hosts. LIFs are assigned to the Vserver to carry data over the physical network connections. For the tests in this report, two LIFs per controller were created and assigned to a single Vserver. The volumes and LUNs designated to contain the Oracle-related data or binary files were assigned to the Vserver and then presented to the Oracle RAC database servers.
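The Vserver, LIFs, junction paths, and LUNs described above are created from the clustered Data ONTAP command line. The sketch below is indicative only: the LIF and volume names follow the ones used in this report, but the aggregate names, ports, IP addresses, sizes, and igroup name are hypothetical, and the exact option set should be verified against the clustered Data ONTAP 8.1.1 command reference:
# Create a Vserver to own the Oracle volumes and LUNs (aggregate name is hypothetical)
vserver create -vserver ora_vs1 -rootvolume ora_vs1_root -aggregate aggr_n1 -ns-switch file -rootvolume-security-style unix
# One data LIF per controller node (home ports and addresses are hypothetical)
network interface create -vserver ora_vs1 -lif e3a_n1 -role data -home-node node1 -home-port e3a -address 192.168.10.11 -netmask 255.255.255.0
network interface create -vserver ora_vs1 -lif e3a_n2 -role data -home-node node2 -home-port e3a -address 192.168.10.12 -netmask 255.255.255.0
# NFS volume with a junction path, matching the /etc/fstab entries above
volume create -vserver ora_vs1 -volume nfs_oradata1 -aggregate aggr_n1 -size 500g -junction-path /nfs_oradata1
# LUN creation and mapping for the block-protocol (FC/FCoE/iSCSI) disk groups
lun create -vserver ora_vs1 -volume blk_oradata1 -lun oradata1 -size 400g -ostype linux
lun map -vserver ora_vs1 -volume blk_oradata1 -lun oradata1 -igroup rac_nodes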
Figure 8) Volume layout for Oracle RAC testing using FC and clustered Data ONTAP.
DNFS Layout for Clustered Data ONTAP Testing Using Oracle RAC
Figure 10 shows the layout used for DNFS. The Oracle data files are distributed evenly across both NetApp storage controllers in the nfs_oradata1 and nfs_oradata2 volumes. The Oracle online redo logs, temporary files, binaries for each RAC node, OCR, and vote disks are also distributed across the two controllers.
Figure 9) Volume layout for Oracle RAC testing using FCoE, software iSCSI, and clustered Data ONTAP.
Figure 10) Volume layout for Oracle RAC testing using DNFS and clustered Data ONTAP.
Oracle Initialization Parameters for RAC Configuration Testing
Table 6) Oracle initialization parameters for RAC configuration testing.
- _allocate_creation_order = TRUE (during allocation, files should be examined in the order in which they were created)
- _in_memory_undo = FALSE (make in-memory undo for top-level transactions)
- _undo_autotune = FALSE (enable autotuning of undo_retention)
- cluster_database = TRUE (RAC database)
- compatible (database is completely compatible with this software version)
- control_files = +DATA/control_001, +DATA/control_002 (for block tests); /u05/oradata1/control_001, /u05/oradata1/control_002 (for DNFS tests)
- db_16k_cache_size = 16384M (size of cache for 16K buffers)
- db_2k_cache_size = 256M (size of cache for 2K buffers)
- db_4k_cache_size = 256M (size of cache for 4K buffers)
- db_block_size = 8192 (size of database block in bytes)
- db_cache_size = 5120M (size of DEFAULT buffer pool for standard block-size buffers)
- db_files = 300 (maximum allowable number of database files)
- db_name = tpcc (database name specified in CREATE DATABASE)
- diagnostic_dest = /u01/app/oracle (diagnostic files location)
- dml_locks = 2000 (one DML lock for each table modified in a transaction)
- filesystemio_options = setall (I/O operations on file system files)
- log_buffer (redo circular buffer size)
- open_cursors = 1204 (maximum number of open cursors)
- parallel_max_servers = 100 (maximum parallel query servers per instance)
- plsql_optimize_level = 2 (optimization level used to compile PL/SQL library units)
- processes = 1200 (user processes)
- recovery_parallelism = 40 (number of server processes to use for parallel recovery)
- remote_listener = oraperf-scan:1521 (name of remote listener)
- sessions = 1824 (user and system sessions)
- shared_pool_size = 4096M (size in bytes of shared pool)
- spfile = +DATA/spfiletpcc.ora (for block tests); /u05/oradata1/spfiletpcc.ora (for DNFS tests)
- statistics_level = typical (statistics level)
- undo_management = AUTO (if TRUE, the instance runs in SMU mode; otherwise it runs in RBU mode)
- undo_retention (undo retention in seconds)

Initialization Parameters for ASM Instance
Table 7) ASM instance initialization parameters.
- asm_diskgroups = +DATA, +LOG, +TEMP, +OCRVOTE (disk groups to mount automatically)
- asm_diskstring = /dev/mapper (location of multipathed block devices for ASM disk groups)
- asm_powerlimit = 1 (number of parallel relocations for disk rebalancing)
- diagnostic_dest = /u01/app/grid (diagnostics base directory)
- instance_type = asm (type of instance to be executed)
- large_pool_size (size in bytes of the large pool)
- memory_max_target (maximum size for the memory target)
- memory_target (size in bytes of the memory target)
- shared_pool_size (size in bytes of the shared pool)

Linux Kernel Parameters for RAC Configuration Testing
Table 8) Linux nondefault kernel parameters for RAC configuration testing.
- sunrpc.tcp_slot_table_entries = 128 (maximum number of simultaneous in-flight RPC requests)
- kernel.sem (semaphores)
- net.ipv4.ip_local_port_range (local port range used by TCP and UDP)
- net.core.rmem_default (default TCP receive window size, default buffer size)
- net.core.rmem_max (maximum TCP receive window size, maximum buffer size)
- net.core.wmem_default (default TCP send window size, default buffer size)
- net.core.wmem_max (maximum TCP send window size, maximum buffer size)
- fs.file-max (maximum number of file handles that the Linux kernel allocates)
- fs.aio-max-nr (maximum number of allowable concurrent requests)
- net.ipv4.tcp_rmem (receive buffer size parameters)
- net.ipv4.tcp_wmem (send buffer size parameters)
- net.ipv4.tcp_syncookies = 0 (disables TCP syncookies)
- net.ipv4.tcp_timestamps = 0 (disables TCP timestamps)
- net.ipv4.tcp_sack = 0 (disables TCP selective ACKs)
- net.ipv4.tcp_no_metrics_save = 1 (do not save characteristics of the last connection in the flow cache)
- net.ipv4.tcp_moderate_rcvbuf = 0 (disables TCP receive buffer autotuning)
- vm.min_free_kbytes
- vm.swappiness = 100

Other Linux OS Settings
Table 9) Linux shell limits for Oracle.
Configuration file: /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc
oracle soft nofile 1024
oracle hard nofile
A brief sketch of how settings such as these are typically applied follows the reference list below.

References:
The following references were used in this report:
- Oracle Database Installation Guide for Oracle Database 11g Release 2
- NetApp knowledge base article, How to create aligned partitions in Linux for use with NetApp LUNs, VMDKs, VHDs and other virtual disk containers
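Nondefault kernel parameters such as those in Table 8 are normally persisted in /etc/sysctl.conf, and the shell limits in Table 9 in /etc/security/limits.conf. The sketch below applies only the values listed explicitly in the tables above; it is illustrative rather than a complete tuning recipe:
# Persist a subset of the Table 8 settings and apply them without a reboot
cat >> /etc/sysctl.conf <<'EOF'
sunrpc.tcp_slot_table_entries = 128
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 0
vm.swappiness = 100
EOF
sysctl -p
# Shell limits for the oracle user from Table 9 (only the values shown there)
cat >> /etc/security/limits.conf <<'EOF'
oracle soft nproc 2047
oracle soft nofile 1024
EOF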
Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines the product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer's installation in accordance with published specifications.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, and Data ONTAP are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Cisco Catalyst and Cisco Nexus are registered trademarks of Cisco Systems, Inc. Intel is a registered trademark of Intel Corporation. Linux is a registered trademark of Linus Torvalds. Oracle is a registered trademark of Oracle Corporation. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-4109
Storage Management for the Oracle Database on Red Hat Enterprise Linux 6: Using ASM With or Without ASMLib
Storage Management for the Oracle Database on Red Hat Enterprise Linux 6: Using ASM With or Without ASMLib Sayan Saha, Sue Denham & Lars Herrmann 05/02/2011 On 22 March 2011, Oracle posted the following
Cisco, Citrix, Microsoft, and NetApp Deliver Simplified High-Performance Infrastructure for Virtual Desktops
Cisco, Citrix, Microsoft, and NetApp Deliver Simplified High-Performance Infrastructure for Virtual Desktops Greater Efficiency and Performance from the Industry Leaders Citrix XenDesktop with Microsoft
Integrated Grid Solutions. and Greenplum
EMC Perspective Integrated Grid Solutions from SAS, EMC Isilon and Greenplum Introduction Intensifying competitive pressure and vast growth in the capabilities of analytic computing platforms are driving
A High-Performance Storage and Ultra-High-Speed File Transfer Solution
A High-Performance Storage and Ultra-High-Speed File Transfer Solution Storage Platforms with Aspera Abstract A growing number of organizations in media and entertainment, life sciences, high-performance
Best Practices for Data Sharing in a Grid Distributed SAS Environment. Updated July 2010
Best Practices for Data Sharing in a Grid Distributed SAS Environment Updated July 2010 B E S T P R A C T I C E D O C U M E N T Table of Contents 1 Abstract... 2 1.1 Storage performance is critical...
VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS
VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS Successfully configure all solution components Use VMS at the required bandwidth for NAS storage Meet the bandwidth demands of a 2,200
Configuration Maximums
Topic Configuration s VMware vsphere 5.1 When you select and configure your virtual and physical equipment, you must stay at or below the maximums supported by vsphere 5.1. The limits presented in the
ADVANCED NETWORK CONFIGURATION GUIDE
White Paper ADVANCED NETWORK CONFIGURATION GUIDE CONTENTS Introduction 1 Terminology 1 VLAN configuration 2 NIC Bonding configuration 3 Jumbo frame configuration 4 Other I/O high availability options 4
Configuration Maximums VMware Infrastructure 3
Technical Note Configuration s VMware Infrastructure 3 When you are selecting and configuring your virtual and physical equipment, you must stay at or below the maximums supported by VMware Infrastructure
Datasheet NetApp FAS8000 Series
Datasheet NetApp FAS8000 Series Respond more quickly to changing IT needs with unified scale-out storage and industry-leading data management KEY BENEFITS Support More Workloads Run SAN and NAS workloads
Demartek June 2012. Broadcom FCoE/iSCSI and IP Networking Adapter Evaluation. Introduction. Evaluation Environment
June 212 FCoE/iSCSI and IP Networking Adapter Evaluation Evaluation report prepared under contract with Corporation Introduction Enterprises are moving towards 1 Gigabit networking infrastructures and
Private cloud computing advances
Building robust private cloud services infrastructures By Brian Gautreau and Gong Wang Private clouds optimize utilization and management of IT resources to heighten availability. Microsoft Private Cloud
High Performance MySQL Cluster Cloud Reference Architecture using 16 Gbps Fibre Channel and Solid State Storage Technology
High Performance MySQL Cluster Cloud Reference Architecture using 16 Gbps Fibre Channel and Solid State Storage Technology Evaluation report prepared under contract with Brocade Executive Summary As CIOs
Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V. Reference Architecture
Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published February, 2011 EMC believes the information
FOR SERVERS 2.2: FEATURE matrix
RED hat ENTERPRISE VIRTUALIZATION FOR SERVERS 2.2: FEATURE matrix Red hat enterprise virtualization for servers Server virtualization offers tremendous benefits for enterprise IT organizations server consolidation,
RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES
RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS Server virtualization offers tremendous benefits for enterprise IT organizations server
Testing Oracle 10g RAC Scalability
Testing Oracle 10g RAC Scalability on Dell PowerEdge Servers and Dell/EMC Storage Oracle 10g Real Application Clusters (RAC) software running on standards-based Dell PowerEdge servers and Dell/EMC storage
High Performance Oracle RAC Clusters A study of SSD SAN storage A Datapipe White Paper
High Performance Oracle RAC Clusters A study of SSD SAN storage A Datapipe White Paper Contents Introduction... 3 Disclaimer... 3 Problem Statement... 3 Storage Definitions... 3 Testing Method... 3 Test
Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks
WHITE PAPER July 2014 Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks Contents Executive Summary...2 Background...3 InfiniteGraph...3 High Performance
Experience and Lessons learnt from running High Availability Databases on Network Attached Storage
Experience and Lessons learnt from running High Availability Databases on Network Attached Storage Manuel Guijarro, Ruben Gaspar et al CERN IT/DES CERN IT-DES, CH-1211 Geneva 23, Switzerland [email protected],
FCoE Deployment in a Virtualized Data Center
FCoE Deployment in a irtualized Data Center Satheesh Nanniyur ([email protected]) Sr. Staff Product arketing anager QLogic Corporation All opinions expressed in this presentation are that of
Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems
Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Applied Technology Abstract By migrating VMware virtual machines from one physical environment to another, VMware VMotion can
VMware vsphere 5.1 Advanced Administration
Course ID VMW200 VMware vsphere 5.1 Advanced Administration Course Description This powerful 5-day 10hr/day class is an intensive introduction to VMware vsphere 5.0 including VMware ESX 5.0 and vcenter.
Windows Server 2012 授 權 說 明
Windows Server 2012 授 權 說 明 PROCESSOR + CAL HA 功 能 相 同 的 記 憶 體 及 處 理 器 容 量 虛 擬 化 Windows Server 2008 R2 Datacenter Price: NTD173,720 (2 CPU) Packaging All features Unlimited virtual instances Per processor
Clustered Data ONTAP 8.3
Clustered Data ONTAP 8.3 SAN Administration Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone: +1 (888) 463-8277 Web:
3 Red Hat Enterprise Linux 6 Consolidation
Whitepaper Consolidation EXECUTIVE SUMMARY At this time of massive and disruptive technological changes where applications must be nimbly deployed on physical, virtual, and cloud infrastructure, Red Hat
VERITAS Database Edition 2.1.2 for Oracle on HP-UX 11i. Performance Report
VERITAS Database Edition 2.1.2 for Oracle on HP-UX 11i Performance Report V E R I T A S W H I T E P A P E R Table of Contents Introduction.................................................................................1
Doubling the I/O Performance of VMware vsphere 4.1
White Paper Doubling the I/O Performance of VMware vsphere 4.1 with Broadcom 10GbE iscsi HBA Technology This document describes the doubling of the I/O performance of vsphere 4.1 by using Broadcom 10GbE
Vertical Scaling of Oracle 10g Performance on Red Hat Enterprise Linux 5 on Intel Xeon Based Servers. Version 1.0
Vertical Scaling of Oracle 10g Performance on Red Hat Enterprise Linux 5 on Intel Xeon Based Servers Version 1.0 March 2009 Vertical Scaling of Oracle 10g Performance on Red Hat Enterprise Linux 5 on Inel
SPC BENCHMARK 1 EXECUTIVE SUMMARY
SPC BENCHMARK 1 EXECUTIVE SUMMARY NETAPP,INC. NETAPP FAS3170 SPC-1 V1.10.1 Submitted for Review: June 10, 2008 Submission Identifier: A00066 EXECUTIVE SUMMARY Page 2 of 7 EXECUTIVE SUMMARY Test Sponsor
Introduction to MPIO, MCS, Trunking, and LACP
Introduction to MPIO, MCS, Trunking, and LACP Sam Lee Version 1.0 (JAN, 2010) - 1 - QSAN Technology, Inc. http://www.qsantechnology.com White Paper# QWP201002-P210C lntroduction Many users confuse the
NetApp for Oracle Database
NetApp Verified Architecture NetApp for Oracle Database Enterprise Ecosystem Team, NetApp November 2012 NVA-0002 Version 2.0 Status: Final TABLE OF CONTENTS 1 NetApp Verified Architecture... 4 2 NetApp
VBLOCK SOLUTION FOR SAP: SAP APPLICATION AND DATABASE PERFORMANCE IN PHYSICAL AND VIRTUAL ENVIRONMENTS
Vblock Solution for SAP: SAP Application and Database Performance in Physical and Virtual Environments Table of Contents www.vce.com V VBLOCK SOLUTION FOR SAP: SAP APPLICATION AND DATABASE PERFORMANCE
SAN Implementation Course SANIW; 3 Days, Instructor-led
SAN Implementation Course SANIW; 3 Days, Instructor-led Course Description In this workshop course, you learn how to connect Windows, vsphere, and Linux hosts via Fibre Channel (FC) and iscsi protocols
Virtual SAN Design and Deployment Guide
Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore
Comparing the Network Performance of Windows File Sharing Environments
Technical Report Comparing the Network Performance of Windows File Sharing Environments Dan Chilton, Srinivas Addanki, NetApp September 2010 TR-3869 EXECUTIVE SUMMARY This technical report presents the
HP reference configuration for entry-level SAS Grid Manager solutions
HP reference configuration for entry-level SAS Grid Manager solutions Up to 864 simultaneous SAS jobs and more than 3 GB/s I/O throughput Technical white paper Table of contents Executive summary... 2
Cisco Prime Home 5.0 Minimum System Requirements (Standalone and High Availability)
White Paper Cisco Prime Home 5.0 Minimum System Requirements (Standalone and High Availability) White Paper July, 2012 2012 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public
Deployments and Tests in an iscsi SAN
Deployments and Tests in an iscsi SAN SQL Server Technical Article Writer: Jerome Halmans, Microsoft Corp. Technical Reviewers: Eric Schott, EqualLogic, Inc. Kevin Farlee, Microsoft Corp. Darren Miller,
Comparing SMB Direct 3.0 performance over RoCE, InfiniBand and Ethernet. September 2014
Comparing SMB Direct 3.0 performance over RoCE, InfiniBand and Ethernet Anand Rangaswamy September 2014 Storage Developer Conference Mellanox Overview Ticker: MLNX Leading provider of high-throughput,
HP SN1000E 16 Gb Fibre Channel HBA Evaluation
HP SN1000E 16 Gb Fibre Channel HBA Evaluation Evaluation report prepared under contract with Emulex Executive Summary The computing industry is experiencing an increasing demand for storage performance
White Paper. Recording Server Virtualization
White Paper Recording Server Virtualization Prepared by: Mike Sherwood, Senior Solutions Engineer Milestone Systems 23 March 2011 Table of Contents Introduction... 3 Target audience and white paper purpose...
Boost Database Performance with the Cisco UCS Storage Accelerator
Boost Database Performance with the Cisco UCS Storage Accelerator Performance Brief February 213 Highlights Industry-leading Performance and Scalability Offloading full or partial database structures to
Configuration Maximums VMware vsphere 4.0
Topic Configuration s VMware vsphere 4.0 When you select and configure your virtual and physical equipment, you must stay at or below the maximums supported by vsphere 4.0. The limits presented in the
HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief
Technical white paper HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief Scale-up your Microsoft SQL Server environment to new heights Table of contents Executive summary... 2 Introduction...
A Survey of Shared File Systems
Technical Paper A Survey of Shared File Systems Determining the Best Choice for your Distributed Applications A Survey of Shared File Systems A Survey of Shared File Systems Table of Contents Introduction...
Storage Protocol Comparison White Paper TECHNICAL MARKETING DOCUMENTATION
Storage Protocol Comparison White Paper TECHNICAL MARKETING DOCUMENTATION v 1.0/Updated APRIl 2012 Table of Contents Introduction.... 3 Storage Protocol Comparison Table....4 Conclusion...10 About the
EMC MIGRATION OF AN ORACLE DATA WAREHOUSE
EMC MIGRATION OF AN ORACLE DATA WAREHOUSE EMC Symmetrix VMAX, Virtual Improve storage space utilization Simplify storage management with Virtual Provisioning Designed for enterprise customers EMC Solutions
New!! - Higher performance for Windows and UNIX environments
New!! - Higher performance for Windows and UNIX environments The IBM TotalStorage Network Attached Storage Gateway 300 (NAS Gateway 300) is designed to act as a gateway between a storage area network (SAN)
Configuration Maximums VMware vsphere 4.1
Topic Configuration s VMware vsphere 4.1 When you select and configure your virtual and physical equipment, you must stay at or below the maximums supported by vsphere 4.1. The limits presented in the
Quantum StorNext. Product Brief: Distributed LAN Client
Quantum StorNext Product Brief: Distributed LAN Client NOTICE This product brief may contain proprietary information protected by copyright. Information in this product brief is subject to change without
Introduction to Gluster. Versions 3.0.x
Introduction to Gluster Versions 3.0.x Table of Contents Table of Contents... 2 Overview... 3 Gluster File System... 3 Gluster Storage Platform... 3 No metadata with the Elastic Hash Algorithm... 4 A Gluster
Building Enterprise-Class Storage Using 40GbE
Building Enterprise-Class Storage Using 40GbE Unified Storage Hardware Solution using T5 Executive Summary This white paper focuses on providing benchmarking results that highlight the Chelsio T5 performance
Video Surveillance Storage and Verint Nextiva NetApp Video Surveillance Storage Solution
Technical Report Video Surveillance Storage and Verint Nextiva NetApp Video Surveillance Storage Solution Joel W. King, NetApp September 2012 TR-4110 TABLE OF CONTENTS 1 Executive Summary... 3 1.1 Overview...
<Insert Picture Here> Oracle Cloud Storage. Morana Kobal Butković Principal Sales Consultant Oracle Hrvatska
Oracle Cloud Storage Morana Kobal Butković Principal Sales Consultant Oracle Hrvatska Oracle Cloud Storage Automatic Storage Management (ASM) Oracle Cloud File System ASM Dynamic
Solution Brief July 2014. All-Flash Server-Side Storage for Oracle Real Application Clusters (RAC) on Oracle Linux
Solution Brief July 2014 All-Flash Server-Side Storage for Oracle Real Application Clusters (RAC) on Oracle Linux Traditional SAN storage systems cannot keep up with growing application performance needs.
How To Write An Article On An Hp Appsystem For Spera Hana
Technical white paper HP AppSystem for SAP HANA Distributed architecture with 3PAR StoreServ 7400 storage Table of contents Executive summary... 2 Introduction... 2 Appliance components... 3 3PAR StoreServ
Direct NFS - Design considerations for next-gen NAS appliances optimized for database workloads Akshay Shah Gurmeet Goindi Oracle
Direct NFS - Design considerations for next-gen NAS appliances optimized for database workloads Akshay Shah Gurmeet Goindi Oracle Agenda Introduction Database Architecture Direct NFS Client NFS Server
Scalable NAS for Oracle: Gateway to the (NFS) future
Scalable NAS for Oracle: Gateway to the (NFS) future Dr. Draško Tomić ESS technical consultant, HP EEM 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change
IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE
White Paper IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE Abstract This white paper focuses on recovery of an IBM Tivoli Storage Manager (TSM) server and explores
ABC of Storage Security. M. Granata NetApp System Engineer
ABC of Storage Security M. Granata NetApp System Engineer Encryption Challenges Meet Regulatory Requirements No Performance Impact Ease of Installation Government and industry regulations mandate protection
FlexArray Virtualization
Updated for 8.2.1 FlexArray Virtualization Installation Requirements and Reference Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support
