Oracle Database 11g Direct NFS Client
An Oracle White Paper
July 2007
NOTE: The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.
Contents

Introduction
Direct NFS Client Overview
Benefits of Direct NFS Client
Direct NFS Client - Performance, Scalability, and High Availability
Direct NFS Client - Cost Savings
Direct NFS Client - Administration Made Easy
Direct NFS Client Configuration
Direct NFS Client - A Performance Study
Performance Study Overview
Performance Study - Database Overview
Performance Study - DSS-Style Performance Analysis
Better Scalability
Bonding Across Heterogeneous NICs
Reduced Resource Utilization
Performance Study - OLTP Performance Analysis
Performance Study - Summary Analysis
Conclusion
INTRODUCTION

Network-Attached Storage (NAS) systems have become commonplace in enterprise data centers. This widespread adoption can be credited in large part to simple storage provisioning and an inexpensive connectivity model when compared to block-protocol Storage Area Network technology (e.g., FCP SAN, iSCSI SAN). Emerging NAS technology, such as clustered NAS, offers high-availability features not available with Direct Attached Storage (DAS). Furthermore, the cost of NAS appliances has decreased dramatically in recent years.

NAS appliances and their client systems typically communicate via the Network File System (NFS) protocol. NFS allows client systems to access files over the network as easily as if the underlying storage were directly attached to the client. Client systems use the NFS driver provided by the operating system to facilitate communication between the client and the NFS server. While this approach has been successful, drawbacks such as performance degradation and complex configuration requirements have limited the benefits of using NFS and NAS for database storage.

Oracle Database 11g Direct NFS Client integrates NFS client functionality directly into the Oracle software. Through this integration, Oracle is able to optimize the I/O path between Oracle and the NFS server, providing significantly superior performance. In addition, Direct NFS Client simplifies, and in many cases automates, the performance optimization of the NFS client configuration for database workloads.

DIRECT NFS CLIENT OVERVIEW

Standard NFS client software, provided by the operating system, is not optimized for Oracle Database file I/O access patterns. With Oracle Database 11g, you can configure Oracle Database to access NFS V3 NAS devices directly using Oracle Direct NFS Client, rather than using the operating system kernel NFS client.
Oracle Database accesses files stored on the NFS server directly through the integrated Direct NFS Client, eliminating the overhead imposed by the operating system kernel NFS client. These files remain accessible via the operating system kernel NFS client, thereby allowing seamless administration.

Benefits of Direct NFS Client

Direct NFS Client overcomes many of the challenges associated with using NFS with the Oracle Database. Direct NFS Client outperforms traditional NFS clients, is simple to configure, and provides a standard NFS client implementation across all hardware and operating system platforms.

Direct NFS Client - Performance, Scalability, and High Availability

Direct NFS Client includes two fundamental I/O optimizations to increase throughput and overall performance. First, Direct NFS Client is capable of performing concurrent direct I/O, which bypasses any operating system level caches and eliminates any operating system write-ordering locks. This decreases memory consumption by eliminating scenarios where Oracle data is cached both in the SGA and in the operating system cache, and eliminates the kernel-mode CPU cost of copying data from the operating system cache into the SGA. Second, Direct NFS Client performs asynchronous I/O, which allows processing to continue while an I/O request is submitted and processed. Direct NFS Client therefore leverages its tight integration with the Oracle Database software to provide unparalleled performance when compared to operating system kernel NFS clients. Not only does Direct NFS Client outperform traditional NFS, it does so while consuming fewer system resources. The results of a detailed performance analysis are discussed later in this paper.

Oracle Direct NFS Client currently supports up to 4 parallel network paths to provide scalability and high availability. Direct NFS Client delivers optimized performance by automatically load balancing requests across all specified paths. If one network path fails, Direct NFS Client reissues commands over any remaining paths, ensuring fault tolerance and high availability.

Direct NFS Client - Cost Savings

Oracle Direct NFS Client uses simple Ethernet for storage connectivity. This eliminates the need for expensive, redundant host bus adaptors (e.g., Fibre Channel HBAs) or Fibre Channel switches. Also, since Oracle Direct NFS Client implements multi-path I/O internally, there is no need to configure bonded network interfaces (e.g.,
EtherChannel, 802.3ad Link Aggregation) for performance or availability. This results in additional cost savings, as most NIC bonding strategies require advanced Ethernet switch support.

Direct NFS Client - Administration Made Easy

In many ways, provisioning storage for Oracle Databases via NFS is easier than with other network storage architectures. For instance, with NFS there are no storage-specific device drivers (e.g., Fibre Channel HBA drivers) to purchase, configure, and maintain; no host-based volume management or host file system maintenance; and, most importantly, no raw devices to support. The NAS device provides an optimized file system, and the database server administrator simply mounts it on the host. The result is simple file system access that supports all Oracle file types. Oracle Direct NFS Client builds upon that simplicity by making NFS even simpler.
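The multi-path behavior described earlier, load balancing requests across up to four network paths and reissuing commands on any remaining path when one fails, can be illustrated with a toy model. This is a hypothetical sketch of the concept only, not Oracle's actual Direct NFS implementation; the MultiPathIO class and its path callables are invented for illustration.

```python
class MultiPathIO:
    """Toy model of multi-path I/O: round-robin load balancing with
    failover across paths. Illustrative only."""

    def __init__(self, paths):
        # Each path is a callable that performs an I/O request,
        # e.g., one per configured network path (up to 4).
        self.healthy = list(paths)

    def submit(self, request):
        """Issue the request on the next healthy path; on failure,
        drop that path and reissue on a remaining one."""
        while self.healthy:
            path = self.healthy.pop(0)
            try:
                result = path(request)
                self.healthy.append(path)  # path still good: rotate to back
                return result
            except IOError:
                continue  # failed path stays removed; try the next one
        raise IOError("all network paths to storage have failed")
```

With two paths where the first has failed, requests transparently continue on the second, which is the fault-tolerance behavior the paper describes.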
One of the primary challenges of operating system kernel NFS administration is the inconsistency in managing configurations across different platforms. Direct NFS Client eliminates this problem by providing a standard NFS client implementation across all platforms supported by the Oracle Database. This also makes NFS a viable solution even on platforms that don't natively support NFS, e.g., Windows.

NFS is a shared file system, and can therefore support Real Application Clusters (RAC) databases as well as single-instance databases. Without Oracle Direct NFS Client, administrators need to pay special attention to the NFS client configuration to ensure a stable environment for RAC databases. Direct NFS Client recognizes when an instance is part of a RAC configuration and automatically optimizes the mount points for RAC, relieving the administrator from manually configuring the NFS parameters. Further, Oracle Direct NFS Client requires a very simple network configuration, as the I/O paths from Oracle Database servers to storage are simple, private, non-routable networks, and no NIC bonding is required.

Direct NFS Client Configuration

To use Direct NFS Client, the NFS file systems must first be mounted and available over regular NFS mounts. The mount options used in mounting the file systems are not relevant, as Direct NFS Client manages the configuration after installation. Direct NFS Client can use a new configuration file, oranfstab, or the mount tab file (/etc/mtab on Linux) to determine the mount point settings for NFS storage devices. It may be preferable to configure Direct NFS Client in the mount tab file if you have multiple Oracle installations that use the same NAS devices. Oracle first looks for mount settings in $ORACLE_HOME/dbs/oranfstab, which specifies the Direct NFS Client settings for a single database. Next, Oracle looks for settings in /etc/oranfstab, which specifies the NFS mounts available to all Oracle databases on that host.
Finally, Oracle reads the mount tab file (/etc/mtab on Linux) to identify available NFS mounts. If duplicate entries exist in the configuration files, Direct NFS Client uses the first entry found. Example 1 below shows a sample entry in oranfstab. To enable Direct NFS Client, you must also replace the standard Oracle Disk Manager (ODM) library with one that supports Direct NFS Client. Example 2 below highlights the commands to enable the Direct NFS Client ODM library.
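The three-level search order described above can be sketched as follows. The function and its read_file callback are hypothetical helpers for illustration, not part of any Oracle API; only the search order and the first-match-wins rule come from the text.

```python
import os

def resolve_dnfs_config(oracle_home, read_file):
    """Return the first Direct NFS configuration source found, following
    the documented search order: per-database oranfstab, host-wide
    oranfstab, then the mount tab. read_file(path) returns the file's
    text, or None if the file does not exist."""
    candidates = [
        os.path.join(oracle_home, "dbs", "oranfstab"),  # single database
        "/etc/oranfstab",                               # all databases on host
        "/etc/mtab",                                    # kernel NFS mounts
    ]
    for path in candidates:
        content = read_file(path)
        if content is not None:
            return path, content  # first match wins; duplicates are ignored
    return None, None
```

For example, with no per-database oranfstab present, the host-wide /etc/oranfstab entry is the one that takes effect.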
Example 1: Sample oranfstab File

  server: MyNFSServer1
  path:
  path:
  path:
  path:
  export: /vol/oradata1 mount: /mnt/oradata1

Example 2: Enabling the Direct NFS Client ODM Library

  prompt> cd $ORACLE_HOME/lib
  prompt> cp libodm11.so libodm11.so_stub
  prompt> ln -s libnfsodm11.so libodm11.so

DIRECT NFS CLIENT - A PERFORMANCE STUDY

In this section, we share the results of a performance comparison between operating system kernel NFS and Oracle Direct NFS Client. The study used both OLTP and DSS workloads in order to assess the performance of Oracle Direct NFS Client under different types of workload.

Performance Study Overview

The following test systems were configured for this testing:

DSS Test System: A 4-socket, dual-core x86_64-compatible database server running Linux was connected to an enterprise-class NAS device via 3 Gigabit Ethernet private, non-routable networks.

OLTP Test System: A 2-socket, dual-core x86-compatible database server running Linux was connected to an enterprise-class NAS device via a single Gigabit Ethernet private, non-routable network.

The database software used for both tests was Oracle Database 11g. After mounting the NFS file systems, a database was created and loaded with test data. First, database throughput was measured by the test applications connected to the Oracle database configured to use operating system kernel NFS. After a small
amount of reconfiguration, the test applications were once again used to measure the throughput of the Oracle database configured to use Oracle Direct NFS Client.

Performance Study - Database Overview

The database used for the performance study consisted of an Order Entry schema with the following tables:

Customers. The database contained approximately 4 million customer rows in the customer table. This table contains customer-centric data such as a unique customer identifier, mailing address, contact information, and so forth. The customer table was indexed with a unique index on the customer identification column and a non-unique index on the customer last name column. The customer table and indexes required approximately 3 GB of disk space.

Orders. The database contained an orders table with roughly 6 million rows of data. The orders table had a unique composite index on the customer and order identification columns. The orders table and index required a little over 2 GB of disk space.

Line Items. Simulating a customer base with complex transactions, the line item table contained nearly 45 million rows of order line items. The item table had a three-way unique composite index on the customer identification, order identification, and item identification columns. The item table and index consumed roughly 10 GB of disk space.

Product. This table describes the products available to order. Along with such attributes as price and description, there are up to 140 characters available for a detailed product description. There were 1 million products in the product table. The product table was indexed with a unique index on the product identification column. The product table and index required roughly 2 GB of disk space.

Warehouse. This table maintains product levels at the various warehouse locations as well as detailed information about the warehouses. The warehouse table is indexed with a unique composite index of two columns.
There were 10 million rows in the warehouse table, which combined with its index required roughly 10 GB.

History. An orders history table with roughly 160 million rows was accessed by the DSS-style queries. The history table required approximately 30 GB of disk space.

Performance Study - DSS-Style Performance Analysis

To simulate the I/O pattern most typical of DSS environments, Oracle Parallel Query was used during this test. The workload consisted of queries that required 4
full scans of the orders history table, a workload that scans a little over 640 million rows at a cost of slightly more than 100 GB of physical disk I/O. The network paths from the Oracle Database server to the storage were first configured with the best possible bonded Ethernet interfaces supported by the hardware at hand. After executing the test to collect the operating system kernel NFS performance data, the network interfaces were reconfigured to a simple Ethernet configuration without NIC bonding. The workload was then executed once again and performance data gathered.

Better Scalability

Figure 1 shows the clear benefit of Oracle Direct NFS Client for DSS-style disk I/O. Although both the operating system kernel NFS and Oracle Direct NFS Client delivered 113 MB/s from a single Gigabit network path, adding a second network path to the storage shows the clear advantage of Oracle Direct NFS Client, which scaled at a rate of 99%. The operating system NFS with bonded NICs, on the other hand, delivered only 70% scalability. The net effect was approximately 40% better performance with Direct NFS Client than with the operating system kernel NFS. Most importantly, the Direct NFS Client case was much simpler to configure and could have been achieved using very inexpensive (e.g., non-managed) Ethernet switches.

[Figure 1: DSS-Style Throughput Comparison - Oracle Direct NFS Client versus Operating System NFS with Bonding. Parallel Query full scan throughput (MB/s) for OS NFS and Direct NFS with one and two network paths.]

Bonding Across Heterogeneous NICs

As mentioned earlier in this paper, one of the major pitfalls of configuring bonded NICs is that more costly Ethernet switches are required. Another subtle,
yet troubling requirement of bonding is that it is generally necessary to configure homogeneous NICs for each leg of the bonded interface. Depending on the hardware and operating system being used, homogeneous NICs might mean the same manufacturer and/or the same host connectivity (e.g., all PCI or all motherboard). That can result in unusable hardware resources. For instance, most industry-standard servers come from the factory with dual-port (and sometimes more) Gigabit Ethernet support right on the motherboard. However, it is generally not possible to incorporate these interfaces into a bonded NIC paired with PCI-Express NICs.

This, however, is not an issue with Oracle Direct NFS Client. With Oracle Direct NFS, all network interfaces on a database server can be used for both performance and I/O redundancy, regardless of manufacturer or how they connect to the system (e.g., motherboard, PCI). To illustrate this point, the oranfstab file was modified on the test system to include one of the Gigabit Ethernet interfaces on the motherboard. Configured as such, Oracle Direct NFS Client was load balancing I/O requests to the NAS device across 2 PCI-Express NICs and one NIC on the motherboard. Figure 2 shows how Direct NFS Client was able to exploit the additional bandwidth, delivering 97% scalability to achieve full table scan throughput of 329 MB/s.

[Figure 2: Oracle Direct NFS Client Sequential Read I/O Scalability. Full table scan throughput (MB/s) for Direct NFS (DNFS) and operating system kernel NFS (KNFS) by number of network paths to storage.]

Reduced Resource Utilization

One of the main design goals of Oracle Direct NFS Client is improved resource utilization. Figure 3 shows that with a single network path to storage, the operating system kernel NFS requires 12.5% more kernel-mode CPU than Oracle Direct NFS Client. This 12.5% overhead is rather unnecessary, since both types of NFS exhibited the same 113 MB/s throughput with 1 network path to storage.
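The throughput figures reported above follow directly from the per-path rate and the measured scaling efficiency. As a quick sanity check of the numbers (a sketch, using only values stated in the text):

```python
def aggregate_throughput(per_path_mbps, n_paths, efficiency):
    """Aggregate throughput when n_paths network paths scale at the
    given efficiency relative to perfect linear scaling."""
    return per_path_mbps * n_paths * efficiency

# Both clients delivered 113 MB/s over a single Gigabit path.
knfs_2 = aggregate_throughput(113, 2, 0.70)  # kernel NFS, 2 bonded NICs
dnfs_2 = aggregate_throughput(113, 2, 0.99)  # Direct NFS, 2 paths
dnfs_3 = aggregate_throughput(113, 3, 0.97)  # Direct NFS, 3 paths (Figure 2)

print(round(knfs_2))                  # ~158 MB/s
print(round(dnfs_3))                  # ~329 MB/s
print(round(dnfs_2 / knfs_2 - 1, 2))  # ~0.41, i.e., roughly 40% better
```

The ~40% advantage quoted for the two-path case is simply the ratio of the 99%-scaled Direct NFS throughput to the 70%-scaled bonded kernel NFS throughput.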
Moreover, with kernel NFS the percentage of processor utilization in system mode leaps 3.6-fold to 32% when going from one to two network paths. Using Oracle Direct NFS Client,
on the other hand, results in only a 2.9-fold increase when going from one to two networks (from 8% to 23%), a 25% improvement over the operating system kernel result.

The test system was configured with a total of 4 network interfaces, including 2 on the motherboard and 2 connected via PCI-Express. In order to reserve one for SQL*Net traffic, a maximum of 3 networks were available for Direct NFS Client testing and 2 for bonded operating system kernel NFS. To that end, Figure 3 also shows that the Direct NFS Client system-mode CPU overhead with 3 network paths to storage was 37%, only 6% more than the 35% cost of 2 network paths with operating system kernel NFS. Note that, with only 6% more overhead, the Direct NFS Client case was delivering 108% more I/O throughput (329 vs. 158 MB/s) than the operating system kernel NFS with 2 network paths.

[Figure 3: DSS-Style Testing - CPU Cycles Consumed in Kernel Mode (%Sys), for the 2-net bonded kernel NFS configuration and for Direct NFS with 1, 2, and 3 network paths.]

The improved efficiency of Oracle Direct NFS Client is a very significant benefit. Another way to express this improvement is with the Throughput to CPU Metric (TCM). This metric is calculated by dividing the I/O throughput by the kernel-mode processor overhead. Since the ratio is throughput to cost, a larger number represents an improvement. Figure 4 shows that Oracle Direct NFS Client offers a significant improvement, roughly 85%, in TCM terms.
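The TCM calculation described above is a simple ratio. A sketch, using the three-path Direct NFS figures reported in the text (329 MB/s at 37% kernel-mode CPU):

```python
def throughput_to_cpu_metric(throughput_mbps, pct_sys):
    """Throughput to CPU Metric (TCM): MB/s of I/O delivered per
    percentage point of kernel-mode CPU consumed. Higher is better."""
    return throughput_mbps / pct_sys

# Direct NFS: 329 MB/s at 37% kernel-mode CPU (three network paths).
print(round(throughput_to_cpu_metric(329, 37), 1))  # 8.9, as in Figure 4
```

The kernel NFS rating of 4.8 in Figure 4 follows from the same formula applied to that configuration's throughput and CPU cost.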
[Figure 4: Throughput to CPU Efficiency Metric ((MB per second) / %Sys; higher is better): OS kernel NFS 4.8, Oracle Direct NFS 8.9.]

Performance Study - OLTP Performance Analysis

In order to compare the OLTP-style performance variation between operating system kernel NFS and Oracle Direct NFS Client, the schema described above was accessed by a test workload written in Pro*C. The test consists of connecting a fixed number of sessions to the Oracle Database 11g instance. Each Pro*C client loops through a set of transactions simulating the Order Entry system. This test is executed in a manner aimed at saturating the database server. The Linux uptime command reported a load average of nearly 20 throughout the duration of the run in both the operating system kernel NFS and the Oracle Direct NFS Client case. Load average represents the sum of processes running, waiting to run (i.e., runnable), or waiting for I/O or IPC, divided by the number of CPUs on the server. A load average of nearly 20 is an overwhelming load for the database server. That said, even a server with some idle processor cycles can have an extremely high load average, since processes waiting for disk or network I/O are factored into the equation.

The OLTP workload was the first to saturate the storage in terms of IOPS, but only in the Direct NFS Client case. Without Direct NFS Client, the workload saturated the database server CPU before reaching the throughput limit of the storage. Due to the improved processor efficiency of Direct NFS Client, the database server consistently exhibited 3% idle processor bandwidth while delivering higher throughput than the CPU-bound case without Direct NFS Client. Figure 5 shows that the test case configured with Oracle Direct NFS Client was able to deliver 11% more transactional throughput.
[Figure 5: OLTP Throughput Comparison (transactions per minute). Oracle Direct NFS Client yields an 11% OLTP performance increase over operating system kernel NFS.]

Figure 6 shows data from the Oracle Statspack reports that can be compared to real-world environments: physical I/O was consistently improved by using Oracle Direct NFS Client. As described above, the OLTP test system was a 2-socket, dual-core x86-compatible server, so physical I/O rates in excess of 5,000 per second are significant.

[Figure 6: Oracle Statspack Data for Physical Read and Write Operations per Second, OS kernel NFS versus Oracle Direct NFS.]

Performance Study - Summary Analysis

These performance tests show that the use of Oracle Direct NFS Client improves both throughput and server resource utilization. With Oracle Direct NFS Client, both DSS and OLTP workloads benefited from reduced kernel-mode processor
utilization. Since DSS workloads demand high-bandwidth I/O, the case for Oracle Direct NFS Client was clearly proven, with a 40% gain over operating system kernel NFS. OLTP also showed improvement: by simply enabling Direct NFS Client, the processor-bound OLTP workload improved by 11%.

These performance results are only a portion of the value Oracle Direct NFS Client provides. Oracle Direct NFS Client also improves features such as Direct Path loads with SQL*Loader and External Tables. RMAN also requires high-bandwidth I/O, so it too will benefit from Oracle Direct NFS Client.

CONCLUSION

Decreasing prices, simplicity, flexibility, and high availability are driving the adoption of Network-Attached Storage (NAS) devices in enterprise data centers. However, performance and management limitations in the Network File System (NFS) protocol, the de facto protocol for NAS devices, limit its effectiveness for database workloads. Oracle Direct NFS Client, a new feature in Oracle Database 11g, integrates the NFS client directly with the Oracle software. Through this tight integration, Direct NFS Client overcomes the problems associated with traditional operating system kernel-based NFS clients. Direct NFS Client simplifies management by providing a standard configuration within a unified interface across various hardware and operating system platforms. The tight integration between Direct NFS Client and the Oracle database vastly improves I/O performance and throughput while reducing system resource utilization. Finally, Direct NFS Client optimizes multiple network paths to not only provide high availability but also achieve near-linear scalability by load balancing I/O across all available storage paths.
Oracle Database 11g Direct NFS Client
July 2007
Authors: William Hodak (Oracle), Kevin Closson (HP)

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA
U.S.A.

Worldwide Inquiries: oracle.com

Copyright 2007, Oracle. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest
SUN ORACLE DATABASE MACHINE FEATURES AND FACTS FEATURES From 1 to 8 database servers From 1 to 14 Sun Oracle Exadata Storage Servers Each Exadata Storage Server includes 384 GB of Exadata Smart Flash Cache
WHITE PAPER FUJITSU PRIMERGY AND PRIMEPOWER SERVERS Performance Comparison of Fujitsu PRIMERGY and PRIMEPOWER Servers CHALLENGE Replace a Fujitsu PRIMEPOWER 2500 partition with a lower cost solution that
Performance Advantages for Oracle Database At a Glance This Technical Brief illustrates that even for smaller online transaction processing (OLTP) databases, the Sun 8Gb/s Fibre Channel Host Bus Adapter
Scalable NAS for Oracle: Gateway to the (NFS) future Dr. Draško Tomić ESS technical consultant, HP EEM 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change
An Oracle White Paper March 2010 Oracle Transparent Data Encryption for SAP Introduction Securing sensitive customer data has become more and more important in the last years. One possible threat is confidential
An Oracle White Paper May 2011 Distributed Development Using Oracle Secure Global Desktop Introduction One of the biggest challenges software development organizations face today is how to provide software
Migrating Non-Oracle Databases and their Applications to Oracle Database 12c O R A C L E W H I T E P A P E R D E C E M B E R 2 0 1 4 1. Introduction Oracle provides products that reduce the time, risk,
An Oracle White Paper September 2012 Oracle Database and the Oracle Database Cloud 1 Table of Contents Overview... 3 Cloud taxonomy... 4 The Cloud stack... 4 Differences between Cloud computing categories...
INCREASING EFFICIENCY WITH EASY AND COMPREHENSIVE STORAGE MANAGEMENT UNPRECEDENTED OBSERVABILITY, COST-SAVING PERFORMANCE ACCELERATION, AND SUPERIOR DATA PROTECTION KEY FEATURES Unprecedented observability
An Oracle White Paper January 2011 Using Oracle's StorageTek Search Accelerator Executive Summary...2 Introduction...2 The Problem with Searching Large Data Sets...3 The StorageTek Search Accelerator Solution...3
Best Practices for Optimizing Storage for Oracle Automatic Storage Management with Oracle FS1 Series Storage ORACLE WHITE PAPER JANUARY 2015 Table of Contents 0 Introduction 1 The Test Environment 1 Best
Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks An Oracle White Paper April 2003 Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building
Technical white paper Protect Microsoft Exchange databases, achieve long-term data retention HP StoreOnce Backup systems, HP StoreOnce Catalyst, and Symantec NetBackup OpenStorage Table of contents Introduction...
An Oracle White Paper September 2013 Lowering Storage Costs with the World's Fastest, Highest Capacity Tape Drive Executive Overview... 1 Introduction... 1 Unmatched Capacity and Performance... 3 Lowering
Highmark Unifies Identity Data With Oracle Virtual Directory An Oracle White Paper January 2009 Highmark Unifies Identity Data With Oracle Virtual Directory Executive Summary... 3 The Challenge: A Single
Achieving Sarbanes-Oxley Compliance with Oracle Identity Management An Oracle White Paper September 2005 Achieving Sarbanes-Oxley Compliance with Oracle Identity Management INTRODUCTION The Sarbanes-Oxley
An Oracle Technical White Paper March 2014 Using Symantec NetBackup with VSS Snapshot to Perform a Backup of SAN LUNs in the Oracle ZFS Storage Appliance Introduction... 2 Overview... 3 Oracle ZFS Storage
Migrating from Unix to Oracle on Linux Sponsored by Red Hat An Oracle and Red Hat White Paper September 2003 Migrating from Unix to Oracle on Linux Executive Overview... 3 Unbreakable Linux and Low-Cost
An Oracle White Paper February 2014 Oracle Exalogic Elastic Cloud: Datacenter Network Integration Disclaimer The following is intended to outline our general product direction. It is intended for information
Configuring Oracle SDN Virtual Network Services on Netra Modular System ORACLE WHITE PAPER SEPTEMBER 2015 Introduction 1 Netra Modular System 2 Oracle SDN Virtual Network Services 3 Configuration Details
An Oracle White Paper September 2011 Upgrade Methods for Upgrading to Oracle Database 11g Release 2 Introduction... 1 Database Upgrade Methods... 3 Database Upgrade Assistant (DBUA)... 3 Manual Upgrade...
An Oracle White Paper July 2011 Oracle Desktop Virtualization Simplified Client Access for Oracle Applications Overview Oracle has the world s most comprehensive portfolio of industry-specific applications
Selling Compellent NAS: File & Block Level in the Same System Chad Thibodeau Agenda Session Objectives Feature Overview Technology Overview Compellent Differentiators Competition Available Resources Questions
An Oracle White Paper July 2009 Oracle Application Development Tools Statement of Direction: Oracle Forms, Oracle Reports and Oracle Designer Disclaimer The following is intended to outline our general
An Oracle White Paper March 2013 Oracle s Single Server Solution for VDI Introduction The concept of running corporate desktops in virtual machines hosted on servers is a compelling proposition. In contrast
White Paper Scala Storage Scale-Out Clustered Storage White Paper Chapter 1 Introduction... 3 Capacity - Explosive Growth of Unstructured Data... 3 Performance - Cluster Computing... 3 Chapter 2 Current
Oracle Database Backup Service Secure Backup in the Oracle Cloud Today s organizations are increasingly adopting cloud-based IT solutions and migrating on-premises workloads to public clouds. The motivation
An Oracle White Paper January 2014 Managed Storage Services Designed to Meet Your Custom Needs for Availability, Reliability and Security A complete Storage Solution Oracle Managed Cloud Services (OMCS)
An Oracle White Paper September 2012 Performance with the Oracle Database Cloud Multi-tenant architectures and resource sharing 1 Table of Contents Overview... 3 Performance and the Cloud... 4 Performance
Big data management with IBM General Parallel File System Optimize storage management and boost your return on investment Highlights Handles the explosive growth of structured and unstructured data Offers
QLogic 16Gb Gen 5 Fibre Channel in IBM System x Deployments Increase Virtualization Density and Eliminate I/O Bottlenecks with QLogic High-Speed Interconnects Key Findings Support for increased workloads,
An Oracle White Paper October 2013 Oracle Data Integrator 12c (ODI12c) - Powering Big Data and Real-Time Business Analytics Introduction: The value of analytics is so widely recognized today that all mid
Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published February, 2011 EMC believes the information
Your consent to our cookies if you continue to use this website.