HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array


Technical white paper

Reference architecture

Table of contents
Executive summary
Introduction
HP ProLiant DL980 G7 NUMA server
3PAR StoreServ Storage cluster technology
HP 3PAR StoreServ 7450 Storage array
HP supported options sample scenarios
Solution components
Architectural diagram
Capacity and sizing
DL980 server configurations
HP 3PAR StoreServ SSD IOPS
Workload description
I/O characterization workload
Oracle database workload tool
Workload tuning considerations
Workload data/configuration results
Oracle OLTP peak transactions and IOPS
Thin Provisioning to Full Provisioning comparison results
Large block throughput for BI workloads
Best practices
Analysis and recommendations
Server configuration best practices
Storage configuration best practices
Database configuration best practices
Bill of materials
Reference architecture diagram
Reference architecture BOM
Summary
Implementing a proof-of-concept

Appendix A: Red Hat 6.4 kernel tunables /etc/sysctl.conf
Appendix B: Grub configuration for disabling C-states
Appendix C: IRQ affinity script for /etc/rc.local
Appendix D: HBA NUMA mapping and IRQ map
Appendix E: UDEV configurations
Appendix F: Storage information
Appendix G: Check or set operating system tracing parameter
Appendix H: Oracle parameters
Appendix I: HP ProLiant DL980 PCIe card loading order
For more information

Executive summary

The HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array offers the latest, most robust reference architecture developed under the HP UDB portfolio of mission-critical database solutions. This is the latest HP Universal Database solution, providing extreme OLTP database performance with exceptional management capabilities through the addition of the HP 3PAR StoreServ 7450 All-flash array and its rich feature set. The HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array:

- Delivers I/O capabilities over 1M IOPS at less than 1 millisecond response time and a throughput of 10 GB/second.
- Supports more processors, memory, and I/O than previous systems.
- Reduces overhead with minimal LUN paths and inter-node communication.
- Provides support for a host of features such as High Availability (HA), Thin Provisioning, disaster recovery, and much more.

The heart of this solution is mission-critical high performance and flexibility. UDB processing is powered by the robust and flexible HP ProLiant DL980 G7 server, the industry's most popular, highly reliable, and comprehensively scalable eight-socket x86 server. The HP ProLiant DL980 G7 Non-Uniform Memory Access (NUMA) server leverages HP's history of innovation designing mission-critical servers in RISC, EPIC, and UNIX environments. This design capitalizes on over a hundred availability features to deliver a portfolio of resilient and highly reliable scale-up x86 servers. Performance demands for database implementations continue to escalate. Requirements for higher transaction speed and greater capacity also continue to increase. As servers such as the HP ProLiant DL980 G7 deliver more and more performance, the storage subsystem must evolve to support this growth. The HP 3PAR StoreServ 7450 All-flash storage system meets these performance demands head-on; its all-flash storage array provides extreme storage IOPS and throughput.
With this single-rack solution, over one million IOPS have been validated at a throughput capability of 10.5 GB/sec. The solution supports growth using additional flash drives or complete storage arrays, adding capacity as needed. As database implementations have grown to require extreme IOPS performance to meet today's demanding business environments, the UDB solution meets these needs by leveraging the industry-leading HP 3PAR StoreServ technology. The Oracle and HP 3PAR StoreServ 7450 All-flash array solution allows for flexibility in storage HA configurations with the use of RAID 1, RAID 5, and RAID 6. Flexibility in SSD choice, between 100 GB SLC, 200 GB SLC, and 400 GB MLC, means that customers can customize different configuration options. The HP UDB Reference Architecture is performance tested for 16, 32, and 48 SSD combinations for each array. The resulting configuration is combined with a database. This database can be a single instance, multiple single instances, a highly available clustered solution, or a disaster recovery solution using versions 11gR2 or 12c. This paper is written specifically for this UDB implementation with the Oracle database. Customers today require high-performing, highly available database solutions without the high cost and inflexibility of all-Oracle-stack solutions. The open-architecture HP solution provides these benefits:

- Allows the customer to use a 100% open-system HP hardware solution that has been tested and has solid support.
- The database choice can be an open-solution database or an Oracle database, enabling easy update, expansion, and integration as the need arises.
- Allows the choice of standard HP support options, with the flexibility to tier mission-critical requirements as needed.
- The DL980 G7 NUMA architecture allows massive flexibility to scale up with a single large database or multiple databases.
The HP 3PAR StoreServ 7450 offers, within the array itself, extensive features that surpass the offerings of most flash-based database solutions, including Thin Provisioning, scalability, volume snapshot capability, cloning, online drive and RAID migration, and much more. Customer performance workload characteristics and requirements vary. HP has solutions tailored to provide maximum performance for various workloads without compromising on required availability commitments to the business.

Target audience: This HP white paper was designed for IT professionals who use, program, manage, or administer large databases that require high availability and high performance. Specifically, this information is intended for those who design, evaluate, or recommend new IT high-performance architectures, and includes details on the following topics:

- HP Universal Database Solution for extreme performance and capacity
- HP DL980 G7 and HP 3PAR StoreServ 7450 All-flash array, the newest addition to the UDB solution offerings

This reference architecture focuses primarily on the design, configuration, and best practices for deploying a highly available, extreme-performance Oracle database solution. The Oracle and Red Hat installations are standard configurations except where explicitly stated in the reference architecture.

This white paper describes testing performed in July and August.

Introduction

DL980 Universal DB Solution

IT departments are under continuous pressure to add value to the business, improve existing infrastructures, enable growth opportunities, and reduce overhead. At the same time, exploding transactional data growth is driving database performance and availability requirements to entirely new levels. The demand for high speed and low latency, along with staggering volumes of transactional data, is prompting the adoption of new storage technologies that range from traditional disk to solid state. Driven by the creation of new, high-value applications, customers are discovering that the Oracle Exadata one-size-fits-all approach (one operating system, one database, one vendor) doesn't do the job. Rather, Exadata requires extensive tuning, leads to high cost, and results in vendor lock-in. In response, IT departments are looking for an appliance-like solution that provides a common foundation yet offers solid-state storage flexibility with a choice of OS and database. Better performance and lower costs are just the beginning of the value that the HP ProLiant DL980 Universal DB Solution, optimized for the HP ProLiant DL980 G7 server, delivers.

Common foundation

The HP ProLiant DL980 G7, an HP scale-up, resilient x86 server based on the PREMA Architecture, is designed to take full advantage of the latest 10-core Intel Xeon processor E7-4800/2800 product families with Intel QuickPath Interconnect (QPI) technology. Working in concert, they form the foundation for unparalleled transactional performance, scalability, and energy efficiency, plus significantly lower TCO. With all major Linux operating systems and Microsoft Windows supported, the platform collaborates with the OS and software stack to gain the full benefits of the Reliability, Availability and Serviceability (RAS) feature set included in the Intel Xeon processor E7-4800/2800 product families.
HP ProLiant DL980 G7 NUMA server

The HP ProLiant DL980 G7 server, using the PREMA architecture, is a stellar choice for scale-up mission-critical solutions such as the HP Universal Database. This 8-socket NUMA server consolidates massive processing into a single server with multiple NUMA nodes. The DL980 uses Smart CPU caching; with 10-core processors from the Intel Xeon E7-4800/2800 families installed, it is capable of processing with 80 CPU cores, or 160 logical cores where Hyper-Threading is enabled.

Figure 1. HP ProLiant DL980 G7 Server

HP ProLiant DL980 G7 NUMA technology

The DL980 G7 is ideal for scale-up database implementations. Its modular design provides the flexibility to readily adapt the configuration to meet the demands of your dynamic environment. The architecture supports an appropriately balanced system with more processors, more memory, and more I/O than previous-generation x86 systems have provided. However, simply adding processors, memory, and I/O slots is not sufficient to achieve the needed scalability and resiliency. When a database system scales to a larger number of interconnected processors, communication and coordination between processors grows at an exponential rate, creating a system bottleneck. To solve this issue in our 8-socket x86 server, HP looked to the design of our higher-end, mission-critical servers. At the core of the HP PREMA Architecture is a node controller ASIC, derived from technology powering the HP Integrity Superdome 2. The node controller enables two key functionalities: Smart CPU caching and the redundant system fabric. These reduce communication and coordination overhead and enhance system resiliency. Key processes in your system and databases can be given individual CPU affinity for the most efficient overall processing. Applications that are NUMA-aware can optimize system performance through NUMA control; for applications that are not NUMA-aware, the affinity can be set manually. Figure 2 shows an architectural view of the DL980 G7. For the DL980 G7 used in the Oracle and HP 3PAR StoreServ 7450 reference architecture testing, each physical CPU has ten cores. The HP PREMA Architecture groups the processor sockets into multiple QPI islands of two directly connected sockets. This direct connection provides the lowest latencies. Each QPI island connects to two node controllers (labeled XNC in the diagram). The system contains a total of four node controllers.
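For applications that are not NUMA-aware, affinity can be set from the operating system. The sketch below (not part of the tested configuration) uses Python's `os.sched_setaffinity` on Linux, and assumes the first ten logical CPUs belong to NUMA node 0, as with the 10-core sockets described above; verify the real topology with `numactl --hardware` before pinning anything.

```python
import os

def pin_to_node0(pid=0):
    """Restrict a process (0 = the calling process) to NUMA node 0's CPUs.

    Assumes CPUs 0-9 are local to node 0 (an assumption based on the
    10-core sockets above); falls back to the current mask on machines
    where none of those CPUs are available.
    """
    node0 = set(range(10)) & os.sched_getaffinity(pid)
    if node0:
        os.sched_setaffinity(pid, node0)
    return os.sched_getaffinity(pid)

if __name__ == "__main__":
    print(sorted(pin_to_node0()))
```

The same pinning can be done from the shell with `taskset -c 0-9 <command>` or, memory included, `numactl --cpunodebind=0 --membind=0 <command>`.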
HP Smart CPU Caching is the key to communication between the NUMA nodes.

Figure 2. Architecture view of the HP ProLiant DL980 G7

An HP ProLiant DL980 G7 scale-up implementation, using the Intel Xeon processor with an embedded memory controller, implies a cache-coherent Non-Uniform Memory Access (ccNUMA) system. In a ccNUMA system, the hardware ensures cache coherency by tracking where the most up-to-date data is for every cache line held in a processor cache. Latencies between processor and memory in a ccNUMA system vary depending on the location of these two components in relation to each other. HP's goal in designing the PREMA Architecture was to reduce average memory latency and minimize bandwidth consumption resulting from coherency snoops. The result is less latency for database server processes and I/O processes. The HP node controller (XNC) works with the processor's coherency algorithms to provide system-wide cache coherency. At the same time, it minimizes processor latency to local memory and maximizes usable link bandwidth for all links in the system. The architectural diagram in figure 2 shows the 2-socket QPI islands. A pair of XNC node controllers supports two islands in a 4-socket quad. These quads are then connected to create an 8-socket system. Within a 2-socket source-snoopy island, all snoops have at most one QPI link hop between the requesting core, the paired socket cache, the smart CPU cache in the node controller, and the memory controller. By tagging remote ownership of memory lines, the node controller targets any remote access to the specific location of the requested memory line.

With the HP PREMA Architecture smart CPU caching technology, the HP system effectively provides more links connecting processor sockets: the equivalent of six QPI links connecting the two quads. A glueless 8-socket system has just four QPI links. In addition, Smart CPU caching uses the links more efficiently because it reduces the overhead of cache coherency snoops. Because of the reduction in local memory latency compared to glueless 8-processor systems, virtual environments can have higher performance on the ProLiant DL980 G7. With NUMA-aware OS support, system performance will scale nearly linearly.

DL980 G7 resiliency and redundancy

The DL980 G7 was designed with the resiliency to meet the high availability demands of mission-critical enterprise environments. A redundant fabric achieves continual uptime. Six redundant data paths, 50% more than most industry-standard products, provide a high level of protection from failures. Multiple areas of redundancy such as power supplies, fans, and clocks provide additional data protection. Read more about the DL980 at the HP product website, hp.com/servers/dl980. In summary, the benefits of using the DL980 for large, scale-up database implementations include:

- Flexibility
- Modular design
- High availability
- Superior performance

3PAR StoreServ Storage cluster technology

The HP 3PAR StoreServ storage systems use a hardware cluster technology to physically store your data, offering ease of management, highly available data volumes, high performance, and the rich features required to manage your data efficiently. The HP 3PAR StoreServ systems use pairs of processing nodes in front of many combinations of data drive types, sizes, and RAID configurations. Physical drives are mapped into logical drives. Virtual volumes are created from the logical drives in 1 GB chunklets of data. The architecture is designed to widely stripe the volume data across pools of storage called common provisioning groups (CPGs).
The virtual volumes are then exported to host systems by creating LUN paths to the volumes called vluns. Figure 3 shows how the data changes from a physical mapping to a logical mapping.

Figure 3. HP 3PAR StoreServ Cluster Technology

Key terms for 3PAR architecture

The following key terms relate to the 3PAR architecture:

Physical disks - On the HP 3PAR StoreServ 7450 All-flash array, physical disks refers to the flash-based SSDs used. These include the 100 GB SLC, 200 GB SLC, and 400 GB MLC. On the HP 3PAR StoreServ 7400 and 10x00 arrays, physical disks can also refer to standard SAS hard disks and SATA hard disks. Only flash SSDs are supported for the HP 3PAR StoreServ 7450 All-flash array.

Chunklets - A chunklet is a chunk of contiguous space on the physical disk. For the HP 3PAR StoreServ 7450 All-flash array, this is 1 GB of space from the SSD. The HP 3PAR StoreServ 7450 nodes and operating system manage the chunklets and assign only one logical device to any one chunklet. A logical device can contain many chunklets, but each chunklet belongs to only one logical device.

Logical disks - A logical disk is a collection of physical disk chunklets. These are organized in rows of RAID sets. Logical disks are pooled together in common provisioning groups (CPGs). The RAID types supported are RAID 0, RAID 1, RAID 5, and RAID 6. Note: RAID 1 and RAID 5 were used in the testing.

Common Provisioning Groups (CPG) - A pool of logical disks from which virtual volumes are allocated on demand. In this reference architecture, sixteen virtual disks were created, eight from each HP 3PAR StoreServ 7450 All-flash array.

Virtual Volumes - Virtual volumes are volumes explicitly provisioned by the user. The data is taken from the CPG. Virtual volumes are exported to hosts by associating LUN paths with them (vluns). A virtual volume can be fully provisioned or thin provisioned.

The processing nodes connected to the physical storage are connected in node pairs. Node pairs are interconnected in a mesh using custom ASICs. The HP 3PAR StoreServ 7450 All-flash arrays used in this reference architecture are four-node units: node pair 0, 1 and node pair 2, 3.
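The chunklet-to-logical-disk mapping above can be illustrated with a toy model (purely illustrative, not HP's implementation): each drive contributes 1 GB chunklets, and RAID sets are formed from chunklets on distinct drives, so a volume built from the resulting pool stripes widely across all spindles.

```python
# Toy model of the chunklet layout described above: each SSD is divided
# into 1 GB chunklets, and RAID 5 (3+1) sets are built from chunklets on
# four distinct drives. The pooled sets stand in for a CPG.
CHUNKLET_GB = 1

def build_raid5_sets(num_drives, drive_gb, set_size=4):
    """Group chunklets from `set_size` distinct drives into RAID sets.

    Each element of a set is a (drive_index, chunklet_row) pair.
    """
    chunklets_per_drive = drive_gb // CHUNKLET_GB
    raid_sets = []
    for row in range(chunklets_per_drive):
        for first in range(0, num_drives - set_size + 1, set_size):
            # one chunklet from each of `set_size` different drives
            raid_sets.append([(first + d, row) for d in range(set_size)])
    return raid_sets

# 16 x 400 GB SSDs -> 400 rows x 4 sets per row = 1600 RAID sets,
# each storing 3 GB of user data behind 1 GB of parity.
sets_ = build_raid5_sets(16, 400)
usable_gb = len(sets_) * 3 * CHUNKLET_GB   # 4800 GB of the 6400 GB raw
```

The real array also reserves spare chunklets and balances sets across cages and node pairs, which this sketch ignores.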
Each node has 2 x 8Gb FC port connections and an additional expansion card with four more 8Gb FC port connections. Four connections are used on each node, two from the internal ports and two from the expansion ports.

HP 3PAR StoreServ 7450 Storage array

The newest addition to the HP 3PAR family is the HP 3PAR StoreServ 7450 All-flash array, shown in figure 4.

Figure 4. Front view of the HP 3PAR StoreServ 7450

High Availability

The HP 3PAR StoreServ 7450 storage array is a highly available, redundant solution for enterprise environments. The array offers high availability and redundancy at all levels. All of the nodes are clustered together through custom ASICs for maximum availability. Data paths from the nodes to the disks are fully redundant, as are the front-end host connections.

Solid State Drives

The HP 3PAR StoreServ 7450 All-flash array offers three types of SSDs in either SFF or LFF profile. The reference architecture uses the SFF enclosures and drives:

- HP 100 GB 6G SAS SFF (2.5-inch) SLC Solid State Drive
- HP 200 GB 6G SAS SFF (2.5-inch) SLC Solid State Drive
- HP 3PAR StoreServ 400 GB 6Gb SAS SFF (2.5-inch) MLC Solid State Drive

For this reference architecture, any of the three types of drives can be chosen. It is recommended that they be used in groups of 4 drives per enclosure. This would be 16 drives minimum per drive type.

HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array rich product feature set

HP Thin Suite (Thin Provisioning, Thin Persistence, and Thin Conversion)

The HP suite of technologies includes:

Thin Provisioning - Thin Provisioning allows users to allocate virtual volumes to servers and provision only a fraction of the physical storage in the volume. This allows for maximum efficiency in capacity utilization, avoiding considerable investment in storage resources that would otherwise sit stranded, provisioned but unused.

Thin Conversion - This feature allows users to convert a fully provisioned set of volumes to thinly provisioned volumes. For instance, if a volume was created with the intent of using most of the space, but circumstances resulted in most of the space not being used, the volume can be converted to a thin-provisioned volume. This results in tangible space and cost savings.

Thin Persistence - Thin Persistence is a technology within the HP 3PAR StoreServ arrays that detects zero-valued data during data transfers. When data not being used in the volume is identified, it can be reallocated to free-to-use status. If data is removed from an application volume and those addresses are set to zero, Thin Persistence can free them. Oracle developed an ASM Storage Reclamation Utility (ASRU) for zeroing out data in an Oracle ASM disk group. This tool can be run, and Thin Persistence will then detect the zeros and free up the space. For more information about HP 3PAR Thin Provisioning for Oracle and the ASRU utility, see Best Practices for Oracle and HP 3PAR StoreServ Storage.
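As an illustration of how Thin Provisioning is typically set up on the array, the following HP 3PAR CLI sequence sketches creating a CPG and a thin-provisioned virtual volume. The object names, sizes, and LUN number are examples only, and option spellings should be checked against the HP 3PAR OS CLI reference for your array firmware before use.

```
# Pool SSD capacity into a RAID 5 CPG (names and sizes are examples)
createcpg -t r5 -p -devtype SSD SSD_R5_CPG

# Create a 512 GB thin-provisioned virtual volume drawn from that CPG
createvv -tpvv SSD_R5_CPG oradata_tpvv 512G

# Export the volume to the database host as a vlun
createvlun oradata_tpvv 10 dl980-host
```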
HP 3PAR Remote Copy HP 3PAR Remote Copy software brings a rich set of features and benefits that can be used to design disaster tolerant solutions that cost-effectively address availability challenges of enterprise environments. HP 3PAR Remote Copy is a uniquely easy, efficient, and flexible replication technology that allows you to protect and share data from any application. Implemented over native IP (through GbE) or Fibre Channel, users may choose either the asynchronous periodic or synchronous mode of operation to design a solution that meets their requirements for recovery point objective (RPO) and recovery time objective (RTO). With these modes, 3PAR Remote Copy allows you to mirror data between any two HP 3PAR StoreServ Storage systems, eliminating the incompatibilities and complexities associated with trying to mirror between the midrange and enterprise array technologies from traditional vendors. Source and target volumes may also be flexibly and uniquely configured to meet your needs, using, for example, different RAID levels, thick or thin volumes or drive types. For more information, refer to Replication Solutions for demanding disaster tolerant environments. HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array special features and functions Table 1 shows specific data points of interest about the HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array solution. Table 1. 
Features specific to the HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array

Attribute: Result
IOPS: 1M 8k reads; 1M 4k reads
Usable storage capacity (data and redo): 26.8 TB, RAID 5, 400 GB MLC drives, two HP 3PAR StoreServ 7450 with 48 SSDs each (96 drives total)
Storage HA: Yes, redundant storage nodes and RAID protection
Server HA without performance impact: Yes, redundancy at server; server protection with HP Serviceguard
Data loss on single failure: No
Oracle Real Application Cluster required: No
Duplicate copy of database: Yes (HP 3PAR Remote Copy or Oracle Data Guard)
Disaster Recovery: Yes
Query standby database: Yes (Remote Copy)

Attribute: Result
Data device retention: Yes
Database storage: All flash
Storage Thin Provisioning: Yes
Storage Thin Persistence: Yes
Thin Conversion: Yes
Volume Snapshot: Yes
Integrated Oracle backup solution: Yes
Dynamic Optimization: Yes
Ease of management: Open IT tools
Operating System choice: RHEL, SUSE Linux, Oracle Linux, Windows
Database choice: Flexible for other databases; tested with Oracle 11gR2 using ASM on Grid Infrastructure

HP supported options: sample scenarios

High availability clustering with HP Serviceguard for Linux or Oracle Real Application Clusters

Two options for clustering the UDB Oracle database solution with the HP 3PAR StoreServ 7450 are: 1) running the HP Serviceguard for Linux cluster solution, or 2) Oracle Real Application Clusters for Linux. Because the UDB solution implements the HP ProLiant DL980 G7 server, HP Serviceguard for Linux is a complete HP-supported high availability solution which employs an active-standby cluster and provides great flexibility. Serviceguard can be configured to run multiple databases and has many features that integrate not only with the database but with other components of the environment, such as applications and web servers.

HP Serviceguard

HP Serviceguard for Linux, the high availability clustering software used in this solution, is designed to protect applications and services from planned and unplanned downtime. The HP Serviceguard Solutions for Linux portfolio also includes numerous implementation toolkits that allow you to easily integrate various databases and open source applications into a Serviceguard cluster with three distinct disaster recovery options. For additional information, see the HP Serviceguard for Linux website.
Key features of HP Serviceguard for Linux include:

- Robust monitoring protects against system, software, network, and storage faults
- Advanced cluster arbitration and fencing mechanisms prevent data corruption or loss
- GUI and CLI management interfaces
- Quick and accurate cluster package creation

Refer to the white paper HP ProLiant DL980 Universal Database Solution: HP Serviceguard for Linux and 3PAR StoreServ for Oracle Enterprise Database. Figure 5 is an example of how Oracle and HP 3PAR StoreServ 7450 could be implemented in an HP Serviceguard for Linux cluster. This example is a two-node active-standby setup in which both servers can be used concurrently by multiple database instances, and can also be configured to fail over critical databases in case of failures. Much more information is available from the HP Serviceguard for Linux website.

Figure 5. Sample scenario diagram of Oracle and HP 3PAR StoreServ 7450 integration with Serviceguard for Linux

Oracle Real Application Clusters

Another supported clustering option is Oracle's Real Application Clusters (RAC) with Oracle Enterprise Database and Grid Infrastructure. Oracle RAC clustering technology is a scale-out active-active cluster in which multiple nodes each run their own instance of the same database, allowing multiple servers to process against the same database. Scaling out with Oracle RAC is a high availability and performance option.

Disaster Recovery with HP 3PAR Remote Copy or Oracle Data Guard

HP 3PAR Remote Copy

The HP 3PAR Remote Copy software product provides an array-based data replication solution for HP 3PAR StoreServ arrays. Both synchronous and asynchronous replication options are supported. Figure 6 shows an example scenario for disaster recovery replication between two Oracle and HP 3PAR StoreServ 7450 environments. Replication to a remote site can be used for more than disaster recovery. The secondary site can be used for remote database reporting or database development. Use with HP 3PAR Snapshot technology allows for making database copies or even volume copies for remote backup. The HP 3PAR StoreServ All-flash array has the unique ability to provide flash-level performance alongside many of the desirable 3PAR management features.

Figure 6. Sample scenario configuration of an HP 3PAR Remote Copy environment for Oracle and HP 3PAR StoreServ 7450

Oracle Data Guard

Oracle Data Guard is an Oracle product that provides data protection and disaster recovery for enterprise environments. Data Guard synchronizes a remote standby database, keeping the data consistent on the standby database. If the production database fails or needs to be taken down for service, Data Guard can switch the standby database to the production role. Data Guard can also be used for database backup and recovery.

Solution components

Architectural diagram

HP ProLiant DL980 G7 server

Figure 7 shows an architectural diagram of the tested UDB solution using the DL980 G7 and two HP 3PAR StoreServ 7450 All-flash arrays. The configuration is a good example setup for most scale-up database customers. This configuration is the basis for several other variant configurations which provide the flexibility to meet the need at hand. Our testing used a single HP ProLiant DL980 G7 with 8 physical Xeon E7 2.40GHz processors. Each of these processors has 10 cores, totaling 80 cores for the entire server. Turning on Hyper-Threading in the DL980 G7 BIOS enables two threads per core, making 160 logical cores. For this testing, Hyper-Threading was not enabled. The system was equipped with 2TB of quad-rank memory, 70% of which was allocated to operating system shared memory. Also installed in the DL980 G7 were 8 dual-port QLogic 8Gb Fibre Channel cards. The cards are placed within different NUMA nodes for best performance and scalability.
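The 70% shared-memory allocation translates into kernel tunables such as kernel.shmmax and kernel.shmall; the values actually used in testing are listed in Appendix A. A quick sizing sketch for the 2TB configuration, assuming 4 KB base pages:

```python
# Sizing sketch for the 70% shared-memory allocation on the 2 TB server.
# kernel.shmmax is expressed in bytes, kernel.shmall in system pages;
# this only shows the arithmetic, not the tested Appendix A values.
TOTAL_RAM_BYTES = 2 * 1024**4   # 2 TB installed memory
PAGE_SIZE = 4096                # x86-64 base page size

shmmax = TOTAL_RAM_BYTES * 7 // 10   # 70% of RAM, in bytes
shmall = shmmax // PAGE_SIZE         # the same limit expressed in pages

print(f"kernel.shmmax = {shmmax}")
print(f"kernel.shmall = {shmall}")
```

In practice an Oracle SGA of this size would also be backed by huge pages (vm.nr_hugepages), which changes the page-count arithmetic but not the 70% budget.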

The environment is fairly simple from the standpoint of the number of servers and storage units. The entire solution delivers 1M IOPS and fits into a single rack with room for storage growth. The optional DL380 Gen8 server for 3PAR StoreServ management is not included. See the Bill of Materials section for a rack view and details. The 10GbE network switches are HP 5920 series switches.

Figure 7. HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array architectural diagram

Two HP 8/24 Fibre Channel SAN switches

The storage connection to the server is accomplished using two HP 8/24 Fibre Channel switches in a completely redundant setup. For each host bus adapter (HBA) card, a SAN connection goes from the first port to the first switch and a redundant connection goes from the second HBA port to the second switch. The switches were tightly zoned using single-initiator-to-single-target WWN zoning. Each HBA port is connected to a single port on a single HP 3PAR StoreServ 7450 storage node. Zoning also took into account the NUMA location of each HBA card. The goal is to create multiple paths for HA while minimizing cross-communication between the NUMA nodes, the storage nodes, and the volumes themselves. Too many paths can create unwanted latencies in the I/O subsystem of the operating system. Tight volume allocation and zoning to nodes improved I/O performance by 20%.

HP 3PAR StoreServ node storage arrays

The HP 3PAR StoreServ 7450 units used for this testing were 4-node units. Each node pair has two additional disk enclosures. The SSDs were installed equally across the node pairs and expansion units. SLC drives were installed evenly in each unit, totaling 48 SSDs per 3PAR StoreServ 7450 array. With two arrays, the total maximum number of SSDs tested on a single database was 96 drives in two HP 3PAR StoreServ 7450 All-flash arrays.
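The single-initiator-to-single-target zoning can be expressed on the HP 8/24 (Brocade-based) switches roughly as follows. The zone name, configuration name, and WWNs below are placeholders, not the tested values; Appendix F shows a zoning example from the actual configuration.

```
# One zone per HBA-port/array-port pair (WWNs below are placeholders)
zonecreate "dl980_hba0p0__7450_n0p1", "10:00:00:00:c9:aa:bb:01;20:11:00:02:ac:00:12:34"
cfgcreate "udb_cfg", "dl980_hba0p0__7450_n0p1"
cfgenable "udb_cfg"
```

Keeping one initiator and one target per zone limits the path count per LUN, which is what kept I/O-subsystem latency down in the tested configuration.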
Server connection layout

The DL980 server has both I/O expansion modules installed to accommodate the FC HBA cards needed. For maximum performance, the dual-port 8Gb FC HBA cards are spread across three separate NUMA nodes (0, 2, 4). The cards are connected only to x8 slots in the HP ProLiant DL980 G7. This provides I/O throughput bandwidth for the tested solution as well as headroom for future storage array expansion. Table 2 shows the HBA card NUMA node assignments and the local CPUs belonging to each NUMA node. This was the tested card placement. To see the DL980 G7 card loading, refer to Appendix I.

Table 2. HBA card placement in the DL980

NUMA Node 0: card slots 9, 11
NUMA Node 2: card slots 2, 3, 5
NUMA Node 4: card slots 12, 13, 15

Figure 8 shows the connection locations and FC connection mapping to the HP 3PAR StoreServ All-flash array. All port 0 HBA connections go to switch A, and all port 1 HBA connections go to switch B. To achieve the maximum IOPS during I/O characterization testing, the connections to the virtual volumes needed to be isolated to specific NUMA nodes to minimize latencies in the operating system. Each connection has a specific single-initiator-to-single-target zone defined. See Appendix F for a zoning example. The integrated 10Gb Ethernet is used for connections back to the switches for client access. The user can choose whatever 10GbE infrastructure connection is required for their environment. The iLO connection is available on the HP ProLiant DL980 for remote management as needed.

Figure 8. HP ProLiant DL980 G7 rear view connection diagram

HP 3PAR StoreServ 7450 two node pairs

Figure 9 shows the rear view of two HP 3PAR StoreServ 7450 node pairs. A 4-node array has two node pairs. This tested solution used two 4-node arrays, with each array having two additional disk enclosures. The SSDs in the array are evenly distributed across the enclosures, so for 48 drives in the array, each enclosure holds 12 drives. With two arrays, this is a total of 96 drives at 12 drives per enclosure.
This leaves 12 more slots per enclosure open for capacity expansion. The HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array uses the 4-port 8Gb/s FC option for additional FC ports to achieve the 1M IOPS.

Figure 9. Two node pairs for the HP 3PAR StoreServ 7450 All-flash array

Capacity and sizing

DL980 server configurations

Depending on your application performance requirements, you have several processor, memory, and I/O card configuration options.

Recommended hardware settings
- Put FC HBAs in x8 slots for best performance.
- Distribute FC HBA cards evenly across the available I/O bays and I/O hubs.
- Do not use slot 1 for any FC HBA card; it is a PCIe x4 Gen1 slot.
- For the 8-socket configuration, install memory DIMMs across all memory sockets of the eight CPUs for optimum NUMA performance.

The tables below list the supported Intel Xeon processors, memory DIMMs, and PCI expansion slots for the ProLiant DL980 G7 server. For the best extreme performance, use the E7 performance processors.

Table 3. Supported E7 Family Processors

  Processor (Intel Xeon)                                       Cores per processor / max cores in an 8-processor DL980 G7
  E7 (30MB Cache, 2.4GHz, 130W, 6.4 GT/s QPI) (recommended)
  E7 (24MB Cache, 2.26GHz, 130W, 6.4 GT/s QPI)
  E7 (24MB Cache, 2.00GHz, 130W, 6.4 GT/s QPI)
  E7 (24MB Cache, 2.13GHz, 105W, 6.4 GT/s QPI)                 8 / 64
  E7 (18MB Cache, 1.86GHz, 95W, 4.8 GT/s QPI)

Note
The Intel Xeon processor E7 series supports Hyper-Threading (HT). HT is not recommended and was disabled in our configurations; however, it is good practice to test HT with your particular application.

The DL980 G7 server comes with the Standard Main I/O board, on which slots 9 and 11 are x8 Gen2 PCIe slots. The PCIe expander option provides additional I/O slots 1-6; slots 2, 3, 5 and 6 are x8 Gen2 PCIe slots. The low profile expansion option provides additional I/O slots, of which slots 12, 13, 15, and 16 are x8 Gen2 PCIe slots.

Table 4. HP ProLiant DL980 G7 server with HP FC HBA PCIe slot configurations

  Configuration 1 (one HP 3PAR 7450 array): recommended x8 slots include 5, 9, 11 (x8 Gen2 PCIe)
  Configuration 2 (two HP 3PAR 7450 arrays): recommended x8 slots 2, 3, 5, 9, 11, 12, 13, 15 (x8 Gen2 PCIe)

Configuration 2 with more than two arrays was not tested; it is recommended to run a proof of concept (POC) to evaluate your performance workload requirements. If an add-on SAS controller is installed in the DL980, it is possible that the SAS controller could interfere with the performance of FC HBA cards installed in PCIe x8 slots 9 and 11 on the Standard Main I/O board. Moving FC cards to different NUMA nodes was outside the scope of the tested configuration.

Note
It is not recommended to use slot 1 for any HP FC HBA cards due to low I/O performance (PCIe x4 Gen1).

Table 5 shows the memory module kits available for the DL980 G7. The more ranks per DIMM, the higher the performance, so quad-rank DIMMs perform better than dual-rank DIMMs. Performance is best when the installed DIMMs are all of equal size.

Table 5.
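Whether HT was actually disabled can be confirmed from the OS by comparing logical CPUs against distinct physical cores. A sketch that parses `lscpu -p=CPU,CORE,SOCKET` style output (cpu,core,socket per line, `#` comment lines ignored); the lscpu pipeline at the bottom is the assumed live usage:

```shell
#!/bin/sh
# Report whether Hyper-Threading is active: if there are more logical
# CPUs than unique (socket,core) pairs, sibling threads exist.
ht_active() {
    awk -F, '
        /^#/ { next }
        NF >= 3 {
            cpus++
            key = $3 "/" $2
            if (!(key in seen)) { seen[key] = 1; cores++ }
        }
        END { print (cpus > cores ? "HT on" : "HT off") }
    ' "$@"
}

# Live usage (assumption: util-linux lscpu is installed):
#   lscpu -p=CPU,CORE,SOCKET | ht_active
```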
Supported Memory DIMMs

  Memory kit                                   Rank
  HP 4GB 1Rx4 PC R-9 (DDR3-1333)               Single
  HP 8GB 2Rx4 PC R-9 (DDR3-1333)               Dual
  HP 16GB 2Rx4 PC3L-10600R-9 (DDR3-1333)       Dual (recommended)
  HP 32GB 4Rx4 PC3L-8500R-7 (DDR3-1333)        Quad (recommended)

  PC3L = low voltage memory

Table 5 lists the minimum through maximum memory combinations possible with the 4, 8, 16 and 32 GB memory kits available for the DL980 G7 servers. For best performance, use dual-rank or quad-rank memory DIMMs.

HP 3PAR StoreServ SSD IOPS

There is flexibility in the size of the SSDs used in the HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array solution; the performance difference between the drive sizes is minimal. Maximum throughput and IOPS depend more on the number of HP 3PAR StoreServ 7450 arrays used. At least two arrays are recommended, and two are required for 1M IOPS and 10.5 GB/sec throughput; using one array cuts the maximum IOPS and throughput in half. This solution was tested with two arrays, but additional arrays are supported.

Table 6 shows reasonable maximum IOPS using two HP 3PAR StoreServ 7450 arrays, with half of the drives in each array. For instance, using 16 SSDs in each array (32 total), the infrastructure supports a maximum of 850K IOPS regardless of drive type (100GB SLC, 200GB SLC or 400GB MLC). The only exception is the minimum configuration of 16 drives in each array, where the maximum IOPS difference between SLC and MLC may vary within 5%. The maximum of 1M IOPS is reached at 96 drives split between the two arrays. Drives should be installed in increments of 16 per array (4 per enclosure). When considering capacity, cost and performance, the choice of RAID 1 versus RAID 5 depends heavily on the percentage of writes in the database workload.

Table 6. IOPS for two arrays

  SSD configuration   Drives (both arrays)   8k read IOPS   8k write IOPS   8k mixed IOPS (67% read / 33% write)
  RAID 1              32                     850K           240K            440K
  RAID 1              64                     980K           420K            740K
  RAID 1              96                     1M             420K            740K
  RAID 5              32                     850K           160K            340K
  RAID 5              64                     980K           180K            380K
  RAID 5              96                     1M             180K            380K

Table 7 shows the same type of maximum IOPS list for use with only one array. Maximum IOPS is about 500K and maximum throughput is about 5.2 GB/sec.

Table 7. IOPS for one array

  SSD configuration   Drives (one array)   8k read IOPS   8k write IOPS   8k mixed IOPS (67% read / 33% write)
  RAID 1              16                   …              120K            220K
  RAID 1              32                   …              210K            370K
  RAID 1              48                   …              210K            370K
  RAID 5              16                   …              80K             170K
  RAID 5              32                   …              90K             190K
  RAID 5              48                   …              90K             190K

Note
All IOPS results documented in this paper were achieved using the server operating system (Red Hat Enterprise Linux release 6 update 4) and the NUMA and storage tuning mentioned in the recommendations and best practices.
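The capacity side of the RAID 1 versus RAID 5 trade-off noted above is simple arithmetic: RAID 1 keeps 50% of raw capacity as user space, while RAID 5 3+1 keeps 75%. A small sketch (the drive counts and sizes below are illustrative, not a sizing statement from the tested BOM):

```shell
#!/bin/sh
# Usable capacity (GB) for RAID 1 (mirroring) vs RAID 5 3+1 (25% parity),
# matching the protection overheads quoted in this paper.
#   usage: usable <raid1|raid5_3p1> <num_drives> <drive_gb>
usable() {
    raid=$1; drives=$2; gb=$3
    case $raid in
        raid1)     awk "BEGIN { print $drives * $gb * 0.50 }" ;;
        raid5_3p1) awk "BEGIN { print $drives * $gb * 0.75 }" ;;
        *)         echo "unknown RAID level" >&2; return 1 ;;
    esac
}

usable raid1     96 100   # 96 x 100GB SSDs mirrored     -> 4800
usable raid5_3p1 96 100   # same drives as RAID 5 (3+1)  -> 7200
```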

Workload description

I/O characterization workload

All I/O characterization testing was performed with I/O generator tools capable of producing standard asynchronous I/Os using the Linux libaio libraries that are also used by Oracle and other database solutions. The tools can generate many variations of workloads, allowing flexibility in random and sequential access, block sizes, queue depths and thread counts. The I/O requests were read, write or mixed; all mixed tests used a ratio of 67% read and 33% write. Characterization workloads were run on combinations of SSD sets: on each array, 16, 32 and 48 drive combinations were tested with RAID 1 and RAID 5 protection. These values are valid in the context of this UDB testing on the DL980 G7 server. The I/O characterization tests were run repeatedly, and the storage system, fabric zoning and DL980 G7 server were tuned for the purpose of determining maximum I/O performance. Specific information in this paper reflects the best practices for the tested configuration.

Oracle database workload tool

The Oracle database workload tool consists of an OLTP workload with a table schema similar, but not identical, to TPC-C. Due to restrictions from Oracle, HP is not permitted to publish transactional information; the transaction results have been normalized and are used to compare UDB test configurations. Other metrics measured during the workload come from the operating system or standard Oracle Automatic Workload Repository (AWR) stats reports. Tests, performed on a 3TB database (RAID 1) and a 1.5TB database (RAID 5), included an I/O intensive OLTP test and a CPU intensive database test. The database parameters were adjusted from results in the I/O intensive database test. The environment was tuned for maximum user transactions and maximum database usage efficiency. After the database was tuned, the storage IOPS were recorded at different user count levels.
Because many workloads vary so much in characteristics, the measurement was made at maximum transactions, but the transaction counts are not reported because of the legal restrictions imposed by Oracle. Oracle Enterprise Database was used in this testing, but other databases can be implemented on the HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array solution.

Storage configuration for testing

The storage configuration uses 16 virtual volumes; each of the two 3PAR StoreServ 7450 arrays had 8 virtual volumes. Each virtual volume had 2 vluns (device paths), so the device mapper on the server saw two paths for each virtual volume exported to the host. Figure 10 shows how the virtual volumes are mapped to the host for best performance. Extensive tests were run to achieve 1M IOPS, which required the storage mapping and zoning shown in figure 10. Each virtual volume has both of its paths coming from HBAs belonging to the same NUMA node; any one virtual volume never has two paths to different NUMA nodes, only to different HBA cards within the node. Each port 0 on the HBA goes to switch 1 and port 1 goes to switch 2. The zoning is tightly configured single initiator to single target for maximum performance. Each array is identically configured and connected to the server, and any additional arrays would be configured and connected the same way. For the Oracle configuration, fourteen of the volumes were used in an ASM DATA group and two were used in an ASM LOG group.
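HP does not name the I/O generator used; fio is one common tool that drives the same libaio path and can reproduce the 8k 67/33 mixed random profile described above. A hedged sketch of a job file (the device name is taken from the udev example in Appendix E; iodepth, numjobs and runtime are illustrative and must be matched to your own volume and NUMA layout):

```ini
; fio job sketch for the 8k random 67% read / 33% write mix
[global]
ioengine=libaio        ; same async I/O library used by Oracle
direct=1               ; bypass the page cache
bs=8k
rw=randrw
rwmixread=67
runtime=300
time_based=1
group_reporting=1

[vol-node0]
filename=/dev/mapper/mpathac
iodepth=32
numjobs=4
```

One job section per virtual volume, with the jobs for a volume bound to cores of the NUMA node that owns its HBAs, mirrors the isolation used in the tested configuration.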

Figure 10. Server to storage volume mapping to NUMA nodes

Workload tuning considerations

The server, storage, SAN and operating system parameters were adjusted to deliver the best I/O and processing performance after several I/O characterization test iterations. The I/O characterization workloads were used to validate the configuration that delivers the best I/O performance, thus validating the capabilities of the infrastructure. The storage capabilities are validated by HP's Storage division and were also validated specific to this configuration in these general areas:
- Server NUMA affinity: minimize communication between NUMA nodes
- BIOS: the DL980 has BIOS optimizations for best performance
- Kernel and operating system: sysfs and sysctl kernel parameters
- Debug/tools: disable processes or tools that can cause latencies
- I/O tuning: provisioning, zoning, multipathing, special array settings, etc.

Workload data/configuration results

I/O characterization results: OLTP random workloads testing RAID 5/RAID 1 using 16, 32, 48 drives per array

The results of the characterization tests involving random small block workloads revealed capabilities of more than 1 million IOPS for small block reads using 48 drives per array, and as high as 980,000 IOPS using only 32 drives per array. Tests with pure 8k writes reached as high as 438,400 IOPS. Results of a mixed 8k workload of 67% reads and 33% writes were as high as 743,700 IOPS using 48 drives per array. The 32-drive configuration was nearly as good at 733,600 IOPS, demonstrating that the 32-drive RAID 1 configuration performed almost as well for mixed 8k workloads as the 48-drive configuration. In testing the configurations of 16, 32 and 48 drives per array (using two arrays), a disk performance bottleneck is not reached with 32 and 48 drives; the maximum throughput capability of the nodes is reached first. With the 16-drive tests, the maximum throughput of the drives begins to be evident.

RAID 5 characterization comparisons for 8k random reads, writes and mixed workloads

Figure 11 below compares 8k IOPS for a RAID 5 configuration using the 16, 32 or 48 drive set per array. The test used two HP 3PAR StoreServ 7450 arrays (32, 64 and 96 drives total). RAID 5 performs well with a more read-weighted workload; as would be expected for RAID 5, the write performance is not optimum. RAID 5 uses 25% of the usable capacity for data protection with a 3+1 configuration and 12.5% with a 7+1 configuration, compared to RAID 1, which uses 50% of the usable capacity for data protection. The RAID 5 tests were performed using a 3+1 configuration.

Figure 11. RAID 5 8k small block results with two arrays
(Chart: RAID 5 small block 8k random reads, writes and mixed (67/33) IOPS, comparing 16, 32 and 48 drives per array using 2 arrays; x-axis is the total number of SSDs in both 7450 All-flash arrays.)

RAID 1 characterization comparisons for 8k random reads, writes and mixed workloads

Figure 12 shows the distribution of IOPS for a RAID 1 configuration. Reads in a RAID 1 setup are very similar to the RAID 5 results, while write performance for RAID 1 is significantly better than the RAID 5 results in figure 11. Looking at the distribution of performance between 16, 32 and 48 SSDs per array, performance is very similar between 32 drives and 48 drives. RAID 1 uses 50% of the storage capacity for RAID protection.

Figure 12. RAID 1 8k small block results with two arrays
(Chart: RAID 1 small block 8k random reads, writes and mixed (67/33) IOPS, comparing 16, 32 and 48 drives per array using 2 arrays; x-axis is the total number of SSDs in both 7450 All-flash arrays.)

Comparison of RAID 5 to RAID 1 relative to 8k reads

Comparing 8k read IOPS on RAID 5 versus RAID 1 in figure 13, the results are very similar, within 1.2%. In a 100% read case, a RAID 5 configuration would be an optimum choice because of the additional user data capacity RAID 5 makes available compared to RAID 1. In practice, it is fairly rare to have a 100% read workload; it is more common to have a mostly-read workload that is very light on writes.

Figure 13. RAID 5 and RAID 1 reads comparison
(Chart: IOPS for 100% read with 48 drives per array, RAID 5 versus RAID 1.)

Comparison of RAID 5 to RAID 1 relative to 8k writes

Reviewing the maximum writes in figure 14 shows a significant difference in write performance between RAID 5 and RAID 1. RAID 1 performs better than RAID 5 by a factor of about 2.3 times on the configuration using 48 drives per array. This is largely due to the overhead of RAID 5 parity calculations and the partial-write penalty.

Figure 14. RAID 5 and RAID 1 writes comparison
(Chart: IOPS for 100% write with 48 drives per array, RAID 5 versus RAID 1.)

Comparison of RAID 5 to RAID 1 relative to 8k mixed 67% read and 33% write

As figure 15 shows, in a mixed workload of 67% read and 33% write we see an improvement by a factor of 1.86 times using RAID 1 over RAID 5. This improvement, like any RAID 5/RAID 1 comparison, comes at the cost of user space if RAID 1 is used. A possible consideration might be to use RAID 5 if the mixed workload is very heavy on reads and light on writes; for example, if the workload is 90% reads and 10% writes, there may be a greater performance-versus-capacity benefit in using RAID 5.

Figure 15. RAID 5 and RAID 1 mixed comparison
(Chart: IOPS for mixed read/write (67%/33%) with 48 drives per array, RAID 5 versus RAID 1.)

Oracle OLTP peak transactions and IOPS

The Oracle test consisted of the creation of a RAID 1 OLTP database 3TB in size and a RAID 5 database 1.5TB in size. The workload was an I/O intensive OLTP benchmark tool that could stress the server as well as the I/O subsystem. As the series of tests was run, the Oracle database init file was adjusted for maximum transactions and minimum physical I/Os. All of the specific Oracle tuning values are documented in Appendix H, and best practices are under the Best practices section of this paper. The database uses two ASM groups created with default extent values: a DATA group consisting of 14 volumes and a LOG group with two volumes. The LOG volumes were provisioned to HBAs on NUMA node 4 so that the log writer process could be pinned to specific CPU cores on NUMA node 4. Test results show that pinning the log writer process did not improve performance with this workload; this should be tested on individual implementations. The OLTP stress workload is I/O intensive. The workload ramped from 50 users to 400 users.
In real database applications, the DL980 handles tens of thousands of users, but with the stress benchmark each user is doing thousands of transactions per second with no latencies or think times. This is why the user count was not tested over 400 users; if the benchmark were doing a connection stress test, the user count would be in the tens of thousands. The benchmark workload generally started ramping at 50 users and peaked at 250 to 300 users. Because HP is not legally allowed to publish any Oracle benchmark results, all of the transactional numbers have been normalized and only a trend of the transactions is shown. The benchmark we used was not a standard TPC-C type of workload.

RAID 1 OLTP

The RAID 1 Oracle workload shows transactions peaking at around 250 benchmark users. On the DL980 server, the operating system usage was 49% user, 33% system and 11% I/O wait. Figure 16 shows the IOPS for reads and writes taken from the Oracle AWR reports, under the metrics of physical reads per second and physical writes per second. At this level of stress on 80 CPU cores, with the database buffer cache tuned for maximum logical I/Os, the IOPS are in the hundreds of thousands but well within the storage infrastructure limits.

Figure 16. RAID 1 Oracle OLTP workload physical IOPS
(Chart: OLTP RAID 1 total physical writes, total physical reads and normalized total transactions; x-axis is the number of users.)

RAID 5 OLTP

The RAID 5 Oracle workload shows transactions peaking at around 300 benchmark users. On the DL980 server, the operating system utilization was 44% user, 37% system and 10% I/O wait. Figure 17 shows the IOPS for reads and writes taken from the Oracle AWR reports, under the metrics of physical reads per second and physical writes per second.

Figure 17. RAID 5 Oracle OLTP workload physical I/Os per second
(Chart: OLTP RAID 5 total physical writes, total physical reads and normalized total transactions; x-axis is the number of users.)

Thin Provisioning to Full Provisioning comparison results

The HP 3PAR StoreServ 7450 Thin Provisioning feature lets the user implement storage much more efficiently. When a volume is provisioned without Thin Provisioning, all of the needed space is allocated and dedicated to the volume at provisioning time. When a volume is created using Thin Provisioning, the entire volume space is presented to the host but not dedicated to the volume until it is needed, leaving the unused storage space available for other volumes. Storage administrators can thus provision more capacity than is immediately needed, proactively monitor growth trends, and add SSDs as required. Figures 18 and 19 show the results obtained with an Oracle OLTP workload on 16 fully provisioned volumes, with a database totaling 1.5TB. The test was run with an OLTP workload and a tuned database, ramping the workload up to maximum transactions. The entire set of ASM database disk groups was then converted from fully provisioned volumes to thin provisioned volumes and the same series of tests was run again.
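On the HP 3PAR CLI, the provisioning choice comes down to one flag at volume creation. A sketch, written from memory of the 3PAR OS CLI and worth verifying against the CLI reference for your 3PAR OS release (the CPG and volume names are placeholders, not objects from the tested arrays):

```
# Fully provisioned virtual volume: all 512g is dedicated at creation
createvv SSD_r5_cpg oradata_full 512g

# Thin provisioned virtual volume (-tpvv): capacity is drawn from the
# CPG on demand, leaving unused space available to other volumes
createvv -tpvv SSD_r5_cpg oradata_thin 512g

# Compare virtual size against the space actually consumed
showvv -s oradata_full oradata_thin
```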
Figure 18 shows the resulting differences in physical reads between fully provisioned and thin provisioned volumes. The worst-case difference was 3.9%. This difference is very minor considering the potentially significant cost savings of using Thin Provisioning; considering the price of a single SSD versus a traditional HDD, the savings are extremely significant.

Figure 18. RAID 5 Oracle OLTP physical reads, thin provisioned versus fully provisioned
(Chart: physical reads/sec for fully provisioned and thin provisioned RAID 5 volumes; x-axis is the number of users.)

Figure 19 shows that the difference in physical writes per second was also 3.9%.

Figure 19. RAID 5 Oracle OLTP physical writes, thin provisioned versus fully provisioned
(Chart: physical writes/sec for fully provisioned and thin provisioned RAID 5 volumes; x-axis is the number of users.)

Large block throughput for BI workloads

Decision Support Systems (DSS) testing was not part of the scope of this paper, but I/O throughput tests were run to measure the large block sequential capabilities of the HP 3PAR StoreServ 7450 storage array. Figure 20 shows the throughput results for sequential reads and writes with a 1M block size. The result is certainly useful when considering a DSS implementation on UDB for HP 3PAR StoreServ 7450. The DL980 is a proven solution for BI workloads, and the high HP 3PAR StoreServ 7450 throughput capabilities make for a very good match when doing large block queries with the Oracle database, as well as with other databases.

Figure 20. Sequential read and write access results for 64 SSDs using a 1M block size
(Chart: large block sequential RAID 5 results in MB/second; sequential writes with a 1MB block size reached 5,734 MB/sec.)

Best practices

Analysis and recommendations

For best I/O performance on the DL980, HP recommends using multiple paths to maintain high availability while also maximizing performance and minimizing latencies. One way to achieve better performance in extreme performance environments is to minimize inter-communication between NUMA nodes. This can be achieved using tightly zoned hardware configurations and operating-system-to-hardware configurations, such as setting CPU affinity, to minimize latencies across the NUMA nodes. The approach taken in this effort to achieve maximum IOPS and throughput was to connect and zone the DL980 to storage in such a way that cross-node activity is minimized from both the server and the storage. By dedicating virtual volumes to specific HBAs and NUMA nodes, all of the I/O for a specific volume stays local to specific storage nodes and server nodes. For applications that do a good job with NUMA awareness, this can deliver extremely good performance.
For applications that are not as good with NUMA awareness, more manual tuning may be required, but the flexibility to tune the environment exists.

SAN recommendations
- Each dedicated port has its own zone, configured single initiator to single target.
- For each virtual volume, exactly two ports are connected to the volume. Zoning too many paths to a single volume can create latencies across the NUMA nodes; improvements observed from tight zoning were as high as 20%.
- At the very least, the paths for a single volume should all come from HBAs within a single NUMA node on the DL980 G7 server.
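The single-initiator to single-target zones can be expressed directly on the fabric switches. A sketch assuming Brocade-style FOS commands on the StoreFabric switches (the zone name, config name and WWPNs below are placeholders, not values from the tested fabric):

```
zonecreate "z_dl980_host5_7450_0_1_1", "50:01:43:80:00:00:00:01; 20:11:00:02:ac:00:00:01"
cfgcreate "udb_cfg", "z_dl980_host5_7450_0_1_1"
cfgenable "udb_cfg"
```

One such zone per HBA-port-to-node-port pair keeps every path to a volume confined to a single server NUMA node and a single storage node pair, as recommended above.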

Server configuration best practices

DL980 server BIOS
- Virtualization technology disabled
- Hyper-Threading disabled
- Intel Turbo Boost enabled
- HP Power Profile: Maximum Performance
- Minimum processor idle power state: no C-states in the BIOS

Operating system NUMA configuration
- Map each HBA to its owning NUMA node.
- Map out the interrupt numbers in the server's Linux directory /proc/irq, then assign each interrupt's affinity to a core owned by that NUMA node. See Appendices C and D for details.

OS and kernel configuration
- Disable cstates at kernel boot; see details in Appendix B
- Set sysctl.conf values as stated in Appendix A
- Ensure debug is not enabled in sysfs, and remove any packages in the OS that may be enabling tracing. To check whether tracing is disabled, see Appendix G.

Storage configuration best practices

UDEV settings for performance (set udev parameters per the values in Appendix E)
- Set the sysfs rotational value for disks to 0
- Set the sysfs value rq_affinity to 2 for each device. Request completions were all occurring on core 0, causing a bottleneck; setting rq_affinity to 2 resolved this problem.
- Set the scheduler to NOOP (no operation)
- Set permissions and ownership for Oracle volumes

SSD loading
- Load SSDs in groups of 4 per enclosure at a minimum

Volume size
- Virtual volumes should all be the same size and SSD type within each Oracle ASM group.

vluns
- HP recommends that all paths for any volume originate from the same NUMA node on the DL980, and that the number of vluns per volume be kept to two. Refer to figure 10.
- Use Thin Provisioning for the storage of the database and logs. If the logs are not going to grow, use full provisioning for the logs.
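The per-device block-queue settings above can also be applied by a small boot-time helper rather than udev rules. A sketch (the queue directory is passed as an argument so the function can be exercised off-box; on a live system you would pass the /sys/block/dm-*/queue directories, assuming multipath devices appear as dm-*):

```shell
#!/bin/sh
# Apply the recommended block-queue settings to one sysfs queue directory:
# non-rotational, noop scheduler, rq_affinity=2, no merges.
tune_queue() {
    q=$1
    echo 0    > "$q/rotational"
    echo noop > "$q/scheduler"
    echo 2    > "$q/rq_affinity"
    echo 1    > "$q/nomerges"
}

# Live usage:
#   for q in /sys/block/dm-*/queue; do tune_queue "$q"; done
```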
Database configuration best practices

ASM
- Use separate DATA and LOG ASM groups

Logs
- Assign at least two volumes to the log group, and pick volumes from the same NUMA node

Oracle parameters (Appendix H)
- Set HUGE pages only
- Disable automatic memory management if applicable
- Set the buffer cache memory size large enough for your implementation to avoid physical reads
- Enable NUMA support

Bill of materials

Below is the bill of materials (BOM) for the tested configuration. Variations of the configuration based on customer needs are possible but would require a separate BOM. Talk to your HP sales representative for detailed quotes. See figure 21 for the reference architecture diagram of the tested environment.
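Sizing vm.nr_hugepages (Appendix A) for the Oracle SGA is simple arithmetic: SGA size divided by the huge page size, rounded up. A sketch assuming the common x86_64 default of 2048 kB huge pages (verify the Hugepagesize line in /proc/meminfo on your system):

```shell
#!/bin/sh
# Compute vm.nr_hugepages for a given SGA size in GB.
#   usage: hugepages_for_sga <sga_gb> [page_kb]
hugepages_for_sga() {
    sga_gb=$1
    page_kb=${2:-2048}
    # pages = SGA in kB / page size in kB, rounded up
    echo $(( (sga_gb * 1024 * 1024 + page_kb - 1) / page_kb ))
}

hugepages_for_sga 512   # 512GB SGA with 2MB pages -> 262144
```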

Reference architecture diagram

Figure 21. Reference architecture diagram

- 2 x 10GbE switches
- 2 x 8Gb FC switches
- DL380 Gen8 optional management server
- Universal Database Storage: two HP 3PAR StoreServ 7450 4-node flash storage arrays with two additional disk shelves in each array. Each node has the 4-port FC expansion card, totaling 24 usable ports per array. Each array was tested with 16, 32 and 48 SSD 100GB SLC drives; other drive choices are 200GB SLC and 400GB MLC.
- Universal Database Server: HP ProLiant DL980 G7 server with 2TB quad-rank memory, 8 x 10-core Xeon E7 processors, 8 x AJ764A dual-port FC cards, 4 integrated 10GbE ports, running RHEL 6.4

Reference architecture BOM

Note
Part numbers are at time of publication and subject to change. The bill of materials does not include complete support options or other rack and power requirements. If you have questions regarding ordering, please consult with your HP Reseller or HP Sales Representative for more details. hp.com/large/contact/enterprise/index.html

Quantity  Product number  Description

Rack
1    BW930A    HP Air Flow Optimization Kit
1    BW930A B01    Include with complete system
1    BW906A    HP 42U 1075mm Side Panel Kit
2    AF511A    HP Mod PDU Core 48A/3Phs NA Kit
2    AF500A    HP 2,7X C-13 Stk Intl Modular PDU
1    BW904A    HP mm Shock Intelligent Rack

Network
2    C8R07A    HP StoreFabric 8/24 Bundled FC Switch
48   QK735A    HP Premier Flex LC/LC OM4 2f 15m Cbl
48   AJ716B    HP 8Gb Short Wave B-Series SFP+ 1 Pack
2    JG296A    HP 5920 Network Switch

Management Server (optional)
     B21    HP ProLiant DL380p Gen8 8 SFF CTO
     L21    HP DL380p Gen8 Intel Xeon E5-2640v2 (2.0GHz/8-core/20MB/95W) FIO Processor Kit
            HP 16GB (1x16GB) Dual Rank x4 PC3L-12800R (DDR3-1600) Registered CAS-11 Low Voltage Memory Kit
     B21    HP Ethernet 10Gb 2-port 530FLR-SFP+ FIO Adapter
     B21    HP 750W Common Slot Platinum Plus Hot Plug Power Supply Kit

DB Server DL980 G7
1    AM451A    HP ProLiant DL980 G7 CTO system-e7 proc
     L21    HP DL980 G7 E7 FIO 4-processor Kit
     B21    HP DL980 G7 E7 processor Kit
1    AM450A    HP DL980 CPU Installation Assembly for E7
8    A0R60A    HP DL980 G7 (E7) Memory Cartridge

128  A0R55A    HP DL980 16GB 4Rx4 PC3-8500R-7 Kit
     B21    HP 300GB 6G SAS 15K 2.5in DP ENT HDD
     B21    HP Slim 12.7mm SATA DVDRW Optical Kit
     B21    HP DL580G7 PCI Express Kit
1    AM434A    HP DL980 LP PCIe I/O Expansion Module
     B21    HP NC365T 4-port Ethernet Server Adapter
8    AJ764A    HP 82Q 8Gb Dual Port PCI-e FC HBA
4    AM470A    HP DL W CS Plat Ht Plg Pwr Supply
     B21    HP Raid 1 Drive 1 FIO Setting
1    A0R66A    HP ProLiant DL980 NC375i SPI Board 4 port

Storage
2    C8R37A    3PAR StoreServ 7450 Node
8    QR486A    HP 3PAR 4-pt 8Gb/s FC Adapter
96   QR502A    HP M6710 100GB 6G SAS 2.5in SLC SSD
0    QR503A    HP M6710 200GB 6G SAS 2.5in SLC SSD
0    QR504A    HP M6710 400GB 6G SAS 2.5in MLC SSD
2    BC914A    HP 3PAR 7450 Reporting Suite Media LTU
1    BC890A    HP 3PAR 7450 OS Suite Base Media LTU
96   BC891A    HP 3PAR 7450 OS Suite Drive LTU
4    QR490A    HP M6710 2.5in 2U SAS Drive Enclosure
0    QR516B    Physical service processor
8    QK734A    HP Premier Flex LC/LC OM4 2f 5m Cbl

Notes
- Refer to HP 5920 Network Switch QuickSpecs to determine the proper transceivers and accessories for your specific network environment.
- Refer to HP 3PAR StoreServ 7450 QuickSpecs for the service processor (SP) and HP 3PAR StoreServ 7450 OS Suite options.
- Refer to HP ProLiant DL380p Gen8 QuickSpecs to determine the desired options for your environment.
- Refer to HP 3PAR Software Products QuickSpecs for details on HP 3PAR software options.

Summary

The HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array is a significant new part of the overall HP performance reference architecture portfolio, developed to provide high performance I/O throughput for transactional databases in a package that delivers business continuity, extreme IOPS, faster user response times and

increased throughput versus comparable traditional server/storage configurations. This solution integrates with high availability and disaster recovery options like HP 3PAR Remote Copy and Serviceguard for Linux. Key success factors in our extensive testing include:
- Successfully configured an Oracle database environment using the HP ProLiant DL980 G7 server and two HP 3PAR StoreServ 7450 flash arrays capable of delivering 1M IOPS.
- Demonstrated a stable, I/O-stressed OLTP workload and compared the same workload with Thin Provisioning.

The solution addresses the challenge customers face in finding extremely high performance flash storage with rich management features and ease of management. It integrates the HP 3PAR StoreServ 7450 with the world-class HP DL980 G7 to produce a high performance server and I/O combination able to compete with extreme performance database appliances. The solution provides a flexible, mission-critical, extreme database performance solution with more options to meet the customer's needs.

Implementing a proof-of-concept

As a matter of best practice for all deployments, HP recommends implementing a proof-of-concept using a test environment that closely matches the planned production environment, to obtain appropriate performance and scalability characterizations. For help with a proof-of-concept, contact an HP Services representative (hp.com/large/contact/enterprise/index.html) or your HP partner.

Appendix

Appendix A Red Hat 6.4 kernel tunables

/etc/sysctl.conf
fs.aio-max-nr =
fs.file-max =
net.ipv4.ip_local_port_range =
kernel.shmmax =
kernel.shmall =
kernel.shmmni = 4096
kernel.sem =
fs.file-max =
fs.aio-max-nr =
net.core.rmem_default =
net.core.wmem_default =
net.core.rmem_max =
net.core.wmem_max =
net.ipv4.tcp_rmem =
net.ipv4.tcp_wmem =
net.ipv4.ip_local_port_range =
vm.swappiness=0
vm.dirty_background_ratio=3
vm.dirty_ratio=15
vm.dirty_expire_centisecs=500
vm.dirty_writeback_centisecs=100
vm.hugetlb_shm_group = 1000
vm.nr_hugepages =
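A small checker can confirm that a key in /etc/sysctl.conf carries the value you expect before rebooting or running `sysctl -p`. A sketch (it compares against the file only, so it works anywhere; on a live system you could instead compare against `sysctl -n <key>`):

```shell
#!/bin/sh
# Check one key=value expectation against a sysctl-style file.
#   usage: check_sysctl <file> <key> <expected>
check_sysctl() {
    file=$1; key=$2; expected=$3
    actual=$(awk -F= -v k="$key" '
        { gsub(/[ \t]/, "", $1) }          # strip spaces around the key
        $1 == k { v = $2; gsub(/[ \t]/, "", v) }
        END { print v }                    # last match wins, like sysctl
    ' "$file")
    if [ "$actual" = "$expected" ]; then
        echo "ok: $key"
    else
        echo "MISMATCH: $key is ${actual:-unset}"
    fi
}

# Example: check_sysctl /etc/sysctl.conf vm.swappiness 0
```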

Appendix B Grub configuration for disabling cstates

module /vmlinuz el6.x86_64 ro root=/dev/mapper/vg_aps85180-lv_root intel_iommu=on rd_no_luks LANG=en_US.UTF-8 rd_lvm_lv=vg_aps85180/lv_swap rd_lvm_lv=vg_aps85180/lv_root rd_no_md SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_no_dm rhgb quiet elevator=noop nosoftlockup intel_idle.max_cstate=0 mce=ignore_ce

Appendix C IRQ affinity script for /etc/rc.local

Note
HBA card interrupt numbers must be verified with each specific implementation. See the file /proc/interrupts on the Linux operating system.

/etc/rc.local
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
echo "0" > /proc/irq/106/smp_affinity_list
echo "1" > /proc/irq/107/smp_affinity_list
echo "2" > /proc/irq/108/smp_affinity_list
echo "3" > /proc/irq/109/smp_affinity_list
echo "4" > /proc/irq/110/smp_affinity_list
echo "5" > /proc/irq/111/smp_affinity_list
echo "6" > /proc/irq/112/smp_affinity_list
echo "7" > /proc/irq/113/smp_affinity_list
echo "20" > /proc/irq/114/smp_affinity_list
echo "21" > /proc/irq/115/smp_affinity_list
echo "22" > /proc/irq/116/smp_affinity_list
echo "23" > /proc/irq/117/smp_affinity_list
echo "24" > /proc/irq/118/smp_affinity_list
echo "25" > /proc/irq/119/smp_affinity_list
echo "26" > /proc/irq/120/smp_affinity_list
echo "27" > /proc/irq/121/smp_affinity_list
echo "28" > /proc/irq/122/smp_affinity_list
echo "29" > /proc/irq/123/smp_affinity_list
echo "30" > /proc/irq/124/smp_affinity_list
echo "31" > /proc/irq/125/smp_affinity_list
echo "40" > /proc/irq/126/smp_affinity_list
echo "41" > /proc/irq/127/smp_affinity_list
echo "42" > /proc/irq/128/smp_affinity_list
echo "43" > /proc/irq/129/smp_affinity_list
echo "44" > /proc/irq/130/smp_affinity_list

echo "45" > /proc/irq/131/smp_affinity_list
echo "46" > /proc/irq/132/smp_affinity_list
echo "47" > /proc/irq/133/smp_affinity_list
echo "48" > /proc/irq/134/smp_affinity_list
echo "49" > /proc/irq/135/smp_affinity_list
echo "50" > /proc/irq/136/smp_affinity_list
echo "51" > /proc/irq/137/smp_affinity_list

Appendix D HBA NUMA mapping and IRQ map

Note
Host values and WWN values are specific to each implementation and must be obtained for each implementation.

Bus Address  Slot  Local CPU List  NUMA Node  Host    Port WWN
=============================================================================================
0b:                                           host5   0x b6e5c
0b:                                           host6   0x b6e5e
11:                                           host3   0x b6e34
11:                                           host4   0x b6e36
54:                                           host11  0x c9d0
54:                                           host12  0x c9d2
57:                                           host9   0x b214
57:                                           host10  0x b216
5d:                                           host7   0x ca84
5d:                                           host8   0x ca86
a1:                                           host17  0x cf00
a1:                                           host18  0x cf02
a4:                                           host15  0x b8878
a4:                                           host16  0x b887a
aa:                                           host13  0x b6e14
aa:                                           host14  0x b6e16

QLogic Interrupt Affinity Finder
Interrupt Number  NUMA Node  Affinity
=============================================================================================
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)

, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)
, , , , (default)
, , , , (rsp_q)

Appendix E UDEV configurations

/etc/udev/rules.d/10-3par.rules
ACTION=="add|change", KERNEL=="dm-*", PROGRAM="/bin/bash -c 'cat /sys/block/$name/slaves/*/device/vendor | grep 3PARdata'", ATTR{queue/rotational}="0", ATTR{queue/scheduler}="noop", ATTR{queue/rq_affinity}="2", ATTR{queue/nomerges}="1"

/etc/udev/rules.d/12-dm-permissions.rules
ENV{DM_NAME}=="mpathch", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcg", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcf", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathce", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathag", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcd", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathaf", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcc", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathae", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcb", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathad", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathac", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcl", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathck", OWNER:="oracle", GROUP:="oinstall", MODE:="660"

ENV{DM_NAME}=="mpathcj", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathci", OWNER:="oracle", GROUP:="oinstall", MODE:="660"

Appendix F Storage information

Zoning example (WWNs are specific to each implementation; example only.)

Switch Top
Effective configuration:
cfg: CFG_BOTH
zone: Z1  50:01:43:80:18:6b:6e:5c  20:11:00:02:ac:00:5f:9a
zone: Z10 50:01:43:80:18:6b:6e:34  21:11:00:02:ac:00:5f:98
zone: Z11 50:01:43:80:24:22:c9:d0  20:21:00:02:ac:00:5f:98
zone: Z12 50:01:43:80:24:22:b2:14  21:21:00:02:ac:00:5f:98
zone: Z13 50:01:43:80:24:22:ca:84  22:21:00:02:ac:00:5f:98
zone: Z14 50:01:43:80:24:22:cf:00  22:11:00:02:ac:00:5f:98
zone: Z15 50:01:43:80:18:6b:88:78  23:11:00:02:ac:00:5f:98
zone: Z16 50:01:43:80:18:6b:6e:14  23:21:00:02:ac:00:5f:98
zone: Z2  50:01:43:80:18:6b:6e:34  21:11:00:02:ac:00:5f:9a
zone: Z3  50:01:43:80:24:22:c9:d0  20:21:00:02:ac:00:5f:9a
zone: Z4  50:01:43:80:24:22:b2:14  21:21:00:02:ac:00:5f:9a
zone: Z5  50:01:43:80:24:22:ca:84  22:11:00:02:ac:00:5f:9a
zone: Z6  50:01:43:80:24:22:cf:00  22:11:00:02:ac:00:5f:9a
zone: Z7  50:01:43:80:18:6b:88:78  23:11:00:02:ac:00:5f:9a
zone: Z8  50:01:43:80:18:6b:6e:14  23:21:00:02:ac:00:5f:9a
zone: Z9  50:01:43:80:18:6b:6e:5c  20:11:00:02:ac:00:5f:98

Switch Bottom
Effective configuration:
cfg: CFG_BOTH

zone: Z1  50:01:43:80:18:6b:6e:5e  22:12:00:02:ac:00:5f:9a
zone: Z10 50:01:43:80:18:6b:6e:36  23:12:00:02:ac:00:5f:98
zone: Z11 50:01:43:80:24:22:c9:d2  22:22:00:02:ac:00:5f:98
zone: Z12 50:01:43:80:24:22:b2:16  23:22:00:02:ac:00:5f:98
zone: Z13 50:01:43:80:24:22:ca:86  20:22:00:02:ac:00:5f:98
zone: Z14 50:01:43:80:24:22:cf:02  20:12:00:02:ac:00:5f:98
zone: Z15 50:01:43:80:18:6b:88:7a  21:12:00:02:ac:00:5f:98
zone: Z16 50:01:43:80:18:6b:6e:16  21:22:00:02:ac:00:5f:98
zone: Z2  50:01:43:80:18:6b:6e:36  23:12:00:02:ac:00:5f:9a
zone: Z3  50:01:43:80:24:22:c9:d2  22:22:00:02:ac:00:5f:9a
zone: Z4  50:01:43:80:24:22:b2:16  23:22:00:02:ac:00:5f:9a
zone: Z5  50:01:43:80:24:22:ca:86  20:22:00:02:ac:00:5f:9a
zone: Z6  50:01:43:80:24:22:cf:02  20:12:00:02:ac:00:5f:9a
zone: Z7  50:01:43:80:18:6b:88:7a  21:12:00:02:ac:00:5f:9a
zone: Z8  50:01:43:80:18:6b:6e:16  21:22:00:02:ac:00:5f:9a
zone: Z9  50:01:43:80:18:6b:6e:5e  22:12:00:02:ac:00:5f:98

HP 3PAR StoreServ 7450 CLI examples

SHOWVLUN - 48 drives RAID1
prometheus cli% showvlun
Active VLUNs
Lun VVName      HostName    -Host_WWN/iSCSI_Name-  Port   Type  Status  ID
0   APS84_11.0  NUMA0_pair  B6E5C                  0:1:1  host  active  0
0   APS84_11.0  NUMA0_pair  B6E34                  1:1:1  host  active  0
0   APS84_11.1  NUMA0_pair  B6E5E                  2:1:2  host  active  0
0   APS84_11.1  NUMA0_pair  B6E36                  3:1:2  host  active  0
0   APS84_11.2  NUMA2_pair  C9D0                   0:2:1  host  active  0
0   APS84_11.2  NUMA2_pair  B214                   1:2:1  host  active  0
0   APS84_11.3  NUMA2_pair  CA86                   0:2:2  host  active  0

0   APS84_11.3  NUMA2_pair  C9D2                   2:2:2  host  active  0
0   APS84_11.4  NUMA2_pair  CA84                   2:2:1  host  active  0
0   APS84_11.4  NUMA2_pair  B216                   3:2:2  host  active  0
0   APS84_11.5  NUMA4_pair  CF00                   2:1:1  host  active  0
0   APS84_11.5  NUMA4_pair  B8878                  3:1:1  host  active  0
0   APS84_11.6  NUMA4_pair  B887A                  1:1:2  host  active  0
0   APS84_11.6  NUMA4_pair  B6E16                  1:2:2  host  active  0
0   APS84_11.7  NUMA4_pair  CF02                   0:1:2  host  active  0
0   APS84_11.7  NUMA4_pair  B6E14                  3:2:1  host  active  0
total

VLUN Templates
Lun VVName      HostName    -Host_WWN/iSCSI_Name-  Port  Type
0   APS84_11.0  NUMA0_pair                               host
0   APS84_11.1  NUMA0_pair                               host
0   APS84_11.2  NUMA2_pair                               host
0   APS84_11.3  NUMA2_pair                               host
0   APS84_11.4  NUMA2_pair                               host
0   APS84_11.5  NUMA4_pair                               host
0   APS84_11.6  NUMA4_pair                               host
0   APS84_11.7  NUMA4_pair                               host
total

SHOWPORT
N:S:P  Connmode  ConnType  CfgRate  MaxRate  Class2    UniqNodeWwn  VCN       IntCoal
0:0:1  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
0:0:2  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
0:1:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
0:1:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
0:2:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
0:2:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
0:2:3  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
0:2:4  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
1:0:1  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
1:0:2  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
1:1:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
1:1:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
1:2:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
1:2:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
1:2:3  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
1:2:4  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
2:0:1  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
2:0:2  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
2:1:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled

2:1:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
2:2:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
2:2:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
2:2:3  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
2:2:4  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
3:0:1  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
3:0:2  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
3:1:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
3:1:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
3:2:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
3:2:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
3:2:3  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
3:2:4  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled

SHOWCPG
prometheus cli% showcpg
(MB)  -Volumes-  -Usage-  ---Usr---  ---Snp---  ---Adm---
Id Name  Warn%  VVs  TPVVs  Usr  Snp  Total  Used  Total  Used  Total  Used
0  SSD_r
   SSD_r
   SSD_r
   SSD_R1_16Drives
   SSD_R5_16Drives
   SSD_R1_32Drives
   SSD_R5_32Drives
total

Appendix G Check or set operating system tracing parameter

If tracing is enabled on the operating system, event latencies can be introduced into the kernel, causing delays in I/O operations. During I/O characterization testing, as much as 10% I/O performance degradation was observed. Ensure that any tools that enable tracing have been disabled or removed unless they are needed for specific support purposes.

To check the state of tracing on the system, run the following commands:
cat /sys/kernel/debug/tracing/tracing_enabled
cat /sys/kernel/debug/tracing/tracing_on
The result of both commands should be 0.

To disable tracing temporarily, run the following commands:
echo "0" > /sys/kernel/debug/tracing/tracing_enabled
echo "0" > /sys/kernel/debug/tracing/tracing_on

To permanently disable tracing, remove the application on the system that is enabling debug, or add the above commands to the /etc/rc.local file.
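The two checks above can be combined into one helper. The sketch below assumes only that both flag files live under a single tracing root (normally /sys/kernel/debug/tracing); tracing_off is an illustrative function, not an HP or Red Hat tool, and the root is passed as a parameter so the logic can be exercised without root privileges:

```shell
# Sketch: report whether kernel tracing is off under a given tracing root
# (the real root is /sys/kernel/debug/tracing; parameterized so the check
# can be run against any directory). tracing_off is a hypothetical helper.
tracing_off() {
  local root=$1 f v
  for f in tracing_enabled tracing_on; do
    [ -e "$root/$f" ] || continue     # flag file absent on some kernels
    v=$(cat "$root/$f")
    if [ "$v" != "0" ]; then
      echo "no ($f=$v)"
      return 1
    fi
  done
  echo "yes"
}

tracing_off /sys/kernel/debug/tracing || echo "disable tracing before I/O testing"
```

Running this before each I/O characterization pass guards against the 10% degradation noted above.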
A debug tool called flightrecorder can cause debug to be enabled. To determine whether flightrecorder is installed, check for it on your Linux server using this command:
rpm -qa | grep flightrecorder

If the package exists, delete it using rpm -e, or run the following command:
service trace-cmd stop

Appendix H Oracle parameters

DB1.db_cache_size=
DB1.java_pool_size=
DB1.large_pool_size=
DB1.oracle_base='/u01/app/oracle'#oracle_base set from environment
DB1.pga_aggregate_target=
DB1.sga_target=
DB1.shared_io_pool_size=0
DB1.shared_pool_size=
DB1.streams_pool_size=
*._db_block_numa=1
*._enable_automatic_maintenance=0
*._enable_numa_support=true
*._shared_io_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/db1/adump'
*.audit_trail='db'
*.compatible=' '
*.control_files='+data/db1/controlfile/current '
*.db_block_checking='true'
*.db_block_checksum='true'
*.db_block_size=8192
*.db_cache_size=
*.db_create_file_dest='+data'
*.db_create_online_log_dest_1='+data'
*.db_domain=''
*.db_file_multiblock_read_count=128
*.db_files=1050
*.db_name='db1'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(protocol=tcp) (SERVICE=DB1XDB)'
*.filesystemio_options='setall'
*.java_pool_size=
*.large_pool_size=
*.open_cursors=3000
*.parallel_degree_policy='manual'
*.parallel_max_servers=0
*.parallel_min_servers=800
*.pga_aggregate_target=
*.processes=12000
*.recovery_parallelism=240
*.remote_login_passwordfile='exclusive'
*.sessions=1000
*.sga_target=0
*.shared_pool_size=
*.statistics_level='typical'
*.streams_pool_size=
*.timed_statistics=true
*.trace_enabled=true
*.undo_tablespace='undotbs1'
*.use_large_pages='only'
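Most of the memory parameters above carry deployment-specific values (elided here). Because use_large_pages='only' prevents the instance from starting unless the SGA fits in the configured hugepage pool, it is worth summing the byte-valued settings in the pfile before startup. The sketch below gives a rough total; pfile_mem_sum is a hypothetical helper, not an Oracle utility, and note that pga_aggregate_target counts process-private memory that is not hugepage-backed:

```shell
# Sketch: sum the byte-valued *_size / *_target parameters in a pfile to
# estimate the memory the instance will request. pfile_mem_sum is a
# hypothetical helper, not an Oracle utility. Rough on purpose: it also
# picks up db_block_size, whose contribution is negligible.
pfile_mem_sum() {
  awk -F= '/(_size|_target)=[0-9]+$/ { sum += $2 } END { print sum + 0 }' "$1"
}

# Usage (path illustrative): pfile_mem_sum /u01/app/oracle/dbs/initDB1.ora
```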

Appendix I HP ProLiant DL980 PCIe card loading order

Figure 22. DL980 G7 I/O Expansion Slot Options & PCIe Loading


More information

Minimize cost and risk for data warehousing

Minimize cost and risk for data warehousing SYSTEM X SERVERS SOLUTION BRIEF Minimize cost and risk for data warehousing Microsoft Data Warehouse Fast Track for SQL Server 2014 on System x3850 X6 (55TB) Highlights Improve time to value for your data

More information

Storage Designed to Support an Oracle Database. White Paper

Storage Designed to Support an Oracle Database. White Paper Storage Designed to Support an Oracle Database White Paper Abstract Databases represent the backbone of most organizations. And Oracle databases in particular have become the mainstream data repository

More information

Increase Database Performance by Implementing Cirrus Data Solutions DCS SAN Caching Appliance With the Seagate Nytro Flash Accelerator Card

Increase Database Performance by Implementing Cirrus Data Solutions DCS SAN Caching Appliance With the Seagate Nytro Flash Accelerator Card Implementing Cirrus Data Solutions DCS SAN Caching Appliance With the Seagate Nytro Technology Paper Authored by Rick Stehno, Principal Database Engineer, Seagate Introduction Supporting high transaction

More information

SAS Business Analytics. Base SAS for SAS 9.2

SAS Business Analytics. Base SAS for SAS 9.2 Performance & Scalability of SAS Business Analytics on an NEC Express5800/A1080a (Intel Xeon 7500 series-based Platform) using Red Hat Enterprise Linux 5 SAS Business Analytics Base SAS for SAS 9.2 Red

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by EMC NetWorker Module for Microsoft SQL Server Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the

More information

The Advantages of Multi-Port Network Adapters in an SWsoft Virtual Environment

The Advantages of Multi-Port Network Adapters in an SWsoft Virtual Environment The Advantages of Multi-Port Network Adapters in an SWsoft Virtual Environment Introduction... 2 Virtualization addresses key challenges facing IT today... 2 Introducing Virtuozzo... 2 A virtualized environment

More information

HP ProLiant Gen8 vs Gen9 Server Blades on Data Warehouse Workloads

HP ProLiant Gen8 vs Gen9 Server Blades on Data Warehouse Workloads HP ProLiant Gen8 vs Gen9 Server Blades on Data Warehouse Workloads Gen9 Servers give more performance per dollar for your investment. Executive Summary Information Technology (IT) organizations face increasing

More information

HP App Map for Database Consolidation for Microsoft SQL Server on ConvergedSystem 700x

HP App Map for Database Consolidation for Microsoft SQL Server on ConvergedSystem 700x Technical white paper HP App Map for Database Consolidation for Microsoft SQL Server on ConvergedSystem 700x Table of contents Executive summary... 3 Introduction... 3 Overview... 4 HP ConvergedSystem

More information

UNIFIED HYBRID STORAGE. Performance, Availability and Scale for Any SAN and NAS Workload in Your Environment

UNIFIED HYBRID STORAGE. Performance, Availability and Scale for Any SAN and NAS Workload in Your Environment DATASHEET TM NST6000 UNIFIED HYBRID STORAGE Performance, Availability and Scale for Any SAN and NAS Workload in Your Environment UNIFIED The Nexsan NST6000 unified hybrid storage appliance is ideal for

More information

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products MaxDeploy Ready Hyper- Converged Virtualization Solution With SanDisk Fusion iomemory products MaxDeploy Ready products are configured and tested for support with Maxta software- defined storage and with

More information

SAN TECHNICAL - DETAILS/ SPECIFICATIONS

SAN TECHNICAL - DETAILS/ SPECIFICATIONS SAN TECHNICAL - DETAILS/ SPECIFICATIONS Technical Details / Specifications for 25 -TB Usable capacity SAN Solution Item 1) SAN STORAGE HARDWARE : One No. S.N. Features Description Technical Compliance

More information

SQL Server Virtualization

SQL Server Virtualization The Essential Guide to SQL Server Virtualization S p o n s o r e d b y Virtualization in the Enterprise Today most organizations understand the importance of implementing virtualization. Virtualization

More information

Reference Architecture - Microsoft Exchange 2013 on Dell PowerEdge R730xd

Reference Architecture - Microsoft Exchange 2013 on Dell PowerEdge R730xd Reference Architecture - Microsoft Exchange 2013 on Dell PowerEdge R730xd Reference Implementation for up to 8000 mailboxes Dell Global Solutions Engineering June 2015 A Dell Reference Architecture THIS

More information

EMC XTREMIO EXECUTIVE OVERVIEW

EMC XTREMIO EXECUTIVE OVERVIEW EMC XTREMIO EXECUTIVE OVERVIEW COMPANY BACKGROUND XtremIO develops enterprise data storage systems based completely on random access media such as flash solid-state drives (SSDs). By leveraging the underlying

More information

Virtualization of the MS Exchange Server Environment

Virtualization of the MS Exchange Server Environment MS Exchange Server Acceleration Maximizing Users in a Virtualized Environment with Flash-Powered Consolidation Allon Cohen, PhD OCZ Technology Group Introduction Microsoft (MS) Exchange Server is one of

More information

High Performance Server SAN using Micron M500DC SSDs and Sanbolic Software

High Performance Server SAN using Micron M500DC SSDs and Sanbolic Software High Performance Server SAN using Micron M500DC SSDs and Sanbolic Software White Paper Overview The Micron M500DC SSD was designed after months of close work with major data center service providers and

More information

Xanadu 130. Business Class Storage Solution. 8G FC Host Connectivity and 6G SAS Backplane. 2U 12-Bay 3.5 Form Factor

Xanadu 130. Business Class Storage Solution. 8G FC Host Connectivity and 6G SAS Backplane. 2U 12-Bay 3.5 Form Factor RAID Inc Xanadu 200 100 Series Storage Systems Xanadu 130 Business Class Storage Solution 8G FC Host Connectivity and 6G SAS Backplane 2U 12-Bay 3.5 Form Factor Highlights Highest levels of data integrity

More information

High Availability Databases based on Oracle 10g RAC on Linux

High Availability Databases based on Oracle 10g RAC on Linux High Availability Databases based on Oracle 10g RAC on Linux WLCG Tier2 Tutorials, CERN, June 2006 Luca Canali, CERN IT Outline Goals Architecture of an HA DB Service Deployment at the CERN Physics Database

More information

Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays

Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays Database Solutions Engineering By Murali Krishnan.K Dell Product Group October 2009

More information

VMware Best Practice and Integration Guide

VMware Best Practice and Integration Guide VMware Best Practice and Integration Guide Dot Hill Systems Introduction 1 INTRODUCTION Today s Data Centers are embracing Server Virtualization as a means to optimize hardware resources, energy resources,

More information

The IntelliMagic White Paper: Storage Performance Analysis for an IBM Storwize V7000

The IntelliMagic White Paper: Storage Performance Analysis for an IBM Storwize V7000 The IntelliMagic White Paper: Storage Performance Analysis for an IBM Storwize V7000 Summary: This document describes how to analyze performance on an IBM Storwize V7000. IntelliMagic 2012 Page 1 This

More information

Flash Performance for Oracle RAC with PCIe Shared Storage A Revolutionary Oracle RAC Architecture

Flash Performance for Oracle RAC with PCIe Shared Storage A Revolutionary Oracle RAC Architecture Flash Performance for Oracle RAC with PCIe Shared Storage Authored by: Estuate & Virident HGST Table of Contents Introduction... 1 RAC Share Everything Architecture... 1 Oracle RAC on FlashMAX PCIe SSDs...

More information

White Paper. Recording Server Virtualization

White Paper. Recording Server Virtualization White Paper Recording Server Virtualization Prepared by: Mike Sherwood, Senior Solutions Engineer Milestone Systems 23 March 2011 Table of Contents Introduction... 3 Target audience and white paper purpose...

More information

Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4

Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4 Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4 Application Note Abstract This application note explains the configure details of using Infortrend FC-host storage systems

More information

EMC Business Continuity for Microsoft SQL Server 2008

EMC Business Continuity for Microsoft SQL Server 2008 EMC Business Continuity for Microsoft SQL Server 2008 Enabled by EMC Celerra Fibre Channel, EMC MirrorView, VMware Site Recovery Manager, and VMware vsphere 4 Reference Architecture Copyright 2009, 2010

More information

An Oracle White Paper September 2011. Oracle Exadata Database Machine - Backup & Recovery Sizing: Tape Backups

An Oracle White Paper September 2011. Oracle Exadata Database Machine - Backup & Recovery Sizing: Tape Backups An Oracle White Paper September 2011 Oracle Exadata Database Machine - Backup & Recovery Sizing: Tape Backups Table of Contents Introduction... 3 Tape Backup Infrastructure Components... 4 Requirements...

More information

Solution Brief July 2014. All-Flash Server-Side Storage for Oracle Real Application Clusters (RAC) on Oracle Linux

Solution Brief July 2014. All-Flash Server-Side Storage for Oracle Real Application Clusters (RAC) on Oracle Linux Solution Brief July 2014 All-Flash Server-Side Storage for Oracle Real Application Clusters (RAC) on Oracle Linux Traditional SAN storage systems cannot keep up with growing application performance needs.

More information

E4 UNIFIED STORAGE powered by Syneto

E4 UNIFIED STORAGE powered by Syneto E4 UNIFIED STORAGE powered by Syneto THE E4 UNIFIED STORAGE (US) SERIES POWERED BY SYNETO From working in the heart of IT environment and with our major customers coming from Research, Education and PA,

More information

Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V

Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V Implementation Guide By Eduardo Freitas and Ryan Sokolowski February 2010 Summary Deploying

More information

PSAM, NEC PCIe SSD Appliance for Microsoft SQL Server (Reference Architecture) July 2014 NEC Corporation

PSAM, NEC PCIe SSD Appliance for Microsoft SQL Server (Reference Architecture) July 2014 NEC Corporation PSAM, NEC PCIe SSD Appliance for Microsoft SQL Server (Reference Architecture) July 2014 NEC Corporation 1. Overview of NEC PCIe SSD Appliance for Microsoft SQL Server Page 2 NEC Corporation 2014 Background

More information

The Benefits of Virtualizing

The Benefits of Virtualizing T E C H N I C A L B R I E F The Benefits of Virtualizing Aciduisismodo Microsoft SQL Dolore Server Eolore in Dionseq Hitachi Storage Uatummy Environments Odolorem Vel Leveraging Microsoft Hyper-V By Heidi

More information

Exadata HW Overview. Marek Mintal

Exadata HW Overview. Marek Mintal Exadata HW Overview Marek Mintal marek.mintal@phaetech.com Oracle Day 2011 20.10.2011 Exadata Hardware Architecture Scalable Grid of industry standard servers for Compute and Storage Eliminates long-standing

More information

Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations. Database Solutions Engineering

Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations. Database Solutions Engineering Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations A Dell Technical White Paper Database Solutions Engineering By Sudhansu Sekhar and Raghunatha

More information

HBA Virtualization Technologies for Windows OS Environments

HBA Virtualization Technologies for Windows OS Environments HBA Virtualization Technologies for Windows OS Environments FC HBA Virtualization Keeping Pace with Virtualized Data Centers Executive Summary Today, Microsoft offers Virtual Server 2005 R2, a software

More information

2009 Oracle Corporation 1

2009 Oracle Corporation 1 The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material,

More information

Intel RAID SSD Cache Controller RCS25ZB040

Intel RAID SSD Cache Controller RCS25ZB040 SOLUTION Brief Intel RAID SSD Cache Controller RCS25ZB040 When Faster Matters Cost-Effective Intelligent RAID with Embedded High Performance Flash Intel RAID SSD Cache Controller RCS25ZB040 When Faster

More information

Data Center Solutions

Data Center Solutions Data Center Solutions Systems, software and hardware solutions you can trust With over 25 years of storage innovation, SanDisk is a global flash technology leader. At SanDisk, we re expanding the possibilities

More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

HPe in Datacenter HPe3PAR Flash Technologies

HPe in Datacenter HPe3PAR Flash Technologies HPe in Datacenter HPe3PAR Flash Technologies Martynas Skripkauskas Hewlett Packard Enterprise Storage Department Flash Misnomers Common statements used when talking about flash Flash is about speed Flash

More information