HP 85 TB reference architectures for Microsoft SQL Server 2012 Fast Track Data Warehouse: HP ProLiant DL980 G7 and P2000 G3 MSA Storage




Technical white paper

HP 85 TB reference architectures for Microsoft SQL Server 2012 Fast Track Data Warehouse: HP ProLiant DL980 G7 and P2000 G3 MSA Storage

Table of contents
Executive summary
Fast Track reference architecture
Important points and caveats
Solution criteria
Storage options
Capacity summary
Database configuration
SQL Server 2012 performance metrics
Measured performance
Queue Depth settings
LUN layout
Volume Read Ahead settings
Multi Path I/O (MPIO)
SQL Server settings
MAXDOP parameter
Resource Governor
High Availability
Additional storage requirements
Manual storage mapping
Disaster recovery
Bills of materials
High Availability
Disaster recovery
Implementing a proof-of-concept

Executive summary

Microsoft SQL Server Fast Track Data Warehouse for Hewlett-Packard (HP) servers, storage, and networking products provides a prescriptive approach for balancing server, storage, network, and software configurations when architecting Microsoft SQL Server 2012 data warehouse solutions. The reference architectures provide server and storage guidance for various data warehouse workloads, giving you the most efficient hardware for your solution, saving you time and cost in choosing the right technology, and giving you confidence that the right platform and architecture are in place.

Target audience: IT planners, architects, DBAs, CIOs, CTOs, and business intelligence (BI) users with an interest in options for their BI applications and in the factors that affect those options.

This white paper describes testing performed by HP in April 2013.

Fast Track reference architecture

This document is a reference architecture companion document to the Microsoft SQL Server Technical Article, Fast Track Data Warehouse Reference Guide for SQL Server 2012, which describes a repeatable architectural approach for implementing a scalable model for a symmetric multiprocessor (SMP)-based Microsoft SQL Server 2012 data warehouse. The end result of the process described in this companion document is a recommended minimal SQL Server 2012 configuration, inclusive of all the software and hardware required to achieve and maintain a baseline level of out-of-box, scalable performance for SQL Server data warehousing (SSDW) sequential data access workload scenarios, as opposed to traditional random I/O methods. This document provides specific details about the configuration and bill of materials for one such Fast Track 4.0 reference architecture, based on the HP ProLiant DL980 G7 server using HP P2000 G3 MSA storage.
This configuration is targeted at a data warehouse or data mart environment with scan rate requirements of around 11-17GB/sec. It is optimized for up to 85TB of compressed user data capacity.

Important points and caveats

The configuration described here, and the approach detailed in the reference architecture guide, is exclusively designed for, and only applicable to, sequential data workloads. Using this approach for other workload types is not appropriate and may yield configurations that are inefficient. ALL recommendations and best practices defined in the reference architecture guide must be implemented in their entirety in order to preserve and maintain the sequential order of the data and sequential I/O against the data.

Figure 1. Fast Track 85TB reference architecture with the ProLiant DL980 G7 and 16 P2000 G3 MSA storage arrays

A complete SQL Server DBMS (Database Management System) configuration, or "stack," is a collection of all the components configured to work together to support the database application. This includes the physical server hardware (with its BIOS settings and appropriate firmware releases), memory, CPU (number, type, clock and bus speed, cache, and core count), operating system settings, the storage arrays and interconnects, disks (capacity, form factor, and spindle speeds), the database, DBMS settings and configuration, and even table types, indexing strategy, and physical data layout. The primary goal of Fast Track, which is also a common goal when designing most data center infrastructures, is a balanced configuration in which all components can be utilized to their maximum capability. Architecting and maintaining that balance prevents oversubscribing certain components within the stack to the point where the expected performance is not realized; understanding the performance limits of your configuration can also prevent wasted cost on components that will never realize their potential due to other constraints within the stack.

Solution criteria

The Fast Track reference architectures are built on several different HP ProLiant server platforms, each targeting a different tier of a SQL Server Data Warehousing solution. The HP Fast Track architectures gain greater throughput and scale by:
- Targeting query workloads patterned for large sequential data sets rather than small random data transactions
- Optimizing rapid data reads and query aggregations

HP currently supports both Fast Track 3.0 and Fast Track 4.0 (SQL Server 2012). The configurations with external storage leverage the HP P2000 G3 MSA FC Dual Controller SFF array, which allows for dual reads when drives are mirrored. For sequential data reads from data warehouse queries, this capability enables tremendous throughput per storage volume. The Fast Track approach, and the supporting storage array architecture, is optimized for sequential reads. To support a non-optimized, random I/O data warehousing workload, up to 2 to 3 times the number of drives would be required to achieve the same throughput.

Current Fast Track 3.0 and 4.0 configurations are listed below in Tables 1 and 2. For 3.0 configurations, the entry-level ProLiant DL370 G6 configurations are aimed at new startup data marts with scan rate requirements of 1-1.8GB/sec, while the ProLiant DL380p Gen8 and DL385 G7 configurations are ideal for data marts with a small departmental footprint of query activity, with a volume optimized at 10-30TB of compressed data capacity. The DL380p Gen8 SSD configuration is targeted at high-performance query applications with storage requirements of less than 11TB. The DL560 Gen8, DL580 G7, and DL585 G7 configurations target the middle tier of data warehouse activity, with moderately complex queries and concurrency. The DL980 G7 rounds out the top end at a rated capacity of 85TB. Table 1 summarizes Fast Track 3.0 configurations, while Table 2 references the Fast Track for SQL Server 2012 configurations.
Table 1. Fast Track 3.0 reference architectures with HP ProLiant servers and storage

Server                 CPU                              Total Cores   SAN                   Drive Count (user data)    Recommended Max Capacity
DL370 G6               (2) Intel Xeon Model X5670       12            N/A                   (12) 600GB 6G 10k SAS      5.5TB
DL370 G6 w/expansion   (2) Intel Xeon Model X5670       12            N/A                   (30) 600GB 6G 10k SAS      14TB
DL385 G7               (2) AMD Opteron Model 6176SE     24            (3) HP P2000 G3 MSA   (48) 600GB 6G 10k SAS      20TB
DL585 G7               (4) AMD Opteron Model 6176SE     48            (6) HP P2000 G3 MSA   (96) 600GB 6G 10k SAS      40TB

The Fast Track for SQL Server 2012 configurations are highlighted in Table 2 below.

Table 2. Fast Track for SQL Server 2012 reference architectures with HP ProLiant servers and storage

Server        CPU                             Total Cores   SAN                    Drive Count (user data)            Recommended Max Capacity
DL380p Gen8   (2) Intel Xeon Model E5-2670    16            Internal SSD           (14) 400/800GB 6G SATA MLC SSD     4/11TB
DL380p Gen8   (2) Intel Xeon Model E5-2690    16            (4) HP P2000 G3 MSA    (64) 600GB 6G 10k SAS              30TB
DL560 Gen8    (4) Intel Xeon Model E5-4650    32            (8) HP P2000 G3 MSA    (128) 600GB 6G 10k SAS             50TB
DL580 G7      (4) Intel Xeon Model E7-4870    40            (8) HP P2000 G3 MSA    (128) 600GB 6G 10k SAS             60TB
DL980 G7      (8) Intel Xeon Model E7-4870    80            (16) HP P2000 G3 MSA   (256) 600GB 6G 10k SAS             85TB

The focus of this reference architecture document is a high-end configuration based on the ProLiant DL980 G7. Table 3 outlines the configuration details for this platform.

Table 3. ProLiant DL980 G7 configuration details

Model                ProLiant DL980 G7 E7-4870 8P
CPU                  (8) Ten-Core Intel Xeon Processors Model E7-4870 (2.40 GHz, 30MB L3 cache, 130W)
Number of Cores      80
PCI-E Slots          (10) x8, (6) x4; 11 FH/FL, 5 LP
Drives               (3) HP 146GB 6G SAS 15K SFF DP ENT HDD (System + hot spare); (5) HP 600GB 6G SAS 10K SFF DP ENT HDD (Pagefile + hot spare)
Storage Controller   HP Smart Array P410i/1GB FBWC
Host Bus Adapters    (14) HP 82E 8Gb Dual Port PCI-E FC HBA
Network Adapters     One HP NC375i Quad Port Gigabit Server Adapter
RAM                  768 GB PC3-10600R, expandable to 4TB

Note that the recommended 768GB is the minimum RAM for this configuration. Generally, for SQL Server Data Warehousing environments, increasing RAM provides performance benefits as workload demands grow. If the workload consists of a large number of small-to-medium sized queries hitting the same data (for example, last week's sales), performance and throughput can be increased by caching results in memory.

Storage options

The Fast Track reference architecture is based on a model of a core-balanced architecture, which allows for a configuration that is initially balanced and provides predictable performance. This core-balanced architecture is sized for an optimum number of storage components to drive a certain number of CPU cores. Each CPU core has a Maximum Consumption Rate (MCR): the maximum throughput of information processing for a given CPU core. Multiplying the per-core MCR by the number of CPU cores yields the MCR for the entire system. To ensure balanced performance, we must ensure that there are no bottlenecks in the system that would prevent the CPUs from reaching their MCR.
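The system-level MCR arithmetic can be sketched in a few lines. The per-core MCR value below is an illustrative assumption for this class of processor, not an HP- or Microsoft-published figure for the E7-4870; an actual Fast Track design would measure MCR on the target CPU.

```python
# Sketch of the core-balanced sizing arithmetic described above.
per_core_mcr_mb_s = 200   # ASSUMED per-core Maximum Consumption Rate (MB/s)
physical_cores = 80       # (8) ten-core E7-4870 processors

# System MCR = per-core MCR x number of cores
system_mcr_mb_s = per_core_mcr_mb_s * physical_cores
system_mcr_gb_s = system_mcr_mb_s / 1024

print(system_mcr_mb_s, round(system_mcr_gb_s, 1))
```

With this assumed per-core figure, the system MCR comes out to roughly 16 GB/sec, consistent with the 11-17GB/sec scan rate target for this configuration.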
The diagram below represents the building blocks for the HP ProLiant DL980 G7 required to achieve a core-balanced sequential data I/O scan rate of about 11-17GB/sec.

Figure 2. Example DL980 G7 core-balanced architecture (SQL Server 2012 minimum server configuration, SMP core-balanced architecture)

More information regarding the Fast Track Data Warehouse reference architecture approach, including the means of calculating building blocks for other HP server and storage combinations, is included in the Microsoft SQL Server Technical Article, Fast Track Data Warehouse Reference Guide for SQL Server 2012. It is also possible to increase storage capacity without adding more CPU cores by using larger drives; however, increasing the capacity beyond the optimized configuration will not increase performance. Rather, only capacity will be increased. It is important to note that adding more data, but still querying the same sized ranges per query, will not decrease performance.

Capacity summary

Several data capacities are cited in this document, including Maximum Data Capacity and Fast Track Rated Database Capacity. The following summarizes these values:

Maximum Data Capacity: Raw data space, minus the required minimum space for TempDB database files, multiplied by the Fast Track compression factor (3.5). This capacity does not take into account primary staging space or disk headroom.

Fast Track Rated Database Capacity: The optimal, Fast Track certified data capacity of the configuration. Maximum rated performance of the system is valid when Fast Track data does not exceed this capacity.

Table 4 summarizes the configuration specifics and capabilities of the fibre channel version of the P2000, the P2000 G3 MSA FC dual controller SFF array system.

Table 4. P2000 G3 MSA storage configuration

Controller Model                     (16) P2000 G3 MSA FC Array with dual read controllers
Drives                               (328) 600GB 6G SAS SFF HDD
Drive layout                         (256) drives for user data, configured as (32) 8-disk RAID10 LUNs; (28) drives for log data, configured as (7) 4-disk RAID10 LUNs; (12) drives for staging data, configured as (2) 6-disk RAID5 LUNs; (32) drives configured as hot spares (2 per array)
Raw space (user data)                71.4 TB
TempDB Space (30%)                   14.3 TB
Maximum Data Capacity (600GB HDD)    57 TB (no compression); 200 TB (with compression)
Fast Track Rated Database Capacity   85 TB (with compression)
Est. Log Capacity                    7.8 TB
Secondary Staging Space              5.6 TB

Note: Compression is assumed to be 3.5:1.
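The Maximum Data Capacity figures in Table 4 follow directly from the definition above; the values below are taken from the table, and only the arithmetic is ours.

```python
# Cross-check of the Table 4 capacity figures.
raw_tb = 71.4        # raw user-data space
tempdb_tb = 14.3     # space reserved for TempDB database files
compression = 3.5    # Fast Track compression factor

# Raw space minus TempDB -> "57 TB (no compression)"
uncompressed_tb = raw_tb - tempdb_tb
# Apply the 3.5:1 compression factor -> "200 TB (with compression)"
compressed_tb = uncompressed_tb * compression

print(round(uncompressed_tb, 1), round(compressed_tb))
```

Note that the 85 TB Fast Track Rated Database Capacity is deliberately lower than this 200 TB maximum: the rated figure is the certified capacity at which maximum rated performance holds, not the limit of what the disks can hold.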

Database configuration

The testing for this DL980 G7 solution was performed using a 6.5 TB test database and a workload simulation program. The performance numbers depicted in this document are output from the simulation tool running both a simple and a complex workload. The test database uses a master filegroup and 7 additional filegroups, which represent the 7 partitions.

Table 5. DB Filegroups

Filegroup    Files   Note
PRIMARY      1       Contains master DB file (MDF)
Part_ci1FG   32      1 file on each LUN
Part_ci2FG   32      1 file on each LUN
Part_ci3FG   32      1 file on each LUN
Part_ci4FG   32      1 file on each LUN
Part_ci5FG   32      1 file on each LUN
Part_ci6FG   32      1 file on each LUN
Part_ci7FG   32      1 file on each LUN

SQL Server 2012 performance metrics

SQL Server 2012 introduces new indexing capabilities based on columnar storage technology. This new index type improves query performance for a wide range of queries. Throughput performance is now measured using both traditional indexes and columnstore indexes (CSI); the latter metrics are shown below as Fast Track Average CSI I/O.

Measured performance

During our performance testing, we used simulated sequential read workloads to gauge performance on each Fast Track configuration. These tests use workloads of varying sizes, ranging from 5 to 40 concurrent queries, running from 30 minutes to 1 hour each. From the results of these tests we determine both logical (how fast data can be read from buffer cache) and physical (how fast data can be read from physical disk) scan rates. Table 6 below lists the observed scan rates for this configuration.

Table 6.
Measured Scan Rates

Scan Rate Type                       Scan Rate      Comment
Benchmark Logical Scan Rate          14516 MB/sec
Benchmark Physical Scan Rate         8117 MB/sec
Fast Track Average I/O               11317 MB/sec   Average of the physical and logical scan rates
Fast Track Average CSI I/O           33444 MB/sec   Average of the physical and logical scan rates with Column Store Indexing enabled
Maximum Observed Logical Scan Rate   16515 MB/sec   Maximum average scan rate observed during testing

The MAXDOP setting used for these results was 30, and Resource Governor was set to 5%.

These performance metrics were attained using a set of SQL Server 2012 configuration parameters, detailed below in Table 7. Further explanations of some of these settings are found throughout this document.
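As a quick sanity check, the Fast Track Average I/O row in Table 6 is simply the mean of the logical and physical benchmark scan rates:

```python
# Reproduce the "Fast Track Average I/O" figure from Table 6.
logical_mb_s = 14516    # benchmark logical scan rate (buffer cache)
physical_mb_s = 8117    # benchmark physical scan rate (disk)

average_io_mb_s = (logical_mb_s + physical_mb_s) / 2
print(average_io_mb_s)  # 11316.5, reported in Table 6 as 11317 MB/sec
```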

*Note: Column Store Index (CSI) testing was performed without Trace Flag 834. Microsoft does not recommend turning on T834 when using CSI. For more information, see Microsoft KB article 920093 at the following link: http://support.microsoft.com/kb/920093

Table 7. SQL Server Parameters

Parameter                                   Setting         Comment
Max Degree of Parallelism (MAXDOP)          30              Optimal setting for our testing
Resource Governor Memory Allocation         5%              The default is 25%
Fast Track Required Startup Parameters      -E and -T1117   These settings are required for ALL Fast Track configurations
Optional SQL Trace Flags                    -T834           The T834 trace flag enables large pages in memory, which provides significant performance increases on 4- and 8-socket systems. Not enabled when testing using Column Store Indexing.
HP P2000 G3 MSA Volume Read Ahead Setting   1MB             Change from the default setting

Queue Depth settings

Several tests were run to determine the optimal Queue Depth settings for the HP 82E Host Bus Adapters (HBAs). All tests performed for this reference configuration used a Queue Depth setting of 64. Testing found optimal Queue Depth settings ranging from 48 to 64; this is a change from the default setting of 32 on the HBAs.

LUN layout

All primary data LUNs were configured as 8-disk RAID10 volumes. Log LUNs were configured as 4-disk RAID10 volumes using 7 arrays, while the dual staging LUNs used 6-disk RAID5 volumes on Arrays 8 and 16. All volumes on the P2000 array were created using the maximum amount of space available (1 volume per Vdisk). Figure 3 below depicts the LUN layout for this 85TB Data Warehouse architecture.

Figure 3. P2000 G3 MSA FC LUN Layout

Volume Read Ahead settings

The P2000 G3 arrays allow a custom Read Ahead cache setting for each volume. Several variations of read ahead settings were tested with Fast Track configurations, and we have found that a setting of 1MB provides the best performance. The Read Ahead setting must be set on EACH volume in the configuration. Figure 4 below details the process for setting read ahead on a P2000 G3 volume.

Figure 4. P2000 G3 Volume Read Ahead Setting

Simply highlight a volume, select Configuration > Modify Volume Cache Settings, and then modify the box labeled Read Ahead Size. Repeat for each volume in the configuration.

Multi Path I/O (MPIO)

The solution presented here exposes a total of 88 paths per LUN (22 HBA ports x 4 storage ports). This number of paths is greater than the 32 paths supported by Microsoft Windows Server 2008 R2 MPIO. For this reason, manual path configuration must be employed. Using the management interface on the P2000 arrays, explicit mappings must be created, matching a single storage port with a single HBA port on the server. Two paths per HBA can be set up, with the MPIO policy Failover Only. This directs MPIO to use a single path only, and to fail over to the second (secondary) path when the first one fails. For more detailed information, refer to the HP published document: Microsoft SQL Server Fast Track: HP P2000 G3 MSA Multipath I/O configuration and best practices, http://h20195.www2.hp.com/v2/getdocument.aspx?docname=4aa0-9466enw.

Since this configuration uses 2 Data LUNs per array rather than the typical 4, we must adjust the manual MPIO mapping scheme. To take advantage of the load balancing capability in the P2000 arrays, each LUN must be mapped across 2 storage ports residing on the SAME storage controller. Each LUN can have mappings to more than 1 HBA or HBA port, but it cannot be mapped across storage controllers (port A1 and port B2, for example).
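The path arithmetic that forces manual mapping can be sketched directly from the numbers above:

```python
# Why manual MPIO configuration is required in this solution:
# the fabric exposes far more paths per LUN than the OS MPIO stack supports.
hba_ports = 22             # HBA ports used for storage traffic
storage_ports = 4          # storage ports visible per LUN
mpio_path_limit = 32       # paths supported by Windows Server 2008 R2 MPIO

paths_per_lun = hba_ports * storage_ports
print(paths_per_lun, paths_per_lun > mpio_path_limit)  # 88 True
```

Because 88 exceeds the 32-path limit, explicit one-to-one storage-port-to-HBA-port mappings (with the Failover Only policy) are used instead of letting MPIO discover every path.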
Table 8 below shows an example of the correct mappings for DL980 G7 LUNs residing on a P2000 array:

Table 8. Manual Mappings for DL980 G7 LUNs

LUN            Storage Port   HBA Port
Array1, LUN1   A1             HBA port 2A
Array1, LUN1   A2             HBA port 3B
Array1, LUN2   B1             HBA port 3A
Array1, LUN2   B2             HBA port 4A

Note: In other architectures with 4 LUNs per array, only 1 mapping per LUN is needed. With 2 LUNs per array, we MUST have 2 mappings per LUN to maintain performance.

SQL Server settings

The processors in the DL980 G7 server use Intel Hyper-Threading technology, which allows the server to see and use more logical CPU cores than are physically available in the system. With higher-intensity workloads using a large number of parallel queries, this leads to a condition where SQL Server can over-subscribe the amount of CPU and memory resources given to queries at run time. For example, in this DL980 G7 configuration, the server has 80 physical CPU cores. With Hyper-Threading (HT) turned on, SQL Server will see the system as having 160 logical CPU cores, and will allocate resources accordingly. To mitigate this, we can change the maximum number of CPU cores that SQL Server will use for parallel execution plans.

MAXDOP parameter

SQL Server defaults to a maximum degree of parallelism (MAXDOP) of 0, which dictates that SQL Server will dynamically allocate work up to the total number of CPU cores seen by the SQL Server service. With HT turned on for a system with 80 physical cores, SQL Server will see a total of 160 logical cores, which can, for many workload mixes, lead to a sub-optimal allocation of SQL threads. To address this, we can simply change the Max Degree of Parallelism parameter in SQL Server Advanced properties to equal the number of physical CPU cores (80 in this case) or lower. During our testing, the best results were achieved using a DOP setting of 30. Figure 5 below depicts the process to change the DOP parameter.

Figure 5. MAXDOP Parameter

With a fixed DOP value, SQL Server will assign worker threads for a single query to no more cores than the DOP setting specifies. Note that any additional cores (physical or Hyper-Threaded) above the DOP value are still available to receive work from other queries.
In general, the best Fast Track benchmark results are seen on higher core count systems (4-socket and greater servers) with DOP fixed between 16 and 32. Hyper-Threading, if available, should be turned ON. As with all Fast Track performance benchmarks, actual results for customer data and workloads may vary; testing at various DOP settings can reveal the best results for specific situations. In addition, memory resources in these larger systems, with higher amounts of memory, can be over-committed as well, which can lead to excessive waits for memory grants and releases. To mitigate this, a SQL Server Resource Governor (RG) policy can be implemented. The details on implementing this policy are shown in the section below.
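The MAXDOP change described above can also be scripted rather than set through the Advanced properties dialog. A minimal T-SQL sketch using standard sp_configure syntax (run with appropriate server-level permissions):

```sql
-- Set Max Degree of Parallelism to the value used in this testing (30).
-- 'show advanced options' must be enabled before MAXDOP is visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 30;
RECONFIGURE;
```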

Resource Governor

Resource Governor settings control the amount of CPU or memory resources that can be allocated to a given query. For running workloads with higher parallelism on the DL980 G7, the default workload group can be modified to reduce the maximum amount of memory that can be allocated to a query from the default of 25% to, for example, 5-6%. Our testing has shown this setting to improve I/O throughput and response for our highly parallel workloads. The query in Figure 6 can be run to implement this policy and turn Resource Governor on with an example setting of 5%.

Figure 6. Resource Governor

ALTER WORKLOAD GROUP "default" WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 5)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO

High Availability

HP has designed and tested a High Availability (HA) solution for all its Fast Track solutions. This HA solution leverages Microsoft Windows and SQL Server failover clustering to provide local, shared-storage failover capability. Failover clustering utilizes a shared SAN storage model whereby 2 or more servers connected to the same set of storage arrays act as a single SQL application server. The 2 nodes communicate via an Ethernet heartbeat in order to detect when one of the servers has a problem or hardware failure. If the heartbeat is not received by one of the servers, the cluster will initiate an automatic failover to the standby node. This failover is transparent to the user, and results in a temporary pause in SQL Server activity as the resources are brought online on the secondary server, and SQL Server transactions are replayed to bring the database back to the exact state before the failover occurred. To ensure cluster communication and proper failover, the SQL failover cluster must reside in the same local area network or site. Figure 7 below depicts the connectivity between both cluster nodes, and between the servers and storage, for this DL980 G7 solution.
Note: For practical purposes, the picture below depicts only 4 P2000 arrays. This DL980 G7 configuration uses a total of 16 arrays. Table 9 below depicts the overall wiring guidance for this HA solution.

Table 9. DL980 G7 Wiring Guidance

Components                  Fibre Switch     Ethernet Switch
P2000 Arrays
  Arrays 1-8 port A1        8/40 Switch #1
  Arrays 1-8 port A2        8/40 Switch #1
  Arrays 1-8 port B1        8/24 Switch #2
  Arrays 1-8 port B2        8/24 Switch #2
  Arrays 9-16 port A1       8/40 Switch #2
  Arrays 9-16 port A2       8/40 Switch #2
  Arrays 9-16 port B1       8/24 Switch #1
  Arrays 9-16 port B2       8/24 Switch #1
  Mgmt Port A                                2510-48G Switch
  Mgmt Port B                                2510-48G Switch
DL980 G7 Server
  iLO Ethernet                               2510-48G Switch

  NIC Port #1                                2510-48G Switch
  HBAs #1-5 Port #1         8/40 Switch #1
  HBAs #1-5 Port #2         8/24 Switch #2
  HBAs #6-10 Port #1        8/24 Switch #1
  HBAs #6-10 Port #2        8/40 Switch #2
  HBA #11 Port #1           8/40 Switch #1
  HBA #11 Port #2           8/40 Switch #2

Since we have a total of 4 fibre switches, each storage array controller connects to a different switch, providing fault tolerance down to the storage controller. The 16 arrays are divided in half, so that 8 arrays use 2 of the switches and the other 8 arrays use the other 2 switches. HBAs are also divided for redundancy: HBAs 1-5 and 11 use 2 switches, and HBAs 6-10 use 2 switches as well.

Figure 7. High Availability Overview

Additional storage requirements

This HA solution also requires additional storage components for cluster functionality and transaction logging.

Quorum disk

Microsoft Windows failover clustering uses a voting method to ensure cluster quorum: each cluster node, along with a cluster quorum disk, acts as a vote in establishing quorum. This solution utilizes the Node and Disk Majority quorum type, in which either both nodes, or one node plus the quorum disk, must be online for the cluster to continue running. In addition to the Fast Track storage, one RAID1 LUN is allocated for this quorum functionality.

Root disk

Since the SQL Server system and database files are shared between the two systems in this configuration, these files can no longer be housed on, or mounted to, the C: drive; they must reside on the SAN. For this purpose, an additional RAID1 LUN is allocated for the volume mount points and for the SQL Server system files.

SQL transaction logging

SQL Server Fast Track solutions utilize the Simple recovery model, which provides limited transaction logging and no point-in-time database restoration; transaction log files are truncated at each checkpoint. One of the requirements for SQL Server clustering in this solution is that the databases run in the Full recovery model, in which transaction logs are retained until they are backed up, and the database can be restored to a point in time using these logs. This method requires more disk space to store the logs and potentially more disk bandwidth, since the system writes more log records. For this solution we added two additional transaction log LUNs to ensure that there is enough bandwidth and storage for the additional overhead. Figure 8 below depicts the HA storage configuration for this solution.

Figure 8. HA Storage Overview

Additional details on setting up and configuring an HP Fast Track HA solution can be found in the following white paper published on the HP website: http://h20195.www2.hp.com/v2/getdocument.aspx?docname=4aa4-2740enw

Table 12 in this document depicts a bill of materials for the DL980 G7 Fast Track HA solution.
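The recovery model change described above is a one-time T-SQL operation. The sketch below is illustrative only: the database name (FastTrackDW) and backup paths are hypothetical placeholders, not part of this reference architecture.

```sql
-- Switch from the Fast Track default (Simple) to the Full recovery model,
-- as used in this clustered configuration.
-- 'FastTrackDW' and the backup paths are placeholders for this environment.
ALTER DATABASE FastTrackDW SET RECOVERY FULL;
GO

-- A full database backup is needed after the change; log backups have no
-- effect until the log chain is anchored by one.
BACKUP DATABASE FastTrackDW TO DISK = 'L:\Backup\FastTrackDW_full.bak';
GO

-- Scheduled log backups keep the transaction log LUNs from filling and
-- enable point-in-time restores.
BACKUP LOG FastTrackDW TO DISK = 'L:\Backup\FastTrackDW_log.trn';
GO
```

Note that the transaction log is no longer truncated at checkpoints once Full recovery is in effect, which is why the additional log LUNs and a regular log backup schedule matter.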

Manual storage mapping

All Fast Track architectures require a Microsoft Multipath I/O (MPIO) manual mapping scheme. This manual mapping scheme uses the Failover Only MPIO policy for SAN-connected storage: each LUN has a primary map and one or more secondary maps. Since redundant SAN switches are used in the Fast Track HA configurations, primary LUN mappings should be spread across the available SAN switches. Careful attention should be paid to ensure that each LUN is mapped through controller ports and HBA ports that are physically connected to the same switch. This DL980 G7 reference architecture is unique in that there are only 2 Data LUNs per array, rather than the usual 4; to provide the best performance, each Data LUN must therefore map through 2 storage ports rather than one. Table 10 below details a redundant manual mapping example for this DL980 G7 HA architecture.

Table 10. DL980 G7 Example HA manual cross-mapping scheme

LUN                  Primary Map                 Secondary Map               Comment

Array #1
Data LUN #1          Port A1,A2; HBA 1 Port #1   Port B1,B2; HBA 2 Port #2   In case of switch loss, half of the LUNs fail over to their secondary maps
Data LUN #2          Port B1,B2; HBA 2 Port #2   Port A1,A2; HBA 1 Port #1
Log LUN #1           Port A1,A2; HBA 1 Port #1   Port B1,B2; HBA 2 Port #2
Cluster Root Disk    Port B1,B2; HBA 1 Port #2   Port A1,A2; HBA 2 Port #1   Root disk holds only SQL Server and mount point files
Array #2
Data LUN #3          Port A1,A2; HBA 2 Port #1   Port B1,B2; HBA 1 Port #2
Data LUN #4          Port B1,B2; HBA 1 Port #2   Port A1,A2; HBA 2 Port #1
Log LUN #2           Port A1,A2; HBA 2 Port #1   Port B1,B2; HBA 3 Port #2
Cluster Quorum Disk  Port B1,B2; HBA 2 Port #2   Port A1,A2; HBA 3 Port #1   Quorum disk does not house any files; cluster use only

Array #3
Data LUN #5          Port A1,A2; HBA 3 Port #1   Port B1,B2; HBA 4 Port #2
Data LUN #6          Port B1,B2; HBA 4 Port #2   Port A1,A2; HBA 3 Port #1
Log LUN #3           Port A1,A2; HBA 3 Port #1   Port B1,B2; HBA 4 Port #2

Array #4
Data LUN #7          Port A1,A2; HBA 4 Port #1   Port B1,B2; HBA 3 Port #2
Data LUN #8          Port B1,B2; HBA 3 Port #2   Port A1,A2; HBA 4 Port #1
Log LUN #4           Port A1,A2; HBA 4 Port #1   Port B1,B2; HBA 3 Port #2

Array #5
Data LUN #9          Port A1,A2; HBA 5 Port #1   Port B1,B2; HBA 11 Port #2
Data LUN #10         Port B1,B2; HBA 11 Port #2  Port A1,A2; HBA 5 Port #1
Log LUN #5           Port A1,A2; HBA 5 Port #1   Port B1,B2; HBA 11 Port #2

Array #6
Data LUN #11         Port A1,A2; HBA 11 Port #1  Port B1,B2; HBA 5 Port #2
Data LUN #12         Port B1,B2; HBA 5 Port #2   Port A1,A2; HBA 11 Port #1
Log LUN #6           Port A1,A2; HBA 11 Port #1  Port B1,B2; HBA 5 Port #2

Array #7
Data LUN #13         Port A1,A2; HBA 1 Port #1   Port B1,B2; HBA 2 Port #2
Data LUN #14         Port B1,B2; HBA 2 Port #2   Port A1,A2; HBA 1 Port #1
Log LUN #7           Port B1,B2; HBA 3 Port #2   Port A1,A2; HBA 1 Port #1

Array #8
Data LUN #15         Port A1,A2; HBA 3 Port #1   Port B1,B2; HBA 4 Port #2
Data LUN #16         Port B1,B2; HBA 4 Port #2   Port A1,A2; HBA 3 Port #1
Stage LUN #1         Port B1,B2; HBA 4 Port #2   Port A1,A2; HBA 3 Port #1

Array #9
Data LUN #17         Port A1,A2; HBA 6 Port #1   Port B1,B2; HBA 7 Port #2
Data LUN #18         Port B1,B2; HBA 7 Port #2   Port A1,A2; HBA 6 Port #1
Log LUN #8           Port A1,A2; HBA 6 Port #1   Port B1,B2; HBA 7 Port #2

Array #10
Data LUN #19         Port A1,A2; HBA 8 Port #1   Port B1,B2; HBA 9 Port #2
Data LUN #20         Port B1,B2; HBA 9 Port #2   Port A1,A2; HBA 8 Port #1

Array #11
Data LUN #21         Port A1,A2; HBA 10 Port #1  Port B1,B2; HBA 6 Port #2
Data LUN #22         Port B1,B2; HBA 6 Port #2   Port A1,A2; HBA 10 Port #1

Array #12
Data LUN #23         Port A1,A2; HBA 7 Port #1   Port B1,B2; HBA 8 Port #2
Data LUN #24         Port B1,B2; HBA 8 Port #2   Port A1,A2; HBA 7 Port #1

Array #13
Data LUN #25         Port A1,A2; HBA 9 Port #1   Port B1,B2; HBA 10 Port #2
Data LUN #26         Port B1,B2; HBA 10 Port #2  Port A1,A2; HBA 9 Port #1

Array #14
Data LUN #27         Port A1,A2; HBA 6 Port #1   Port B1,B2; HBA 7 Port #2
Data LUN #28         Port B1,B2; HBA 7 Port #2   Port A1,A2; HBA 6 Port #1

Array #15
Data LUN #29         Port A1,A2; HBA 8 Port #1   Port B1,B2; HBA 9 Port #2
Data LUN #30         Port B1,B2; HBA 9 Port #2   Port A1,A2; HBA 8 Port #1

Array #16
Data LUN #31         Port A1,A2; HBA 10 Port #1  Port B1,B2; HBA 6 Port #2
Data LUN #32         Port B1,B2; HBA 6 Port #2   Port A1,A2; HBA 10 Port #1
Stage LUN #2         Port B1,B2; HBA 7 Port #2   Port A1,A2; HBA 6 Port #1

Note: Each HBA has 2 ports, Port #1 and Port #2. Each storage array has a total of 4 ports and 2 controllers: ports A1 and A2 refer to the top controller, while ports B1 and B2 refer to the bottom controller.
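Once the LUNs are mapped and the database files are laid out (one data file per Data LUN, in the usual Fast Track fashion), the balance of the mapping scheme can be sanity-checked from inside SQL Server. The query below is a sketch against the standard I/O statistics DMVs; it ranks files by cumulative read volume, where a roughly even spread across the Data LUNs suggests the mapping is working as intended.

```sql
-- Rank database files by cumulative read traffic since instance startup.
-- With one data file per Data LUN, heavily skewed numbers can indicate a
-- mapping or file-placement problem.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_bytes_read,
       vfs.io_stall_read_ms
FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN   sys.master_files AS mf
       ON  mf.database_id = vfs.database_id
       AND mf.file_id     = vfs.file_id
ORDER BY vfs.num_of_bytes_read DESC;
```

The io_stall_read_ms column is also a quick indicator: a single file with disproportionate stall time may be reading through a congested or mis-mapped path.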

Disaster recovery

This reference architecture can be duplicated across two different sites for disaster recovery purposes. The HP P2000 G3 MSA arrays provide native snapshot capability and allow for the simultaneous replication of incremental changes throughout the system. The diagram in Figure 9 below provides a high-level overview of this multi-site configuration.

Figure 9. Disaster Recovery Overview

The P2000 G3 FC/iSCSI Combo MSA controllers allow for the replication of snapshot data using the P2000 iSCSI links, while communication between the Fast Track server and the P2000 arrays is done over the Fibre Channel links between the P2000 arrays and the Host Bus Adapters (HBAs) installed in the server. This separation of duties prevents data access traffic and replication traffic from interfering with each other, providing optimal speeds for both functions. Through the use of the HP P2000 VSS (Microsoft Volume Shadow Copy Service) Provider for Windows x64, the Fast Track server can be paused while a snapshot is taken, then resumed once it is complete. This allows the databases contained within the snapshot to be in a consistent state for the replication process. Once these databases are replicated to the DR site, they can be attached to the DR server and mounted in the same state as they were during the initial snapshot. Domain Name Services (DNS) aliases are used for end-user connections, so that user connections can be redirected to the new server during the failover process. Before end users can access the database on the DR server, many processes must happen behind the scenes to ensure that the data is replicated and accessible. HP provides a step-by-step document to assist in the configuration of a script-based disaster recovery failover for Fast Track. The document is available at: Microsoft SQL Server 2008 Fast Track: A Multi-Site Disaster Recovery Architecture using the HP P2000 G3 MSA Storage Array, http://h20195.www2.hp.com/v2/getdocument.aspx?docname=4aa3-6489enw

The bill of materials for a disaster recovery configuration is detailed in Table 13 of this document.

Bills of materials

Table 11 includes the bill of materials for the server, storage, and switching components of the reference architecture.

Table 11. Bill of materials

Qty   Part Number   Description

Server Configuration
1     AM451A        HP ProLiant DL980 G7 CTO Chassis
1     650770-L21    Intel Xeon E7-4870 4P Kit
1     650770-B21    Intel Xeon E7-4870 4P Kit
1     AM450A        DL980 CPU Assembly
4     AM470A        HP 1200W 12V HE AC Power Supply
1     534562-B21    1G FBWC Module
1     588137-B21    DL580 G7 PCI-E Expansion Kit
1     AM434A        DL580 G7 LP PCIe Expansion Kit
3     512547-B21    HP 146GB 6G SAS 15K SFF DP ENT HDD
5     581286-B21    HP 600GB 6G SAS 10K SFF DP ENT HDD
14    AJ763B        HP 82E 8Gb Dual Port PCI-E FC HBA
8     A0R60A        DL980 G7 Memory Board
96    A0R56A        8 GB Dual Rank x4 PC3-10600R DIMM

Storage Configuration
16    AP846B        HP P2000 G3 MSA FC DC SFF Modular Smart Array
328   581286-B21    HP 600GB 6G SAS 10K SFF DP ENT HDD

Switch Configuration
2     AM869B        HP 8/40 40-Ports / 24-Active Enabled SAN Switch
2     AM868B        HP 8/24 24-Ports / 16-Active Enabled SAN Switch
2     T5518A        HP 8/40 SAN Switch 8-Port Upgrade Activation License
92    QK733A        2m LC-LC Cable
3     263474-B22    HP Cat 5e Cables 6ft 8 Pack
2     263474-B23    HP Cat 5e Cables 12ft 8 Pack
92    AJ716B        HP 8Gb Shortwave FC SFP+ 1 Pack
1     J9280A        HP 2510-48G Switch

High Availability

Table 12 includes the bill of materials for the configuration of a High Availability architecture.

Table 12. High Availability Bill of materials

Qty   Part Number   Description

Server Configuration
2     AM451A        HP ProLiant DL980 G7 CTO Chassis
2     650770-L21    Intel Xeon E7-4870 4P Kit
2     650770-B21    Intel Xeon E7-4870 4P Kit
2     AM450A        DL980 CPU Assembly
8     AM470A        HP 1200W 12V HE AC Power Supply
2     534562-B21    1G FBWC Module
2     588137-B21    DL580 G7 PCI-E Expansion Kit
2     AM434A        DL580 G7 LP PCIe Expansion Kit
6     512547-B21    HP 146GB 6G SAS 15K SFF DP ENT HDD
10    581286-B21    HP 600GB 6G SAS 10K SFF DP ENT HDD
28    AJ763B        HP 82E 8Gb Dual Port PCI-E FC HBA
16    A0R60A        DL980 G7 Memory Board
192   A0R56A        8 GB Dual Rank x4 PC3-10600R DIMM

Storage Configuration
16    AP846B        HP P2000 G3 MSA FC DC SFF Modular Smart Array
340   581286-B21    HP 600GB 6G SAS 10K SFF DP ENT HDD

Switch Configuration
2     AM869B        HP 8/40 40-Ports / 24-Active Enabled SAN Switch
2     AM868B        HP 8/24 24-Ports / 16-Active Enabled SAN Switch
2     T5518A        HP 8/40 SAN Switch 8-Port Upgrade Activation License
120   QK733A        2m LC-LC Cable
4     263474-B22    HP Cat 5e Cables 6ft 8 Pack
3     263474-B23    HP Cat 5e Cables 12ft 8 Pack
120   AJ716B        HP 8Gb Shortwave FC SFP+ 1 Pack
1     J9280A        HP 2510-48G Switch

Disaster recovery

Table 13 includes the bill of materials for the configuration of a disaster recovery architecture.

Table 13. Disaster Recovery Bill of materials

Qty   Part Number   Description

Server Configuration
2     AM451A        HP ProLiant DL980 G7 CTO Chassis
2     650770-L21    Intel Xeon E7-4870 4P Kit
2     650770-B21    Intel Xeon E7-4870 4P Kit
2     AM450A        DL980 CPU Assembly
8     AM470A        HP 1200W 12V HE AC Power Supply
2     534562-B21    1G FBWC Module
2     588137-B21    DL580 G7 PCI-E Expansion Kit
2     AM434A        DL580 G7 LP PCIe Expansion Kit
6     512547-B21    HP 146GB 6G SAS 15K SFF DP ENT HDD
10    581286-B21    HP 600GB 6G SAS 10K SFF DP ENT HDD
28    AJ763B        HP 82E 8Gb Dual Port PCI-E FC HBA
16    A0R60A        DL980 G7 Memory Board
192   A0R56A        8 GB Dual Rank x4 PC3-10600R DIMM

Storage Configuration
32    AW568B        HP P2000 G3 MSA FC/iSCSI DC SFF Modular Smart Array
16    TA808A        HP Remote Snap LTU
656   581286-B21    HP 600GB 6G SAS 10K SFF DP ENT HDD

Switch Configuration
4     AM869B        HP 8/40 40-Ports / 24-Active Enabled SAN Switch
4     AM868B        HP 8/24 24-Ports / 16-Active Enabled SAN Switch
4     T5518A        HP 8/40 SAN Switch 8-Port Upgrade Activation License
184   QK733A        2m LC-LC Cable
10    263474-B22    HP Cat 5e Cables 6ft 8 Pack
12    263474-B23    HP Cat 5e Cables 12ft 8 Pack
184   AJ716B        HP 8Gb Shortwave FC SFP+ 1 Pack
2     J9280A        HP 2510-48G Switch

Implementing a proof-of-concept

As a matter of best practice for all deployments, HP recommends implementing a proof-of-concept using a test environment that matches the planned production environment as closely as possible. In this way, appropriate performance and scalability characterizations can be obtained. For help with a proof-of-concept, contact an HP Services representative (hp.com/large/contact/enterprise/index.html) or your HP partner.

For more information

Fast Track Data Warehouse Reference Guide for SQL Server 2012, http://technet.microsoft.com/en-us/library/hh918452.aspx
HP ActiveAnswers, hp.com/solutions/activeanswers
HP ActiveAnswers for Microsoft SQL Server solutions, hp.com/solutions/activeanswers/microsoft/sql
Microsoft SQL Server 2012 Fast Track Data Warehouse Reference Architectures for HP, hp.com/solutions/microsoft/fasttrack
HP Converged Systems for Microsoft SQL: Data Management, hp.com/solutions/microsoft/sql
HP and Microsoft, hp.com/go/microsoft
HP ProLiant DL980 G7 server, hp.com/servers/dl980-g7
HP Networking, hp.com/go/networking
How to buy, hp.com/buy

To help us improve our documents, please provide feedback at hp.com/solutions/feedback.

Sign up for updates: hp.com/go/getupdated

Copyright 2011-2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. AMD is a trademark of Advanced Micro Devices, Inc. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries.

4AA3-3609ENW, July 2013, Rev. 4