Hitachi Unified Storage 110 Dynamically Provisioned 10,400 Mailbox Exchange 2010 Mailbox Resiliency Storage Solution




Hitachi Unified Storage 110 Dynamically Provisioned 10,400 Mailbox Exchange 2010 Mailbox Resiliency Storage Solution

Tested with: ESRP Storage Version 3.0
Test Date: July-August 2012

Notices and Disclaimer

Copyright 2012 Hitachi Data Systems Corporation. All rights reserved. The performance data contained herein was obtained in a controlled, isolated environment. Actual results that may be obtained in other operating environments may vary significantly. While Hitachi Data Systems Corporation has reviewed each item for accuracy in a specific situation, there is no guarantee that the same results can be obtained elsewhere. All designs, specifications, statements, information and recommendations (collectively, "designs") in this manual are presented "AS IS," with all faults. Hitachi Data Systems Corporation and its suppliers disclaim all warranties, including without limitation, the warranty of merchantability, fitness for a particular purpose and non-infringement or arising from a course of dealing, usage or trade practice. In no event shall Hitachi Data Systems Corporation or its suppliers be liable for any indirect, special, consequential or incidental damages, including without limitation, lost profit or loss or damage to data arising out of the use or inability to use the designs, even if Hitachi Data Systems Corporation or its suppliers have been advised of the possibility of such damages. This document has been reviewed for accuracy as of the date of initial publication. Hitachi Data Systems Corporation may make improvements and/or changes in product and/or programs at any time without notice.

Table of Contents

Overview
Disclaimer
Features
Solution Description
Targeted Customer Profile
Test Deployment
Replication Configuration
Best Practices
    Core Storage
    Storage-based Replication
    Backup Strategy
Test Results Summary
    Reliability
    Storage Performance Results
    Backup and Recovery Performance
Conclusion
Appendix A RAID 5 Drive Failure and Rebuild
Appendix B Test Reports
    Performance Test Result: CB10
    Performance Test Checksums Result: CB10
    Stress Test Result: CB10
    Stress Test Checksums Result: CB10
    Backup Test Result: CB10
    Soft Recovery Test Result: CB10
    Soft Recovery Test Performance Result: CB10

Hitachi Unified Storage 110 Dynamically Provisioned 10,400 Mailbox Exchange 2010 Mailbox Resiliency Storage Solution
Tested with: ESRP Storage Version 3.0
Test Date: July-August 2012

Overview

This document provides information on a Microsoft Exchange Server 2010 mailbox resiliency storage solution that uses Hitachi Unified Storage 110 storage systems with Hitachi Dynamic Provisioning. This solution is based on the Microsoft Exchange Solution Reviewed Program (ESRP) Storage program. For more information about the contents of this document or Hitachi Data Systems best practice recommendations for Microsoft Exchange Server 2010 storage design, see the Hitachi Data Systems Microsoft Exchange Solutions Web page.

The ESRP Storage program was developed by Microsoft Corporation to provide a common storage testing framework for vendors to provide information on their storage solutions for Microsoft Exchange Server software. For more information about the Microsoft ESRP Storage program, see TechNet's overview of the program.

Disclaimer

This document has been produced independently of Microsoft Corporation. Microsoft Corporation expressly disclaims responsibility for, and makes no warranty, express or implied, with respect to the accuracy of the contents of this document.

The information contained in this document represents the current view of Hitachi Data Systems on the issues discussed as of the date of publication. Due to changing market conditions, it should not be interpreted to be a commitment on the part of Hitachi Data Systems, and Hitachi Data Systems cannot guarantee the accuracy of any information presented after the date of publication.

Features

The purpose of this testing was to measure the ESRP 3.0 results of a Microsoft Exchange 2010 environment with 10,400 users and four servers. This testing used Hitachi Unified Storage 110 with Hitachi Dynamic Provisioning in a two-pool RAID-5 (8D+1P) resiliency configuration (one pool for databases and one for logs). These results help answer questions about the kind of performance capabilities to expect with a large-scale Exchange deployment on Hitachi Unified Storage 110.

Testing used four Hitachi Compute Blade 2000 server blades in a single chassis, each with the following:

64 GB of RAM
Two quad-core Intel Xeon X5690 3.46 GHz CPUs
Two dual-port 8 Gb/sec Fibre Channel PCIe HBAs (Emulex LPe1205-HI, using two ports per HBA) located in the chassis expansion tray
Microsoft Windows Server 2008 R2 Enterprise

This solution includes Exchange 2010 Mailbox Resiliency by using the database availability group (DAG) feature. The tested configuration uses four DAGs, each containing two servers and twenty databases with two copies per database. The test configuration was capable of supporting 10,400 users with a 0.12 IOPS per user profile and a user mailbox size of 1 GB.

A Hitachi Unified Storage 110 with the following was used for these tests:

117 2 TB 7.2K RPM SAS disks
8 GB of cache
Eight 8 Gb/sec paths used

Hitachi Unified Storage 110 is a highly reliable midrange storage system that can scale to 120 disks while maintaining 99.999% availability. It is highly suitable for a variety of applications and host platforms and is modular in scale. With the option of in-system and cross-system replication functionality, Hitachi Unified Storage 110 is fully capable of being used as the core underlying storage platform for high-performance Exchange Server 2010 architectures.

Solution Description

Deploying Microsoft Exchange Server 2010 requires careful consideration of all aspects of the solution architecture. Host servers need to be configured so that they are robust enough to handle the required Exchange load. The storage solution must be designed to provide the necessary performance while also being reliable and easy to administer. Of course, an effective backup and recovery plan should be incorporated into the solution as well. The aim of this solution report is to provide a tested configuration that uses Hitachi Unified Storage 110 to meet the needs of a large Exchange Server deployment.

This solution uses Hitachi Dynamic Provisioning, which is enabled on Hitachi Unified Storage 110 via a license key. In the most basic sense, Hitachi Dynamic Provisioning is similar to a host-based logical volume manager (LVM), but with several additional features available within Hitachi Unified Storage 110 and without the need to install software on the host or incur host processing overhead. Hitachi Dynamic Provisioning provides a superior solution by offering one or more pools of wide striping across many RAID groups within Hitachi Unified Storage 110. One or more Hitachi Dynamic Provisioning virtual volumes (DP-VOLs) of a user-specified logical size (with no initial physical space allocated) are created and associated with a single pool.

Primarily, Hitachi Dynamic Provisioning is deployed to avoid the routine issue of hot spots that occur on logical units (LUs) from individual RAID groups when the host workload exceeds the IOPS or throughput capacity of that RAID group. By using many RAID groups as members of a striped Hitachi Dynamic Provisioning pool underneath the virtual or logical volumes seen by the hosts, a host workload is distributed across many RAID groups, which provides a smoothing effect that dramatically reduces hot spots and results in fewer mailbox moves for the Exchange administrator.

Hitachi Dynamic Provisioning also carries the side benefit of thin provisioning, where physical space is only assigned from the pool to the DP-VOL as needed, in 1 GB chunks per RAID group, up to the logical volume size specified for each DP-VOL. Space from a 1 GB chunk is then allocated as needed as 32 MB pool pages to that DP-VOL's logical block address range. A pool can also be dynamically expanded by adding more RAID groups without disruption or requiring downtime. Upon expansion, a pool can be rebalanced easily so that the data and workload are wide striped evenly across the current and newly added RAID groups that make up the pool.

High availability is also part of this solution with the use of database availability groups (DAG), the base component of the high availability and site resilience framework built into Microsoft Exchange Server 2010. A DAG is a group of up to 16 mailbox servers that host a set of databases and logs and use continuous replication to provide automatic database-level recovery from failures that affect individual servers or databases. Any server in a DAG can host a copy of a mailbox database from any other server in the DAG. When a server is added to a DAG, it monitors and works with the other servers in the DAG to provide automatic recovery, delivering a robust, highly available Exchange solution without the administrative complexities of traditional failover clustering.
For more information about the DAG feature in Exchange Server 2010, see http://technet.microsoft.com/en-us/library/dd979799.aspx.

This solution includes two copies of each Exchange database using four DAGs, with each DAG configured with two server blades (one simulated) that host active mailboxes in twenty databases. To target the 10,400-user resiliency solution, a Hitachi Unified Storage 110 storage system was configured with 117 disks (out of a maximum of 120). Four servers (one per DAG) were used, with each server configured with 2,600 mailboxes. There were 20 active databases per server plus the simulated database copies for the tests.

Each DAG contained two copies of the databases hosted by that DAG:

A local, active copy on a server connected to the primary Hitachi Unified Storage 110
A passive copy (simulated) on another server connected to a second Hitachi Unified Storage 110 (simulated)

This recommended configuration can support both high-availability and disaster-recovery scenarios when the active and passive database copies are allocated among both DAG members and dispersed across both storage systems. Each simulated DAG server node in this solution maintains a mirrored configuration and possesses adequate capacity and performance capabilities to support the second set of replicated databases.

Figure 1 illustrates the two systems that make up the simulated DAG configuration. For more information, see the Hitachi Data Systems Storage Systems web page.

Figure 1

This solution enables organizations to consolidate Exchange Server 2010 DAG deployments on two Hitachi Unified Storage 110 storage systems. Using identical hardware and software configurations guarantees that an active database and its replicated copy do not share storage paths, disk spindles or storage controllers, making it a very reliable, high-performing, highly available Exchange Server 2010 solution that is cost effective and easy to manage. This helps ensure that performance and service levels related to storage are maintained regardless of which server is hosting the active database. If further protection is needed in a production environment, additional Exchange Server 2010 mailbox servers can be easily added to support these failover scenarios.

Table 1 illustrates how the disks in Hitachi Unified Storage 110 were organized into RAID groups for use by databases or logs. Each RAID group is a RAID-5 (8D+1P) group. There were 117 2 TB 7.2K RPM SAS disks used in these tests, configured as 13 RAID groups for the Exchange databases and logs.

Table 1. Hitachi Unified Storage 110 RAID Groups by Tray Layout (HUS 110, 2 TB 7.2K SAS, RAID-5 (8D+1P) HDP pool disk layout; each tray holds slots 0 through 23)

Unit 4: RG10, RG11, RG12, three spares (S)
Unit 3: RG8, RG9, RG10
Unit 2: RG5, RG6, RG7
Unit 1: RG2, RG3, RG4, RG5
Unit 0: RG0, RG1, RG2

RAID groups RG0 through RG10 make up the database pool, and RG11 and RG12 make up the log pool. Each RAID group contains nine disks, so groups RG2, RG5 and RG10 span two adjacent trays.

Disk trays 0 through 4 each held 24 7.2K RPM SAS disks. Two dynamic provisioning pools were created, one for the databases and the other for the logs. The database pool was created from 11 RAID-5 (8D+1P) groups and the log pool was created from 2 RAID-5 (8D+1P) groups. From the database pool, 80 DP-VOLs (each specified to have a 1,920 GB size limit) were created for 80 databases (twenty per server). From the log pool, 80 DP-VOLs (each specified to have a size limit of 192 GB) were created for 80 logs (twenty per server).
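As a quick cross-check of this layout, the short sketch below compares the logical capacity provisioned as DP-VOLs with the formatted capacity of each dynamic provisioning pool. This is an illustration only; all figures are taken from Tables 1, 9 and 10 of this report.

```python
# Cross-check: provisioned DP-VOL capacity vs. pool formatted capacity.
# All figures are quoted in this report (Tables 1, 9 and 10); illustration only.

DB_POOL_FORMATTED_GB = 155_760   # 11 x RAID-5 (8D+1P) groups (Table 9)
LOG_POOL_FORMATTED_GB = 28_320   # 2 x RAID-5 (8D+1P) groups (Table 10)

db_dp_vols_gb = 80 * 1_920       # 80 database DP-VOLs, 1,920 GB logical size each
log_dp_vols_gb = 80 * 192        # 80 log DP-VOLs, 192 GB logical size each

print(f"Database DP-VOLs: {db_dp_vols_gb:,} GB of {DB_POOL_FORMATTED_GB:,} GB "
      f"({db_dp_vols_gb / DB_POOL_FORMATTED_GB:.1%} of the pool)")
print(f"Log DP-VOLs: {log_dp_vols_gb:,} GB of {LOG_POOL_FORMATTED_GB:,} GB "
      f"({log_dp_vols_gb / LOG_POOL_FORMATTED_GB:.1%} of the pool)")
# Database DP-VOLs: 153,600 GB of 155,760 GB (98.6% of the pool)
# Log DP-VOLs: 15,360 GB of 28,320 GB (54.2% of the pool)
```

Because the database pool is nearly fully provisioned while the log pool has ample free space, the thin-provisioned DP-VOLs can grow to their full logical size without over-committing either pool. The 192 GB log DP-VOLs are also 10 percent of the size of the 1,920 GB database DP-VOLs, in line with the log sizing best practice later in this document.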

Table 2 outlines the port layout for the primary storage and servers. An identical configuration would be deployed on the replicated storage and servers for this solution.

Table 2. Hitachi Unified Storage 110 Ports to Server Layout

Server  Primary paths  Secondary paths
CB10    0A, 0C         1A, 1C
CB11    1A, 1C         0A, 0C
CB12    0B, 0D         1B, 1D
CB13    1B, 1D         0B, 0D

Table 3 outlines the port layout with the database DP-VOL assignments for the primary storage and servers. An identical configuration would be deployed on the replicated storage and servers for this solution.

Table 3. Hitachi Unified Storage 110 Ports to Database DP-VOL Layout

Port  Databases   DP-VOLs
0A    DBs 1-10    0-9
0C    DBs 11-20   10-19
1A    DBs 21-30   20-29
1C    DBs 31-40   30-39
0B    DBs 41-50   40-49
0D    DBs 51-60   50-59
1B    DBs 61-70   60-69
1D    DBs 71-80   70-79

Table 4 outlines the port layout with the log DP-VOL assignments for the primary storage and servers. An identical configuration would be deployed on the replicated storage and servers for this solution.

Table 4. Hitachi Unified Storage 110 Ports to Log DP-VOL Layout

Port  Logs           DP-VOLs
0A    Logs 81-90     80-89
0C    Logs 91-100    90-99
1A    Logs 101-110   100-109
1C    Logs 111-120   110-119
0B    Logs 121-130   120-129
0D    Logs 131-140   130-139
1B    Logs 141-150   140-149
1D    Logs 151-160   150-159

Table 5 provides the detailed specifications for the storage configuration, which uses RAID-5 (8D+1P) groups and 2 TB 7.2K disks. Dynamic Provisioning Pool 1 is dedicated to the databases and Dynamic Provisioning Pool 0 is dedicated to the logs.

Table 5. Hitachi Unified Storage 110 Configuration Details

Host  Pool  Ports  DP-VOLs  Size (GB)  RAID Level  Description
CB10  1     0A/1A  0-9      1920       RAID-5      DBs 1-10
CB10  1     0C/1C  10-19    1920       RAID-5      DBs 11-20
CB11  1     1A/0A  20-29    1920       RAID-5      DBs 21-30
CB11  1     1C/0C  30-39    1920       RAID-5      DBs 31-40
CB12  1     0B/1B  40-49    1920       RAID-5      DBs 41-50
CB12  1     0D/1D  50-59    1920       RAID-5      DBs 51-60
CB13  1     1B/0B  60-69    1920       RAID-5      DBs 61-70
CB13  1     1D/0D  70-79    1920       RAID-5      DBs 71-80
CB10  0     0A/1A  80-89    192        RAID-5      Logs 81-90
CB10  0     0C/1C  90-99    192        RAID-5      Logs 91-100
CB11  0     1A/0A  100-109  192        RAID-5      Logs 101-110
CB11  0     1C/0C  110-119  192        RAID-5      Logs 111-120
CB12  0     0B/1B  120-129  192        RAID-5      Logs 121-130
CB12  0     0D/1D  130-139  192        RAID-5      Logs 131-140
CB13  0     1B/0B  140-149  192        RAID-5      Logs 141-150
CB13  0     1D/0D  150-159  192        RAID-5      Logs 151-160

The ESRP Storage program focuses on storage solution testing to address performance and reliability issues with storage design. However, storage is not the only factor to take into consideration when designing a scale-up Exchange solution. These factors also affect server scalability:

Server processor utilization
Server physical and virtual memory limitations
Resource requirements for other applications
Directory and network service latencies
Network infrastructure limitations
Replication and recovery requirements
Client usage profiles

These factors are all beyond the scope of the ESRP Storage program. Therefore, the number of mailboxes hosted per server as part of the tested configuration might not necessarily be viable for some customer deployments. For more information about identifying and addressing performance bottlenecks in an Exchange system, see Microsoft's Troubleshooting Microsoft Exchange Server Performance.

Targeted Customer Profile

This solution is designed for medium to large organizations that plan to consolidate their Exchange Server 2010 storage on high-performance, high-reliability storage systems. This configuration is designed to support 10,400 Exchange users with the following specifications:

Eight Exchange servers (four tested, four simulated for the database copies)
Four database availability groups (DAGs), each with two servers (one simulated) and two copies per database
Two Hitachi Unified Storage 110 systems (one tested)
0.1 IOPS per user (0.12 tested to allow for 20 percent growth; a worked sizing example follows this list)
1 GB mailbox size
Mailbox resiliency used as the primary data protection mechanism, providing high availability
Hitachi Unified Storage RAID protection against physical failure or loss
24x7 background database maintenance enabled
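The per-user I/O profile above drives the performance target for each server. The sketch below is an illustrative calculation only (it is not part of the ESRP test procedure); it shows how the tested 0.12 IOPS figure and the per-server Jetstress target of 312 transactional I/Os per second reported in Appendix B follow from this profile.

```python
# Illustrative sizing arithmetic based on the targeted customer profile above.
# Not part of the ESRP test procedure; all inputs are figures quoted in this report.

users_total = 10_400
servers_tested = 4
base_iops_per_user = 0.10       # design profile: 0.1 IOPS per user
growth_headroom = 0.20          # 20 percent growth allowance

tested_iops_per_user = base_iops_per_user * (1 + growth_headroom)        # 0.12
users_per_server = users_total // servers_tested                         # 2,600
target_iops_per_server = round(users_per_server * tested_iops_per_user)  # 312
target_iops_total = round(users_total * tested_iops_per_user)            # 1,248

print(users_per_server, target_iops_per_server, target_iops_total)
# 2600 312 1248 -- the 312 figure matches the "Target Transactional I/O per
# Second" reported by Jetstress for each server in Appendix B.
```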

Test Deployment

The following tables summarize the testing environment.

Table 6. Simulated Exchange Configuration

Number of Exchange mailboxes simulated: 10,400
Number of database availability groups (DAGs): 4
Number of servers per DAG: 2 (1 simulated)
Number of active mailboxes per server: 2,600
Number of databases per host: 20
Number of copies per database: 2
Number of mailboxes per database: 130
Simulated profile: I/O's per second per mailbox (IOPS, including 20% headroom): 0.12
Database LU size: 1,920 GB
Log LU size: 192 GB
Total database size for performance testing: 10,400 GB
% storage capacity used by Exchange database**: 6.7%

**Storage performance characteristics change based on the percentage utilization of the individual disks. Tests that use a small percentage of the storage (~25%) might exhibit reduced throughput if the storage capacity utilization is significantly increased beyond what was tested for this paper.

Table 7. Storage Hardware

Storage connectivity (Fibre Channel, SAS, SATA, iSCSI): Fibre Channel
Storage model and OS/firmware revision: 1 Hitachi Unified Storage 110, firmware 0920/A-W; WHQL listing: Hitachi Unified Storage 110
Storage cache: 8 GB
Number of storage controllers: 2
Number of storage ports: 8
Maximum bandwidth of storage connectivity to host: 64 Gb/sec (8 x 8 Gb/sec ports)
Switch type/model/firmware revision: Brocade 5300, Fabric OS v7.0.1b
HBA model and firmware: Emulex LPe1205-HI, FW 1.11X14
Number of HBAs per host: 2 dual-ported HBAs per host, 2 x 8 Gb/sec ports used per HBA

Host server type: Hitachi Compute Blade E55A2, 2 x 3.46 GHz Intel Xeon processors, 32 GB memory
Total number of disks tested in solution: 117
Maximum number of spindles that can be hosted in the storage: 120

Table 8. Storage Software

HBA driver: Storport Miniport 7.2.20.006
HBA QueueTarget setting: 0
HBA QueueDepth setting: 32
Multipathing: Hitachi Dynamic Link Manager v7.2.1-00
Host OS: Microsoft Windows Server 2008 R2 Enterprise
ESE.dll file version: 14.01.0225.017
Replication solution name/version: N/A

Table 9. Storage Disk Configuration (Mailbox Store Disks)

Disk type, speed and firmware revision: SAS disk, 2 TB, 7.2K RPM, 5C0C
Raw capacity per disk (GB): 2 TB
Number of physical disks in test: 99 (dynamic provisioning pool)
Total raw storage capacity (GB): 198,000
Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-5 (8D+1P) at storage level
Total formatted capacity: 155,760 GB
Storage capacity utilization: 78.7%
Database capacity utilization: 77.6%

Table 10. Storage Disk Configuration (Transaction Log Disks)

Disk type, speed and firmware revision: SAS disk, 2 TB, 7.2K RPM, 5C0C
Raw capacity per disk (GB): 2 TB
Number of spindles in test: 18 (dynamic provisioning pool)
Total raw storage capacity (GB): 36,000

Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-5 (8D+1P) at storage level
Total formatted capacity: 28,320 GB

Replication Configuration

Table 11. Replication Configuration

Replication mechanism: Exchange Server 2010 Database Availability Group (DAG)
Number of links: 2
Simulated link distance: N/A
Link type: IP
Link bandwidth: GigE (1 Gb/sec)

Table 12. Replicated Storage Hardware

Storage connectivity (Fibre Channel, SAS, SATA, iSCSI): Fibre Channel
Storage model and OS/firmware revision: 1 Hitachi Unified Storage 110, firmware 0920/A-W; WHQL listing: Hitachi Unified Storage 110
Storage cache: 8 GB
Number of storage controllers: 2
Number of storage ports: 8
Maximum bandwidth of storage connectivity to host: 64 Gb/sec (8 x 8 Gb/sec ports)
Switch type/model/firmware revision: Brocade 5300, Fabric OS v7.0.1b
HBA model and firmware: Emulex LPe1205-HI, FW 1.11X14
Number of HBAs per host: 2 dual-ported HBAs per host, 2 x 8 Gb/sec ports used per HBA
Host server type: Hitachi Compute Blade E55A2, 2 x 3.46 GHz Intel Xeon processors, 32 GB memory
Total number of disks tested in solution: 117
Maximum number of spindles that can be hosted in the storage: 120

Table 13. Replicated Storage Software

HBA driver: Storport Miniport 7.2.20.006
HBA QueueTarget setting: 0
HBA QueueDepth setting: 32
Multipathing: Hitachi Dynamic Link Manager v7.2.1-00
Host OS: Microsoft Windows Server 2008 R2 Enterprise
ESE.dll file version: 14.01.0225.017
Replication solution name/version: N/A

Table 14. Replicated Storage Disk Configuration (Mailbox Store Disks)

Disk type, speed and firmware revision: SAS disk, 2 TB, 7.2K RPM, 5C0C
Raw capacity per disk (GB): 2 TB
Number of physical disks in test: 99 (dynamic provisioning pool)
Total raw storage capacity (GB): 198,000
Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-5 (8D+1P) at storage level
Total formatted capacity: 155,760 GB
Storage capacity utilization: 78.7%
Database capacity utilization: 77.6%

Table 15. Replicated Storage Disk Configuration (Transaction Log Disks)

Disk type, speed and firmware revision: SAS disk, 2 TB, 7.2K RPM, 5C0C
Raw capacity per disk (GB): 2 TB
Number of spindles in test: 18 (dynamic provisioning pool)
Total raw storage capacity (GB): 36,000
Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-5 (8D+1P) at storage level
Total formatted capacity: 28,320 GB
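For readers who want to reconcile the capacity figures above, the following sketch derives the utilization percentages in Tables 6, 9 and 14 from the disk counts. It is an illustration only; it treats a 2 TB disk as 2,000 GB raw, which is consistent with the 198,000 GB total raw capacity quoted for the 99-disk database pool.

```python
# Derivation of the capacity utilization figures in Tables 6, 9 and 14.
# Illustration only; all inputs are figures quoted in this report.

raw_gb_per_disk = 2_000
db_pool_disks = 99                                   # 11 x RAID-5 (8D+1P)
db_pool_raw_gb = db_pool_disks * raw_gb_per_disk     # 198,000 GB (Table 9)
db_pool_formatted_gb = 155_760                       # GB, from Table 9

mailboxes = 10_400
mailbox_size_gb = 1
databases = 80
total_db_size_gb = mailboxes * mailbox_size_gb       # 10,400 GB (Table 6)
mailboxes_per_db = mailboxes // databases            # 130 (Table 6)

print(f"Formatted/raw capacity: {db_pool_formatted_gb / db_pool_raw_gb:.1%}")         # ~78.7%
print(f"Exchange database footprint: {total_db_size_gb / db_pool_formatted_gb:.1%}")  # ~6.7%
print(f"Mailboxes per database: {mailboxes_per_db}")                                  # 130
```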

Best Practices

Microsoft Exchange Server 2010 is a disk-intensive application. It presents two distinct workload patterns to the storage: 32 KB random read/write operations to the databases, and sequential write operations of varying size (from 512 bytes up to the log buffer size) to the transaction logs. For this reason, designing an optimal storage configuration can prove challenging in practice. Based on the testing run using the ESRP framework, Hitachi Data Systems recommends these best practices to improve the performance of Hitachi Unified Storage 110 running Exchange 2010. For more information about Exchange 2010 best practices for storage design, see the Microsoft TechNet article Mailbox Server Storage Design.

Core Storage

1. When formatting a newly partitioned LU, Hitachi Data Systems recommends setting the allocation unit size (ALU) to 64 KB for the database files and 4 KB for the log files.
2. Disk alignment is no longer required when using Microsoft Windows Server 2008.
3. Keep the Exchange workload isolated from other applications. Mixing in another intensive application whose workload differs from Exchange can cause the performance of both applications to degrade.
4. Use Hitachi Dynamic Link Manager multipathing software to provide fault tolerance and high availability for host connectivity.
5. Use Hitachi Dynamic Provisioning to simplify storage management of the Exchange database and log volumes.
6. Due to the difference in I/O patterns, isolate the Exchange databases from the logs. Create a dedicated Hitachi Dynamic Provisioning pool for the databases and a separate pool for the logs.
7. The log LUs should be at least 10 percent of the size of the database LUs.
8. Hitachi Data Systems does not recommend using LU concatenation.
9. Hitachi Data Systems recommends implementing Mailbox Resiliency using the Exchange Server 2010 Database Availability Group feature.
10. Ensure that each DAG maintains at least two database copies to provide high availability.
11. Isolate active databases and their replicated copies in separate dynamic provisioning pools, or ensure that they are located on a separate Hitachi Unified Storage 110.
12. Use fewer, larger LUs for Exchange 2010 databases (up to 2 TB) with Background Database Maintenance (24x7) enabled.
13. Size storage solutions for Exchange based primarily on performance criteria. The number of disks, the RAID level and the percent utilization of each disk directly affect the level of achievable performance. Factor in capacity requirements only after performance is addressed.
14. Disk size is unrelated to performance with regard to IOPS or throughput rates. Disk size is related to the usable capacity of all of the LUs from a RAID group, which is a choice users make.

15. The number of spindles, coupled with the RAID level, determines the physical IOPS capacity of the RAID group and all of its LUs. If the RAID group has too few spindles, response times grow very quickly to large values. (An illustrative sizing sketch follows this section.)

Storage-based Replication

N/A

Backup Strategy

N/A
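To put best practice 15 in concrete terms, the sketch below estimates the host IOPS that a striped RAID-5 pool can absorb from its spindle count. The 75 IOPS per 7.2K RPM disk figure and the 60/40 read/write mix are illustrative assumptions only; they are not quoted anywhere in this report, and production sizing should follow Hitachi Data Systems and Microsoft sizing guidance.

```python
# Illustrative estimate for best practice 15: how spindle count and RAID level
# bound the host IOPS a pool can absorb. The per-disk IOPS figure and the
# read/write mix are assumptions for illustration, not values from this report.

def host_iops_capacity(spindles, iops_per_disk=75, read_ratio=0.60, write_penalty=4):
    """Approximate host IOPS before the disks in a parity RAID pool saturate."""
    backend_capacity = spindles * iops_per_disk
    backend_ios_per_host_io = read_ratio + (1 - read_ratio) * write_penalty
    return backend_capacity / backend_ios_per_host_io

# Database pool in this solution: 99 disks in 11 x RAID-5 (8D+1P) groups.
print(round(host_iops_capacity(99)))
# ~3375 host IOPS, comfortably above the 1,248 transactional IOPS target
# for 10,400 users at 0.12 IOPS per mailbox.
```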

Test Results Summary

This section provides a high-level summary of the test data from ESRP and a link to the detailed HTML reports that are generated by the ESRP testing framework.

Reliability

A number of tests in the framework check reliability spanning a 24-hour window. The goal is to verify that the storage can handle a high load for a long period of time. Following these stress tests, both log and database files are analyzed for integrity to ensure that no database or log corruption occurs.

No errors were reported in the event log file for the storage reliability testing.
No errors were reported for the database and log checksum process.
Where performed, no errors were reported during the backup-to-disk test process.
No errors were reported for the database checksum on the remote storage database.

Storage Performance Results

Primary storage performance testing exercises the storage with the maximum sustainable Exchange-type I/O for two hours. The test shows how long it takes for the storage to respond to an I/O under load. The following data is the sum of all of the logical disk I/O's and the average of all the logical disk latencies during the two-hour test duration.

Individual Server Metrics

These individual server metrics show the sum of the I/O's across the storage groups and the average latency across all storage groups on a per-server basis.

Table 16. Individual Server Metrics for Exchange Server (CB10)

Database
Database Disk Transfers Per Second: 490
Database Disk Reads Per Second: 288
Database Disk Writes Per Second: 201
Database Disk Read Latency (ms): 11.1
Database Disk Write Latency (ms): 1.3
Transaction Log
Log Disk Writes Per Second: 191
Log Disk Write Latency (ms): 0.8

Table 17. Individual Server Metrics for Exchange Server (CB11)

Database
Database Disk Transfers Per Second: 491
Database Disk Reads Per Second: 289
Database Disk Writes Per Second: 202
Database Disk Read Latency (ms): 10.9
Database Disk Write Latency (ms): 1.3
Transaction Log
Log Disk Writes Per Second: 192
Log Disk Write Latency (ms): 0.8

Table 18. Individual Server Metrics for Exchange Server (CB12)

Database
Database Disk Transfers Per Second: 488
Database Disk Reads Per Second: 287
Database Disk Writes Per Second: 201
Database Disk Read Latency (ms): 10.8
Database Disk Write Latency (ms): 1.3
Transaction Log
Log Disk Writes Per Second: 191
Log Disk Write Latency (ms): 0.8

Table 19. Individual Server Metrics for Exchange Server (CB13)

Database
Database Disk Transfers Per Second: 492
Database Disk Reads Per Second: 289
Database Disk Writes Per Second: 202
Database Disk Read Latency (ms): 10.8
Database Disk Write Latency (ms): 1.3
Transaction Log
Log Disk Writes Per Second: 192
Log Disk Write Latency (ms): 0.8

Aggregate Performance Across All Servers

The aggregate performance metrics show the sum of I/O's across all servers in the solution and the average latency across all servers in the solution.

Table 20. Aggregate Performance for Exchange Server 2010

Database
Database Disk Transfers Per Second: 1960.69
Database Disk Reads Per Second: 1154.27
Database Disk Writes Per Second: 806.42
Database Disk Read Latency (ms): 10.92
Database Disk Write Latency (ms): 1.32
Transaction Log
Log Disk Writes Per Second: 765.18
Log Disk Write Latency (ms): 0.78

Backup and Recovery Performance

This section has two tests: the first measures the sequential read rate of the database files, and the second measures recovery/replay performance (playing transaction logs into the database).

Read-only Performance

This test measures the maximum rate at which databases can be backed up via VSS. The following table shows the average rate for a single database file.

Table 21. Read-only Performance

MB Read Per Second Per Database: 19.66
MB Read Per Second Total Per Server: 393.26

Transaction Log Recovery/Replay Performance

This test measures the maximum rate at which the log files can be played against the databases. The following table shows the average rate for 500 log files played in a single storage group. Each log file is 1 MB in size.

Table 22. Transaction Log Recovery/Replay Performance

Time to Play One Log File (sec): 4.42335
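The aggregate figures can be reproduced directly from the per-server tables, and the per-server backup rate in Table 21 follows from the per-database rate. The sketch below is a simple cross-check using values copied from Tables 16 through 21; it is not an ESRP metric.

```python
# Cross-check of Tables 16 through 21; values are copied from this report.

per_server_transfers = [490, 491, 488, 492]          # database disk transfers/sec
per_server_read_latency = [11.1, 10.9, 10.8, 10.8]   # database disk read latency, ms

print(sum(per_server_transfers))                      # 1961, vs. 1960.69 in Table 20
print(sum(per_server_read_latency) / 4)               # 10.9, vs. 10.92 ms in Table 20

mb_per_sec_per_database = 19.66                       # Table 21
databases_per_server = 20
print(mb_per_sec_per_database * databases_per_server) # ~393.2, vs. 393.26 MB/s in Table 21
```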

Conclusion

This document details a tested and robust Exchange Server 2010 mailbox resiliency solution capable of supporting 10,400 users with a 0.12 IOPS per user profile and a user mailbox size of 1 GB, using four DAGs, each configured with two server nodes (one simulated). A Hitachi Unified Storage 110 storage system, with 8 GB of cache and four 8 Gb/sec Fibre Channel host paths, using Hitachi Dynamic Provisioning (with two pools) and 117 2 TB 7.2K RPM SAS disks in a RAID-5 (8D+1P) configuration, was used for these tests.

Testing confirmed that Hitachi Unified Storage 110 is more than capable of delivering the IOPS and capacity requirements needed to support the active and replicated databases for 10,400 Exchange mailboxes configured with the specified user profile, while maintaining additional headroom to support peak throughput.

The solution outlined in this document does not include data protection components, such as VSS snapshot or clone backups, and relies on the built-in Mailbox Resiliency features of Exchange Server 2010 coupled with Hitachi Unified Storage RAID technology to provide high availability and protection from logical and physical failures. Adding further protection requirements may affect the performance and capacity requirements of the underlying storage configuration, and as such needs to be factored into the storage design accordingly.

For more information about planning Exchange Server 2010 storage architectures for the Hitachi Unified Storage family, see http://www.hds.com/

This document was developed by Hitachi Data Systems and reviewed by the Microsoft Exchange product team. The test results and data presented in this document are based on the tests introduced in the ESRP test framework. Do not quote the data directly for pre-deployment verification; it is still necessary to validate the storage design for a specific customer environment. The ESRP program is not designed to be a benchmarking program, and the tests do not generate the maximum throughput for a given solution. Rather, the program is focused on producing recommendations from vendors for the Exchange application. Thus, do not use the data presented in this document for direct comparisons among solutions.

Appendix A RAID 5 Drive Failure and Rebuild

These ESRP tests used RAID-5 (8D+1P) rather than RAID-6 (8D+2P) or RAID-10 (for example, 4D+4D). RAID-5 is a more capacity-efficient RAID level than the others: it gives up only about 11 percent of the raw capacity to parity when using 8D+1P, compared to 20 percent for 8D+2P or 50 percent for 4D+4D.

One downside with the use of parity RAID-5 instead of mirrored and striped RAID-10 is that the internal disk write penalty for host writes is higher. RAID-5 volumes require four physical disk I/Os (2 reads, 2 writes) on the backend for every host write. In comparison, RAID-10 requires two physical I/Os (2 writes) and RAID-6 requires six physical I/Os (3 reads, 3 writes) for each host write. The other downside is the rebuild time for the RAID group after a sudden disk failure.

Hitachi Unified Storage 100 family storage systems continually scan the storage system looking for soft fails, because an excessive soft fail count is a predictor of a hard failure in the future. If the number of soft fails exceeds the user-set failure threshold in a 24-hour period, the Hitachi Unified Storage 100 family storage system does the following, in order:

1. Executes a disk-to-disk copy to a global hot spare to avoid a RAID-5 (or RAID-6) volume rebuild.
2. Switches to using the spare disk and marks the source disk as failed.
3. Alerts the person responsible for storage system maintenance to replace the disk.

If a hard fail of a disk in a RAID volume does occur, the following happens:

If using RAID-10, the contents of the good disk are mirrored onto a spare disk. These hot spares are user-defined in several disk enclosures on a storage system.
If using RAID-5 or RAID-6, all disks in the RAID group must be read to recreate the missing data or parity that was located on the failed disk onto the spare disk. This rebuild mode is called corrective copy.

An associated array setting called [Drive] Restore Options determines how aggressive the rebuild operation is while there are still ongoing host I/Os. This setting has three levels: aggressive, moderate, and background (extremely slow).

Lab tests were conducted on a Hitachi Unified Storage 110 volume using RAID-5 (8D+1P) on 900 GB 10K SAS disks with an aggressive restore option setting. A corrective copy operation took about 5 hours to complete with a 25% host load of a random (70% reads, 30% writes) host workload on the four LUs from that RAID group. When there was a sustained 90% host workload (of the same type) to the four LUs from that RAID group, the rebuild time increased to about 26 hours. The IOPS performance on the four LUs from that RAID group was only slightly reduced in both cases.
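The write-penalty arithmetic described in this appendix can be applied to the measured aggregate workload from Table 20. The sketch below is an illustration only; it shows the approximate backend disk I/O generated for the same host workload under each RAID level.

```python
# Backend disk I/O implied by the RAID write penalties described above, applied
# to the measured aggregate workload from Table 20. Illustration only.

host_reads_per_sec = 1154.27    # database disk reads/sec, Table 20
host_writes_per_sec = 806.42    # database disk writes/sec, Table 20

WRITE_PENALTY = {"RAID-10": 2, "RAID-5": 4, "RAID-6": 6}  # disk I/Os per host write

for raid_level, penalty in WRITE_PENALTY.items():
    backend_ios = host_reads_per_sec + host_writes_per_sec * penalty
    print(f"{raid_level}: ~{backend_ios:,.0f} backend disk I/Os per second")
# RAID-10: ~2,767   RAID-5: ~4,380   RAID-6: ~5,993
```

For the RAID-5 (8D+1P) configuration tested here, the host workload therefore generates roughly 4,400 backend I/Os per second spread across the database pool, before the additional sequential reads of 24x7 background database maintenance are counted.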

Appendix B Test Reports

This appendix contains Jetstress test results for one of the servers used in testing this storage solution. These test results are representative of the results obtained for all of the servers tested.

Performance Test Result: CB10

Test Summary

Overall Test Result: Pass
Machine Name: CB10
Test Description:
Test Start Time: 8/24/2012 1:12:53 AM
Test End Time: 8/24/2012 3:44:35 AM
Collection Start Time: 8/24/2012 1:26:59 AM
Collection End Time: 8/24/2012 3:26:48 AM
Jetstress Version: 14.01.0225.017
ESE Version: 14.01.0218.012
Operating System: Windows Server 2008 R2 Enterprise Service Pack 1 (6.1.7601.65536)
Performance Log: C:\HUS110_PE108_C1B1_SAS7K_ESRP_R5_1GB_mbox_2600 Users\Performance Test\Performance_2012_8_24_1_13_38.blg

Sizing and Throughput

Achieved Transactional I/O per Second: 489.813
Target Transactional I/O per Second: 312
Initial Database Size (bytes): 10748303769600
Final Database Size (bytes): 10750384144384
Database Files (Count): 20
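The headline figures in this summary relate directly back to the targeted customer profile: the target of 312 transactional I/Os per second is the 2,600-mailbox, 0.12 IOPS per mailbox requirement for one server, and the achieved figure shows the measured headroom. The following is a simple check, not an ESRP metric.

```python
# Relating the Jetstress summary above to the targeted user profile; simple check only.

target_iops = round(2_600 * 0.12)     # 312: per-server target from the customer profile
achieved_iops = 489.813               # "Achieved Transactional I/O per Second" above

print(target_iops)                            # 312
print(round(achieved_iops / target_iops, 2))  # ~1.57, i.e. roughly 57% headroom over target
```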

Jetstress System Parameters

Thread Count: 3 (per database)
Minimum Database Cache: 640.0 MB
Maximum Database Cache: 5120.0 MB
Insert Operations: 40%
Delete Operations: 20%
Replace Operations: 5%
Read Operations: 35%
Lazy Commits: 70%
Run Background Database Maintenance: True
Number of Copies per Database: 2

Database Configuration

Instance5488.1: Log path: C:\logluns\log1; Database: C:\dbluns\db1\Jetstress001001.edb
Instance5488.2: Log path: C:\logluns\log2; Database: C:\dbluns\db2\Jetstress002001.edb
Instance5488.3: Log path: C:\logluns\log3; Database: C:\dbluns\db3\Jetstress003001.edb
Instance5488.4: Log path: C:\logluns\log4; Database: C:\dbluns\db4\Jetstress004001.edb
Instance5488.5: Log path: C:\logluns\log5; Database: C:\dbluns\db5\Jetstress005001.edb
Instance5488.6: Log path: C:\logluns\log6; Database: C:\dbluns\db6\Jetstress006001.edb
Instance5488.7: Log path: C:\logluns\log7; Database: C:\dbluns\db7\Jetstress007001.edb
Instance5488.8: Log path: C:\logluns\log8; Database: C:\dbluns\db8\Jetstress008001.edb
Instance5488.9: Log path: C:\logluns\log9; Database: C:\dbluns\db9\Jetstress009001.edb
Instance5488.10: Log path: C:\logluns\log10; Database: C:\dbluns\db10\Jetstress010001.edb
Instance5488.11: Log path: C:\logluns\log11; Database: C:\dbluns\db11\Jetstress011001.edb
Instance5488.12: Log path: C:\logluns\log12; Database: C:\dbluns\db12\Jetstress012001.edb
Instance5488.13: Log path: C:\logluns\log13; Database: C:\dbluns\db13\Jetstress013001.edb
Instance5488.14: Log path: C:\logluns\log14; Database: C:\dbluns\db14\Jetstress014001.edb
Instance5488.15: Log path: C:\logluns\log15; Database: C:\dbluns\db15\Jetstress015001.edb
Instance5488.16: Log path: C:\logluns\log16; Database: C:\dbluns\db16\Jetstress016001.edb
Instance5488.17: Log path: C:\logluns\log17; Database: C:\dbluns\db17\Jetstress017001.edb
Instance5488.18: Log path: C:\logluns\log18; Database: C:\dbluns\db18\Jetstress018001.edb
Instance5488.19: Log path: C:\logluns\log19; Database: C:\dbluns\db19\Jetstress019001.edb
Instance5488.20: Log path: C:\logluns\log20; Database: C:\dbluns\db20\Jetstress020001.edb

Transactional Performance MSExchange ==> Instances Reads Latency (msec) Writes Latency (msec) Reads/sec Writes/sec Reads Bytes Writes Bytes Log Reads Latency (msec) Log Writes Latency (msec) Log Reads/sec Log Writes/sec Log Reads Bytes Log Writes Bytes Instance5488.1 15.495 1.470 14.360 10.076 36305.909 37120.907 0.000 0.735 0.000 9.517 0.000 4643.093 Instance5488.2 10.858 1.495 14.808 10.502 37654.575 37195.714 0.000 0.772 0.000 9.912 0.000 4676.016 Instance5488.3 10.433 1.424 14.113 9.732 37614.657 37165.251 0.000 0.807 0.000 9.356 0.000 4480.427 Instance5488.4 11.044 1.420 14.547 10.141 37317.834 36950.127 0.000 0.719 0.000 9.719 0.000 4481.508 Instance5488.5 10.656 1.379 14.336 9.867 37722.228 37203.478 0.000 0.741 0.000 9.435 0.000 4585.438 Instance5488.6 10.818 1.397 14.297 10.003 38063.919 37136.894 0.000 0.777 0.000 9.389 0.000 4685.973 Instance5488.7 11.171 1.307 14.464 10.167 39149.347 37113.689 0.000 0.758 0.000 9.611 0.000 4576.272 Instance5488.8 11.040 1.331 14.345 10.063 37411.303 37141.837 0.000 0.778 0.000 9.646 0.000 4683.351 Instance5488.9 10.885 1.261 14.433 10.040 37682.045 37023.231 0.000 0.780 0.000 9.551 0.000 4554.886 Instance5488.10 10.740 1.273 14.469 10.065 38132.410 37192.960 0.000 0.739 0.000 9.483 0.000 4592.795 Instance5488.11 10.870 1.340 14.346 9.906 37108.085 37197.752 0.000 0.749 0.000 9.351 0.000 4609.329 Instance5488.12 10.879 1.365 14.320 10.082 37749.139 37188.616 0.000 0.726 0.000 9.524 0.000 4576.843 Instance5488.13 10.822 1.280 14.192 9.903 37718.422 37323.400 0.000 0.761 0.000 9.250 0.000 4698.883 Instance5488.14 10.417 1.256 14.261 9.834 37899.316 37125.788 0.000 0.759 0.000 9.402 0.000 4636.277 Instance5488.15 11.132 1.237 14.468 10.017 37405.019 37100.686 0.000 0.734 0.000 9.476 0.000 4567.092 Instance5488.16 10.621 1.251 14.610 10.299 37959.004 36968.295 0.000 0.731 0.000 9.685 0.000 4477.333 Instance5488.17 10.811 1.193 14.571 10.281 37863.615 37343.813 0.000 0.751 0.000 9.598 0.000 4650.981 Instance5488.18 10.607 1.233 14.326 9.953 37374.143 37224.041 0.000 0.765 0.000 9.492 0.000 4639.481 Instance5488.19 11.134 1.164 14.727 10.215 37141.516 37070.902 0.000 0.724 0.000 9.724 0.000 4534.791 Instance5488.20 10.845 1.198 14.500 10.174 37134.008 37036.961 0.000 0.714 0.000 9.667 0.000 4564.235 26

Background Database Maintenance I/O Performance

MSExchange Database ==> Instances | Database Maintenance IO Reads/sec | Database Maintenance IO Reads Average Bytes
Instance5488.1 | 32.935 | 261394.426
Instance5488.2 | 38.508 | 261277.811
Instance5488.3 | 38.839 | 261403.548
Instance5488.4 | 38.156 | 261345.415
Instance5488.5 | 38.449 | 261404.657
Instance5488.6 | 38.178 | 261415.785
Instance5488.7 | 37.951 | 259633.241
Instance5488.8 | 37.904 | 261281.372
Instance5488.9 | 38.072 | 261353.803
Instance5488.10 | 38.465 | 261340.374
Instance5488.11 | 38.167 | 261431.442
Instance5488.12 | 38.185 | 261278.796
Instance5488.13 | 38.245 | 261307.773
Instance5488.14 | 38.811 | 261352.776
Instance5488.15 | 37.981 | 261323.811
Instance5488.16 | 38.687 | 261305.415
Instance5488.17 | 38.270 | 261333.333
Instance5488.18 | 38.699 | 261319.501
Instance5488.19 | 37.859 | 261270.353
Instance5488.20 | 38.206 | 261269.252

Log Replication I/O Performance

MSExchange Database ==> Instances | I/O Log Reads/sec | I/O Log Reads Average Bytes
Instance5488.1 | 0.178 | 69377.673
Instance5488.2 | 0.183 | 71805.873
Instance5488.3 | 0.169 | 66297.000
Instance5488.4 | 0.176 | 68829.819
Instance5488.5 | 0.172 | 66934.797
Instance5488.6 | 0.174 | 67911.948
Instance5488.7 | 0.173 | 69129.868
Instance5488.8 | 0.179 | 70340.147
Instance5488.9 | 0.172 | 67408.696
Instance5488.10 | 0.172 | 67003.399
Instance5488.11 | 0.172 | 66934.797
Instance5488.12 | 0.174 | 67911.948
Instance5488.13 | 0.172 | 67816.650
Instance5488.14 | 0.172 | 66934.797
Instance5488.15 | 0.173 | 67423.373
Instance5488.16 | 0.173 | 67423.373
Instance5488.17 | 0.179 | 70340.147
Instance5488.18 | 0.176 | 69777.616
Instance5488.19 | 0.177 | 68957.699
Instance5488.20 | 0.176 | 68400.523

Total Performance MSExchange ==> Instances Reads Latency (msec) Writes Latency (msec) Reads/sec Writes/sec Reads Bytes Writes Bytes Log Reads Latency (msec) Log Writes Latency (msec) Log Reads/sec Log Writes/sec Log Reads Bytes Log Writes Bytes Instance5488.1 15.495 1.470 47.294 10.076 193051.944 37120.907 3.069 0.735 0.178 9.517 69377.673 4643.093 Instance5488.2 10.858 1.495 53.316 10.502 199167.217 37195.714 3.747 0.772 0.183 9.912 71805.873 4676.016 Instance5488.3 10.433 1.424 52.952 9.732 201757.138 37165.251 3.410 0.807 0.169 9.356 66297.000 4480.427 Instance5488.4 11.044 1.420 52.703 10.141 199508.493 36950.127 3.266 0.719 0.176 9.719 68829.819 4481.508 Instance5488.5 10.656 1.379 52.786 9.867 200654.169 37203.478 3.253 0.741 0.172 9.435 66934.797 4585.438 Instance5488.6 10.818 1.397 52.475 10.003 200564.341 37136.894 3.374 0.777 0.174 9.389 67911.948 4685.973 Instance5488.7 11.171 1.307 52.415 10.167 198788.501 37113.689 3.991 0.758 0.173 9.611 69129.868 4576.272 Instance5488.8 11.040 1.331 52.249 10.063 199817.882 37141.837 3.744 0.778 0.179 9.646 70340.147 4683.351 Instance5488.9 10.885 1.261 52.505 10.040 199870.769 37023.231 3.403 0.780 0.172 9.551 67408.696 4554.886 Instance5488.10 10.740 1.273 52.935 10.065 200328.021 37192.960 3.367 0.739 0.172 9.483 67003.399 4592.795 Instance5488.11 10.870 1.340 52.513 9.906 200150.566 37197.752 3.083 0.749 0.172 9.351 66934.797 4609.329 Instance5488.12 10.879 1.365 52.504 10.082 200314.455 37188.616 3.473 0.726 0.174 9.524 67911.948 4576.843 Instance5488.13 10.822 1.280 52.436 9.903 200793.953 37323.400 3.666 0.761 0.172 9.250 67816.650 4698.883 Instance5488.14 10.417 1.256 53.073 9.834 201307.534 37125.788 3.254 0.759 0.172 9.402 66934.797 4636.277 Instance5488.15 11.132 1.237 52.449 10.017 199556.588 37100.686 3.496 0.734 0.173 9.476 67423.373 4567.092 Instance5488.16 10.621 1.251 53.297 10.299 200080.852 36968.295 3.542 0.731 0.173 9.685 67423.373 4477.333 Instance5488.17 10.811 1.193 52.841 10.281 199710.621 37343.813 3.507 0.751 0.179 9.598 70340.147 4650.981 Instance5488.18 10.607 1.233 53.025 9.953 200816.691 37224.041 3.572 0.765 0.176 9.492 69777.616 4639.481 Instance5488.19 11.134 1.164 52.585 10.215 198502.996 37070.902 3.961 0.724 0.177 9.724 68957.699 4534.791 Instance5488.20 10.845 1.198 52.707 10.174 199606.074 37036.961 3.657 0.714 0.176 9.667 68400.523 4564.235 29

Host System Performance Counter Minimum Maximum % Processor Time 1.511 0.000 2.936 Available MBytes 55973.658 55855.000 56462.000 Free System Page Table Entries 33555414.597 33555187.000 33555795.000 Transition Pages RePurposed/sec 0.000 0.000 0.000 Pool Nonpaged Bytes 92662159.296 92565504.000 93323264.000 Pool Paged Bytes 158935070.055 148660224.000 197251072.000 Page Fault Stalls/sec 0.000 0.000 0.000 Test Log8/24/2012 1:12:53 AM -- Jetstress testing begins... 8/24/2012 1:12:53 AM -- Preparing for testing... 8/24/2012 1:13:13 AM -- Attaching databases... 8/24/2012 1:13:13 AM -- Preparations for testing are complete. 8/24/2012 1:13:13 AM -- Starting transaction dispatch.. 8/24/2012 1:13:13 AM -- cache settings: (minimum: 640.0 MB, maximum: 5.0 GB) 8/24/2012 1:13:13 AM -- flush thresholds: (start: 51.2 MB, stop: 102.4 MB) 8/24/2012 1:13:38 AM -- read latency thresholds: (average: 20 msec/read, maximum: 100 msec/read). 8/24/2012 1:13:38 AM -- Log write latency thresholds: (average: 10 msec/write, maximum: 100 msec/write). 8/24/2012 1:13:55 AM -- Operation mix: Sessions 3, Inserts 40%, Deletes 20%, Replaces 5%, Reads 35%, Lazy Commits 70%. 8/24/2012 1:13:55 AM -- Performance logging started (interval: 15000 ms). 8/24/2012 1:13:55 AM -- Attaining prerequisites: 8/24/2012 1:26:59 AM -- \MSExchange (JetstressWin)\ Cache Size, Last: 4849402000.0 (lower bound: 4831838000.0, upper bound: none) 8/24/2012 3:26:59 AM -- Performance logging has ended. 8/24/2012 3:42:13 AM -- JetInterop batch transaction stats: 7538, 7605, 7392, 7549, 7428, 7479, 7547, 7438, 7351, 7470, 7321, 7469, 7391, 7435, 7534, 7589, 7499, 7434, 7522 and 7530. 8/24/2012 3:42:14 AM -- Dispatching transactions ends. 8/24/2012 3:42:14 AM -- Shutting down databases... 8/24/2012 3:44:35 AM -- Instance5488.1 (complete), Instance5488.2 (complete), Instance5488.3 (complete), Instance5488.4 (complete), Instance5488.5 (complete), Instance5488.6 (complete), Instance5488.7 (complete), Instance5488.8 (complete), Instance5488.9 (complete), Instance5488.10 (complete), Instance5488.11 (complete), Instance5488.12 (complete), Instance5488.13 (complete), Instance5488.14 (complete), Instance5488.15 (complete), Instance5488.16 (complete), Instance5488.17 (complete), Instance5488.18 (complete), Instance5488.19 (complete) and Instance5488.20 (complete) 8/24/2012 3:44:35 AM -- C:\HUS110_PE108_C1B1_SAS7K_ESRP_R5_1GB_mbox_2600 Users\Performance Test\Performance_2012_8_24_1_13_38.blg has 529 samples. 8/24/2012 3:44:35 AM -- Creating test report... 30

8/24/2012 3:44:45 AM -- Instance5488.1 has 15.5 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.1 has 0.7 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.1 has 0.7 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.2 has 10.9 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.2 has 0.8 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.2 has 0.8 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.3 has 10.4 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.3 has 0.8 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.3 has 0.8 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.4 has 11.0 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.4 has 0.7 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.4 has 0.7 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.5 has 10.7 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.5 has 0.7 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.5 has 0.7 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.6 has 10.8 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.6 has 0.8 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.6 has 0.8 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.7 has 11.2 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.7 has 0.8 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.7 has 0.8 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.8 has 11.0 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.8 has 0.8 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.8 has 0.8 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.9 has 10.9 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.9 has 0.8 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.9 has 0.8 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.10 has 10.7 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.10 has 0.7 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.10 has 0.7 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.11 has 10.9 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.11 has 0.7 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.11 has 0.7 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.12 has 10.9 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.12 has 0.7 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.12 has 0.7 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.13 has 10.8 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.13 has 0.8 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.13 has 0.8 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.14 has 10.4 for Reads Latency. 31

8/24/2012 3:44:45 AM -- Instance5488.14 has 0.8 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.14 has 0.8 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.15 has 11.1 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.15 has 0.7 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.15 has 0.7 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.16 has 10.6 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.16 has 0.7 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.16 has 0.7 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.17 has 10.8 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.17 has 0.8 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.17 has 0.8 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.18 has 10.6 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.18 has 0.8 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.18 has 0.8 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.19 has 11.1 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.19 has 0.7 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.19 has 0.7 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.20 has 10.8 for Reads Latency. 8/24/2012 3:44:45 AM -- Instance5488.20 has 0.7 for Log Writes Latency. 8/24/2012 3:44:45 AM -- Instance5488.20 has 0.7 for Log Reads Latency. 8/24/2012 3:44:45 AM -- Test has 0 Maximum Page Fault Stalls/sec. 8/24/2012 3:44:45 AM -- The test has 0 Page Fault Stalls/sec samples higher than 0. 8/24/2012 3:44:45 AM -- C:\HUS110_PE108_C1B1_SAS7K_ESRP_R5_1GB_mbox_2600 Users\Performance Test\Performance_2012_8_24_1_13_38.xml has 476 samples queried. 32

Performance Test Checksums Result: CB10

Checksum Statistics - All

Database | Seen pages | Bad pages | Correctable pages | Wrong page-number pages | File length / seconds taken
C:\dbluns\db1\Jetstress001001.edb | 16404002 | 0 | 0 | 0 | 512625 MB/31178 sec
C:\dbluns\db2\Jetstress002001.edb | 16404002 | 0 | 0 | 0 | 512625 MB/30820 sec
C:\dbluns\db3\Jetstress003001.edb | 16403746 | 0 | 0 | 0 | 512617 MB/31108 sec
C:\dbluns\db4\Jetstress004001.edb | 16403746 | 0 | 0 | 0 | 512617 MB/30793 sec
C:\dbluns\db5\Jetstress005001.edb | 16403746 | 0 | 0 | 0 | 512617 MB/31084 sec
C:\dbluns\db6\Jetstress006001.edb | 16403746 | 0 | 0 | 0 | 512617 MB/30796 sec
C:\dbluns\db7\Jetstress007001.edb | 16403746 | 0 | 0 | 0 | 512617 MB/31139 sec
C:\dbluns\db8\Jetstress008001.edb | 16403746 | 0 | 0 | 0 | 512617 MB/30804 sec
C:\dbluns\db9\Jetstress009001.edb | 16403746 | 0 | 0 | 0 | 512617 MB/31113 sec
C:\dbluns\db10\Jetstress010001.edb | 16403490 | 0 | 0 | 0 | 512609 MB/30789 sec
C:\dbluns\db11\Jetstress011001.edb | 16403490 | 0 | 0 | 0 | 512609 MB/31158 sec
C:\dbluns\db12\Jetstress012001.edb | 16403746 | 0 | 0 | 0 | 512617 MB/30814 sec
C:\dbluns\db13\Jetstress013001.edb | 16403746 | 0 | 0 | 0 | 512617 MB/31117 sec
C:\dbluns\db14\Jetstress014001.edb | 16403746 | 0 | 0 | 0 | 512617 MB/30782 sec
C:\dbluns\db15\Jetstress015001.edb | 16404002 | 0 | 0 | 0 | 512625 MB/31051 sec
C:\dbluns\db16\Jetstress016001.edb | 16404002 | 0 | 0 | 0 | 512625 MB/30742 sec
C:\dbluns\db17\Jetstress017001.edb | 16404002 | 0 | 0 | 0 | 512625 MB/31119 sec
C:\dbluns\db18\Jetstress018001.edb | 16403490 | 0 | 0 | 0 | 512609 MB/30778 sec
C:\dbluns\db19\Jetstress019001.edb | 16403746 | 0 | 0 | 0 | 512617 MB/31095 sec
C:\dbluns\db20\Jetstress020001.edb | 16404002 | 0 | 0 | 0 | 512625 MB/30763 sec
(Sum) | 328075688 | 0 | 0 | 0 | 10252365 MB/31178 sec