Hitachi Unified Storage 110 Dynamically Provisioned 27,200 Mailbox Exchange 2010 Mailbox Resiliency Storage Solution


Tested with: ESRP Storage Version 3.0
Test Date: July-August 2012

Notices and Disclaimer

Copyright 2012 Hitachi Data Systems Corporation. All rights reserved. The performance data contained herein was obtained in a controlled, isolated environment. Actual results obtained in other operating environments may vary significantly. While Hitachi Data Systems Corporation has reviewed each item for accuracy in a specific situation, there is no guarantee that the same results can be obtained elsewhere.

All designs, specifications, statements, information and recommendations (collectively, "designs") in this manual are presented "AS IS," with all faults. Hitachi Data Systems Corporation and its suppliers disclaim all warranties, including without limitation, the warranty of merchantability, fitness for a particular purpose and non-infringement, or arising from a course of dealing, usage or trade practice. In no event shall Hitachi Data Systems Corporation or its suppliers be liable for any indirect, special, consequential or incidental damages, including without limitation, lost profit or loss or damage to data arising out of the use or inability to use the designs, even if Hitachi Data Systems Corporation or its suppliers have been advised of the possibility of such damages.

This document has been reviewed for accuracy as of the date of initial publication. Hitachi Data Systems Corporation may make improvements and/or changes in products and/or programs at any time without notice.

Table of Contents

Overview
Disclaimer
Features
Solution Description
Targeted Customer Profile
Test Deployment
Replication Configuration
Best Practices
    Core Storage
    Storage-based Replication
    Backup Strategy
Test Results Summary
    Reliability
    Storage Performance Results
    Backup and Recovery Performance
Conclusion
Appendix Test Reports
    Performance Test Result: CB10
    Performance Test Checksums Result: CB10
    Stress Test Result: CB10
    Stress Test Checksums Result: CB10
    Backup Test Result: CB10
    Soft Recovery Test Result: CB10
    Soft Recovery Test Performance Result: CB10

Overview

This document provides information on a Microsoft Exchange Server 2010 mailbox resiliency storage solution that uses Hitachi Unified Storage 110 storage systems with Hitachi Dynamic Provisioning. This solution is based on the Microsoft Exchange Solution Reviewed Program (ESRP) Storage program. For more information about the contents of this document or Hitachi Data Systems' best practice recommendations for Microsoft Exchange Server 2010 storage design, see the Hitachi Data Systems Microsoft Exchange Solutions web page.

The ESRP Storage program was developed by Microsoft Corporation to provide a common storage testing framework for vendors to provide information on their storage solutions for Microsoft Exchange Server software. For more information about the Microsoft ESRP Storage program, see TechNet's overview of the program.

Disclaimer

This document has been produced independently of Microsoft Corporation. Microsoft Corporation expressly disclaims responsibility for, and makes no warranty, express or implied, with respect to, the accuracy of the contents of this document.

The information contained in this document represents the current view of Hitachi Data Systems on the issues discussed as of the date of publication. Due to changing market conditions, it should not be interpreted as a commitment on the part of Hitachi Data Systems, and Hitachi Data Systems cannot guarantee the accuracy of any information presented after the date of publication.

Features

The purpose of this testing was to measure ESRP 3.0 results for a Microsoft Exchange 2010 environment with 27,200 users and four servers. This testing used Hitachi Unified Storage 110 with Hitachi Dynamic Provisioning in a two-pool RAID-10 (4D+4D) resiliency configuration (one pool for databases and one for logs). These results help answer questions about the kind of performance capabilities to expect with a large-scale Exchange deployment on Hitachi Unified Storage 110.

Testing used four Hitachi Compute Blade 2000 server blades in a single chassis, each with the following:

- 64 GB of RAM
- Two quad-core Intel Xeon X5690 3.46 GHz CPUs
- Two dual-port 8 Gb/sec Fibre Channel PCIe HBAs (Emulex LPe1205-HI, using two ports per HBA) located in the chassis expansion tray
- Microsoft Windows Server 2008 R2 Enterprise

This solution includes Exchange 2010 Mailbox Resiliency by using the database availability group (DAG) feature. This tested configuration uses four DAGs, each containing twelve database copies and two servers. The test configuration was capable of supporting 27,200 users with a 0.18 IOPS per user profile and a user mailbox size of 3 GB.

A Hitachi Unified Storage 110 with the following was used for these tests:

- 120 2 TB 7.2K RPM SAS disks
- 8 GB of cache
- Eight 8 Gb/sec paths

Hitachi Unified Storage 110 is a highly reliable midrange storage system that can scale to 120 disks while maintaining 99.999% availability. It is suitable for a variety of applications and host platforms and is modular in scale. With the option of in-system and cross-system replication functionality, Hitachi Unified Storage 110 is fully capable of serving as the core underlying storage platform for high-performance Exchange Server 2010 architectures.
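The tested profile follows directly from the design numbers above. The short sketch below (plain Python, added here for illustration, using only figures stated in this document) reproduces the per-server transactional I/O target that appears later in the appendix test summary.

    # Back-of-the-envelope check of the tested I/O profile,
    # using only figures stated in this document.

    users = 27_200
    servers = 4
    design_iops_per_user = 0.15                          # Targeted Customer Profile
    tested_iops_per_user = design_iops_per_user * 1.2    # +20% headroom -> 0.18

    total_target_iops = users * tested_iops_per_user
    per_server_target_iops = total_target_iops / servers

    print(round(tested_iops_per_user, 2))    # 0.18, the tested profile
    print(round(total_target_iops))          # 4896 transactional IOPS overall
    print(round(per_server_target_iops))     # 1224, the Jetstress target in the appendix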

Solution Description

Deploying Microsoft Exchange Server 2010 requires careful consideration of all aspects of the solution architecture. Host servers need to be configured so that they are robust enough to handle the required Exchange load. The storage solution must be designed to provide the necessary performance while also being reliable and easy to administer. Of course, an effective backup and recovery plan should be incorporated into the solution as well. The aim of this solution report is to provide a tested configuration that uses Hitachi Unified Storage 110 to meet the needs of a large Exchange Server deployment.

This solution uses Hitachi Dynamic Provisioning, which is enabled on Hitachi Unified Storage 110 via a license key. In the most basic sense, Hitachi Dynamic Provisioning is similar to a host-based logical volume manager (LVM), but with several additional features available within Hitachi Unified Storage 110 and without the need to install software on the host or incur host processing overhead. Hitachi Dynamic Provisioning goes further by providing one or more pools of wide striping across many RAID groups within Hitachi Unified Storage 110. One or more dynamic provisioning virtual volumes (DP-VOLs) of a user-specified logical size (with no initial physical space allocated) are created and associated with a single pool.

Primarily, Hitachi Dynamic Provisioning is deployed to avoid the routine issue of hot spots that occur on logical units (LUs) from individual RAID groups when the host workload exceeds the IOPS or throughput capacity of that RAID group. By using many RAID groups as members of a striped Hitachi Dynamic Provisioning pool underneath the virtual or logical volumes seen by the hosts, a host workload is distributed across many RAID groups, which provides a smoothing effect that dramatically reduces hot spots and results in fewer mailbox moves for the Exchange administrator.

Hitachi Dynamic Provisioning also carries the side benefit of thin provisioning: physical space is only assigned from the pool to the DP-VOL as needed, in 1 GB chunks per RAID group, up to the logical volume size specified for each DP-VOL. Space from a 1 GB chunk is then allocated as needed, as 32 MB pool pages, to that DP-VOL's logical block address range. A pool can also be dynamically expanded by adding more RAID groups without disruption or downtime. Upon expansion, a pool can be rebalanced easily so that the data and workload are wide striped evenly across the current and newly added RAID groups that make up the pool.
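To make the two-level allocation concrete, here is a toy model of the concept, added purely for illustration and not a representation of Hitachi firmware behavior: a DP-VOL has a logical size but consumes pool space only as data is written, first claiming a 1 GB chunk from the pool and then carving it into 32 MB pages.

    # Toy model of thin provisioning as described above: pool space is
    # claimed in 1 GB chunks; pages of 32 MB are handed to the DP-VOL's
    # logical block address range. Illustration only, not Hitachi firmware.

    CHUNK_MB = 1024
    PAGE_MB = 32

    class DPVol:
        def __init__(self, logical_gb):
            self.logical_mb = logical_gb * 1024
            self.pages_allocated = 0
            self.chunks_claimed = 0

        def write(self, offset_mb):
            """Simulate a write that touches a previously untouched page."""
            assert offset_mb < self.logical_mb, "write beyond logical size"
            self.pages_allocated += 1
            pages_per_chunk = CHUNK_MB // PAGE_MB        # 32 pages per 1 GB chunk
            if self.pages_allocated % pages_per_chunk == 1:
                self.chunks_claimed += 1                 # claim another 1 GB chunk

    vol = DPVol(logical_gb=1900)      # logical size, no physical space yet
    for i in range(40):               # touch 40 new pages (40 x 32 MB written)
        vol.write(offset_mb=i * PAGE_MB)
    print(vol.chunks_claimed)         # 2 -> only 2 GB of pool space consumed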
High availability is also a part of this solution through the use of database availability groups (DAGs), the base component of the high availability and site resilience framework built into Microsoft Exchange Server 2010. A DAG is a group of up to 16 mailbox servers that host a set of databases and logs and use continuous replication to provide automatic database-level recovery from failures that affect individual servers or databases. Any server in a DAG can host a copy of a mailbox database from any other server in the DAG. When a server is added to a DAG, it monitors and works with the other servers in the DAG to provide automatic recovery, delivering a robust, highly available Exchange solution without the administrative complexities of traditional failover clustering. For more information about the DAG feature in Exchange Server 2010, see http://technet.microsoft.com/en-us/library/dd979799.aspx.

This solution includes two copies of each Exchange database using four DAGs, with each DAG configured with two servers (one simulated) that host active mailboxes in twelve databases. To target the 27,200-user resiliency solution, a Hitachi Unified Storage 110 storage system was configured with 120 disks (the maximum for this system). Four servers (one per DAG) were used, with each server configured with 6,800 mailboxes. There were 12 active databases per server plus the simulated database copies for the tests.

Each DAG contained two copies of the databases hosted by that DAG:

- A local, active copy on a server connected to the primary Hitachi Unified Storage 110
- A passive copy (simulated) on another server connected to a second Hitachi Unified Storage 110 (simulated)

This recommended configuration can support both high-availability and disaster-recovery scenarios when the active and passive database copies are allocated among both DAG members and dispersed across both storage systems. Each simulated DAG server node in this solution maintains a mirrored configuration and possesses adequate capacity and performance capabilities to support the second set of replicated databases. Figure 1 illustrates the two systems that make up the simulated DAG configuration. For more information, see the Hitachi Data Systems Storage Systems web page.

Figure 1

This solution enables organizations to consolidate Exchange Server 2010 DAG deployments on two Hitachi Unified Storage 110 storage systems. Using identical hardware and software configurations guarantees that an active database and its replicated copy do not share storage paths, disk spindles or storage controllers, making this a reliable, high-performing, highly available Exchange Server 2010 solution that is cost effective and easy to manage. It also helps ensure that storage-related performance and service levels are maintained regardless of which server is hosting the active database. If further protection is needed in a production environment, additional Exchange Server 2010 mailbox servers can be easily added to support these failover scenarios.

Table 1 illustrates how the disks in Hitachi Unified Storage 110 were organized into RAID groups for use by databases or logs. Each set of colored disks in the original layout represents a RAID-10 (4D+4D) group. There were 120 2 TB 7.2K RPM SAS disks used in these tests, configured as 15 RAID groups: 13 for the Exchange databases and two for the logs.

Table 1. Hitachi Unified Storage 110 RAID Groups by Tray Layout (HUS 110, 2 TB 7.2K SAS, RAID-10 (4+4) HDP pool disk layout; each tray holds 24 disks in slots 0 through 23, and each RAID group spans eight slots)

Unit 4: RG37 | RG36 | RG35
Unit 3: RG40 | RG39 | RG38
Unit 2: RG43 | RG42 | RG41
Unit 1: RG46 | RG45 | RG44
Unit 0: RG49 | RG48 | RG47

Disk trays 0 through 4 each held 24 7.2K RPM SAS disks. Two dynamic provisioning pools were created, one for the databases and the other for the logs. The database pool was created from 13 RAID-10 (4D+4D) RAID groups and the log pool was created from two RAID-10 (4D+4D) groups. From the database pool, 48 DP-VOLs (each specified to have a 1,900 GB size limit) were created for 48 databases (twelve per server). From the log pool, 48 DP-VOLs (each specified to have a size limit of 190 GB) were created for 48 logs (twelve per server).
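Given 48 database DP-VOLs and 48 log DP-VOLs with the size limits above, the logical sizes can be compared against the formatted pool capacities reported later in Tables 9 and 10. A quick check (all values from this document) shows the database pool backs its DP-VOLs almost in full, while the log DP-VOLs are thinly provisioned:

    # Compare DP-VOL logical sizes against formatted pool capacities
    # (inputs appear in the Solution Description and Tables 9 and 10).

    db_vols, db_vol_gb = 48, 1_900
    log_vols, log_vol_gb = 48, 190
    db_pool_formatted_gb = 92_040     # Table 9
    log_pool_formatted_gb = 7_080     # Table 10

    print(db_vols * db_vol_gb, "GB logical vs", db_pool_formatted_gb, "GB formatted")
    # 91,200 GB logical -> nearly fully backed by physical capacity
    print(log_vols * log_vol_gb, "GB logical vs", log_pool_formatted_gb, "GB formatted")
    # 9,120 GB logical -> overprovisioned; thin allocation covers the logs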

Table 2 outlines the port layout for the primary storage system and servers. An identical configuration would be deployed on the replicated storage and servers for this solution.

Table 2. Hitachi Unified Storage 110 Ports to Server Layout (server mapping configuration)

Server | Primary paths | Secondary paths
CB10 | 0A, 0C | 1A, 1C
CB11 | 1A, 1C | 0A, 0C
CB12 | 0B, 0D | 1B, 1D
CB13 | 1B, 1D | 0B, 0D

Table 3 outlines the port layout with the database DP-VOL assignments for the primary storage and servers. An identical configuration would be deployed on the replicated storage and servers for this solution.

Table 3. Hitachi Unified Storage 110 Ports to Database DP-VOL Layout

Port | Databases | DB DP-VOLs
0A | DBs 1-6 | 0-5
1A | DBs 13-18 | 12-17
0B | DBs 25-30 | 24-29
1B | DBs 37-42 | 36-41
0C | DBs 7-12 | 6-11
1C | DBs 19-24 | 18-23
0D | DBs 31-36 | 30-35
1D | DBs 43-48 | 42-47

Table 4 outlines the port layout with the log DP-VOL assignments for the primary storage system and servers. An identical configuration would be deployed on the replicated storage system and servers for this solution.

Table 4. Hitachi Unified Storage 110 Ports to Log DP-VOL Layout

Port | Logs | Log DP-VOLs
0A | Logs 49-54 | 48-53
1A | Logs 61-66 | 60-65
0B | Logs 73-78 | 72-77
1B | Logs 85-90 | 84-89
0C | Logs 55-60 | 54-59
1C | Logs 67-72 | 66-71
0D | Logs 79-84 | 78-83
1D | Logs 91-96 | 90-95

Table 5 provides the detailed specifications for the storage configuration, which uses RAID-10 (4D+4D) RAID groups and 2 TB 7.2K RPM disks. Dynamic Provisioning Pool 1 is dedicated to the databases and Dynamic Provisioning Pool 0 is dedicated to the logs.

Table 5. Hitachi Unified Storage 110 Configuration Details

Host | Pool | Ports | DP-VOLs | Size (GB) | RAID Level | Description
CB10 | 1 | 0A/1A | 0-5 | 1900 | RAID-10 | DBs 1-6
CB10 | 1 | 0C/1C | 6-11 | 1900 | RAID-10 | DBs 7-12
CB11 | 1 | 1A/0A | 12-17 | 1900 | RAID-10 | DBs 13-18
CB11 | 1 | 1C/0C | 18-23 | 1900 | RAID-10 | DBs 19-24
CB12 | 1 | 0B/1B | 24-29 | 1900 | RAID-10 | DBs 25-30
CB12 | 1 | 0D/1D | 30-35 | 1900 | RAID-10 | DBs 31-36
CB13 | 1 | 1B/0B | 36-41 | 1900 | RAID-10 | DBs 37-42
CB13 | 1 | 1D/0D | 42-47 | 1900 | RAID-10 | DBs 43-48
CB10 | 0 | 0A/1A | 48-53 | 190 | RAID-10 | Logs 49-54
CB10 | 0 | 0C/1C | 54-59 | 190 | RAID-10 | Logs 55-60
CB11 | 0 | 1A/0A | 60-65 | 190 | RAID-10 | Logs 61-66
CB11 | 0 | 1C/0C | 66-71 | 190 | RAID-10 | Logs 67-72
CB12 | 0 | 0B/1B | 72-77 | 190 | RAID-10 | Logs 73-78
CB12 | 0 | 0D/1D | 78-83 | 190 | RAID-10 | Logs 79-84
CB13 | 0 | 1B/0B | 84-89 | 190 | RAID-10 | Logs 85-90
CB13 | 0 | 1D/0D | 90-95 | 190 | RAID-10 | Logs 91-96

The ESRP Storage program focuses on storage solution testing to address performance and reliability issues with storage design. However, storage is not the only factor to take into consideration when designing a scale-up Exchange solution. These factors also affect server scalability:

- Server processor utilization
- Server physical and virtual memory limitations
- Resource requirements for other applications
- Directory and network service latencies
- Network infrastructure limitations
- Replication and recovery requirements
- Client usage profiles

These factors are all beyond the scope of the ESRP Storage program. Therefore, the number of mailboxes hosted per server as part of the tested configuration might not necessarily be viable for some customer deployments. For more information about identifying and addressing performance bottlenecks in an Exchange system, see Microsoft's Troubleshooting Microsoft Exchange Server Performance.

Targeted Customer Profile

This solution is designed for medium to large organizations that plan to consolidate their Exchange Server 2010 storage on high-performance, high-reliability storage systems. This configuration is designed to support 27,200 Exchange users with the following specifications:

- Eight Exchange servers (four tested, four simulated for the database copies)
- Four database availability groups (DAGs), each with two servers (one simulated) and two copies per database
- Two Hitachi Unified Storage 110 systems (one tested)
- 0.15 IOPS per user (0.18 tested, allowing for 20 percent growth)
- 3 GB mailbox size
- Mailbox resiliency provides high availability and is used as the primary data protection mechanism
- Hitachi Unified Storage RAID protection against physical failure or loss
- 24x7 background database maintenance enabled

Test Deployment

The following tables summarize the testing environment.

Table 6. Simulated Exchange Configuration

Number of Exchange mailboxes simulated: 27,200
Number of database availability groups (DAGs): 4
Number of servers per DAG: 2
Number of active mailboxes per server: 6,800
Number of databases per host: 12
Number of copies per database: 2
Number of mailboxes per database: 566
Simulated profile: I/Os per second per mailbox (IOPS, including 20% headroom): 0.18
Database LU size: 1,900 GB
Log LU size: 190 GB
Total database size for performance testing: 81,600 GB
Percentage of storage capacity used by Exchange database**: 88.7%

**Storage performance characteristics change based on the percentage utilization of the individual disks. Tests that use a small percentage of the storage (~25%) might exhibit reduced throughput if the storage capacity utilization is significantly increased beyond what was tested for this paper.

Table 7. Storage Hardware

Storage connectivity (Fibre Channel, SAS, SATA, iSCSI): Fibre Channel
Storage model and OS/firmware revision: 1 x Hitachi Unified Storage 110, firmware 0920/A-W (WHQL listing: Hitachi Unified Storage 110)
Storage cache: 8 GB
Number of storage controllers: 2
Number of storage ports: 8
Maximum bandwidth of storage connectivity to host: 64 Gb/sec (8 x 8 Gb/sec ports)
Switch type/model/firmware revision: Brocade 5300, Fabric OS v7.0.1b
HBA model and firmware: Emulex LPe1205-HI, firmware 1.11X14
Number of HBAs per host: 2 dual-ported HBAs per host, two 8 Gb/sec ports used per HBA

Host server type: Hitachi Compute Blade E55A2, 2 x 3.46 GHz Intel Xeon processors, 32 GB memory
Total number of disks tested in solution: 120
Maximum number of spindles that can be hosted in the storage: 120

Table 8. Storage Software

HBA driver: Storport Miniport 7.2.20.006
HBA QueueTarget setting: 0
HBA QueueDepth setting: 32
Multipathing: Hitachi Dynamic Link Manager v7.2.1-00
Host OS: Microsoft Windows Server 2008 R2 Enterprise
ESE.dll file version: 14.01.0225.017
Replication solution name/version: N/A

Table 9. Storage Disk Configuration (Mailbox Store Disks)

Disk type, speed and firmware revision: SAS disk, 2 TB 7.2K, firmware 5C0C
Raw capacity per disk: 2 TB
Number of physical disks in test: 104 (dynamic provisioning pool)
Total raw storage capacity (GB): 208,000
Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-10 (4D+4D) at storage level
Total formatted capacity: 92,040 GB
Storage capacity utilization: 44.3%
Database capacity utilization: 43.8%

Table 10. Storage Disk Configuration (Transaction Log Disks)

Disk type, speed and firmware revision: SAS disk, 2 TB 7.2K, firmware 5C0C
Raw capacity per disk: 2 TB
Number of spindles in test: 8 (dynamic provisioning pool)
Total raw storage capacity (GB): 16,000

Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-10 (4D+4D) at storage level
Total formatted capacity: 7,080 GB

Replication Configuration

Table 11. Replication Configuration

Replication mechanism: Exchange Server 2010 Database Availability Group (DAG)
Number of links: 2
Simulated link distance: N/A
Link type: IP
Link bandwidth: GigE (1 Gb/sec)

Table 12. Replicated Storage Hardware

Storage connectivity (Fibre Channel, SAS, SATA, iSCSI): Fibre Channel
Storage model and OS/firmware revision: 1 x Hitachi Unified Storage 110, firmware 0920/A-W (WHQL listing: Hitachi Unified Storage 110)
Storage cache: 8 GB
Number of storage controllers: 2
Number of storage ports: 8
Maximum bandwidth of storage connectivity to host: 64 Gb/sec (8 x 8 Gb/sec ports)
Switch type/model/firmware revision: Brocade 5300, Fabric OS v7.0.1b
HBA model and firmware: Emulex LPe1205-HI, firmware 1.11X14
Number of HBAs per host: 2 dual-ported HBAs per host, two 8 Gb/sec ports used per HBA
Host server type: Hitachi Compute Blade E55A2, 2 x 3.46 GHz Intel Xeon processors, 32 GB memory
Total number of disks tested in solution: 120
Maximum number of spindles that can be hosted in the storage: 120

Table 13. Replicated Storage Software

HBA driver: Storport Miniport 7.2.20.006
HBA QueueTarget setting: 0
HBA QueueDepth setting: 32
Multipathing: Hitachi Dynamic Link Manager v7.2.1-00
Host OS: Microsoft Windows Server 2008 R2 Enterprise
ESE.dll file version: 14.01.0225.017
Replication solution name/version: N/A

Table 14. Replicated Storage Disk Configuration (Mailbox Store Disks)

Disk type, speed and firmware revision: SAS disk, 2 TB 7.2K, firmware 5C0C
Raw capacity per disk: 2 TB
Number of physical disks in test: 104 (dynamic provisioning pool)
Total raw storage capacity (GB): 208,000
Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-10 (4D+4D) at storage level
Total formatted capacity: 92,040 GB
Storage capacity utilization: 44.3%
Database capacity utilization: 43.8%

Table 15. Replicated Storage Disk Configuration (Transaction Log Disks)

Disk type, speed and firmware revision: SAS disk, 2 TB 7.2K, firmware 5C0C
Raw capacity per disk: 2 TB
Number of spindles in test: 8 (dynamic provisioning pool)
Total raw storage capacity (GB): 16,000
Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-10 (4D+4D) at storage level
Total formatted capacity: 7,080 GB
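Several figures in Tables 6 and 9 can be reproduced from one another. A short consistency check, using only numbers that appear in this document:

    # Consistency check across Tables 6 and 9.

    mailboxes, mailbox_gb = 27_200, 3
    dags, dbs_per_server = 4, 12
    db_disks, raw_gb_per_disk = 104, 2_000
    formatted_db_pool_gb = 92_040

    print(mailboxes * mailbox_gb)                # 81,600 GB total DB size (Table 6)
    print(mailboxes // dags // dbs_per_server)   # 566 mailboxes per database (Table 6)
    print(db_disks * raw_gb_per_disk)            # 208,000 GB raw capacity (Table 9)
    print(f"{formatted_db_pool_gb / (db_disks * raw_gb_per_disk):.2%}")
    # 44.25%, reported as 44.3% storage capacity utilization in Table 9
    print(f"{mailboxes * mailbox_gb / formatted_db_pool_gb:.1%}")
    # 88.7% of formatted capacity used by the Exchange database (Table 6)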

Best Practices

Microsoft Exchange Server 2010 is a disk-intensive application. It presents two distinct workload patterns to the storage: 32 KB random read/write operations to the databases, and sequential write operations of varying size (from 512 bytes up to the log buffer size) to the transaction logs. For this reason, designing an optimal storage configuration can prove challenging in practice. Based on the testing run using the ESRP framework, Hitachi Data Systems recommends these best practices to improve the performance of Hitachi Unified Storage 110 running Exchange 2010. For more information about Exchange 2010 best practices for storage design, see the Microsoft TechNet article Mailbox Server Storage Design.

Core Storage

1. When formatting a newly partitioned LU, Hitachi Data Systems recommends setting the allocation unit size to 64 KB for the database files and 4 KB for the log files.
2. Disk alignment is no longer required when using Microsoft Windows Server 2008.
3. Keep the Exchange workload isolated from other applications. Mixing in another intensive application whose workload differs from Exchange can cause the performance of both applications to degrade.
4. Use Hitachi Dynamic Link Manager multipathing software to provide fault tolerance and high availability for host connectivity.
5. Use Hitachi Dynamic Provisioning to simplify storage management of the Exchange database and log volumes.
6. Due to the difference in I/O patterns, isolate the Exchange databases from the logs. Create a dedicated Hitachi Dynamic Provisioning pool for the databases and a separate pool for the logs.
7. Hitachi Data Systems recommends RAID-5 or RAID-1+0 groups for both the database pools and the log pool. Use of RAID-10 allows more writes at a lower response time under heavier loads. RAID-10 also supports a shorter RAID group rebuild time after a disk failure.
8. The log LUs should be at least 10 percent of the size of the database LUs.
9. Hitachi Data Systems does not recommend using LU concatenation.
10. Hitachi Data Systems recommends implementing Mailbox Resiliency using the Exchange Server 2010 Database Availability Group feature.
11. Ensure that each DAG maintains at least two database copies to provide high availability.
12. Isolate active databases and their replicated copies in separate dynamic provisioning pools, or ensure that they are located on a separate Hitachi Unified Storage 110.
13. Use fewer, larger LUs for Exchange 2010 databases (up to 2 TB) with Background Database Maintenance (24x7) enabled.
14. Size storage solutions for Exchange based primarily on performance criteria. The number of disks, the RAID level and the percent utilization of each disk directly affect the level of achievable performance. Factor in capacity requirements only after performance is addressed.

15. Disk size is unrelated to performance in terms of IOPS or throughput rates. Disk size is related to the usable capacity of all of the LUs from a RAID group, which is a choice users make.
16. The number of spindles, coupled with the RAID level, determines the physical IOPS capacity of the RAID group and all of its LUs. If a RAID group has too few spindles, response times grow very quickly.

Storage-based Replication

N/A

Backup Strategy

N/A
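To illustrate best practice 16, the spindle count and RAID level can be turned into a rough back-end IOPS budget. The sketch below uses the aggregate host rates measured later in Table 20 and the usual RAID-10 write penalty of 2; the roughly 80 IOPS-per-spindle figure for 7.2K disks is a common rule-of-thumb assumption, not a number from this document.

    # Rough back-end IOPS budget for the DB pool (best practice 16).
    # Host rates are from Table 20; RAID-10 turns each host write into
    # two disk writes. ~80 IOPS per 7.2K spindle is a rule of thumb.

    host_reads_per_sec = 2986.59     # Table 20, database disk reads/sec
    host_writes_per_sec = 2548.37    # Table 20, database disk writes/sec
    raid10_write_penalty = 2
    db_pool_spindles = 104           # Table 9

    backend_iops = host_reads_per_sec + raid10_write_penalty * host_writes_per_sec
    print(f"Back-end IOPS: {backend_iops:.0f}")                     # ~8083
    print(f"Per spindle:   {backend_iops / db_pool_spindles:.0f}")  # ~78, near a 7.2K disk's practical limit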

Test Results Summary

This section provides a high-level summary of the test data from ESRP and links to the detailed HTML reports that are generated by the ESRP testing framework.

Reliability

A number of tests in the framework check reliability, spanning a 24-hour window. The goal is to verify that the storage can handle a high load for a long period of time. Following these stress tests, both log and database files are analyzed for integrity to ensure that no database or log corruption occurs.

- No errors were reported in the event log file for the storage reliability testing.
- No errors were reported for the database and log checksum process.
- If done, no errors were reported during the backup-to-disk test process.
- No errors were reported for the database checksum on the remote storage database.

Storage Performance Results

Primary storage performance testing exercises the storage with the maximum sustainable Exchange type of I/O for two hours. The test shows how long it takes for the storage to respond to an I/O under load. The following data is the sum of all of the logical disk I/Os and the average of all the logical disk latencies during the two-hour test duration.

Individual Server Metrics

These individual server metrics show the sum of the I/Os across the storage groups and the average latency across all storage groups on a per-server basis.

Table 16. Individual Server Metrics for Exchange Server (CB10)

Database I/O
Database Disk Transfers Per Second: 1383
Database Disk Reads Per Second: 747
Database Disk Writes Per Second: 637
Database Disk Read Latency (ms): 14.4
Database Disk Write Latency (ms): 2.2

Transaction Log I/O
Log Disk Writes Per Second: 539
Log Disk Write Latency (ms): 0.7

Table 17. Individual Server Metrics for Exchange Server (CB11)

Database I/O
Database Disk Transfers Per Second: 1379
Database Disk Reads Per Second: 744
Database Disk Writes Per Second: 635
Database Disk Read Latency (ms): 14.4
Database Disk Write Latency (ms): 2.2

Transaction Log I/O
Log Disk Writes Per Second: 536
Log Disk Write Latency (ms): 0.7

Table 18. Individual Server Metrics for Exchange Server (CB12)

Database I/O
Database Disk Transfers Per Second: 1384
Database Disk Reads Per Second: 747
Database Disk Writes Per Second: 637
Database Disk Read Latency (ms): 14.4
Database Disk Write Latency (ms): 2.2

Transaction Log I/O
Log Disk Writes Per Second: 539
Log Disk Write Latency (ms): 0.7

Table 19. Individual Server Metrics for Exchange Server (CB13)

Database I/O
Database Disk Transfers Per Second: 1389
Database Disk Reads Per Second: 749
Database Disk Writes Per Second: 639
Database Disk Read Latency (ms): 14.3
Database Disk Write Latency (ms): 2.1

Transaction Log I/O
Log Disk Writes Per Second: 540
Log Disk Write Latency (ms): 0.7

Aggregate Performance Across All Servers

The aggregate performance across all servers shows the sum of I/Os across all servers in the solution and the average latency across all servers in the solution.

Table 20. Aggregate Performance for Exchange Server 2010

Database I/O
Database Disk Transfers Per Second: 5534.96
Database Disk Reads Per Second: 2986.59
Database Disk Writes Per Second: 2548.37
Database Disk Read Latency (ms): 14.37
Database Disk Write Latency (ms): 2.16

Transaction Log I/O
Log Disk Writes Per Second: 2152.80
Log Disk Write Latency (ms): 0.68

Backup and Recovery Performance

This section covers two tests: the first measures the sequential read rate of the database files, and the second measures recovery/replay performance (playing transaction logs into the database).

Database Read-only Performance

This test measures the maximum rate at which databases can be backed up via VSS. The following table shows the average rate for a single database file.

Table 21. Database Read-only Performance

MB Read Per Second Per Database: 57.40
MB Read Per Second Total Per Server: 688.77

Transaction Log Recovery/Replay Performance

This test measures the maximum rate at which the log files can be played against the databases. The following table shows the average rate for 500 log files played in a single storage group. Each log file is 1 MB in size.

Table 22. Transaction Log Recovery/Replay Performance

Time to Play One Log File (sec): 2.89226
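A few of the figures above can be cross-checked against each other. A minimal sketch using only values from Tables 16 through 21 and the per-server target derived earlier:

    # Cross-checks on the performance summary, using values
    # from Tables 16-21 of this document.

    per_server_transfers = [1383, 1379, 1384, 1389]       # Tables 16-19
    print(sum(per_server_transfers))                      # 5535 ~ 5534.96 (Table 20)

    target_per_server = 1224                              # 6,800 users x 0.18 IOPS
    print(min(per_server_transfers) > target_per_server)  # True: every server beat its target

    mb_per_db, dbs_per_server = 57.40, 12                 # Table 21, Table 6
    print(f"{mb_per_db * dbs_per_server:.2f}")            # 688.80 ~ 688.77 MB/s per server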

Conclusion

This document details a tested, robust Exchange Server 2010 Mailbox Resiliency solution capable of supporting 27,200 users with a 0.18 IOPS per user profile and a user mailbox size of 3 GB, using four DAGs, each configured with two server nodes (one simulated). A Hitachi Unified Storage 110 storage system with 8 GB of cache and four 8 Gb/sec Fibre Channel host paths per server, using Hitachi Dynamic Provisioning (with two pools) and 120 2 TB 7.2K RPM SAS disks in a RAID-10 (4D+4D) configuration, was used for these tests.

Testing confirmed that Hitachi Unified Storage 110 is more than capable of delivering the IOPS and capacity requirements needed to support the active and replicated databases for 27,200 Exchange mailboxes configured with the specified user profile, while maintaining additional headroom to support peak throughput.

The solution outlined in this document does not include data protection components, such as VSS snapshot or clone backups, and relies on the built-in Mailbox Resiliency features of Exchange Server 2010, coupled with Hitachi Unified Storage RAID technology, to provide high availability and protection from logical and physical failures. Additional protection requirements may affect the performance and capacity requirements of the underlying storage configuration, and need to be factored into the storage design accordingly. For more information about planning Exchange Server 2010 storage architectures for the Hitachi Unified Storage family, see http://www.hds.com/

This document was developed by Hitachi Data Systems and reviewed by the Microsoft Exchange product team. The test results and data presented in this document are based on the tests introduced in the ESRP test framework. Do not quote the data directly for pre-deployment verification. It is still necessary to validate the storage design for a specific customer environment. The ESRP program is not designed to be a benchmarking program; tests do not generate the maximum throughput for a given solution. Rather, it is focused on producing recommendations from vendors for the Exchange application. Thus, do not use the data presented in this document for direct comparisons among solutions.

Appendix Test Reports

This appendix contains Jetstress test results for one of the servers used in testing this storage solution. These test results are representative of the results obtained for all of the servers tested.

Performance Test Result: CB10

Test Summary

Overall Test Result: Pass
Machine Name: CB10
Test Description:
Test Start Time: 7/30/2012 10:19:28 PM
Test End Time: 7/31/2012 12:23:23 AM
Collection Start Time: 7/30/2012 10:23:11 PM
Collection End Time: 7/31/2012 12:22:56 AM
Jetstress Version: 14.01.0225.017
ESE Version: 14.01.0218.012
Operating System: Windows Server 2008 R2 Enterprise Service Pack 1 (6.1.7601.65536)
Performance Log: C:\HUS110_PE108_C1B1_SAS7K_ESRP_R10_3GB_mbox_2000 Users\Performance Test\Performance_2012_7_30_22_19_54.blg

Database Sizing and Throughput

Achieved Transactional I/O per Second: 1383.18
Target Transactional I/O per Second: 1224
Initial Database Size (bytes): 3587896508416
Final Database Size (bytes): 3592980004864
Database Files (Count): 12

Jetstress System Parameters

Thread Count: 5 (per database)
Minimum Database Cache: 384.0 MB
Maximum Database Cache: 3072.0 MB
Insert Operations: 40%
Delete Operations: 20%
Replace Operations: 5%
Read Operations: 35%
Lazy Commits: 70%
Run Background Database Maintenance: True
Number of Copies per Database: 2

Database Configuration

Instance1944.1: Log path: C:\logluns\0A_P1_log1; Database: C:\dbluns\0A_P1_db1\Jetstress001001.edb
Instance1944.2: Log path: C:\logluns\0A_P1_log2; Database: C:\dbluns\0A_P1_db2\Jetstress002001.edb
Instance1944.3: Log path: C:\logluns\0A_P1_log3; Database: C:\dbluns\0A_P1_db3\Jetstress003001.edb
Instance1944.4: Log path: C:\logluns\0A_P1_log4; Database: C:\dbluns\0A_P1_db4\Jetstress004001.edb
Instance1944.5: Log path: C:\logluns\0A_P1_log5; Database: C:\dbluns\0A_P1_db5\Jetstress005001.edb
Instance1944.6: Log path: C:\logluns\0A_P1_log6; Database: C:\dbluns\0A_P1_db6\Jetstress006001.edb
Instance1944.7: Log path: C:\logluns\0C_P2_log7; Database: C:\dbluns\0C_P2_db7\Jetstress007001.edb
Instance1944.8: Log path: C:\logluns\0C_P2_log8; Database: C:\dbluns\0C_P2_db8\Jetstress008001.edb
Instance1944.9: Log path: C:\logluns\0C_P2_log9; Database: C:\dbluns\0C_P2_db9\Jetstress009001.edb
Instance1944.10: Log path: C:\logluns\0C_P2_log10; Database: C:\dbluns\0C_P2_db10\Jetstress010001.edb
Instance1944.11: Log path: C:\logluns\0C_P2_log11; Database: C:\dbluns\0C_P2_db11\Jetstress011001.edb
Instance1944.12: Log path: C:\logluns\0C_P2_log12; Database: C:\dbluns\0C_P2_db12\Jetstress012001.edb

Transactional I/O Performance

Database I/O, per instance: Reads Average Latency (msec) | Writes Average Latency (msec) | Reads/sec | Writes/sec | Reads Average Bytes | Writes Average Bytes

Instance1944.1 | 16.039 | 2.430 | 62.055 | 52.961 | 34336.632 | 35309.166
Instance1944.2 | 14.161 | 2.427 | 61.890 | 52.786 | 34722.348 | 35291.201
Instance1944.3 | 14.480 | 2.285 | 62.333 | 53.245 | 34642.300 | 35310.352
Instance1944.4 | 14.370 | 2.289 | 62.296 | 53.059 | 34426.571 | 35273.899
Instance1944.5 | 14.253 | 2.067 | 62.619 | 53.460 | 34601.419 | 35247.478
Instance1944.6 | 14.286 | 2.107 | 62.411 | 53.165 | 34563.840 | 35259.300
Instance1944.7 | 14.240 | 2.146 | 62.270 | 53.184 | 34565.377 | 35281.795
Instance1944.8 | 14.227 | 2.204 | 61.653 | 52.402 | 34729.236 | 35278.841
Instance1944.9 | 14.337 | 2.067 | 62.152 | 52.977 | 34615.130 | 35289.105
Instance1944.10 | 14.108 | 2.110 | 61.948 | 52.821 | 34492.240 | 35304.047
Instance1944.11 | 14.226 | 2.017 | 62.599 | 53.268 | 34703.693 | 35235.098
Instance1944.12 | 14.236 | 2.092 | 62.390 | 53.235 | 34484.492 | 35268.348

Log I/O, per instance: Reads Average Latency (msec) | Writes Average Latency (msec) | Reads/sec | Writes/sec | Reads Average Bytes | Writes Average Bytes

Instance1944.1 | 0.000 | 0.669 | 0.000 | 44.994 | 0.000 | 4554.234
Instance1944.2 | 0.000 | 0.689 | 0.000 | 44.606 | 0.000 | 4547.224
Instance1944.3 | 0.000 | 0.676 | 0.000 | 44.869 | 0.000 | 4537.232
Instance1944.4 | 0.000 | 0.698 | 0.000 | 44.429 | 0.000 | 4565.172
Instance1944.5 | 0.000 | 0.670 | 0.000 | 44.672 | 0.000 | 4519.856
Instance1944.6 | 0.000 | 0.695 | 0.000 | 45.074 | 0.000 | 4519.047
Instance1944.7 | 0.000 | 0.664 | 0.000 | 44.869 | 0.000 | 4558.098
Instance1944.8 | 0.000 | 0.682 | 0.000 | 44.590 | 0.000 | 4555.640
Instance1944.9 | 0.000 | 0.664 | 0.000 | 44.890 | 0.000 | 4571.140
Instance1944.10 | 0.000 | 0.684 | 0.000 | 44.993 | 0.000 | 4547.479
Instance1944.11 | 0.000 | 0.663 | 0.000 | 45.263 | 0.000 | 4507.158
Instance1944.12 | 0.000 | 0.679 | 0.000 | 45.259 | 0.000 | 4556.336

Background Database Maintenance I/O Performance

MSExchange Database ==> Instances | DB Maintenance IO Reads/sec | DB Maintenance IO Reads Average Bytes

Instance1944.1 | 36.570 | 260891.160
Instance1944.2 | 38.253 | 260855.065
Instance1944.3 | 37.432 | 260898.239
Instance1944.4 | 37.755 | 260879.283
Instance1944.5 | 38.055 | 260839.319
Instance1944.6 | 38.094 | 260813.731
Instance1944.7 | 38.025 | 260874.364
Instance1944.8 | 37.869 | 260853.701
Instance1944.9 | 37.569 | 260770.036
Instance1944.10 | 38.251 | 260871.970
Instance1944.11 | 37.961 | 260862.114
Instance1944.12 | 37.967 | 260815.617

Log Replication I/O Performance

MSExchange Database ==> Instances | I/O Log Reads/sec | I/O Log Reads Average Bytes

Instance1944.1 | 0.828 | 230576.814
Instance1944.2 | 0.820 | 231473.961
Instance1944.3 | 0.823 | 232078.662
Instance1944.4 | 0.820 | 231587.091
Instance1944.5 | 0.816 | 232055.448
Instance1944.6 | 0.824 | 231015.983
Instance1944.7 | 0.827 | 230098.577
Instance1944.8 | 0.821 | 230138.667
Instance1944.9 | 0.829 | 231015.955
Instance1944.10 | 0.827 | 230687.056
Instance1944.11 | 0.824 | 230014.264
Instance1944.12 | 0.834 | 231061.065

Total I/O Performance

Database I/O, per instance: Reads Average Latency (msec) | Writes Average Latency (msec) | Reads/sec | Writes/sec | Reads Average Bytes | Writes Average Bytes

Instance1944.1 | 16.039 | 2.430 | 98.626 | 52.961 | 118342.981 | 35309.166
Instance1944.2 | 14.161 | 2.427 | 100.144 | 52.786 | 121101.544 | 35291.201
Instance1944.3 | 14.480 | 2.285 | 99.764 | 53.245 | 119533.652 | 35310.352
Instance1944.4 | 14.370 | 2.289 | 100.051 | 53.059 | 119879.663 | 35273.899
Instance1944.5 | 14.253 | 2.067 | 100.674 | 53.460 | 120119.085 | 35247.478
Instance1944.6 | 14.286 | 2.107 | 100.506 | 53.165 | 120318.382 | 35259.300
Instance1944.7 | 14.240 | 2.146 | 100.295 | 53.184 | 120365.282 | 35281.795
Instance1944.8 | 14.227 | 2.204 | 99.522 | 52.402 | 120770.678 | 35278.841
Instance1944.9 | 14.337 | 2.067 | 99.721 | 52.977 | 119817.542 | 35289.105
Instance1944.10 | 14.108 | 2.110 | 100.198 | 52.821 | 120913.047 | 35304.047
Instance1944.11 | 14.226 | 2.017 | 100.560 | 53.268 | 120077.013 | 35235.098
Instance1944.12 | 14.236 | 2.092 | 100.358 | 53.235 | 120110.017 | 35268.348

Log I/O, per instance: Reads Average Latency (msec) | Writes Average Latency (msec) | Reads/sec | Writes/sec | Reads Average Bytes | Writes Average Bytes

Instance1944.1 | 28.051 | 0.669 | 0.828 | 44.994 | 230576.814 | 4554.234
Instance1944.2 | 27.325 | 0.689 | 0.820 | 44.606 | 231473.961 | 4547.224
Instance1944.3 | 30.296 | 0.676 | 0.823 | 44.869 | 232078.662 | 4537.232
Instance1944.4 | 31.311 | 0.698 | 0.820 | 44.429 | 231587.091 | 4565.172
Instance1944.5 | 29.962 | 0.670 | 0.816 | 44.672 | 232055.448 | 4519.856
Instance1944.6 | 29.064 | 0.695 | 0.824 | 45.074 | 231015.983 | 4519.047
Instance1944.7 | 29.645 | 0.664 | 0.827 | 44.869 | 230098.577 | 4558.098
Instance1944.8 | 32.577 | 0.682 | 0.821 | 44.590 | 230138.667 | 4555.640
Instance1944.9 | 28.713 | 0.664 | 0.829 | 44.890 | 231015.955 | 4571.140
Instance1944.10 | 33.430 | 0.684 | 0.827 | 44.993 | 230687.056 | 4547.479
Instance1944.11 | 28.505 | 0.663 | 0.824 | 45.263 | 230014.264 | 4507.158
Instance1944.12 | 32.452 | 0.679 | 0.834 | 45.259 | 231061.065 | 4556.336

Host System Performance

Counter | Average | Minimum | Maximum
% Processor Time | 2.159 | 1.017 | 3.464
Available MBytes | 58032.259 | 58027.000 | 58192.000
Free System Page Table Entries | 33555793.619 | 33555792.000 | 33555795.000
Transition Pages RePurposed/sec | 0.000 | 0.000 | 0.000
Pool Nonpaged Bytes | 85725672.435 | 85676032.000 | 86036480.000
Pool Paged Bytes | 130998777.573 | 130957312.000 | 131047424.000
Page Fault Stalls/sec | 0.000 | 0.000 | 0.000

Test Log

7/30/2012 10:19:28 PM -- Jetstress testing begins...

7/30/2012 10:19:28 PM -- Preparing for testing...
7/30/2012 10:19:41 PM -- Attaching databases...
7/30/2012 10:19:41 PM -- Preparations for testing are complete.
7/30/2012 10:19:41 PM -- Starting transaction dispatch..
7/30/2012 10:19:41 PM -- Database cache settings: (minimum: 384.0 MB, maximum: 3.0 GB)
7/30/2012 10:19:41 PM -- Database flush thresholds: (start: 30.7 MB, stop: 61.4 MB)
7/30/2012 10:19:54 PM -- Database read latency thresholds: (average: 20 msec/read, maximum: 100 msec/read).
7/30/2012 10:19:54 PM -- Log write latency thresholds: (average: 10 msec/write, maximum: 100 msec/write).
7/30/2012 10:20:04 PM -- Operation mix: Sessions 5, Inserts 40%, Deletes 20%, Replaces 5%, Reads 35%, Lazy Commits 70%.
7/30/2012 10:20:04 PM -- Performance logging started (interval: 15000 ms).
7/30/2012 10:20:04 PM -- Attaining prerequisites:
7/30/2012 10:23:11 PM -- \MSExchange Database (JetstressWin)\Database Cache Size, Last: 2909118000.0 (lower bound: 2899103000.0, upper bound: none)
7/31/2012 12:23:11 AM -- Performance logging has ended.
7/31/2012 12:23:11 AM -- JetInterop batch transaction stats: 29865, 29599, 29588, 29552, 29908, 29806, 29695, 29900, 29803, 29811, 29798 and 29997.
7/31/2012 12:23:11 AM -- Dispatching transactions ends.
7/31/2012 12:23:11 AM -- Shutting down databases...
7/31/2012 12:23:23 AM -- Instance1944.1 (complete), Instance1944.2 (complete), Instance1944.3 (complete), Instance1944.4 (complete), Instance1944.5 (complete), Instance1944.6 (complete), Instance1944.7 (complete), Instance1944.8 (complete), Instance1944.9 (complete), Instance1944.10 (complete), Instance1944.11 (complete) and Instance1944.12 (complete)
7/31/2012 12:23:23 AM -- C:\HUS110_PE108_C1B1_SAS7K_ESRP_R10_3GB_mbox_2000 Users\Performance Test\Performance_2012_7_30_22_19_54.blg has 490 samples.
7/31/2012 12:23:23 AM -- Creating test report...
7/31/2012 12:23:28 AM -- Instance1944.1 has 16.0 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.1 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.1 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.2 has 14.2 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.2 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.2 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.3 has 14.5 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.3 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.3 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.4 has 14.4 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.4 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.4 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.5 has 14.3 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.5 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.5 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.6 has 14.3 for I/O Database Reads Average Latency.

7/31/2012 12:23:28 AM -- Instance1944.6 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.6 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.7 has 14.2 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.7 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.7 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.8 has 14.2 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.8 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.8 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.9 has 14.3 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.9 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.9 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.10 has 14.1 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.10 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.10 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.11 has 14.2 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.11 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.11 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.12 has 14.2 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.12 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.12 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Test has 0 Maximum Page Fault Stalls/sec.
7/31/2012 12:23:28 AM -- The test has 0 Page Fault Stalls/sec samples higher than 0.
7/31/2012 12:23:28 AM -- C:\HUS110_PE108_C1B1_SAS7K_ESRP_R10_3GB_mbox_2000 Users\Performance Test\Performance_2012_7_30_22_19_54.xml has 477 samples queried.

Performance Test Checksums Result: CB10

Checksum Statistics - All

Database | Seen pages | Bad pages | Correctable pages | Wrong page-number pages | File length / seconds taken
C:\dbluns\0A_P1_db1\Jetstress001001.edb | 9137442 | 0 | 0 | 0 | 285545 MB/9878 sec
C:\dbluns\0A_P1_db2\Jetstress002001.edb | 9137442 | 0 | 0 | 0 | 285545 MB/9552 sec
C:\dbluns\0A_P1_db3\Jetstress003001.edb | 9137442 | 0 | 0 | 0 | 285545 MB/9874 sec
C:\dbluns\0A_P1_db4\Jetstress004001.edb | 9136674 | 0 | 0 | 0 | 285521 MB/9539 sec
C:\dbluns\0A_P1_db5\Jetstress005001.edb | 9136418 | 0 | 0 | 0 | 285513 MB/9875 sec
C:\dbluns\0A_P1_db6\Jetstress006001.edb | 9137442 | 0 | 0 | 0 | 285545 MB/9546 sec
C:\dbluns\0C_P2_db7\Jetstress007001.edb | 9137186 | 0 | 0 | 0 | 285537 MB/9868 sec
C:\dbluns\0C_P2_db8\Jetstress008001.edb | 9137698 | 0 | 0 | 0 | 285553 MB/9557 sec
C:\dbluns\0C_P2_db9\Jetstress009001.edb | 9138466 | 0 | 0 | 0 | 285577 MB/9856 sec
C:\dbluns\0C_P2_db10\Jetstress010001.edb | 9137698 | 0 | 0 | 0 | 285553 MB/9551 sec
C:\dbluns\0C_P2_db11\Jetstress011001.edb | 9137442 | 0 | 0 | 0 | 285545 MB/9849 sec
C:\dbluns\0C_P2_db12\Jetstress012001.edb | 9137698 | 0 | 0 | 0 | 285553 MB/9558 sec
(Sum) | 109649048 | 0 | 0 | 0 | 3426532 MB/9878 sec

Disk Subsystem Performance (of checksum)

LogicalDisk | Avg. Disk sec/Read | Avg. Disk sec/Write | Disk Reads/sec | Disk Writes/sec | Avg. Disk Bytes/Read
C:\dbluns\0A_P1_db1 | 0.052 | 0.000 | 461.612 | 0.000 | 65536.000
C:\dbluns\0A_P1_db2 | 0.050 | 0.000 | 477.610 | 0.000 | 65536.000
C:\dbluns\0A_P1_db3 | 0.052 | 0.000 | 461.919 | 0.000 | 65536.000
C:\dbluns\0A_P1_db4 | 0.050 | 0.000 | 478.410 | 0.000 | 65536.000
C:\dbluns\0A_P1_db5 | 0.052 | 0.000 | 462.109 | 0.000 | 65536.000
C:\dbluns\0A_P1_db6 | 0.050 | 0.000 | 478.180 | 0.000 | 65536.000
C:\dbluns\0C_P2_db7 | 0.052 | 0.000 | 461.874 | 0.000 | 65536.000
C:\dbluns\0C_P2_db8 | 0.050 | 0.000 | 477.270 | 0.000 | 65536.000
C:\dbluns\0C_P2_db9 | 0.052 | 0.000 | 462.971 | 0.000 | 65536.000
C:\dbluns\0C_P2_db10 | 0.050 | 0.000 | 477.875 | 0.000 | 65536.000
C:\dbluns\0C_P2_db11 | 0.052 | 0.000 | 463.366 | 0.000 | 65536.000
C:\dbluns\0C_P2_db12 | 0.050 | 0.000 | 477.212 | 0.000 | 65536.000

Memory System Performance (of checksum)

Counter | Average | Minimum | Maximum
% Processor Time | 1.836 | 1.057 | 2.425
Available MBytes | 61177.550 | 61157.000 | 61195.000
Free System Page Table Entries | 33555793.085 | 33555793.000 | 33555795.000
Transition Pages RePurposed/sec | 0.000 | 0.000 | 0.000
Pool Nonpaged Bytes | 86487363.696 | 86245376.000 | 87080960.000
Pool Paged Bytes | 131557668.571 | 131469312.000 | 132812800.000

Test Log

7/30/2012 10:19:28 PM -- Jetstress testing begins...
7/30/2012 10:19:28 PM -- Preparing for testing...
7/30/2012 10:19:41 PM -- Attaching databases...
7/30/2012 10:19:41 PM -- Preparations for testing are complete.
7/30/2012 10:19:41 PM -- Starting transaction dispatch..
7/30/2012 10:19:41 PM -- Database cache settings: (minimum: 384.0 MB, maximum: 3.0 GB)

7/30/2012 10:19:41 PM -- Database flush thresholds: (start: 30.7 MB, stop: 61.4 MB)
7/30/2012 10:19:54 PM -- Database read latency thresholds: (average: 20 msec/read, maximum: 100 msec/read).
7/30/2012 10:19:54 PM -- Log write latency thresholds: (average: 10 msec/write, maximum: 100 msec/write).
7/30/2012 10:20:04 PM -- Operation mix: Sessions 5, Inserts 40%, Deletes 20%, Replaces 5%, Reads 35%, Lazy Commits 70%.
7/30/2012 10:20:04 PM -- Performance logging started (interval: 15000 ms).
7/30/2012 10:20:04 PM -- Attaining prerequisites:
7/30/2012 10:23:11 PM -- \MSExchange Database (JetstressWin)\Database Cache Size, Last: 2909118000.0 (lower bound: 2899103000.0, upper bound: none)
7/31/2012 12:23:11 AM -- Performance logging has ended.
7/31/2012 12:23:11 AM -- JetInterop batch transaction stats: 29865, 29599, 29588, 29552, 29908, 29806, 29695, 29900, 29803, 29811, 29798 and 29997.
7/31/2012 12:23:11 AM -- Dispatching transactions ends.
7/31/2012 12:23:11 AM -- Shutting down databases...
7/31/2012 12:23:23 AM -- Instance1944.1 (complete), Instance1944.2 (complete), Instance1944.3 (complete), Instance1944.4 (complete), Instance1944.5 (complete), Instance1944.6 (complete), Instance1944.7 (complete), Instance1944.8 (complete), Instance1944.9 (complete), Instance1944.10 (complete), Instance1944.11 (complete) and Instance1944.12 (complete)
7/31/2012 12:23:23 AM -- C:\HUS110_PE108_C1B1_SAS7K_ESRP_R10_3GB_mbox_2000 Users\Performance Test\Performance_2012_7_30_22_19_54.blg has 490 samples.
7/31/2012 12:23:23 AM -- Creating test report...
7/31/2012 12:23:28 AM -- Instance1944.1 has 16.0 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.1 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.1 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.2 has 14.2 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.2 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.2 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.3 has 14.5 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.3 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.3 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.4 has 14.4 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.4 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.4 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.5 has 14.3 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.5 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.5 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.6 has 14.3 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.6 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.6 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.7 has 14.2 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.7 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.7 has 0.7 for I/O Log Reads Average Latency.

7/31/2012 12:23:28 AM -- Instance1944.8 has 14.2 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.8 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.8 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.9 has 14.3 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.9 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.9 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.10 has 14.1 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.10 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.10 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.11 has 14.2 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.11 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.11 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.12 has 14.2 for I/O Database Reads Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.12 has 0.7 for I/O Log Writes Average Latency.
7/31/2012 12:23:28 AM -- Instance1944.12 has 0.7 for I/O Log Reads Average Latency.
7/31/2012 12:23:28 AM -- Test has 0 Maximum Page Fault Stalls/sec.
7/31/2012 12:23:28 AM -- The test has 0 Page Fault Stalls/sec samples higher than 0.
7/31/2012 12:23:28 AM -- C:\HUS110_PE108_C1B1_SAS7K_ESRP_R10_3GB_mbox_2000 Users\Performance Test\Performance_2012_7_30_22_19_54.xml has 477 samples queried.
7/31/2012 12:23:29 AM -- C:\HUS110_PE108_C1B1_SAS7K_ESRP_R10_3GB_mbox_2000 Users\Performance Test\Performance_2012_7_30_22_19_54.html was saved.
7/31/2012 12:23:30 AM -- Performance logging started (interval: 30000 ms).
7/31/2012 12:23:30 AM -- Verifying database checksums...
7/31/2012 3:08:08 AM -- C:\dbluns\0A_P1_db1 (100% processed), C:\dbluns\0A_P1_db2 (100% processed), C:\dbluns\0A_P1_db3 (100% processed), C:\dbluns\0A_P1_db4 (100% processed), C:\dbluns\0A_P1_db5 (100% processed), C:\dbluns\0A_P1_db6 (100% processed), C:\dbluns\0C_P2_db7 (100% processed), C:\dbluns\0C_P2_db8 (100% processed), C:\dbluns\0C_P2_db9 (100% processed), C:\dbluns\0C_P2_db10 (100% processed), C:\dbluns\0C_P2_db11 (100% processed) and C:\dbluns\0C_P2_db12 (100% processed)
7/31/2012 3:08:08 AM -- Performance logging has ended.
7/31/2012 3:08:08 AM -- C:\HUS110_PE108_C1B1_SAS7K_ESRP_R10_3GB_mbox_2000 Users\Performance Test\DBChecksum_2012_7_31_0_23_29.blg has 329 samples.