An Oracle White Paper September 2010. Implementing Microsoft Exchange with the Sun ZFS Storage 7420



Disclaimer

The following is intended to outline Oracle's general product direction. It is intended for information purposes only and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remain at the sole discretion of Oracle.

Contents

Introduction
Audience
Executive Overview
Configuring Sun ZFS Storage Appliance for Microsoft Exchange
    Network Configuration (Configuration > Net)
    Source-Aware Routing (Network > Routing)
    Data Services (Configuration > Services)
    Storage Pools (Configuration > Storage)
    Storage Network
        Targets and Initiators
        iSCSI SAN Configuration
        Fibre Channel SAN Configuration
    LUNs
        Shares > Project
        Project > Shares
    Cluster
Sun ZFS Storage Volume Shadow Copy Service (VSS) Hardware Provider
Using Analytics
    CPU: broken down by percent utilization
    Protocol: iSCSI/Fibre Channel operations per second broken down by initiator
    Protocol: Fibre Channel bytes per second broken down by target
    Disk: I/O bytes per second broken down by disk
    Disk: I/O operations per second of type write broken down by latency
    Disk: I/O operations per second of type read broken down by latency
    Network: interface bytes per second broken down by interface
    Cache: ARC size
    Cache: ARC accesses per second broken down by hit/miss
    Cache: L2ARC accesses per second broken down by hit/miss
    Cache: L2ARC size
Preparing Windows for Microsoft Exchange with Sun ZFS Storage Appliance
    Use Microsoft Disk Manager to create partitions
Exchange 2010 Overview
Exchange 2010 Sizing
Exchange 2010 Reference Configurations
    Standalone Configuration with Volume Shadow Copy Services (VSS)
    Exchange Mailbox Resiliency
    Exchange Mailbox Resiliency with Storage-based Synchronous Replication
Exchange 2007 Overview
Exchange 2007 Sizing
Exchange 2007 Reference Configurations
    Local Continuous Replication (LCR)
    Cluster Continuous Replication (CCR)
Conclusion
Reference Material

Introduction

Oracle's Sun ZFS Storage Appliance is an excellent platform for Microsoft Exchange deployments. With its comprehensive list of features, all available without license fees, it is a solution that reduces both capital expenditure and ongoing operating costs. This document provides key discussion points and relevant data regarding the implementation of Exchange 2010 and Exchange 2007 on the Sun ZFS Storage 7420 using both iSCSI and Fibre Channel target modes. The focus of this document is the mailbox server role, where the Microsoft (MS) Exchange databases and user mailboxes are stored. The hub transport, edge transport, client access, and unified messaging roles are not discussed in this document.

Audience

This document is intended for technical users experienced with Exchange on Windows Server 2003 or Windows Server 2008, the Microsoft iSCSI initiator, Fibre Channel storage, disk management, and Sun ZFS Storage Appliance systems.

Executive Overview

The Sun ZFS Storage Appliance is available in four platforms that meet diverse requirements for price, performance, capacity, and data protection. The Sun ZFS Storage 7120, for example, is an entry-level storage system for workgroup environments that do not have medium-to-high read performance requirements, and is therefore not equipped with a read-optimized flash device. The remaining three platforms offer up to 2TB of read cache, which enables many applications to achieve much faster response times, typically in the low single-digit milliseconds. The write flash on all Sun ZFS Storage Appliances can improve response times for synchronous write I/O intensive applications. Faster CPUs with up to eight cores can provide up to 32 threads to process data in each Sun ZFS Storage Appliance controller. The Sun ZFS Storage 7320 offers up to 72GB of primary cache (DRAM), while the Sun ZFS Storage 7420 offers up to 512GB. With this release of the Sun ZFS Storage Appliance, up to 2.5TB of cache storage is offered, which can dramatically improve read-intensive application throughput (IOPS).

Sun ZFS Storage 7120
- Storage capacity: up to 60 x 2TB SAS disks [120TB]
- Processor: 1 x quad-core Intel Westmere EP E5620 @ 2.4GHz
- Memory (DRAM): up to 36GB
- Write-optimized SSD: up to 96GB
- Read-optimized SSD: N/A
- Cluster option: no

Sun ZFS Storage 7320 (details are per controller)
- Storage capacity: up to 96 x 2TB SAS disks [192TB]
- Processor: 2 x quad-core Intel Westmere EP E5620 @ 2.4GHz
- Memory (DRAM): up to 72GB
- Write-optimized SSD: up to 16 x 18GB
- Read-optimized SSD: up to 4 x 512GB
- Cluster option: yes

Sun ZFS Storage 7420 (details are per controller)
- Storage capacity: up to 576 x 2TB SAS disks [1.1PB]
- Processor: 4 x 6-core Intel Nehalem EX E7530 @ 1.86GHz, or 4 x 8-core Intel Nehalem EX X7550 @ 2GHz
- Memory (DRAM): up to 512GB
- Write-optimized SSD: up to 96 x 18GB
- Read-optimized SSD: up to 4 x 512GB
- Cluster option: yes

Sun ZFS Storage 7720
- Storage capacity: expandable racks, 720TB per rack
- Processor: 4 x 8-core Intel Nehalem EX X7550 @ 2GHz
- Memory (DRAM): up to 512GB per controller
- Write-optimized SSD: 2 x 18GB per cage
- Read-optimized SSD: up to 4 x 512GB per controller
- Cluster option: yes

Table 1: Sun ZFS Storage Appliance Features

Configuring Sun ZFS Storage Appliance for Microsoft Exchange

Network Configuration

Configuration > Net

When configuring the Sun ZFS Storage Appliance for MS Exchange, it is important to note the following. While the Exchange 2007 database workload is highly random and uses an 8k page size, Exchange 2010 is more sequential and uses a 32k page size, making it an ideal application for the Sun ZFS Storage Appliance. A separate, private data network should be dedicated to iSCSI traffic. During testing of a heavy workload simulation, network traffic rarely rose above 100 Mb/sec. Although a single Exchange mailbox server and a single Sun ZFS Storage Appliance data store with a 1 GbE interface should suffice, a second path is recommended for redundancy. Depending on the number of attached mailbox servers and the combined network load, you may choose to have several 10 GbE connections on the storage head. It is important to note that the Exchange mailbox server must also service non-transactional operations that can impact performance. For example, heavy sequential reads for database checksum verification during the backup process, or streaming backups of the storage groups, can quickly saturate a single 1 GbE connection. Depending on the service-level requirements for backups and application storage response times, larger network bandwidth or TCP offload cards may be required. Options for increasing bandwidth and/or throughput include:

- MPIO using multiple sessions per target (MS/T)
- Link Aggregation Control Protocol (LACP)
- 10 Gb/sec network cards
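
To put the saturation point above in perspective, the following is a hypothetical back-of-the-envelope calculation, not from the original paper: the 5TB database size and the 80% usable-line-rate assumption are illustrative only. It estimates how long a streaming backup takes when the network link is the bottleneck.

```python
# Illustrative estimate of streaming-backup duration when the network
# link is the only bottleneck. The efficiency factor (assumed 0.8) models
# protocol overhead; real throughput depends on the whole data path.

def backup_hours(database_gb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Hours to stream `database_gb` over a `link_gbps` link at the
    assumed usable fraction of raw line rate."""
    usable_bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return database_gb * 1e9 / usable_bytes_per_sec / 3600

# Backing up a hypothetical 5TB (5000GB) database set:
print(round(backup_hours(5000, 1.0), 1))   # over a single 1 GbE link
print(round(backup_hours(5000, 10.0), 1))  # over a single 10 GbE link
```

Roughly 14 hours over 1 GbE versus under 1.5 hours over 10 GbE, which is why backup service-level requirements often drive the network design more than steady-state mailbox I/O does.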

What follows are examples of demonstrated connectivity options available with the Sun ZFS Storage Appliance and recommended for MS Exchange deployments. Illustration 1 shows a three-port LACP link aggregation for dedicated iSCSI traffic. Jumbo frames are also enabled, which allows larger packets to be transferred than the default 1500 bytes. The Exchange mailbox server may also use port teaming if its network interfaces support it. LACP, jumbo frames, and port teaming must also be configured on the network switch; consult your switch vendor's documentation for instructions.

Illustration 1: LACP Link Aggregation

Illustration 2 shows a 10 Gb interface labeled LabNet-10GbE with jumbo frames enabled.

Illustration 2: 10 Gb Interface

Source-Aware Routing

Network > Routing

Source-aware routing can be used to avoid network packet collisions and works in concert with network routers to load-balance network traffic. If a system is configured with more than one IP interface, there may be multiple equivalent routes to a given destination, forcing the system to choose which IP interface to send network traffic (packets) through. Similarly, a packet may arrive on one IP interface but be destined for an IP address that is hosted on another IP interface. The system's behavior in such situations is determined by the selected multi-homing policy. Three policies are supported:

- Loose: Do not enforce any binding between an IP packet and the IP interface used to send or receive the packet.
- Adaptive: Identical to loose, except that routes with a gateway address on the subnet where the packet's source IP address resides are preferred.
- Strict: Strict binding is required between an IP packet and the IP interface used to send or receive the packet.

The Sun ZFS Storage Appliance, with its intuitive user interface for configuring routing table entries, provides these options for source-aware routing, as illustrated below:

Illustration 3: Source-Aware Routing

Data Services

Configuration > Services

The browser user interface (BUI) services page lists all the data services, along with state information and buttons for administration. When using the BUI, double-click a service line to go to or return from the service screen. The iSCSI service, in addition to any other data services used by the system, must be turned on. Illustration 4 shows the iSCSI data service enabled. Note that iSNS and RADIUS can be set up on the iSCSI services screen.

Illustration 4: Enable iSCSI Data Service

Storage Pools

Configuration > Storage

Storage pools and transaction logs are key elements of an Exchange deployment on the ZFS filesystem. Transaction logs can be replayed to update a corrupted or failed database if the database LUNs fail. Note that it is recommended that a single-head 7420 Appliance store the database volumes in a pool separate from the transactional logs. In addition, consider the following when planning:

- Database LUNs and transactional log LUNs should not use the same spindles within a pool.
- The high I/O requirements of the Exchange database demand the use of mirroring for its storage pool. Parity and double parity may be used, but will affect performance.
- The transactional log and database writes sent by Exchange are synchronous and require acknowledgment from persistent media. For this reason, Write Flash Acceleration SSDs greatly improve the write performance of the ZFS Storage Appliance.
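
The capacity cost of the mirroring recommendation above can be sketched as follows. This is a hypothetical illustration, not from the paper: the stripe widths assumed for the parity profiles (5-wide single parity, 8-wide double parity) are our own example choices, and spares and filesystem overhead are ignored.

```python
# Rough usable-capacity comparison for a pool of identical drives under
# the data profiles discussed above. Stripe widths are assumptions:
#   single parity: 5-wide stripes (4 data + 1 parity)
#   double parity: 8-wide stripes (6 data + 2 parity)

def usable_tb(n_drives: int, drive_tb: float, profile: str) -> float:
    """Approximate usable TB, ignoring spares and metadata overhead."""
    if profile == "mirror":          # every drive has one mirror partner
        return n_drives // 2 * drive_tb
    if profile == "single_parity":   # 4 of every 5 drives hold data
        return n_drives * drive_tb * 4 / 5
    if profile == "double_parity":   # 6 of every 8 drives hold data
        return n_drives * drive_tb * 6 / 8
    raise ValueError(f"unknown profile: {profile}")

# The 44-data-drive example shown in Illustration 5, with 2TB drives:
print(usable_tb(44, 2.0, "mirror"))
print(usable_tb(44, 2.0, "single_parity"))
print(usable_tb(44, 2.0, "double_parity"))
```

Mirroring halves usable capacity relative to raw, which is the trade the text accepts in exchange for the random-I/O performance the Exchange database demands.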

Illustration 5 shows the drive selection on a 7420 Appliance with 44 data drives, two read-cache devices, and two write-cache devices available. In this case, all available drives are being selected to create this storage pool.

Illustration 5: Storage Pool Drive Selection

Illustration 6 shows the data profile selection. The mirrored profile has been selected with NSPF ('no single point of failure'), which indicates that data is arranged in mirrors or RAID stripes such that a pathological JBOD failure will not result in data loss.

Illustration 6: Storage Pool Data Profile

Illustration 7 shows the selection of the log profile. Using a striped profile has the advantage of being both faster and able to handle more data, without any risk of losing data.

Illustration 7: Storage Pool Log Profile

Illustration 8 shows a pool for the database LUNs that includes most of the six J4410 trays of 2TB disk drives. Five spares are allocated, four log devices are striped, and the remaining 111 drives are mirrored across enclosures. The database pool also includes four Read Flash Accelerator SSDs. These cache devices are used as an L2ARC, taking data from the ARC (RAM) of the appliance heads before it is evicted. The ARC is populated from Most Recently Used (MRU) and Most Frequently Used (MFU) lists. Read performance of an Exchange installation should improve once the cache devices have warmed up. The warm-up time and cache hit ratio depend on the I/O characteristics of the Exchange site. Illustration 9 shows the transaction log pool, which uses four drives from the six J4410s, with two log devices, one spare, and the data drives mirrored.

Illustration 8: Database Pool

Illustration 9: Transaction Log Pool

The graphical representation of the storage pools shown above is detailed in Illustration 10:

Illustration 10: 3x10TB database LUNs and 3x500GB transaction log LUNs

Storage Network

Targets and Initiators

Only block-level protocols are used for Exchange Server mailbox storage; the NFS and SMB file protocols are not supported for this purpose. Therefore, filesystem- and share-level ACLs are not required on the appliance or the Exchange server to restrict access to data and storage. Instead, protocol targets and initiators are used to control access to specific Exchange LUNs. There is no requirement for the 7420 to be part of a domain, because the Exchange server will only access the LUNs via iSCSI or Fibre Channel, so no Active Directory authentication is needed. When creating a LUN, it is important to specify the proper target and initiator group for each mailbox server, as well as some type of authentication, such as CHAP when using iSCSI. If a LUN is set to allow default initiator or target access, the Exchange database could be at risk of accidental corruption or deletion.

iSCSI SAN Configuration

Illustration 11 shows the creation of an iSCSI target on the 10 Gb interface with CHAP enabled:

Illustration 11: New iSCSI Target on the 10Gb Interface with CHAP enabled

Illustration 12 shows the creation of an iSCSI initiator with CHAP enabled:

Illustration 12: New iSCSI Initiator

Fibre Channel SAN Configuration

After the Fibre Channel ports on the ZFS Appliance have been configured for target mode, the ports can be added to a Fibre Channel target group. Illustration 13 shows the creation of a Fibre Channel target group with four FC ports. Currently, each port is allowed in only one target group at a time.

Illustration 13: New Fibre Channel Target Group

Illustration 14 shows a new Fibre Channel initiator created by manually adding the World Wide Name (WWN) and an alias. There is also the option of aliasing discovered Fibre Channel initiators from the Fibre Channel Target page.

Illustration 14: New Fibre Channel Initiator

LUNs

Shares > Project

After the storage pools are in place, database and log LUNs can be created. Illustration 15 shows the project profile used to set the default attributes of the shares created for the Exchange 2010 database LUNs. The main item to note is the volume block size, set to 32k. This 32k volume block size has been found to yield the highest performance on the Sun ZFS Storage Appliance with Exchange 2010 databases.

Illustration 15: Exchange 2010 Database Project Profile

Illustration 16 shows the project profile used to set the default attributes of the shares created for the Exchange 2007 database LUNs. The main item to note is the volume block size, set to 8k. This 8k volume block size has been found to yield the highest performance on the Sun ZFS Storage Appliance with Exchange 2007 databases.

Illustration 16: Exchange 2007 Database Project Profile

Illustration 17 shows the project profile used to set the default attributes of the shares created for the Exchange transaction log LUNs. The main item to note here is the volume block size, set to 128k. This 128k volume block size has been found to yield the highest performance on the Sun ZFS Storage Appliance with Exchange 2007 and 2010 logs.

Illustration 17: Exchange Transactional Log Project Profile

Aside from the volume block size, the defaults are recommended from a performance point of view. For environments that require virus scanning or compression, the performance impact should be evaluated before enabling options that could degrade performance.
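
The block size recommendations from the three project profiles above can be summarized in one small lookup. This is an illustrative helper of our own; the keys and function name are hypothetical, but the values come directly from the text.

```python
# Volume block size recommendations from the project profiles above.
# (8k matches the Exchange 2007 page size, 32k the Exchange 2010 page
# size; 128k suits the sequential transaction log writes.)

RECOMMENDED_BLOCK_SIZE = {
    ("exchange2007", "database"): "8k",
    ("exchange2010", "database"): "32k",
    ("exchange2007", "log"):      "128k",
    ("exchange2010", "log"):      "128k",
}

def block_size(version: str, lun_type: str) -> str:
    """Look up the recommended volume block size for a LUN type."""
    return RECOMMENDED_BLOCK_SIZE[(version, lun_type)]

print(block_size("exchange2010", "database"))  # 32k
print(block_size("exchange2007", "log"))       # 128k
```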

Project > Shares

At the time of LUN creation, several options, such as volume size and volume block size, are populated from the defaults selected at the project level. Illustration 18 shows the creation of a database LUN in the exchange_db project. When a new LUN is created, both the target and initiator groups are selected and mapped, providing LUN-level granularity. Note that thin provisioning is also selected, enabling the system to allocate space in the share as needed, up to the selected 2TB volume size. Exchange best practices recommend not using LUNs greater than 2TB in size.

Illustration 18: Create Exchange Database LUN Mapped to iSCSI

Illustration 19 shows the creation of a log LUN in the exchange_log project. Volume size and volume block size are populated from the project-level defaults. This LUN has been mapped to the Fibre Channel target and initiator groups previously set up. Exchange best practices recommend a log LUN size of about 10% of the database size.

Illustration 19: Create Exchange Transactional Log LUN Mapped to Fibre Channel
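
The two sizing rules above (database LUNs capped at 2TB, log capacity at roughly 10% of database capacity) can be combined into a quick planning sketch. This helper is hypothetical, not from the paper; the function name and return shape are our own.

```python
import math

# Hypothetical LUN planning sketch based on the guidance above:
# split the database across LUNs no larger than 2TB, and size the
# transaction log capacity at ~10% of the database capacity.

def plan_luns(database_tb: float, max_lun_tb: float = 2.0) -> dict:
    """Return a rough LUN layout for a given database capacity in TB."""
    n_db_luns = math.ceil(database_tb / max_lun_tb)
    return {
        "db_luns": n_db_luns,
        "db_lun_tb": database_tb / n_db_luns,   # even split across LUNs
        "log_tb": round(database_tb * 0.10, 2), # ~10% rule of thumb
    }

# A hypothetical 5TB database:
layout = plan_luns(5.0)
print(layout["db_luns"], layout["log_tb"])
```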

Cluster

Illustration 20 and Illustration 21 show a default configuration for a 7420 cluster:

Illustration 20: Network setup on cluster head 1

Illustration 21: Network setup on cluster head 2

In Illustration 22, head 1 owns the database LUNs and head 2 owns the transaction log LUNs in an active/active configuration. With highly available storage pools, 7420 cluster configurations are a perfect fit for Exchange 2007 and 2010. Illustration 22 shows the cluster resource configuration on a 7420. This configuration would withstand a head failure, as the resources would fail over to the surviving head, eliminating an Exchange outage.

Illustration 22: Cluster configuration on 7420C

Sun ZFS Storage Volume Shadow Copy Service (VSS) Hardware Provider

VSS provides the ability to create snapshots, which are point-in-time (PIT) copies of volumes. Snapshots are images of the data precisely as it looked at a specific time. By maintaining these timely images of data, users and administrators can quickly recover individual files or whole volumes directly from disk as they appeared at the time the snapshot was taken, similar to a restore from tape but much faster. NetBackup includes a set of software libraries called snapshot providers. The providers enable the Snapshot Client to access the snapshot technology in the storage subsystem. Each snapshot provider is designed for a particular subsystem. The Sun ZFS Storage disk array provider enables NetBackup to create and manage hardware snapshots in the Sun ZFS Storage Appliance: the method is specified in the NetBackup policy, and when the policy runs, the snapshot method calls the snapshot provider library. The provider then accesses the underlying commands in the storage subsystem to create and manage the snapshot.

Illustration 23: Sun ZFS Storage Appliance VSS hardware provider overview

Using Analytics

DTrace storage analytics is a key feature of the Sun ZFS Storage Appliance and can be used to monitor and manage MS Exchange storage workloads, both for diagnosis and for maintaining Exchange storage performance. This section describes common metrics provided through DTrace Analytics that should be monitored during an Exchange workload. The BUI screenshots below show how useful Sun ZFS Storage Appliance Analytics can be for monitoring various workloads.

CPU: broken down by percent utilization

Heavy workloads have shown a CPU utilization of roughly 25%.

Illustration 24: CPU: Broken down by percent utilization

Protocol: iSCSI/Fibre Channel operations per second broken down by initiator

This view shows the number of iSCSI or Fibre Channel operations that the Exchange mailbox server is sending over each initiator, as well as the total.

Illustration 25: Protocol: Fibre Channel operations per second broken down by initiator

Protocol: Fibre Channel bytes per second broken down by target

This view shows the number of bytes per second that the Exchange mailbox server is sending to each target, as well as the total.

Illustration 26: Protocol: Fibre Channel bytes per second broken down by target

Disk: I/O bytes per second broken down by disk

It is important to verify that your write-cache devices are included in the correct pool and are being written to.

Disk: I/O operations per second of type write broken down by latency

Disk: I/O operations per second of type read broken down by latency

High latencies lead to a poor user experience with Exchange. Database read and write latency spikes should stay below 50ms, and the average should be below 20ms. Transaction log writes should stay below 10ms.

Network: interface bytes per second broken down by interface

Load balancing across multiple NICs and proper routing can be verified here. It is also a way to see whether your network is being saturated.

Illustration 27: Network: Interface bytes per second broken down by interface
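
The latency guidance above can be expressed as a small health check. The thresholds come directly from the text; the metric names and function are hypothetical conveniences of our own, not appliance APIs.

```python
# Latency thresholds (milliseconds) from the guidance above.
THRESHOLDS_MS = {
    "db_latency_spike": 50,  # database read/write spikes: below 50ms
    "db_latency_avg":   20,  # database read/write average: below 20ms
    "log_write":        10,  # transaction log writes: below 10ms
}

def healthy(metric: str, observed_ms: float) -> bool:
    """True if an observed latency is within the recommended bound."""
    return observed_ms < THRESHOLDS_MS[metric]

print(healthy("db_latency_avg", 12.5))  # True: within the 20ms average bound
print(healthy("log_write", 15.0))       # False: log writes should stay below 10ms
```

A check like this could be run against exported Analytics data to flag periods where user experience is likely to degrade.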

Cache: ARC size

This is a view into the system's RAM, or ARC. If the ARC size is approaching the system's total RAM and the cache hit percentage is high, adding more memory is recommended.

Cache: ARC accesses per second broken down by hit/miss

This view shows cache hits and misses:

Illustration 28: Cache: ARC accesses per second broken down by hit/miss

Cache: L2ARC accesses per second broken down by hit/miss

This view shows cache hits and misses.

Cache: L2ARC size

Depending on the I/O characteristics of the Exchange site, the L2ARC should warm up and effectively increase the read-hit rate.

Preparing Windows for Microsoft Exchange with Sun ZFS Storage Appliance

Use Microsoft Disk Manager to create partitions

1. Open Server Manager.

2. Expand Storage, and click Disk Management.

3. Click OK if a pop-up asks for new disks to be initialized. If a pop-up does not come up, right-click the new drives and select Initialize. If the LUN is greater than 2048GB, you must use a GUID partition table (GPT). Otherwise, use Master Boot Record (MBR). Note: If you are using Windows Server 2003 clustering (required for CCR/SCR), GPT partitions are not supported.

4. Sector or storage track misalignment is common on Windows Server 2003 and earlier. Make sure that the starting offset is a multiple of 8k for Exchange 2007 and 32k for Exchange 2010. Failure to do so may cause a single I/O operation to span two tracks, degrading performance. Use Diskpart.exe to align the partition to the recommended 64K boundary, as shown in Illustration 29:

Illustration 29: diskpart
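
The alignment rule in step 4 reduces to a simple divisibility check. This is an illustrative sketch of our own, not a Windows tool: a 64K-aligned starting offset is automatically a multiple of both the 8k and 32k Exchange page sizes.

```python
KIB = 1024

def is_aligned(start_offset_bytes: int, boundary_kib: int = 64) -> bool:
    """True if a partition's starting offset falls on the boundary."""
    return start_offset_bytes % (boundary_kib * KIB) == 0

# The offset diskpart produces when aligned to 64K:
print(is_aligned(65536))   # True
# The legacy 63-sector (31.5K) default on older Windows versions:
print(is_aligned(32256))   # False: I/Os can straddle track boundaries
```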

Windows Server 2008 takes care of track alignment automatically.

5. From Disk Management, right-click the disk and choose New Partition. Choose primary, assign a drive letter, then format the partition using a 64K allocation unit size. Make note of which volume letters are going to be used for the databases and which are going to be used for the transaction logs.

6. On Windows Server 2003, since the partition has already been created with diskpart, right-click the partition and choose Format.

7. Exchange 2007 best practices recommend a 64K allocation unit size when creating the NTFS file system on the database LUNs, and the default allocation unit size for the transactional log LUNs.

8. Exchange 2010 best practices recommend a 64K allocation unit size when creating the NTFS file system on both the database LUNs and the transactional log LUNs.

9. Exchange best practices recommend the use of basic disks for both databases and logs.

Exchange 2010 Overview

Exchange 2010 underwent another major overhaul relative to Exchange 2007. The result was another 70% reduction in IOPS requirements for the storage subsystem. A few of the biggest contributors to the IOPS reduction were:

- Increased I/O size from 8k to 32k
- Allocation of database space in a contiguous manner
- Increased cache effectiveness
- Maintenance performed 24x7 with auto-throttling
- Database write smoothing

The other big change was the introduction of Database Availability Groups (DAGs). A database availability group (DAG) is the base component of the high availability and site resilience framework built into Microsoft Exchange Server 2010. A DAG is a group of up to 16 mailbox servers that host a set of databases and provide automatic database-level recovery from failures that affect individual servers or databases.

Exchange 2010 Sizing

Exchange 2010 is a horizontally scaling application. Scaling from 100 mailboxes to 10,000 mailboxes can be managed within one mailbox server by adding more CPUs, memory, and storage. To scale beyond 10,000 mailboxes, additional servers and storage need to be rolled out in a horizontal fashion. There are several factors to consider when sizing both the server and the storage. Illustration 30 and Illustration 31 show user profile sizing guidelines for Exchange 2010.

Messages sent/received per mailbox per day (~75 KB average message size): 50, 100, 150, 200, 250, 300, 350, 400, 450, 500
Database cache per mailbox (MB): 3, 6, 9, 12, 15, 18, 21, 24, 27, 30
Unprotected DB copy (standalone), estimated IOPS per mailbox: 0.06, 0.12, 0.18, 0.24, 0.30, 0.36, 0.42, 0.48, 0.54, 0.60
Protected DB copy (mailbox resiliency), estimated IOPS per mailbox: 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50

Illustration 30: User Profile and Message Activity
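The guideline rows in Illustration 30 are all linear in the message profile, so they can be captured in a small helper. This is a sketch derived from the published table, not part of any Microsoft or Oracle tooling, and it is only meaningful inside the published 50-500 messages/day range:

```python
def mailbox_profile(messages_per_day):
    """Interpolate the Illustration 30 sizing guidelines.

    Each quantity scales linearly with the message profile:
    50 messages/day -> 3 MB cache, 0.06 standalone IOPS, 0.05 protected IOPS.
    """
    if not 50 <= messages_per_day <= 500:
        raise ValueError("profile outside the published 50-500 range")
    return {
        "db_cache_mb": messages_per_day * 3 / 50,
        "standalone_iops": messages_per_day * 0.06 / 50,
        "protected_iops": messages_per_day * 0.05 / 50,
    }
```

For example, a 200 messages/day profile works out to 12 MB of database cache and an estimated 0.24 IOPS per mailbox for an unprotected copy.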

Messages sent/received per mailbox per day (~75 KB average message size): 50, 100, 150, 200, 250, 300, 350, 400, 450, 500
Recommended mailbox size (MB): 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10,000

Illustration 31: Mailbox Size Recommendation Based on Profile

Factors that need to be considered when sizing an Exchange configuration based on the previous tables are:

- CPU configuration
- Memory configuration
- Storage connectivity: iSCSI / Fibre Channel
- Network connectivity: Gb Ethernet, 10 Gb Ethernet
- Disk system sizing (1)

(1) Processor and memory configurations for Exchange Server 2010 Server Roles: http://technet.microsoft.com/en-us/library/dd351192.aspx
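Illustration 31 is likewise linear (50 messages/day maps to a 1000 MB mailbox, 500 to 10,000 MB), and the per-mailbox IOPS figures from Illustration 30 multiply out to an aggregate target for the disk system. The helper names below are hypothetical, used only to illustrate the arithmetic:

```python
def recommended_mailbox_size_mb(messages_per_day):
    """Illustration 31: mailbox size scales at 20 MB per message/day
    (50 messages/day -> 1000 MB ... 500 messages/day -> 10,000 MB)."""
    return messages_per_day * 20

def aggregate_iops(mailboxes, iops_per_mailbox):
    """Total IOPS the disk system must sustain for the whole population."""
    return mailboxes * iops_per_mailbox
```

For instance, 10,000 mailboxes at the 100 messages/day standalone profile (0.12 IOPS each) would require the storage to sustain roughly 1,200 IOPS.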

Exchange 2010 Reference Configurations

Standalone Configuration with Volume Shadow Copy Services (VSS)

Illustration 32 shows a standalone solution comprising two Sun Fire x4170 servers used for the mailbox server role, two Brocade 300 8 Gb Fibre Channel switches, and one Sun ZFS Storage 7420 with four JBODs. This solution was created to support 10,000 mailboxes with a mailbox size of 1 GB. Each mailbox server hosted 5,000 users and had five active databases, with 1,000 users per database.

This configuration also leverages the Sun ZFS Storage Volume Shadow Copy Services hardware provider in conjunction with a Symantec NetBackup (NBU) solution for disaster recovery. NBU is one of the backup applications certified on the Sun ZFS Storage devices. The Exchange mailbox servers contain a NetBackup snapshot client that enables access to the backup application and coordinates with the Exchange databases to facilitate the snapshots.

Note: Exchange databases and logs cannot share the same drive spindles.
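A back-of-the-envelope capacity check for this configuration can be sketched as follows. The 20% overhead factor is an assumption of this sketch (to cover transaction logs, database white space, and content indexing); it is not a figure from this paper:

```python
def required_capacity_tb(mailboxes, mailbox_size_gb, overhead_factor=1.2):
    """Rough usable-capacity estimate for the mailbox databases.

    overhead_factor is an assumed 20% allowance for logs and white
    space; size it properly with the Exchange storage calculator.
    """
    return mailboxes * mailbox_size_gb * overhead_factor / 1024
```

For the 10,000-mailbox, 1 GB configuration above, this yields roughly 11.7 TB of usable capacity before RAID and snapshot overhead.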

Illustration 32: Exchange 2010 Standalone Configuration with Sun ZFS Storage VSS Hardware Provider

Exchange Mailbox Resiliency

To qualify as an Exchange 2010 mailbox resiliency configuration, the following requirements must be met:

- Database and log copies do not share storage array controllers.
- Database and log copies do not share disk spindles.
- Database and log copies do not share storage paths, or the storage paths are redundant (e.g., MPIO).

The following solution was created to support 24,000 mailboxes with a mailbox size of 1 GB. Illustration 33 shows a single DAG solution comprising four Sun Fire x4170 servers used for the mailbox server role, two Brocade 300 8 Gb Fibre Channel switches, and two Sun ZFS Storage 7420s (each with four JBODs). Each mailbox server hosts 6,000 users and has four active databases with 1,500 users per database. The DAG contains two copies of every database: one local, and one on another mailbox server connected to the second 7420 storage controller.

This configuration can provide both high availability and disaster recovery scenarios. Database replication is handled by the mailbox servers over the Ethernet connections between the servers. The databases and copies are equally distributed across both storage appliances. A failure of any database causes that database to mount and become active on the server hosting the database copy; this is transparent to the end users. This configuration could just as easily have been built using iSCSI LUNs.
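The DAG layout described above can be sketched with a small helper that distributes mailboxes across servers and databases. The function name and structure are illustrative only, and the sketch assumes mailboxes divide evenly, as they do in this reference configuration:

```python
def dag_layout(total_mailboxes, servers, active_dbs_per_server, copies=2):
    """Distribute mailboxes across a DAG's servers and databases.

    Returns (users per server, users per database, total database
    instances including passive copies).
    """
    users_per_server = total_mailboxes // servers
    users_per_db = users_per_server // active_dbs_per_server
    total_db_instances = servers * active_dbs_per_server * copies
    return users_per_server, users_per_db, total_db_instances
```

Applied to the reference configuration (24,000 mailboxes, 4 servers, 4 active databases each, 2 copies), this gives 6,000 users per server, 1,500 users per database, and 32 database instances spread across the two 7420 appliances.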

Illustration 33: Sun ZFS Storage 7420 Exchange Mailbox Resiliency

Exchange Mailbox Resiliency with Storage-based Asynchronous Replication

To qualify as an Exchange 2010 mailbox resiliency configuration, the following requirements must be met:

- Database and log copies do not share storage array controllers.
- Database and log copies do not share disk spindles.
- Database and log copies do not share storage paths, or the storage paths are redundant (e.g., MPIO).
- Storage-based replication does not operate on the Exchange computers; this kind of replication occurs at the storage device level.

The following solution utilizes a third, offsite 7420 appliance to asynchronously replicate copies of the data. This configuration can provide both high availability and disaster recovery scenarios. Database replication in the DAG is handled by the mailbox servers over Ethernet connections, while the storage-based replication is handled by the 7420 storage appliance. A failure of any database causes that database to mount and become active on whichever server is hosting the database copy; this is transparent to the end users. The offsite replication provides disaster recovery in the event that both local 7420 appliances are impacted by an outage.

Illustration 34 shows the same Sun ZFS Storage 7420 single DAG solution as in Illustration 33, with the addition of storage-based replication.

Illustration 34: Sun ZFS Storage 7420 Exchange Mailbox Resiliency with Storage-level Asynchronous Replication

Exchange 2007 Overview

One of the biggest differences between Exchange 2003 and Exchange 2007 was the requirement of a 64-bit architecture for the server roles. This change not only meant that a software upgrade to Exchange 2007 was required, but in most situations a hardware refresh was necessary as well. The 64-bit change allowed for a greater amount of RAM on the mailbox server, which can be utilized as user cache. The larger cache size, plus a page size increase from 4 KB to 8 KB, reduced the I/O requirements of the storage subsystem by approximately 70% compared to Exchange 2003.

Exchange 2007 Sizing

Exchange 2007 is also a horizontally scaling application. Scaling from 500 mailboxes to 6,000 mailboxes can be managed within one mailbox server by adding more CPU, memory, and storage. To scale beyond 6,000 mailboxes, additional servers and storage need to be rolled out in a horizontal fashion. There are several factors to consider when sizing both the server and the storage. Illustration 35 shows user profile sizing guidelines for Exchange 2007.

Illustration 35: User Profiles

- CPU configuration
- Memory configuration
- Storage connectivity: iSCSI / Fibre Channel
- Network connectivity: Gb Ethernet, 10 Gb Ethernet
- Disk system sizing (2)

(2) Processor and memory configurations for Exchange Server 2007 Server Roles: http://msexchangeteam.com/archive/2006/09/25/428994.aspx

Exchange 2007 Reference Configurations

Local Continuous Replication (LCR)

LCR uses a single server with two copies of the databases and transaction logs. This solution is recommended for smaller installations that require a certain amount of high availability but do not want to roll out clustered servers. Only one database per storage group is allowed in LCR. Because there are two copies of the database, non-clustered (single-headed) 7420s can be used; if one appliance fails, the secondary copy goes live. Clustered 7420s may also be used in LCR for stronger resiliency.

Illustration 36: LCR with a single server and two separate 7420s

Cluster Continuous Replication (CCR)

CCR uses two servers with two separate copies of the databases and transaction logs, either in a co-located server room or over a high-speed site-to-site data connection. Asynchronous log shipping occurs over the cluster interconnect between the active and passive nodes. A third server (not shown) acts as a file share witness within majority node set clustering; this role is commonly placed on the hub transport server. The passive copy of the database may be used as the backup source so that no additional resources are consumed on the primary active mailbox server. Only one database per storage group is allowed in CCR. Clustered 7420s are recommended, but not required; CCR failover may take up to two minutes, so depending on the SLA, a single head node may be deployed.

Illustration 37: CCR Configuration with two discrete 7420 clusters

Conclusion

The Sun ZFS Storage Appliance series is a family of unified storage systems that are ideal solutions for Microsoft Exchange environments as well as numerous other applications. Its comprehensive list of data services, storage efficiency features, analytics, massive scalability, intuitive user interface, and excellent performance, along with its cost effectiveness, make it an excellent candidate for the data center.

Reference Material

For more information, visit the following websites:

Sun Unified Storage Solution: http://www.oracle.com/us/products/servers-storage/storage/unifiedstorage/index.html
Microsoft Exchange Solution Center: http://support.microsoft.com/ph/13965
Exchange 2010 Mailbox Server Role Requirements Calculator: http://msexchangeteam.com/archive/2009/11/09/453117.aspx

Implementing Microsoft Exchange with the Sun ZFS Storage 7420
September 2010
Author: Art Larkin

Oracle Corporation World Headquarters, 500 Oracle Parkway, Redwood Shores, CA 94065 U.S.A.
Worldwide Inquiries: Phone: +1.650.506.7000, Fax: +1.650.506.7200, oracle.com

Copyright 2010, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open Company, Ltd. 0410