EMC Symmetrix with Microsoft Windows Server 2003 and 2008

Best Practices Planning

Abstract

This white paper outlines the concepts, procedures, and best practices associated with deploying Microsoft Windows Server 2003 and 2008 with EMC Symmetrix DMX-3 and DMX-4, and Symmetrix V-Max storage.

October 2009

Copyright © 2009 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part Number h6665

Table of Contents

Executive summary...5
Introduction...5
Audience...5
Windows storage connectivity...5
Symmetrix front-end director flags...6
Additional director flag information...7
SCSI-3 persistent group reservations...9
LUN mapping and masking...9
Connectivity recommendations...11
Multipathing...12
Symmetrix storage...14
Understanding hypervolumes...14
Understanding metavolumes...15
Metavolume configurations...15
Gatekeepers...16
RAID options...17
Disk types...17
Virtual Provisioning...18
Discovering storage...19
Windows Server 2008 SAN Policy...20
Offline Shared...21
Automount...22
Initializing and formatting storage...22
Disk types...22
Master Boot Record (MBR)...22
GUID partition table (GPT)...22
Basic disks...23
Dynamic disks...23
Veritas Storage Foundation for Windows...24
Disk type recommendations...24
Large volume considerations...24
Partition alignment...25
Partition alignment prior to Windows Server 2003 SP1...25
Partition alignment with Windows Server 2003, SP1, or later versions...25
Partition alignment with Windows Server 2008...26
Querying alignment...26
Formatting...27
Allocation unit size...27
Quick format vs. regular format...28
Windows Server 2003 format...28
Windows Server 2008 format...28
Volume expansion...28
Striped metavolume expansion example...29

Symmetrix replication technologies and management tools...32
EMC TimeFinder family...32
EMC SRDF family...34
Open Replicator overview...35
Symmetrix Integration Utilities...37
EMC Replication Manager...38
Managing storage replicas...39
Symmetrix device states...39
Read write (RW)...39
Write disabled (WD)...39
Not ready (NR)...40
Managing the mount state of storage replicas...41
Conclusion...47
References...47

Executive summary

The success of deploying and managing storage in Windows environments is heavily dependent on utilizing vendor-qualified and vendor-supported configurations while ensuring the proper processes and procedures are used during implementation. Supported configurations and defined best practices are continually changing, which requires a high level of due diligence to ensure new, as well as existing, environments are properly deployed.

EMC Symmetrix V-Max and Symmetrix DMX storage systems undergo rigorous qualifications to ensure supported topologies throughout the storage stack (operating system, driver, host bus adapter, firmware, switch, and so on) provide the highest levels of stability and performance available in the industry. Additionally, best practices and recommendations are continually tested and re-evaluated to ensure deployments are optimized as new operating system versions, patches, and features are made available. EMC provides a myriad of delivery mechanisms for relaying the information found during qualification and testing, including documentation and white papers, support forums, technical advisory notifications, and extensive support matrices as qualified by EMC's quality assurance organizations, including EMC E-Lab.

By combining best-of-breed software and hardware technologies like the Symmetrix DMX and Symmetrix V-Max with thorough qualification, support, and documentation facilities, EMC provides the most comprehensive set of tools to ensure five 9s availability in the most demanding environments.

Introduction

Critical information for deploying Windows-based servers on Symmetrix storage is available today but can be spread across various white papers, technical documentation, and knowledgebase articles. The goal of this paper is to define and consolidate key concepts and frequently asked questions for implementing Windows Server 2003 and 2008-based operating systems with Symmetrix storage.
Some topics will be directly addressed in this paper, while others will reference more in-depth information available from other resources where detailed step-by-step guidance is required. The general topics covered include settings and best practices in the context of storage connectivity, device presentation, multipathing, Windows and Symmetrix disk configurations, and LUN management including growth and replication. Additional documentation will be referenced where appropriate, and a list of related resources is included in the References section on page 47.

Audience

This white paper is intended for storage architects and administrators responsible for deploying Microsoft Windows Server 2003 and 2008 operating systems on Symmetrix V-Max, Symmetrix DMX-4, and Symmetrix DMX-3 storage systems.

Windows storage connectivity

Symmetrix storage systems support several modes of connectivity for Windows hosts including Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and Internet Small Computer System Interface (iSCSI). Additionally, the Symmetrix can support direct connections from host bus adapters (HBAs) utilizing Fibre Channel Arbitrated Loop, or connections via switched architectures (FC-SW). FCoE environments currently require an FCoE switch to convert between FCoE and the native Fibre Channel connectivity of the Symmetrix array. For each of these connectivity options, specific host and operating system functionality can be supported, including boot from SAN and clustering configurations. For detailed information on supported hardware and software configurations with these technologies, please see the EMC Host Connectivity Guide for Windows and the EMC Support Matrix (ESM), both available on EMC Powerlink (access required). Beyond the supported configurations listed within the ESM, specific configurations are qualified as part of the Microsoft Windows Server Catalog (WSC), also referred to as the hardware compatibility list (HCL).

For clustering with Windows Server 2003, referred to as Microsoft Cluster Service (MSCS), Microsoft Customer Support Services (CSS) only supports clusters where the hardware and software, in their entirety, are listed on the WSC. The relevant Microsoft Knowledge Base (KB) article has additional details. For failover clustering with Windows Server 2008, officially supported solutions require software and hardware components to receive a Certified for Windows Server 2008 logo. Windows Server 2008 failover clusters, however, do not need to be listed in the WSC, in contrast to the requirements of Windows Server 2003. For Windows Server 2008 failover clustering, the fully configured cluster must pass a validation test. The validation test is provided as part of the Validate a Configuration wizard included with the Windows Server 2008 operating system. The cluster validation runs a set of tests against the defined cluster nodes in the environment, including tests for processor architecture, drivers, networking configuration, storage, and Active Directory, among other components. By allowing specific configurations to be tested by an end user, the validation process allows for a much simpler and streamlined procedure for qualifying a specific clustered environment. Because of this change in support policy, specific Windows Server 2008 failover clustering configurations will not necessarily be listed in the ESM or WSC.

Geographically dispersed clusters are unique in the way they are validated with Windows Server 2008. Geographically dispersed clusters are clusters where nodes and storage arrays are separated across data centers for the purposes of disaster recovery. The Symmetrix Remote Data Facility/Cluster Enabler, or SRDF/CE, is an EMC-developed extension to Windows Server failover clustering that implements support for geographically dispersed clusters.
With SRDF/CE, nodes within a cluster will access different storage arrays, depending on their geographic locations, and subsequently different LUNs where data is replicated consistently with SRDF. With nodes potentially accessing separate LUNs, some of the storage-specific tests performed by the validation wizard, including SCSI-3 persistent reservation tests, will not be successful. The storage test failures are expected, and due to the nature of geographical clusters such as SRDF/CE, Microsoft does not require them to pass the storage tests within the validation process. For more information regarding cluster validation with Windows Server 2008, including Microsoft policy around geographically dispersed clusters, please see the relevant Microsoft Knowledge Base article.

Symmetrix front-end director flags

The EMC Support Matrix is the definitive guide for information regarding Symmetrix director flags and should be consulted prior to server deployments or operating system upgrades. The ESM can be viewed via the E-Lab Interoperability Navigator on Powerlink. One method for using the Navigator to determine the appropriate director flags is to utilize the Advanced Query option. From within the Navigator as depicted in Figure 1, under the Advanced Query tab, select the appropriate host operating system and storage array. Once selected and queried via Get Results, support statements will become available for the selected components. Within the support statements, under Networked Storage, a link called Director Bit/Flag Information appears. This link contains the most up-to-date information regarding the appropriate director flags for the selected operating system and Symmetrix storage array.

Figure 1. E-Lab Interoperability Navigator

Table 1 outlines the director flags required for Windows Server 2003 and 2008 standalone or clustered hosts on Symmetrix V-Max and Symmetrix DMX-3/DMX-4 arrays at the time of this paper's publication. Please note that for Windows Server 2008 failover clustering an additional device-level flag is required to enable SCSI-3 persistent reservations. Please see the section SCSI-3 persistent group reservations for additional details.

Table 1. Windows Server 2003 and 2008 required Symmetrix port flags

Common_Serial_Number (C): This flag should be enabled for multipath configurations or hosts that need a unique serial number to determine which paths lead to the same device.

SCSI_3 (SC3): When enabled, the Inquiry data is altered when returned by any device on the port to report that the Symmetrix supports the SCSI_3 protocol.

SPC-2 Compliance (SPC-2): Provides compliance with newer (SCSI Primary Commands - 2) protocol specifications. For more information, see the SPC-2 section.

Host SCSI Compliance 2007 (OS2007): When enabled, this flag provides stricter compliance with SCSI standards for managing device identifiers, multi-port targets, unit attention reports, and the absence of a device at LUN 0. For more information, please see the OS2007 section.

Additional director flag information

For Symmetrix V-Max, volume masking is enabled via the ACLX director flag. For Symmetrix DMX-3/DMX-4, volume masking is enabled via the VCM director flag. In most switched Fibre Channel environments it is recommended to enable masking. For iSCSI environments it is required to have masking enabled in order to allow initiators to log in to the Symmetrix. The section LUN mapping and masking has additional information.

For FC Loop-based topologies, logically enable the following base settings in addition to the required Windows settings in Table 1: EAN (Enable Auto Negotiation), UWN (Unique WWN). For FC switched-based topologies, logically enable the following base settings with the required Windows settings in Table 1: EAN (Enable Auto Negotiation), PP (Point-to-Point), UWN (Unique WWN).

SPC-2

With Windows Server 2003 versions prior to SP1, SPC-2 was not a required director flag. With Windows Server 2003 SP1 and later, specific Microsoft applications began checking for SPC-2 storage compliance, including the Microsoft Hardware Compatibility Test (HCT) 12.1, as well as the Volume Shadow Copy Service (VSS) when used in conjunction with Microsoft clusters. Because specific applications require SPC-2 compliance, it was recommended to enable SPC-2 in legacy Windows Server 2003 SP1 environments. Current Windows Server 2003-based qualifications for the Windows Server Catalog are executed with the SPC-2 flag enabled; therefore it is a requirement to have SPC-2 enabled in environments for compliance. For Windows Server 2008 environments, the SPC-2 flag has always been required. Should any software modifications, including service packs, hotfixes, or driver updates, be made to a legacy Windows Server 2003 environment where SPC-2 is not enabled, the SPC-2 director flag should be enabled at that time. Specific Windows Server 2003 hotfixes may require SPC-2 compliance and could otherwise cause an outage in the environment if this flag is not set.

OS2007

Windows Server 2008 configurations require the OS2007 director flag be enabled. For Windows Server 2003 environments it is recommended to have this setting enabled; however, it is not required in legacy Windows Server 2003 environments.
Having the OS2007 flag enabled in Windows Server 2003 environments does not affect the OS, and it is recommended to be enabled in case there is a future upgrade to Windows Server 2008. As with the SPC-2 flag, future Windows Server 2003 Windows Server Catalog qualifications will be executed with the OS2007 flag enabled, which will impact Windows Server 2003 compliance where OS2007 is not enabled in new or upgraded environments.

Methods for setting director flags

Director flags can be configured at the director port level or at the HBA level. When director flags are set at the director port level, all hosts connected to those ports will be presented with the same settings. In a heterogeneous environment where ports are shared, different host operating systems may require different flags. In such cases it is possible to enable specific settings based on an HBA-to-director port relationship. Director-level flags can be set via configuration changes commonly done with the Solutions Enabler (SE) command line interface (CLI) symconfigure. HBA-level director flags are enabled via masking operations, such as with the symmask or symaccess hba_flag functionality. Director- or HBA-level settings can also be managed via the Symmetrix Management Console (SMC) graphical user interface (GUI), or with EMC Ionix ControlCenter (ECC).

The following is an example of using the symconfigure CLI command to enable the OS2007 flag at the director port (port 0 on director 7f) level:

symconfigure -cmd "set port 7f:0 SCSI_Support1=enable;" -sid 94 commit

The following is an example of using the symaccess CLI command to enable the OS2007 flag for a specific WWN:

symaccess -sid 94 set hba_flags on OS2007 -enable -wwn c96d0a50

Conflicts regarding director flags can occur in existing environments where requirements change based on the introduction of new or updated operating systems.
Most flags can be modified while the director port remains online; however, the hosts connected to those ports may need to be restarted for the operating system to properly detect and otherwise manage the change in settings. The requirement for restarting is especially true for director-level changes that cause modification to SCSI inquiry data, such as the SPC-2 or OS2007 director flags.

For configurations where changes to flags are required for some, but not all, hosts connected to a common set of director ports, modifying the flags at the HBA level will ensure the smallest impact to the existing environment. The tradeoff for setting director flags at the HBA level is the additional overhead for managing the settings at a more granular level, which can be problematic in large environments. It is also important to ensure in multipathed or clustered environments that all paths for all cluster nodes have the same director flag settings. Configuring director flags inconsistently across ports and HBAs, or in a piecemeal fashion in an effort to avoid system reboots, is not supported and could lead to instability in the environment. Recommendations regarding the ability to modify director flags without impact to Windows or other operating systems are outside the scope of this paper. For the most up-to-date and detailed resources regarding director configuration changes and their impact on specific operating systems, please see the ESM or query the EMC support knowledgebase available on Powerlink.

SCSI-3 persistent group reservations

Functionality new to Windows Server 2008 failover clustering is the use of SCSI-3 persistent group reservations. Persistent reservations allow multiple hosts to register unique keys with a storage array, through which a persistent reservation can be taken against a specified LUN. Persistent reservations introduce several improvements over the previously used SCSI-2 reserve/release commands utilized by MSCS with Windows Server 2003, including the ability to maintain reservations such that a shared LUN is never left in an unprotected state. For a Symmetrix to support SCSI-3 persistent reservations, and subsequently support Windows Server 2008 clustering, a logical device-level setting must be enabled on each LUN requiring persistent reservation support.
This setting is commonly referred to as the PER bit, or the SCSI3_persist_reserv attribute from a Solutions Enabler perspective. The SCSI3_persist_reserv attribute can be enabled via configuration changes commonly done with the Solutions Enabler command line interface (CLI) symconfigure. The setting can also be managed via SMC or ControlCenter. Metavolumes require that all member devices have the same attributes prior to forming the metadevice. With this in mind, it is necessary to set the SCSI3_persist_reserv attribute against any hypervolumes intended to form metavolumes in the future. For existing metavolumes, this attribute needs only to be set on the metavolume head device when making configuration changes using Solutions Enabler.

The following is an example of using the symconfigure CLI command to set SCSI-3 persistent reservation support for a contiguous range of devices:

symconfigure -sid 94 -cmd "set dev 42D:430 attribute = SCSI3_persist_reserv;" commit

When the persistent reservation attribute is enabled, the Symmetrix is required to store and otherwise query the reservation status of the device. Because of this, it is generally recommended to only enable persistent reservation support for the devices that require this functionality. If the environment is dynamic enough that enabling the persistent reservation attribute on demand creates significant administrative overhead, it is possible to set the attribute on all devices.

LUN mapping and masking

Symmetrix arrays manage the presentation of devices to operating systems through front-end director ports via a combination of mapping and masking functionality. Mapping Symmetrix devices is the process by which a LUN address is assigned to a specific device on a given front-end director. Should masking be disabled on a director port (VCM or ACLX director flags set to disabled), any hosts zoned to or directly attached to that director will have access to all mapped devices.
The LUN address assigned to the device is the LUN number by which the host will discover and access the storage. For example, if the LUN address is defined on the director as F0 in hex (240 decimal), the host will discover the device as LUN 240.
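The hex-to-decimal relationship in the example above can be illustrated with a minimal sketch (the helper name is ours for illustration, not a Solutions Enabler tool):

```python
# A Symmetrix LUN address is assigned in hexadecimal on the director,
# while Windows discovers the device by its decimal LUN number.
def lun_decimal(hex_address: str) -> int:
    """Convert a hex LUN address to the decimal LUN number seen by the host."""
    return int(hex_address, 16)

print(lun_decimal("F0"))  # prints 240, matching the example in the text
```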

In switched environments, where multiple hosts commonly access the same front-end directors, an additional level of device presentation granularity can be accommodated with the Symmetrix masking functionality. Masking operations allow for the restriction of access for a given WWN (defined on an HBA) to mapped devices regardless of the physical or zoned connectivity in the environment. Masking records define which WWN is allowed to access which Symmetrix devices on which director ports. Masking operations also allow for the modification of LUN addresses as seen by the host to provide a more predictable, uniform approach. In iSCSI environments it is required for masking to be enabled on the Symmetrix front-end directors. iSCSI connectivity to a Symmetrix requires the iSCSI Qualified Name (IQN) to have masking entries that subsequently allow an HBA or NIC to log in to a front-end director.

One exception to the rule that masking prevents access to all mapped devices involves the VCM or ACLX device. The VCM or ACLX flag is a special device attribute that allows a LUN, when mapped, to be viewed by hosts regardless of masking entries. In older versions of the Symmetrix operating environment, Enginuity, the VCM device was the repository where masking records were maintained. With newer versions of Enginuity, the VCM or ACLX device is simply a gatekeeper that can be used for the initial configuration of the Symmetrix from a host. The VCM or ACLX device need not be mapped to Symmetrix front-end adapters or otherwise presented to hosts in order to perform masking operations. Masking can be performed through regular gatekeeper devices. Additionally, the VCM or ACLX device, when presented to potential cluster nodes undergoing cluster validation with Windows Server 2008, may cause validation warnings.
These warnings can be avoided by removing the VCM or ACLX device from being mapped to the front-end directors.

When mapping and masking Symmetrix devices to a host, it is important to note the Windows maximum limit of 255 usable LUNs per HBA target. While this number applies to the total number of addressable LUNs per target, it also limits the LUN numbers through which Windows allows access to devices. The LUN address range for Windows is from 0 to 254. Should a LUN have an address higher than 254, the device will not be detected for use by the operating system, even if the operating system is not accessing more than 255 total LUNs on that target. To some degree this limitation can be managed by the HBA driver. For instance, the Emulex SCSIPort driver with Windows Server 2003 allows for higher LUN addresses to be managed (up to 512) via an adjusted LUN mapping. However, with Windows Storport and HBA miniport drivers, the 254 LUN address limit is enforced as part of the operating system.

A Symmetrix can support a much higher number of mapped devices per director, well beyond 255. Therefore the ability to modify LUN addresses via masking can be an important feature in large environments. With older versions of Solutions Enabler and Enginuity, a lun offset feature was used to adjust the starting LUN address for a given HBA and director combination. The lun offset functionality, however, has become obsolete with newer code revisions and is replaced by Dynamic LUN Addressing (DLA). DLA allows Symmetrix devices, regardless of their LUN address on the front-end director, to start at address 0 for a given HBA and director port pairing. In addition, DLA can be used to directly specify a LUN address for a given device. The Symmetrix V-Max, with the use of Autoprovisioning Groups, not only automates director LUN mapping but also utilizes DLA to simplify LUN addressing.
For more information regarding dynamic LUN addressing, please see the Symmetrix Dynamic LUN Addressing Technical Note available on Powerlink. The LUN address value is very different from the Symmetrix device number. The Symmetrix device number is assigned to a Symmetrix addressable volume upon its creation and will remain the same independent of the LUN address used across directors. When using multiple paths to a Symmetrix device, or when presenting shared storage to a cluster, it is recommended to ensure the LUN address is the same across all given directors. This guideline is more for ease of troubleshooting and not a hard requirement, as it is possible for LUNs to be multipathed to a Windows host or presented to multiple clustered hosts with different LUN addresses.
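As a small illustration of the 0 to 254 address limit discussed above, the following sketch (hypothetical helper, not part of Solutions Enabler) flags mapped LUN addresses that Windows would not detect:

```python
# Windows (Storport/miniport drivers) only discovers LUN addresses 0-254.
# Given hex LUN addresses as mapped on a front-end director, report any
# that fall outside the range Windows can address.
WINDOWS_MAX_LUN_ADDRESS = 254

def undetectable_luns(hex_addresses):
    """Return the hex LUN addresses whose decimal value exceeds 254."""
    return [a for a in hex_addresses if int(a, 16) > WINDOWS_MAX_LUN_ADDRESS]

# Example: F0 (240 decimal) is usable; 1FE (510 decimal) would not be detected.
print(undetectable_luns(["F0", "1FE"]))
```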

Connectivity recommendations

It is recommended to configure at least two HBAs per Windows server with the goal of presenting multiple unique paths to the Symmetrix system. The benefits of multiple paths include high availability from a host, switch, and Symmetrix front-end director perspective, as well as enhanced performance. From a high-availability perspective, given the possibility of director maintenance, each Windows server should have redundant paths to multiple front-end directors. For a Symmetrix V-Max, this can be accomplished by connecting to opposite even and odd directors within a V-Max Engine, or across directors within multiple V-Max Engines (recommended when multiple engines are available). In the case of a Symmetrix DMX array, this can be accomplished by ensuring a given host is connected to different numbered directors (director 4a and director 13a, for example).

For each HBA port, at least one Symmetrix front-end port should be configured. For I/O intensive hosts in the environment, it could prove beneficial to connect each HBA port to multiple Symmetrix front-end ports. Connectivity to the Symmetrix front-end ports should consist of first connecting unique hosts to port 0 of the front-end directors before connecting additional hosts to port 1 of the same director and processor. This methodology for connectivity ensures all front-end directors and processors are utilized, providing maximum potential performance and load balancing for I/O intensive operations. As port 0 and port 1 of a given director number and letter (or slice) share a given processor complex, it is not recommended to connect the same HBAs for a given host to both port 0 and port 1 of the same director. Ideally, individual hosts should be connected to port 0 or port 1 from different directors. For Windows Server 2008 failover clustering environments it is currently required to ensure a given HBA is not presented to both port 0 and port 1 from the same front-end director processor.
For example, to zone, map, and mask devices from director 7A port 0 and director 7A port 1 to the same HBA is not supported in a Windows Server 2008 failover cluster. At the time this paper was published, the SCSI-3 persistent reservations of a given initiator are maintained at the front-end processor level. Because port 0 and port 1 of a given director slice share the same processors, it is not supported to have an application that utilizes SCSI-3 persistent reservations access a LUN on an HBA sharing both ports. Figure 2 uses a physical view of a Symmetrix V-Max Engine to provide a depiction of the aforementioned recommendations.

Figure 2. Connectivity recommendations for a Symmetrix V-Max Engine
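The failover-cluster zoning rule above can be expressed as a simple check; this is a minimal sketch, and the (hba, director, port) tuples are a hypothetical representation of a zoning layout, not Solutions Enabler output:

```python
# Windows Server 2008 failover-cluster rule from the text: a given HBA
# must not see both port 0 and port 1 of the same front-end director
# slice, because both ports share one processor complex.
from collections import defaultdict

def shared_slice_violations(paths):
    """paths: iterable of (hba, director, port) tuples.
    Returns the (hba, director) pairs zoned to both port 0 and port 1."""
    seen = defaultdict(set)
    for hba, director, port in paths:
        seen[(hba, director)].add(port)
    return [key for key, ports in seen.items() if {0, 1} <= ports]

paths = [
    ("hba1", "7A", 0),
    ("hba1", "7A", 1),  # same HBA on both ports of slice 7A: not supported
    ("hba1", "8A", 0),  # a different director: fine
]
print(shared_slice_violations(paths))  # flags ("hba1", "7A")
```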

Multipathing

Configurations with multiple paths to storage LUNs will require a path management software solution on the Windows host. The recommended solution for multipathing software is EMC PowerPath, the industry-leading path management software with benefits including:

- Enhanced path failover and failure recovery logic
- Improved I/O throughput based on advanced algorithms such as the Symmetrix Optimization load balancing and failover policy
- Ease of management, including a Microsoft Management Console (MMC) GUI snap-in and CLI utilities to control all PowerPath features
- Value-added functionality, including Migration Enabler to aid with online data migration, and LUN encryption utilizing RSA technology
- Product maturity, with proven reliability over years of development and use in the most demanding enterprise environments

While PowerPath is recommended, an alternative is the use of the native Multipath I/O (MPIO) capabilities of the Windows operating system. The MPIO framework has been available for the Windows operating system for many years; however, it was not until the release of Windows Server 2008 that a generic device specific module (DSM) was provided by Microsoft to manage Fibre Channel devices. For more information regarding the Windows MPIO DSM implementation, please see the Microsoft Multipath I/O Overview article. Should native MPIO be chosen as the method for path management, the default failover policy with the RTM release of Windows Server 2008, for devices that do not report ALUA support such as the Symmetrix, is Fail Over Only. For performance reasons, especially in I/O intensive environments, it will be beneficial to modify this default behavior to one of the other options, including but not limited to Least Queue Depth. The load-balance policy can be found under the MPIO tab within the Properties of each physical disk resource in the Windows Device Manager, as depicted in Figure 3.

Figure 3. MPIO load-balance policy for Windows Server 2008 RTM

With Windows Server 2008 R2, the default load-balance policy for non-ALUA reporting devices, including Symmetrix, has changed from Fail Over Only to Round Robin. MPIO also has an additional load-balance policy with Windows Server 2008 R2 called Least Blocks. To help with managing MPIO more efficiently, Windows Server 2008 R2 has an enhanced mpclaim CLI with the ability to modify the default load-balance policy at either a device, target hardware ID (such as Symmetrix), or global DSM level. The following section gives an example of how to set the default load-balancing policy at the target hardware ID level using the mpclaim CLI.

To view the target hardware identifier:

mpclaim /e

"Target H/W Identifier "   Bus Type   MPIO-ed   ALUA Support
"EMC SYMMETRIX "           Fibre      NO        ALUA Not Supported

To claim all devices for the Microsoft MPIO DSM based on target hardware ID (if not already done), do the following. Note that the spaces are required within the EMC Symmetrix hardware ID string.

mpclaim -n -i -d "EMC SYMMETRIX "

Success, reboot required.

To set the load-balance policy to least queue depth (4 in this example) based on target hardware ID:

mpclaim -l -t "EMC SYMMETRIX " 4

To view target-wide load-balance policies after being set:

mpclaim -s -t

"Target H/W Identifier "   LB Policy
"EMC SYMMETRIX "           LQD

With the preceding commands completed, all existing and any future Symmetrix devices discovered by MPIO will have a load-balance policy of least queue depth. Additional information regarding connectivity and multipathing can be found in the EMC Host Connectivity Guide for Windows.

Symmetrix storage

Understanding hypervolumes

To provide data storage, a Symmetrix system's physical devices must be configured into logical volumes called hypervolumes. Hypervolumes are the unit of storage at which RAID protection is defined. A given open-systems Fixed Block Architecture (FBA) hypervolume can have a RAID 1, RAID 5, or RAID 6 configuration. Cache-only hypervolumes, such as thin devices or virtual (TimeFinder/Snap) devices, are unique in that they do not have direct RAID protection. RAID protection for the physical storage used by cache-only devices is defined within the pools that provide the storage area for cache-based hypervolumes. Symmetrix systems allow a maximum of 512 logical volumes on each physical drive, depending on the hardware configuration and the type of RAID protection used. Prior to Enginuity 5874 on the Symmetrix V-Max, the largest single hypervolume that could be created on a Symmetrix was 65,520 cylinders, approximately 59.99 GB. With Enginuity version 5874, a hypervolume can be configured up to a maximum capacity of 262,668 cylinders, or approximately 240 GB, about four times as large as with Enginuity version 577x. Figure 4 shows four disks with hypervolumes configured in a logical-to-physical ratio of 8 to 1. Figure 4.
Symmetrix physical disks with hypervolumes

In general, fewer, larger hypervolumes are recommended where applicable in a Symmetrix environment; however, to ensure the best possible performance, large hypervolumes should be carefully considered in a traditional, fully provisioned environment. For example, assigning a single large hypervolume that is RAID 1 protected would allow only two physical spindles to support the workload intended for that LUN. Should the RAID protection for a single large hypervolume be RAID 5 (7+1), however, this concern is lessened, as eight disks would be available to service the workload. Additionally, striped metavolumes, outlined in the next section, provide the ability to spread a given workload across a larger number of physical spindles.

Large hypervolumes provide additional value in Virtual Provisioning environments. In these environments, administrators may strive to overprovision the thin pool as a means to improve storage utilization. Furthermore, Virtual Provisioning addresses performance needs by striping across all data devices allocated to the thin pool, so performance limits can be mitigated by increasing the total number of spindles allocated to the thin pool. Additional information about Virtual Provisioning is provided later.

Understanding metavolumes

A metavolume is an aggregation of two or more Symmetrix hypervolumes presented to a host as a single addressable device. Creating metavolumes provides the ability to define host volumes larger than the maximum size of a single hypervolume. A single Symmetrix metavolume can contain a maximum of 255 hypervolumes. When combining the maximum hypervolume size with the maximum number of metavolume members, the largest addressable single LUN is approximately 59.8 TB (240 GB * 255 members) for a Symmetrix V-Max and approximately 14.9 TB (59.99 GB * 255 members) for a DMX-3/DMX-4. Configuring metavolumes helps to reduce the number of host-visible devices, as each metavolume is counted as a single logical volume. Devices that are members of the metavolume, however, are counted toward the maximum number of host-supported logical volumes for a given Symmetrix director. Metavolumes contain a head device, which provides control information, member devices, and a tail device.
All devices defined for the metavolume are used to store data. Metavolumes also provide the mechanism by which a host-addressable LUN can be expanded: additional members can be added for the purpose of presenting additional storage within an existing LUN. The section Volume expansion on page 28 provides additional details. Figure 5 shows a metavolume composed of four hypervolumes on different physical devices.

Figure 5. Symmetrix metavolume

Metavolume configurations

Metavolumes provide two ways to organize data: concatenated and striped. Concatenated metavolumes organize addresses for the first byte of data at the beginning of the first volume and continue sequentially to the end of the volume. Once the first hypervolume is full, data is written to the next member device, again sequentially, beginning with the first byte until the end of the volume. Figure 6 shows a concatenated metavolume.
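The two metavolume layouts can be sketched as a simple address-mapping model. The sketch below is purely illustrative (the member count and member size are hypothetical, and the real mapping is internal to Enginuity); it assumes the default 960 KB stripe depth used for striped metavolumes.

```python
# Illustrative model of metavolume addressing; not EMC code.
# Assumes 4 equal-size members and the default 960 KB stripe depth.

STRIPE = 960 * 1024          # default metavolume stripe depth in bytes
MEMBER_SIZE = 4 * STRIPE     # toy member size chosen for the example

def concatenated(offset):
    """Map a byte offset to (member, member_offset) on a concatenated meta."""
    return offset // MEMBER_SIZE, offset % MEMBER_SIZE

def striped(offset, members=4):
    """Map a byte offset to (member, member_offset) on a striped meta."""
    stripe_num = offset // STRIPE     # which stripe the offset falls in
    member = stripe_num % members     # stripes rotate across the members
    row = stripe_num // members       # full rows of stripes already written
    return member, row * STRIPE + offset % STRIPE

# Sequential writes fill member 0 first on a concatenated meta...
print(concatenated(0)[0], concatenated(STRIPE)[0])                 # 0 0
# ...but rotate across all members on a striped meta.
print(striped(0)[0], striped(STRIPE)[0], striped(2 * STRIPE)[0])   # 0 1 2
```

The rotation in the striped case is what spreads a random workload across all member hypervolumes, spindles, and directors, as described in the next section.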

Figure 6. Concatenated metavolume

Striped metavolumes organize addresses across all members by interleaving addresses between hypervolumes. The interleave, or striping, of data across the metavolume is done at a default stripe depth of 960K (one or two cylinders, depending on the Enginuity version). Data striping benefits configurations with random operations by avoiding stacking I/O on a single hypervolume, spindle, and director. In this fashion, data striping helps to balance I/O activity between the drives and the Symmetrix system directors. Figure 7 shows a striped metavolume.

Figure 7. Striped metavolume

Gatekeepers

Low-level I/O commands executed using Solutions Enabler SYMCLI are routed to the Symmetrix array by way of a Symmetrix storage device specified as a gatekeeper. The gatekeeper device allows SYMCLI commands to retrieve configuration and status information from the Symmetrix array without interfering with normal Symmetrix operations. A gatekeeper is not intended to store data and is usually configured as a small device (typically six cylinders, or 2.8 MB). The gatekeeper must be accessible from the host where the commands are being executed. Gatekeepers should be dedicated to the specific host that will be issuing commands to control or otherwise query a Symmetrix. In Microsoft failover clustering environments it is recommended not to cluster gatekeeper devices and to present unique gatekeepers to each cluster node as required. When presented to a Windows host, there is no requirement to signature or otherwise format a gatekeeper device. It will automatically become available for use by the host to communicate with the Symmetrix.

Detailed information regarding gatekeeper devices can be found in the EMC Solutions Enabler Symmetrix Array Management CLI Product Guide available on Powerlink.

RAID options

Symmetrix systems support varying levels of RAID protection. RAID protection options are configured at the physical drive level based on hypervolumes. Multiple types of RAID protection can be configured for different datasets in a Symmetrix system. Table 2 shows the levels of RAID protection available for open systems hosts like Microsoft Windows.

Table 2. RAID protection options

RAID 1
Provides the following: The highest level of performance and availability for all mission-critical and business-critical applications. Maintains a duplicate copy of a volume on two drives. If a drive in the mirrored pair fails, the Symmetrix system automatically uses the mirrored partner without interruption of data availability. When the drive is (nondisruptively) replaced, the Symmetrix system re-establishes the mirrored pair and automatically resynchronizes the data with the drive.
Configuration considerations: Withstands failure of a single drive. RAID 1 provides 50% data storage capacity. For a single write operation from a host, RAID 1 devices perform two disk I/O operations (a write to each mirror member).

RAID 5
Provides the following: Distributed parity and striped data across all drives in the RAID group. Options include RAID 5 (3 + 1), consisting of four drives with parity and data striped across each device, and RAID 5 (7 + 1), consisting of eight drives with data and parity striped across each device.
Configuration considerations: RAID 5 (3 + 1) provides 75% data storage capacity; RAID 5 (7 + 1) provides 87.5%. Withstands failure of a single drive. For a single random write operation from a host, RAID 5 devices perform four disk I/O operations (two reads and two writes).

RAID 6
Provides the following: Striped drives with double distributed parity (horizontal and diagonal). Options include RAID 6 (6 + 2), consisting of eight drives with dual parity and data striped across each device, and RAID 6 (14 + 2), consisting of 16 drives with dual parity and data striped across each device.
Configuration considerations: RAID 6 (6 + 2) provides 75% data storage capacity; RAID 6 (14 + 2) provides 87.5%. Withstands failure of two drives. For a single random write operation from a host, RAID 6 devices perform six disk I/O operations (three reads and three writes).

Disk types

Along with the aforementioned RAID technologies, Symmetrix storage can be configured across a wide range of disk technologies. Symmetrix storage systems support high-capacity, low-cost SATA II drives; high-performing 10k rpm and 15k rpm Fibre Channel drives; and ultra-high-performance solid state

Enterprise Flash Drives. Supported drive types, capacities, and speeds are continually changing as new technology becomes available. Please see Powerlink for the most up-to-date lists of supported drive types and capacities for Symmetrix systems.

Virtual Provisioning

Virtual Provisioning, generally known in the industry as thin provisioning, enables organizations to enhance performance and increase capacity utilization in their Symmetrix storage environments. Virtual Provisioning features provide:

- Simplified storage management: Allows storage to be provisioned independent of physical constraints and reduces the steps required to accommodate growth.
- Improved capacity utilization: Reduces the storage that is allocated but unused.
- Simplified data layout: Includes automated wide striping that can provide similar, and potentially better, performance than standard provisioning.

Symmetrix thin devices are host-accessible devices that can be used in many of the same ways that Symmetrix devices have traditionally been used. Unlike regular host-accessible Symmetrix devices, thin devices do not need to have physical storage completely allocated at the time the device is created and presented to a host. The physical storage that supplies disk space to thin devices comes from a shared storage pool called a thin pool. The thin pool is comprised of devices called data devices that provide the actual physical storage to support the thin device allocations. When a write is performed to a part of the thin device for which physical storage has not yet been allocated, the Symmetrix allocates physical storage from the thin pool to cover that portion of the thin device. Enginuity satisfies the request by providing a block of storage from the thin pool called a thin device extent. This approach allows for on-demand allocation from the thin pool and reduces the amount of storage that is consumed or otherwise dedicated to a particular device.
When more storage is required to service existing or future thin devices, data devices can be added to the thin storage pools. Virtual Provisioning data devices are supported on all RAID types; however, a single thin pool cannot be protected by a mixture of RAID types. The architecture of Virtual Provisioning creates a naturally striped environment where the thin extents are allocated across all volumes in the assigned storage pool. By striping the data across all devices within a thin storage pool, a widely striped environment is created. The larger the storage pool, the greater the number of devices that can be leveraged for a thin device. It is this wide and evenly balanced striping across a large number of devices in a pool that allows for optimized performance in the environment.

If metavolumes are required for the thin devices in a particular environment, it is recommended that the metavolumes be concatenated rather than striped, since the thin pool is already striped using thin extents. Concatenated metavolumes also support fast expansion capabilities, as new metavolume members can easily be appended to the existing concatenated metavolume. This functionality may be applicable when the provisioned thin device has become fully allocated at the host level and it is required to further grow the thin device to gain additional space. Striped metavolumes are supported with Virtual Provisioning, and there may be workloads that will benefit from multiple levels of striping.

For additional information on the use of Virtual Provisioning with Windows operating systems, please see the white papers Implementing Virtual Provisioning on EMC Symmetrix DMX with Microsoft Exchange 2007 and Implementing Virtual Provisioning on EMC Symmetrix DMX with Microsoft SQL Server 2005, both available on Powerlink.
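The on-demand allocation and wide striping described above can be sketched as a small conceptual model. This is not the actual Enginuity implementation: the pool and device names are invented, the extent size shown is an assumption, and real extent placement is more sophisticated than the simple round-robin used here.

```python
# Conceptual sketch of thin-device extent allocation; not actual Enginuity logic.
# The extent size and round-robin placement policy are simplifying assumptions.

EXTENT = 768 * 1024  # assumed thin device extent size in bytes

class ThinPool:
    def __init__(self, data_devices):
        self.data_devices = data_devices   # data devices backing the pool
        self.next_dev = 0                  # round-robin pointer (wide striping)
        self.allocated = {}                # (thin_dev, extent#) -> data device

    def write(self, thin_dev, offset):
        """Allocate backing storage for a write, but only on first touch."""
        key = (thin_dev, offset // EXTENT)
        if key not in self.allocated:      # extent not yet backed by the pool
            self.allocated[key] = self.data_devices[self.next_dev]
            self.next_dev = (self.next_dev + 1) % len(self.data_devices)
        return self.allocated[key]

pool = ThinPool(["dataA", "dataB", "dataC"])
print(pool.write("tdev1", 0))          # first extent lands on dataA
print(pool.write("tdev1", EXTENT))     # next extent striped onto dataB
print(pool.write("tdev1", 100))        # same extent as offset 0: dataA again
```

The model shows the two properties the text emphasizes: storage is consumed only when an extent is first written, and successive extents are spread across all data devices in the pool.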

Discovering storage

Once the appropriate steps have been taken from the connectivity, zoning, volume creation, mapping, and masking perspectives, devices can be discovered by the operating system. In most cases the discovery of new devices can be done by performing a rescan operation from the disk management console or from the diskpart command line interface, as depicted in Figure 8.

Figure 8. Diskpart rescan

In some instances the discovery of the initial target requires either a host reboot or an HBA reset. Once the target and first device(s) are discovered, the host should not need to be rebooted and the HBA should not need to be reset in order to discover additional storage. A reset should not be issued on a host already accessing in-use storage devices from the HBA to be refreshed, as this may interrupt access to those devices.

Should a disk management console or diskpart rescan not prove successful in discovering new devices, a plug and play rescan can also be issued. Plug and play rescans can be executed from Windows Device Manager using the Scan for hardware changes option. With Windows Server 2003, the devcon CLI, a free download from Microsoft, can also be used to perform these kinds of rescans. EMC also offers ways to perform this operation with the Symmetrix Integration Utilities (SIU). Among its functions, the SIU CLI symntctl has a rescan option to assist in discovering storage. SIU is available as a free download from Powerlink and is now included with Solutions Enabler 7.0 or later.

Rescan operations for storage are generally not synchronous with regard to the completion of the rescan command that initiated the discovery. This means that a rescan may return as complete; however, the actual discovery and surfacing of the LUNs to the operating system may happen several seconds after the command finishes. This behavior is important to note when scripting operations that surface LUNs and then perform a subsequent action against those LUNs.
In this case it may be necessary to sleep, loop, or provide additional checks in scripts to allow all LUNs to be discovered and otherwise become available to the operating system.

Once LUNs are discovered, they are given a PhysicalDrive (disk) number, generally based on the order of discovery by the operating system by LUN address. There are several methods to ensure the correct Symmetrix devices are being seen as disks by the host. One method is to use the EMC inq utility available at ftp://ftp.emc.com/pub/symm3000/inquiry. The inq CLI uses SCSI inquiry information to list Symmetrix-specific information, including the Symmetrix serial number and device numbers associated with a given physical drive. Figure 9 gives an example of using the inq utility with the sym_wwn option.
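Because a rescan can return before new LUNs actually surface, scripts often poll until the expected devices appear rather than sleeping for a fixed time. The helper below is a generic sketch: `count_disks` is a stand-in for whatever check applies in a given environment (for example, counting PhysicalDrive objects after a diskpart rescan), not a real API.

```python
import time

def wait_for_disks(count_disks, expected, timeout=60, interval=2):
    """Poll until count_disks() reports at least `expected` disks, or time out.

    count_disks: callable returning the number of disks currently visible.
    Returns True if the expected count was reached within the timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if count_disks() >= expected:
            return True
        time.sleep(interval)   # give plug and play time to surface the LUNs
    return False

# Example with a stub counter that "discovers" a second disk on the third poll:
seen = iter([1, 1, 2])
print(wait_for_disks(lambda: next(seen), expected=2, interval=0))  # True
```

A bounded poll like this avoids both the race of acting too early and the fragility of a single hard-coded sleep.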

Figure 9. Inq utility

In addition to inq, Solutions Enabler can be installed on the host for the purpose of querying physical-drive-specific information. Similar to the inq utility, Solutions Enabler includes a syminq CLI that performs a SCSI inquiry and returns the current disk information. Along with syminq, Solutions Enabler provides a sympd CLI that can return additional Symmetrix-specific information associated with a physical drive. It should be noted that the drive associations used by the sympd command are cached within the Solutions Enabler (symapi) database. To update this cached information, a symcfg discover command should be run if any changes were made to the drives presented to the host. At the time of publication, a symcfg sync command does not update the physical-drive-specific information in the symapi database.

In environments where masking is enabled, it is possible for the VCM or ACLX device to be mapped to director ports. As previously discussed, this means the VCM or ACLX device will be available to all hosts connected to those directors. In multipathed environments where EMC PowerPath is used, the VCM or ACLX device is the only Symmetrix LUN for which PowerPath will not automatically manage multipathing. With this in mind, it should be expected that the VCM or ACLX device will be seen multiple times by the operating system.

Windows Server 2008 SAN Policy

Functionality new to Windows Server 2008, referred to as SAN Policy, allows administrators to control how newly discovered storage devices are managed by the operating system. With Windows Server 2003, new disks discovered by Windows would automatically be brought online for potential use by the operating system. With Windows Server 2008, the SAN Policy allows administrators to control the way disks are brought online. Specifically, the SAN Policy determines whether new disks are brought online or remain offline, and whether they are marked as read-only or read/write.

The specific options offered by the SAN Policy are shown in Table 3.

Table 3. SAN Policy options

Offline Shared: The default policy for Windows Server 2008 Enterprise and Datacenter editions. This policy causes any storage discovered on a shared bus (FC, SCSI, iSCSI, SAS, and so on) to be brought offline and read-only. Any storage discovered on a non-shared bus, as well as the boot disk, will be brought online read/write. All Symmetrix devices presented to a Windows Server 2008 host with the Offline Shared policy will be placed offline and read-only; the only exception is the boot device in a boot-from-SAN configuration.

Online: This policy brings all discovered storage devices online and read/write automatically.

Offline All: All disks, except for the boot disk, will be marked offline and read-only.

To modify the policy, the diskpart CLI can be used; specifically, the san option within diskpart can be used to view and change the policy. The full syntax of the san command can be obtained by typing help san from a diskpart command prompt. The state of the disks can be managed from either the disk management console or the diskpart CLI. Changing online or offline status for disks from the disk management console will also affect the read/write state of the device. For example, an online from disk management will also read/write enable the disk, and conversely, an offline of a disk will subsequently mark the device as read-only automatically. The diskpart CLI offers more granular control, as an offline or online (using the online disk syntax) does not modify the read/write state of the device. To modify the read/write state of the disk, the disk-specific setting must also be modified via the attributes disk diskpart command. Figure 10 provides an example of how to online and read/write enable a specific disk using diskpart.

Figure 10. Diskpart command to online and read/write enable a disk
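Bringing a disk online and clearing its read-only attribute is frequently scripted. The sketch below simply generates a diskpart script using the commands described above (select disk, attributes disk clear readonly, online disk); the disk number is a placeholder, and the generated text would be saved to a file and run on the target host with `diskpart /s <file>`.

```python
def online_disk_script(disk_number):
    """Build a diskpart script that onlines a disk and clears read-only."""
    return "\n".join([
        f"select disk {disk_number}",       # target the disk by number
        "attributes disk clear readonly",   # read/write enable the disk
        "online disk",                      # bring the selected disk online
    ])

# The resulting text would be saved to a file and run with: diskpart /s <file>
print(online_disk_script(2))
```

Generating the script rather than typing commands interactively makes the online/read-write step repeatable across many disks or hosts.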

Automount

Windows Server 2003 and 2008 include the ability to automatically mount newly discovered basic disk storage to the next available drive letter upon discovery. For Windows Server 2003 this setting is disabled by default, while for Windows Server 2008 it is enabled by default. To view or otherwise modify this setting, the diskpart CLI can be used, specifically via the automount command. The mountvol CLI can also be used to disable or enable automounting of new devices. In most SAN environments it is not necessary to have Windows automatically mount storage, as applications or scripts are used to manage the device state. With this said, it may not be necessary to change the automount setting unless otherwise recommended by the application vendor or as required due to unwanted behavior in a specific environment.

Initializing and formatting storage

Newly presented and previously unused storage will display as not initialized when marked as online to Windows. The act of initializing a disk performs several functions, including the assignment of a disk signature, boot record, and partition table as written to the disk. Prior to initializing the storage, the disk type, be it Master Boot Record (MBR) or GUID partition table (GPT), needs to be determined. Additionally, whether the disks are basic or dynamic needs to be considered and defined based on storage requirements. The following sections outline the definitions and capabilities of MBR and GPT style basic or dynamic disk storage with Windows Server 2003 and 2008.

Disk types

Master Boot Record (MBR)

MBR partitioning has historically been the most commonly used disk type on the Windows platform. MBR disks create a 512-byte record in the first sector of a disk containing boot information, a disk signature, and a table of primary partitions. The following list highlights the main features and limitations of MBR disks on Windows operating systems:

- Support up to four primary partitions. Support for more than four partitions requires an extended partition in which logical drives are created.
- Use 32-bit entries for partition length and partition starting address, which limits the maximum size of the disk to 2^32 blocks (of 512 bytes), or 2 TB.
- Contain a 32-bit, eight-character hexadecimal signature. Partition GUIDs for MBR basic disk volumes are not stored on disk; they are assigned by the operating system and maintained in the registry.
- Supported with Windows Server 2003 and 2008 standalone or clustered hosts.

GUID partition table (GPT)

The GPT disk format was designed to overcome the limitations of the MBR style of partitioning. GPT disks start with a protective MBR in the first sector of the disk. The protective MBR is designed to prevent operating systems that do not recognize the GPT format from assuming the disk is not partitioned. After the protective MBR, GPT information is maintained in the next 32 sectors of the disk. This information includes the primary GPT header and self-identifying partition entries. GPT disks also maintain a redundant copy of this information at the end of the disk and have CRC32 checksums for added integrity. The following list highlights additional features and limitations of GPT disks on Windows operating systems:

- Support up to 128 partitions.
- Use 64-bit partition table entries, which in theory allows disks or partitions of up to 2^64 blocks of 512 bytes (8 ZiB) in size.

- Windows limits supportable disk sizes to 18 exabytes where raw partitions are used, and 256 terabytes for NTFS-formatted partitions.
- Maintain a 128-bit Globally Unique ID (GUID) for each disk.
- Maintain a 128-bit GUID for each partition on a disk.
- Supported with Windows Server 2003 SP1 or later; Windows Server 2003 clustering support requires a Microsoft hotfix.
- Fully supported with Windows Server 2008.

Basic disks

Basic disks utilize the native partitioning capabilities of the MBR and GPT formats. MBR basic disks support primary partitions, extended partitions, and logical drives. GPT basic disks support the partition table entries native to that format. Volumes on MBR or GPT basic disks cannot span multiple disks, but can be expanded in place, assuming there is space available on the disk where the partition resides. Basic disks are also natively supported in Microsoft clusters.

Dynamic disks

The native Microsoft logical disk manager (LDM) also offers the ability to create so-called dynamic disks. Dynamic disks maintain a 1 MB private region on each disk to store the LDM database. The LDM database stores the relevant information regarding dynamic disks in the system, including volume types, offsets, memberships, and drive letters for each volume. Dynamic disks can be either MBR- or GPT-based and include the capability to distribute filesystems across multiple disks as presented to the OS. Dynamic disks, while providing enhanced functionality, are not supported in Microsoft clusters when using the base LDM. Dynamic disks can be used to create several types of volumes in non-clustered environments, including simple, spanned, striped, mirrored, and RAID 5.

Simple: A simple dynamic volume is a volume that resides on a dynamic disk but does not span multiple disks. Simple volumes can be created from free space on a dynamic disk, or by converting a basic disk with existing partitions.
The value of a simple volume is the ability to subsequently create a spanned volume (assuming it is not a system or boot partition) or a mirrored volume. A simple volume cannot be used to create a striped or RAID 5 volume.

Spanned: A spanned dynamic volume is a concatenation of multiple volumes across one or more dynamic disks. Spanned volumes write data sequentially to each volume, filling one before moving on to the next volume in the spanned set. The value of a spanned volume is the ability to grow a filesystem across multiple dynamic disks non-disruptively. A spanned volume can be created or expanded across two to 32 dynamic disks, but is not fault-tolerant. Should any one member of the spanned volume become unavailable, the entire volume will go into a failed state.

Striped: A striped dynamic volume, as it sounds, is a dynamic volume that stripes a filesystem across multiple disks. The stripe depth (the amount of data written to one disk before moving on to the next in the stripe) is 64 KB. A striped dynamic volume can be formed with anywhere between two and 32 dynamic disks. Once created, a striped volume cannot be expanded with the base Windows LDM. A striped volume is not fault-tolerant and is considered a RAID 0 device. Should any one member of the striped volume become unavailable, the entire volume will go into a failed state.

Mirrored: Mirrored dynamic volumes are volumes synchronized across two physical disks. Mirrored dynamic volumes are considered RAID 1 protected and provide fault tolerance should one of the disks fail. A mirrored volume will require twice the amount of storage for the same amount of usable space. Mirrored

dynamic disks can be created or broken online without disruption to the availability of the volume. Once created, a mirrored volume cannot be extended.

RAID 5: RAID 5 dynamic volumes are fault-tolerant volumes that contain data and parity striped across a set of at least three and up to 32 dynamic disks. The parity space required will consume an amount of storage equal to one full member of the RAID 5 set. Should any one disk fail, the RAID 5 volume will remain online. Data and parity can be rebuilt from the remaining members upon recovery of the failed disk. Once created, a RAID 5 volume cannot be extended.

Veritas Storage Foundation for Windows

The dynamic disk functionality and restrictions listed above apply to the base LDM included with the Windows Server 2003 and 2008 operating systems. With Veritas Storage Foundation for Windows (SFW), dynamic disk support and capabilities are expanded to include additional functionality. The following list details some, but not all, of the additional functionality provided by SFW with dynamic disks over and above the base Windows LDM:

- Simple volumes can be dynamically converted to striped volumes.
- Spanned volumes can support up to 256 dynamic disks.
- Mirrored volumes can be extended and striped to create RAID devices. Mirrored volumes can also be assigned a preferred mirrored disk or plex.
- Striped volumes can be mirrored and extended to create RAID devices. Striped volumes can also be dynamically modified to change stripe characteristics, including conversion to a concatenated volume. Stripe depth can also be controlled.
- RAID 5 volumes can be extended.
- Multiple dynamic disk groups are supported.
- Microsoft clustering is supported with dynamic disks.

Additional functionality provided by Veritas Storage Foundation for Windows can be found on the Symantec website.

Disk type recommendations

In most environments, MBR basic disks with a single partition fulfill the majority of storage requirements.
MBR basic disks offer a disk type supported by all Microsoft and third-party applications. The functionality offered by dynamic disks, including striped volumes, RAID protection, and volume growth, is somewhat mitigated, as these functions can be performed more efficiently in the Symmetrix array. Additionally, the restriction that dynamic disks are not supported with Microsoft clustering when using the base LDM prohibits their use in many environments.

The GPT disk type is generally reserved for environments that require volumes larger than 2 TB in size. While the GPT disk type, upon first release on Windows platforms, did have some support limitations, most of those limits have since been removed by both Microsoft and third-party applications. In the future, GPT-based disks should become the standard partitioning format. Before utilizing GPT disks, ensure the disk type is supported by the required Microsoft or third-party applications.

Large volume considerations

While GPT disks allow for larger disk sizes, volumes that are multiple terabytes in size should be created and used with some degree of caution. The main concerns regarding large volumes are generally performance-related or tied to the ability to perform administrative tasks in a timely manner. Common administrative tasks where very large volumes become a concern include backup and restore activities, defragmentation, and filesystem verification tasks like chkdsk. The amount of time required to perform administrative tasks like chkdsk has as much to do with the number of files in the filesystem as the

size of the volume itself. A small number of large files will chkdsk much faster than a large number of small files in a comparably sized filesystem. Performance concerns also stem from the fact that a single large volume could contain enough data that, when accessed with enough user concurrency, it could potentially saturate the performance capabilities of the underlying disks. This concern can be mitigated on the Symmetrix by creating metavolumes with enough meta members to spread the workload across a larger number of physical spindles. The use of Virtual Provisioning can also provide a mechanism to spread large LUNs across a greater number of drives.

Partition alignment

Historically, Windows operating systems have calculated disk geometry based on generic SCSI information, including Cylinder-Head-Sector (CHS) values as reported by SCSI controllers. The perceived or assumed geometry of the disk based on CHS values led Windows to create partitions based on 63 sectors per track. Generally speaking, this meant Windows would create the first partition in the 63rd sector, or at an offset 32,256 bytes into the physical drive, assuming 512-byte sectors. The creation of partitions based on the assumption of 63 sectors per track led to the partition, and subsequently the data within the partition, being misaligned with storage boundaries in the Symmetrix. Misalignment with these storage boundaries can lead to performance problems. The logical geometry of Symmetrix host-addressable logical volumes is listed in Table 4.
Table 4. Symmetrix device geometry

Symmetrix DMX-2 and prior:
- Cylinder = 15 tracks (480K)
- Track = 8 sectors (32K)
- Sector = 8 blocks (4K)
- RAID 5/6 stripe boundary = 4 tracks (128K)
- Metavolume default stripe boundary = 2 cylinders (960K)

Symmetrix DMX-3 and later, including V-Max:
- Cylinder = 15 tracks (960K)
- Track = 8 sectors (64K)
- Sector = 16 blocks (8K)
- RAID 5/6 stripe boundary = 2 tracks (128K)
- Metavolume default stripe boundary = 1 cylinder (960K)

Based on these values, misaligned I/O can cause partial-sector write activity and additional, unwanted I/O within the Symmetrix from crossing track and/or stripe boundaries. Depending on the version of Windows, there are several ways to correct alignment and ensure optimal performance. In all cases it is recommended that the partition offset, or alignment, be equal to some increment of 64 KB. This could mean that the partition starts 128 sectors, or 65,536 bytes, into the disk, or at some larger offset evenly divisible by 128 sectors (64 KB). In either case, the partition is considered aligned.

Partition alignment prior to Windows Server 2003 SP1

Prior to Windows Server 2003 SP1, the diskpar utility could be used to manually create partitions on a specific offset or boundary within a physical drive. The recommended offset value when creating a partition using the diskpar command is 128 sectors. For dynamic disks, a filler partition must first be created with diskpar prior to converting the disk to dynamic and subsequently creating volumes for user data. For detailed information regarding partition alignment with the diskpar command, please see Using diskpar and diskpart to Align Partition on Windows Basic and Dynamic Disks, available on Powerlink.

Partition alignment with Windows Server 2003 SP1 or later versions

With Windows Server 2003 SP1 or later, Microsoft introduced a version of the diskpart CLI that includes an option to align a partition upon creation.
The recommended alignment value when creating a partition using the diskpart command is 64 KB. Figure 11 gives an example of how to align an MBR basic partition using the diskpart command with the align option.
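The diskpart session shown in Figure 11 follows this general pattern (a hedged sketch — the disk number and drive letter are placeholders for your environment):

```shell
rem Windows diskpart script (run with: diskpart /s align.txt)
rem Disk 2 is a placeholder; substitute the target Symmetrix device
select disk 2
create partition primary align=64
assign letter=E
exit
```

The align parameter is expressed in KB, so align=64 starts the partition on a 64 KB boundary.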

For dynamic disks, the diskpart command cannot be used to create aligned volumes. The first reason is that the align option is not available in the diskpart command when creating dynamic volumes. Secondly, the diskpart command cannot be used to create a filler partition in order to force alignment for subsequent volumes (the filler partition created by diskpart starts aligned, but does not end aligned). The diskpar command must be used to create a filler partition prior to converting the disk to dynamic and creating subsequent volumes for user data. For detailed information regarding partition alignment with the diskpart command, please see the white paper Using diskpar and diskpart to Align Partition on Windows Basic and Dynamic Disks and the Aligning GPT Basic and Dynamic Disks For Microsoft Windows 2003 Technical Note, available on Powerlink.

Figure 11. Using diskpart to create an aligned partition on an MBR basic disk

Partition alignment with Windows Server 2008

With Windows Server 2008, the issues around partition alignment when using default tools, such as the Disk Management MMC, have been corrected. By default, Windows Server 2008 will create partitions on a 1 MB boundary or offset. Specifically, for disks larger than 4 GB, Windows will create partitions with an offset in 1 MB increments. For disks smaller than 4 GB, Windows will default to an offset of 64 KB. In both cases, the partition will be aligned with the recommended Symmetrix best practice of 64 KB increments.

Querying alignment

One method to query and otherwise ensure alignment is to use the WMI interfaces native to Windows Server 2003 and 2008. These versions of Windows include a WMI CLI called wmic that can be used to determine if a partition is properly aligned.
The example in Figure 12 uses the wmic CLI to return specific partition information, including the starting offset, from an MBR basic disk and from a GPT basic disk created specifying a 64 KB alignment with diskpart.

Figure 12. Using the wmic CLI to query partition alignment

The starting offset provided by the wmic command is in bytes. To ensure proper alignment, this number should be evenly divisible by 65,536. Alternatively, the provided offset in bytes can be divided by the block size (512 bytes) to get the number of blocks or sectors for the offset. The sector offset should then be evenly divisible by 128.

Formatting

Once a partition is created it will generally be formatted with an NTFS file system. The process of formatting a partition with a filesystem performs several functions, such as creating NTFS metadata including the Master File Table (MFT) and defining the allocation unit size, and allows the choice of whether a quick format should be performed.

Allocation unit size

The allocation unit size, or cluster size, is the smallest amount of storage that can be allocated to an object, or fragment of an object, in a filesystem. Generally speaking, the ideal allocation unit size should represent the average file size for the filesystem in question. An allocation unit size that is too large could lead to wasted space in the filesystem, while an allocation unit size that is too small could lead to excessive fragmentation. In the context of alignment, the allocation unit size will also determine where an object resides in the filesystem. Having a properly aligned partition is the first step in ensuring aligned operations in the environment. However, files do not live or otherwise start in the first sector of an aligned and formatted partition (which is reserved for the NTFS header). Files will start in the filesystem at some offset based on the allocation unit size. For example, an aligned partition at 64 KB with an allocation unit size of 4 KB would cause files to be created at 64 KB plus some multiple of 4 KB into the filesystem.
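The divisibility checks above can be scripted. The wmic query in the comment is the standard Windows syntax; the arithmetic is plain POSIX shell, and the 1,048,576-byte offset is just an illustrative value:

```shell
# On Windows, the starting offset can be retrieved with:
#   wmic partition get Index,Name,StartingOffset
# Given a starting offset in bytes, check alignment:
offset=1048576                       # example: a 1 MB starting offset
if [ $((offset % 65536)) -eq 0 ]; then
    echo "aligned (offset is a multiple of 64 KB)"
else
    echo "misaligned"
fi
# Equivalent check in sectors: offset/512 must be divisible by 128
sectors=$((offset / 512))
[ $((sectors % 128)) -eq 0 ] && echo "sector offset $sectors is aligned"
```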
This may not be an issue for general-purpose filesystems, but for database applications such as Microsoft Exchange and Microsoft SQL Server, this could cause the internal structures of the data file to be misaligned with some of the critical storage boundaries mentioned in the Partition alignment section. Because of the impact to alignment caused by the allocation unit size, it is recommended, especially for database applications, to format a volume with a cluster size of 64 KB. The 64 KB allocation unit size will ensure that the file(s) created in the filesystem maintain a 64 KB offset from the beginning of the partition. Assuming the partition is also aligned with a 64 KB offset, this will ensure I/O operations are as aligned as possible with the critical boundaries in the Symmetrix.

Querying allocation unit size

The allocation unit size can be determined by using the wmic CLI. Specifically, the WMI volume object can be queried to determine the block size (in bytes) of the filesystem. Figure 13 gives an example of using wmic to determine the allocation unit size.
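Both recommendations can be applied and then verified from the Windows command line. This is a hedged sketch; the drive letter and volume label are placeholders:

```shell
rem NTFS format with the recommended 64 KB cluster size and a quick format
format E: /FS:NTFS /A:64K /Q /V:ExchData

rem Verify the resulting allocation unit (cluster) size per volume
wmic volume get DriveLetter,BlockSize,FileSystem
```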

Figure 13. Using the wmic CLI to query allocation unit size

Quick format vs. regular format

Depending on the version of the Windows operating system, the behavior of a regular (non-quick) format will differ. In either case, references to files in an existing filesystem will be removed. The difference in behavior is most interesting in the context of Virtual Provisioning, specifically the potential impact to thin pool allocations.

Windows Server 2003 format

With Windows Server 2003, the difference between a regular and a quick format is that a regular format will scan the entire disk for bad sectors. The scan for bad sectors (a SCSI verify command) is a read operation. In virtually provisioned environments, this read operation will not cause space to be allocated in the thin pool. When a read is requested from a thin device on an area of the LUN that has not been allocated, the device will simply return zeroes to the application. Since a full format is an unnecessary operation, considering there is no actual allocation or disk to verify in virtually provisioned environments, a quick format should be used. However, no harm will be done should a regular format accidentally be selected; there will simply be unnecessary I/O to the array. So whether a regular format or a quick format is selected, only a small number of writes will occur against the thin device, causing minimal allocation within the thin pool.

Windows Server 2008 format

With Windows Server 2008, the difference between a regular and a quick format is that a regular format will write zeroes to every block in the filesystem. From a Virtual Provisioning perspective this will cause a thin device to become fully allocated within its respective thin pool.
With this behavior in mind, it is important to select the quick format option (/Q from the command line) when formatting any thin device on Windows Server 2008. A quick format will perform similarly to Windows Server 2003, where only a small number of tracks will become allocated within a thin pool.

Volume expansion

Storage administrators are continually looking for flexibility in the way that storage is provisioned and may be altered in place and online. Administrators in Microsoft environments may need to increase the storage available to a given filesystem as requirements grow. One method to account for growth in storage needs is to expand the LUN on which a given partition or filesystem resides. Previous versions of Enginuity have provided methods to grow Symmetrix volumes. The method to expand volumes in place and online involves adding additional members to an existing metavolume. If the metavolume was concatenated, then only the additional volumes to be added to the meta were required to expand the volume online with no disruption to the application. Striped metavolume expansion, however, required not only the additional volumes but also a mirrored BCV in order to perform the expansion with data protection. The requirement for a mirrored BCV excluded other more cost-effective protection types, such as RAID 5, which may be more desirable for BCV volumes.

With Enginuity 5874 and Symmetrix V-Max arrays, users may now use other protection types for the BCV used in conjunction with striped metavolume expansion, including RAID 5 or RAID 6. The following section provides an example of online striped metavolume expansion using a RAID 5 BCV.

Striped metavolume expansion example

This example focuses on Symmetrix metavolume 41F, which happens to hold a Microsoft Exchange database. We will expand metavolume 41F with four new devices (42D, 42E, 42F, and 430) that reside in the same 15k rpm Fibre Channel disk group that holds the existing metavolume. The RAID 5 BCV metavolume used to protect data during the expansion, device 431, exists on a separate disk group. In preparation for expanding a striped metavolume with data protection, it is necessary to ensure there are no existing Symmetrix-based replication sessions occurring against the device. This includes ensuring TimeFinder, SRDF, and Open Replicator sessions have been removed, terminated, or canceled as appropriate to the respective technology. The requirement to remove all replication sessions also applies to the TimeFinder BCV to be used for protecting data during the expansion. The BCV cannot be synchronized or otherwise have a relationship with the metavolume prior to running the expansion procedure using Solutions Enabler 7.0. It is also important to ensure that the devices being added to the existing metavolume have the same attributes. In this example the metavolume is a clustered resource within a Windows Server 2008 failover cluster. A Symmetrix device within a Windows Server 2008 failover cluster requires that the SCSI-3 persistent reservation attribute be set.
Since at the beginning of this example the SCSI-3 persistent reservation attribute is not set on the volumes being used for the expansion, the following command needs to be issued:

symconfigure -sid 94 -cmd "set dev 42D:430 attribute = SCSI3_persist_reserv;" commit

Once the environment is prepared, the LUN expansion can be executed. The expansion procedure is executed from a host with gatekeeper access to the required Symmetrix using the symconfigure CLI command. Figure 14 shows the partition for the LUN in this example, as seen from the disk administrator, prior to it being expanded.

Figure 14. Striped metavolume prior to expansion

To expand the metavolume, the following command was executed:

symconfigure -sid 94 -cmd "add dev 42d:430 to meta 41f protect_data=true bcv_meta_head=431;" commit

Once the expansion process has begun, the following high-level steps will be taken:

1. The BCV metadevice specified for data protection will begin a clone copy operation from the source metavolume.
2. During the clone copy operation, writes from an application, like Exchange, will be mirrored between the source metavolume and the BCV.

3. When the BCV copy is complete, the BCV is split from the source and all read and write I/O is redirected to the BCV device.
4. While the I/O is redirected, the source metavolume is expanded with the specified volumes.
5. After the metavolume is expanded, the data from the BCV is copied back and restriped across all members of the newly expanded metavolume.
6. During the copy from the BCV, I/O is redirected back to the expanded metavolume.
7. Once the copy back is complete, the BCV clone relationship is terminated and the expansion completes.

Due to the nature of the volume expansion there will be a performance impact for reads and writes to the LUN. With this in mind it is recommended to perform any expansion operations during maintenance windows or times of low I/O rates to the LUN. The symconfigure command will monitor the expansion throughout the process, as seen in Figure 15. Once the expansion is complete, symconfigure will exit.

Figure 15. symconfigure command during the expansion process

After the symconfigure command completes, the administrator must extend the partition that resides on the now larger LUN. The first step is to perform a rescan from the host via the Disk Management console or the diskpart CLI command. Since this is a clustered environment, a rescan must be performed from all nodes in order to discover the new LUN size. Once the rescan is executed, the new size of the LUN should be seen from all hosts, as depicted in Figure 16.

Figure 16. Metavolume after the expansion

At the completion of the metavolume expansion, the diskpart command can be used to grow the partition into the newly discovered free space. From the diskpart command, either the volume or the partition needs to be selected prior to issuing the extend option. Figure 17 gives an example of using diskpart to select the target disk, select the appropriate partition on the disk, and then issue the extend command.

Figure 17. diskpart commands to expand the NTFS partition

The extend command will grow the partition into the free space on the disk, as shown in Figure 18. Another rescan will need to be issued on all cluster nodes in order to discover the now larger partition. This completes the expansion process, and the Exchange database can grow into the now larger volume on which it resides.

Figure 18. Metavolume following the diskpart extend command

This particular example was tested on a Windows Server 2008 failover cluster while running a light LoadGen workload against the database LUN being expanded (~200 IOPS). The ~88 GB LUN was expanded in roughly 35 minutes.
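The diskpart sequence shown in Figure 17 follows this general pattern (a hedged sketch; the disk and partition numbers are placeholders for your environment):

```shell
rem Windows diskpart: rescan, select the expanded LUN, and extend the partition
rescan
select disk 3
select partition 1
extend
exit
```

In a cluster, run the rescan on every node so all hosts discover the larger partition.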

Symmetrix replication technologies and management tools

In many environments a key aspect of managing Symmetrix storage involves storage replication. The Symmetrix offers several native forms of replication including TimeFinder, SRDF, and Open Replicator. Each of these technologies offers LUN-based replication either within a Symmetrix array (TimeFinder), between multiple Symmetrix arrays (SRDF or Open Replicator), or between the Symmetrix and other qualified storage arrays (Open Replicator). The following sections offer an introductory description of these technologies.

EMC TimeFinder family

The TimeFinder family of software provides a local copy or image of data, independent of the host operating system, application, and database. TimeFinder local replication software helps to manage backup windows while minimizing or eliminating any impact on application and host performance. It allows for immediate application and host access during restores, also referred to as instant restore. TimeFinder also allows for fast data refreshes for activities such as data warehousing and decision support, as well as test and development. The TimeFinder family of software includes:

TimeFinder/Clone, depicted in Figure 19, creates full-volume copies of production data within a Symmetrix system. TimeFinder/Clone allows up to 16 active clones of a single production device. Clone devices can have RAID 1, RAID 5, or RAID 6 protection schemes. TimeFinder/Clone can be used to copy data between Symmetrix standard devices (which can optionally be labeled target devices), between standard and business continuance volumes (BCVs), or between BCV volumes.

Figure 19. TimeFinder/Clone example

TimeFinder/Snap, depicted in Figure 20, creates space-saving copies of production data within a Symmetrix system. TimeFinder/Snap allows up to 128 active snapshot copies of a single production device.
TimeFinder/Snap utilizes cache-only devices, referred to as VDEVs, to create a pointer-based copy of a production standard device. Should any writes occur to the standard device or the VDEV, data representing the point-in-time image of the VDEV will be copied to what is called a save pool. Save pools are comprised of save devices, which are durable storage in a RAID 1, RAID 5, or RAID 6 configuration. The mechanism used to maintain the pointer-based copy of a VDEV is commonly referred to as copy-on-first-write. In a Symmetrix system this copy-on-first-write activity can be done asynchronously, resulting in minimal impact to the first writes performed on the production standard devices.

Figure 20. TimeFinder/Snap example

The TimeFinder family of software also consists of additional options to address a wider range of business needs. The TimeFinder options include:

TimeFinder/Consistency Groups (TF/CG) provides, at no additional cost, dependent-write consistency of an application or group of applications when creating a point-in-time image across multiple devices, either within a single Symmetrix system or spanning multiple Symmetrix systems.

TimeFinder/Exchange Integration Module (TF/EIM) provides a CLI-driven recovery management interface for Windows servers that support Microsoft Exchange databases on Symmetrix systems. TF/EIM automates the process of creating TimeFinder copies for backup and restore operations in Exchange Server 2003 and Exchange Server 2007 environments. TF/EIM utilizes the Windows Volume Shadow Copy Service (VSS) to coordinate the operation of creating an Exchange-based full, copy-only, or log (vssdiff) TimeFinder replica of production databases. TF/EIM utilizes the EMC VSS hardware provider to coordinate the TimeFinder replica creation with the necessary Exchange processes, including a freeze and thaw of write I/O activity to an Exchange database, mounting and checksum verification of the TimeFinder replica, and log truncation following a successful backup.

TimeFinder/SQL Integration Module (TF/SIM) provides a CLI-driven recovery management interface for Windows servers that support Microsoft SQL Server databases residing on Symmetrix systems. TF/SIM automates the process of creating TimeFinder copies for backup and restore operations in SQL Server 2005 and 2008 environments.
TF/SIM can utilize either the Virtual Device Interface (VDI) native to SQL Server or the Windows VSS framework in order to coordinate TimeFinder replica creation with the given instance of SQL Server.

TimeFinder/Clone Emulation Mode (included at no charge with TimeFinder/Clone) enables customers to easily leverage their existing TimeFinder/Mirror scripts with new Symmetrix systems that utilize TimeFinder/Clone functionality.

EMC SRDF family

SRDF is a business continuance solution that maintains a replica of data at the device level between Symmetrix arrays located in physically separate sites. The Solutions Enabler SRDF component extends the basic SYMCLI command set to include SRDF commands that allow you to perform control operations on remotely located RDF devices. SRDF provides a recovery solution for component or site failures between remotely mirrored devices, as shown in Figure 21. SRDF mirroring reduces backup and recovery costs and significantly reduces recovery time after a disaster.

Figure 21. SRDF bidirectional configuration

In an SRDF configuration, the individual Symmetrix devices are designated as either a source mirror or a target mirror to synchronize and coordinate SRDF activity. If the source (R1) device fails, the data on its corresponding target (R2) device can be accessed. When the source (R1) device is replaced, it can be resynchronized. SRDF configurations have at least one source (R1) device mirrored to one target (R2) device. SRDF site configurations provide for either unidirectional or bidirectional data transfer from one storage site to another. In a unidirectional SRDF configuration, all source (R1) devices reside in the local Symmetrix array and all target (R2) devices in the remote-site Symmetrix array. Data flows from the source (R1) devices over an SRDF link to the target (R2) devices. In a bidirectional configuration, both source (R1) and target (R2) devices reside in each Symmetrix array, with each array acting as both a master copy point and a mirror copy point in the SRDF configuration. Data flows from the source (R1) devices to the target (R2) devices.
The SRDF family of software provides the following products:

SRDF/Synchronous (SRDF/S) maintains real-time synchronous remote data replication from one Symmetrix production site to one or more Symmetrix systems located within campus, metropolitan, or regional distances. SRDF/S provides for a recovery point objective (RPO) of zero data loss.

SRDF/Asynchronous (SRDF/A) maintains asynchronous data replication, usually at extended distances, and provides an RPO that can be as small as a few seconds. SRDF/A maintains dependent-write consistent copies of data across a group of devices by creating delta sets as a unit of consistency for asynchronous replication between sites.

SRDF/Data Mobility (SRDF/DM) provides for the transfer of a Symmetrix data volume to a secondary Symmetrix locally or across an extended distance. General uses of SRDF/DM can include disaster restart, information sharing for decision support or data warehousing, or migration of data between Symmetrix systems.

The SRDF family of software also consists of other add-on options, including advanced three-site capabilities using the combination of SRDF/S, SRDF/A, SRDF/DM, and TimeFinder. The other SRDF options and advanced three-site solutions include:

SRDF/Automated Replication (SRDF/AR) enables rapid disaster restart over any distance with a two-site single-hop option using SRDF/DM in combination with TimeFinder, or a three-site multi-hop option using a combination of SRDF/S, SRDF/DM, and TimeFinder.

SRDF/Cluster Enabler (SRDF/CE) enables automated or semi-automated site failover using SRDF/S or SRDF/A with Microsoft failover clusters. SRDF/CE allows Windows Server 2003 and Windows Server 2008 editions running Microsoft failover clusters to operate across pairs of SRDF-connected Symmetrix arrays as geographically distributed clusters.

SRDF/Star is a three-site disaster-restart solution that can enable zero data loss with SRDF/S between two sites while preserving SRDF/A replication to a third site. With this, SRDF/Star offers a combination of continuous protection, incremental data resynchronization, and enterprise consistency between the two remaining sites in the event of the workload site going offline due to a site failure, fault, or disaster event.
SRDF/Concurrent provides the ability to remotely mirror a Symmetrix production-site device to two secondary-site Symmetrix arrays simultaneously using either SRDF/S or a combination of SRDF/S and SRDF/A.

SRDF Cascaded is an advanced three-site solution that utilizes SRDF/S between a workload site and a secondary-site Symmetrix, then SRDF/A from that secondary-site Symmetrix to an out-of-region third Symmetrix array. This configuration makes zero data loss achievable at the out-of-region site in the event of a production-site disaster event.

SRDF/Extended Distance Protection (SRDF/EDP) is a new disaster-restart solution providing customers the ability to achieve no data loss at an out-of-region site at a lower cost. Using cascaded-mode SRDF operations as a building block for this solution, SRDF/EDP allows the intermediate site to provide data pass-through to the out-of-region site without the need to allocate an equal amount of storage within the intermediate site.

SRDF/Consistency Groups (SRDF/CG) ensures dependent-write consistency of an application or group of applications being remotely mirrored by SRDF. SRDF/CG helps allow for a business point of consistency for remote-site disaster restart for all applications associated with a business function.

Open Replicator overview

Open Replicator provides a method for copying device data from various types of arrays within a storage area network (SAN) infrastructure to or from a Symmetrix DMX or Symmetrix V-Max storage array. For example, Open Replicator provides a tool that can be used to migrate data from older Symmetrix arrays, EMC CLARiiON arrays, and certain third-party storage arrays to a Symmetrix DMX or V-Max storage array. Alternatively, the Open Replicator command can also be used to migrate data from a Symmetrix DMX or V-Max storage array to other types of storage arrays within the SAN infrastructure.
Copying data from a Symmetrix DMX or V-Max storage array to devices on remote storage arrays allows for data to be copied fully or incrementally. Open Replicator is commonly used for the following functions:

Migrate data between Symmetrix DMX or V-Max storage arrays and third-party storage arrays within the SAN infrastructure without interfering with host applications and ongoing business operations.

Back up and archive existing data within the SAN infrastructure as part of an information lifecycle management solution.

Open Replicator copy operations are controlled from a local host attached to the Symmetrix V-Max or DMX storage array. Data copying is accomplished as part of the storage system process and does not require host resources. Optionally, the data can be copied online between the Symmetrix array and remote devices, allowing host applications, such as a database or file server, to remain operational (function normally) during the copy process. Data is copied in sessions, with up to 512 sessions allowed per Symmetrix array. The Symmetrix V-Max or DMX array and its devices are always referred to as the control side of the copy operation. Older Symmetrix arrays, CLARiiON arrays, or third-party arrays on the SAN are always referred to as the remote array/devices. With the focus on the control side, there are two types of copy operations: push and pull. A push operation copies data from the control device to the remote device(s). A pull operation copies data to the control device from the remote device(s). Copy operations are either hot (online) or cold (offline). Open Replicator can be used to migrate data into a Symmetrix V-Max or DMX array from older Symmetrix arrays, CLARiiON, or other third-party arrays. Figure 22 shows two Open Replicator copy sessions performing a pull operation, where data is copied through the SAN infrastructure from remote devices to the Symmetrix array.

Figure 22. Open Replicator pull operation
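Open Replicator sessions are driven with the Solutions Enabler symrcopy command. The lifecycle below is a hedged sketch only — the pair-file name is a placeholder, and the exact option spelling and pair-file format should be confirmed in the Solutions Enabler product guide:

```shell
rem Hedged sketch of an Open Replicator hot pull session lifecycle.
rem pairs.txt lists control:remote device pairs (format per the product guide).
symrcopy -file pairs.txt create -pull -hot -copy
symrcopy -file pairs.txt activate
symrcopy -file pairs.txt query
symrcopy -file pairs.txt terminate
```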

Open Replicator can be used to copy data from a Symmetrix V-Max or DMX array to older Symmetrix and CLARiiON arrays. Figure 23 shows two Open Replicator copy sessions performing a push operation, where data is copied from the Symmetrix array to remote devices within the SAN infrastructure.

Figure 23. Open Replicator push operation

Symmetrix Integration Utilities

Symmetrix Integration Utilities (SIU) is a CLI (symntctl) that integrates with and extends the Windows Server 2003 and 2008 disk management functionality to better operate with EMC Symmetrix storage devices. SIU provides particular value in environments where TimeFinder, SRDF, or Open Replicator is being used. SIU is not a replacement for the Windows Logical Disk Manager (LDM), but fills functional gaps so that Windows administrators can work optimally with EMC storage devices. Specifically, SIU enables administrators to perform the following actions:

View the physical disk, volume, and VMware datastore configuration data.
Update the partition table on a disk.
Set and clear volume flags.
Flush any pending cached file system data to disk.
Show individual disk, volume, or VMware datastore details.
Mount and unmount volumes to a drive letter or mount point.
Manipulate disk signatures.
Scan the drive connections and discover any new disks available to the system.
Mask devices to and unmask devices from the Windows host.

With the release of Solutions Enabler 7.0, the command line utility symntctl, also referred to as SIU, is now included. The typical install of Solutions Enabler 7.0 installs the symntctl CLI onto Windows platforms automatically. It should be noted that versions of SIU at 4.2 or later are separate from, and not dependent on, the SIU service. The symntctl CLI functions that previously relied on the SIU service are now managed directly by SIU or the operating system; therefore the SIU service need not be installed in order to utilize symntctl. Additional details regarding SIU functionality and recommended operation can be found in the EMC Solutions Enabler Symmetrix Array Controls CLI Product Guide available on Powerlink.

EMC Replication Manager

EMC Replication Manager is an EMC software application that dramatically simplifies the management and use of disk-based replications to improve the availability of users' mission-critical data and enable rapid recovery of that data in case of corruption. Replication Manager helps users manage replicas as if they were tape cartridges in a tape library unit. Replicas may be scheduled or created on demand, with predefined expiration periods and automatic mounting to alternate hosts for backups or scripted processing. Individual users with different levels of access ensure system and replica integrity. In addition to these features, Replication Manager is fully integrated with many critical applications such as DB2 LUW, Oracle, and Microsoft Exchange. Replication Manager makes it easy to create point-in-time, disk-based replicas of applications, file systems, or logical volumes residing on existing storage arrays.
It can create replicas of information stored in the following environments:

Windows file systems
Microsoft SQL Server databases
Microsoft Exchange databases
Microsoft Office SharePoint Server
Oracle databases
DB2 LUW databases
UNIX file systems

Replication Manager has a generic storage technology interface that allows it to connect to and invoke a wide range of replication technologies. Replicas created by Replication Manager can be stored on Symmetrix TimeFinder/Mirror, TimeFinder/Clone, or TimeFinder/Snap (VDEV) devices; CLARiiON clones or snapshots; Invista clones; Celerra SnapSure local snapshots; or Celerra Replicator remote snapshots. Replication Manager also supports data using the RecoverPoint Appliance storage service. Replication Manager allows for local and remote replications using TimeFinder, SRDF, SAN Copy, Navisphere, Celerra iSCSI, Celerra NFS, and/or replicas of MirrorView/A or MirrorView/S secondaries using SnapView/Snap and SnapView/Clone replication technologies where they are appropriate. Some of the use cases for Replication Manager include:

Create point-in-time replicas of production data in seconds.
Facilitate quick, frequent, and non-destructive backups from replicas.
Mount replicas to alternate hosts to facilitate offline processing (for example, decision-support services, integrity checking, and offline reporting).
Restore deleted or damaged information quickly and easily from a disk replica.
Set the retention period for replicas so that storage is made available automatically.

Specific to managing Symmetrix replicas, Replication Manager utilizes Solutions Enabler software and interfaces to the TimeFinder family of products. Replication Manager automatically controls the complexities associated with creating, mounting, restoring, and expiring replicas of data. Replication Manager performs all of these tasks and offers a logical view of the production data and corresponding replicas via the Replication Manager console, as depicted in Figure 24.

Figure 24. Replication Manager console

Additional information can be found in the Replication Manager Product Guide available on Powerlink.

Managing storage replicas

Symmetrix device states

Symmetrix host-addressable devices can be placed into several states depending on their use. The expected device states will depend on several factors, including the type of device and whether it is being used for replication. The following section outlines the possible device states, when they are expected, and how Windows manages each state.

Read write (RW)

The expected device state of host-accessible, mounted, and in-use Symmetrix devices is read/write. Read/write Symmetrix devices report as RW when queried using various Solutions Enabler commands. As the state implies, the device is open for read and write access from a host. Windows will be able to perform all expected disk management operations when a Symmetrix device is RW.

Write disabled (WD)

A device that is RW can be placed into a write disabled (WD) state in the Symmetrix. When a device is WD, any write activity to the device will fail, including initializing, partitioning, formatting, and setting volume attributes. If a WD device has an existing filesystem, Windows will allow the mounting of the device for read-only access. Windows Server 2008 will automatically detect the write disabled state of the device and mark the disk attribute as read-only (note that the volume attribute will not be set to read-only).
By automatically marking the disk as read-only, any options to manipulate the disk or volume from disk
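A brief sketch of how these states can be observed from each side, assuming a Symmetrix ID of `1234`, a device name of `0ABC`, and disk number 2 on the host (all placeholders). Solutions Enabler reports the array-side device state, and on Windows Server 2008 the diskpart `attributes` command exposes the read-only disk attribute that the operating system sets for a WD device:

```shell
REM Array side: show the device; the status field reports the
REM current state (RW or WD) of Symmetrix device 0ABC
symdev -sid 1234 show 0ABC

REM Host side (Windows Server 2008): inspect the disk attribute
REM that is set automatically when the device reports as WD
diskpart
DISKPART> select disk 2
DISKPART> attributes disk
```

If the array device is returned to RW, the read-only disk attribute can be cleared from the same diskpart context with `attributes disk clear readonly`.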


More information

Best Practices Guide: Network Convergence with Emulex LP21000 CNA & VMware ESX Server

Best Practices Guide: Network Convergence with Emulex LP21000 CNA & VMware ESX Server Best Practices Guide: Network Convergence with Emulex LP21000 CNA & VMware ESX Server How to deploy Converged Networking with VMware ESX Server 3.5 Using Emulex FCoE Technology Table of Contents Introduction...

More information

IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE

IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE White Paper IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE Abstract This white paper focuses on recovery of an IBM Tivoli Storage Manager (TSM) server and explores

More information

EMC VNXe HIGH AVAILABILITY

EMC VNXe HIGH AVAILABILITY White Paper EMC VNXe HIGH AVAILABILITY Overview Abstract This white paper discusses the high availability (HA) features in the EMC VNXe system and how you can configure a VNXe system to achieve your goals

More information

High Performance Tier Implementation Guideline

High Performance Tier Implementation Guideline High Performance Tier Implementation Guideline A Dell Technical White Paper PowerVault MD32 and MD32i Storage Arrays THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS

More information

EMC MID-RANGE STORAGE AND THE MICROSOFT SQL SERVER I/O RELIABILITY PROGRAM

EMC MID-RANGE STORAGE AND THE MICROSOFT SQL SERVER I/O RELIABILITY PROGRAM White Paper EMC MID-RANGE STORAGE AND THE MICROSOFT SQL SERVER I/O RELIABILITY PROGRAM Abstract This white paper explains the integration of EMC Mid-range Storage arrays with the Microsoft SQL Server I/O

More information

Configuring HP LeftHand Storage with Microsoft Windows Server

Configuring HP LeftHand Storage with Microsoft Windows Server Technical white paper Configuring HP LeftHand Storage with Microsoft Windows Server Table of contents Introduction 3 Target audience 3 Connecting Windows server to HP LeftHand volumes 3 Assigning a VIP

More information

HP StoreVirtual DSM for Microsoft MPIO Deployment Guide

HP StoreVirtual DSM for Microsoft MPIO Deployment Guide HP StoreVirtual DSM for Microsoft MPIO Deployment Guide HP Part Number: AX696-96254 Published: March 2013 Edition: 3 Copyright 2011, 2013 Hewlett-Packard Development Company, L.P. 1 Using MPIO Description

More information