EMC PowerPath Family Version 5.7 Product Guide (P/N 300-014-261 REV 04)


2 Copyright , EMC Corporation. All rights reserved. Published in the USA. Published January 2014 EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC 2, EMC, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC Snap, EMC SourceOne, EMC Storage Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, ClaimPack, ClaimsEditor, CLARiiON, ClientPak, Codebook Correlation Technology, Common Information Model, Configuration Intelligence, Connectrix, CopyCross, CopyPoint, CX, Dantz, Data Domain, DatabaseXtender, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, elnput, E-Lab, Xaminer, Xtender, Enginuity, eroom, Event Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, InfoMover, Infoscape, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Max Retriever, MediaStor, MirrorView, Navisphere, NetWorker, OnAlert, OpenScale, PixTools, Powerlink, PowerPath, PowerSnap, QuickScan, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, SafeLine, SAN Advisor, SAN Copy, SAN Manager, Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix 
DMX, Symmetrix VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale, Unisphere, Viewlets, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, VisualSAN, VisualSRM, VMAX, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, WebXtender, xpression, xpresso, YottaYotta, the EMC logo, and the RSA logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. Vblock is a trademark of EMC Corporation in the United States. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC online support website.

CONTENTS

Preface

Chapter 1  Introduction
    About PowerPath Multipathing licenses
        CLARiiON AX-series support
        License key ordering and activation
    About this document
    PowerPath and related documentation
        PowerPath documentation
        Storage system documentation
        Other documentation

Chapter 2  PowerPath Overview
    Introduction
        Path management
        PowerPath features
    Using multiple ports
        Paths
        Active-active, active-passive, and ALUA storage systems
        Path sets
        Native devices
        Pseudo devices
    Dynamic multipath load balancing
        Load balancing with and without PowerPath
        Load balancing and failover policies
    Automatic path failover
    Proactive path testing and automatic path restoration
        Path states
        When are path tests done?
        Periodic testing of live paths
        Periodic testing and autorestore of dead paths
        How often are paths tested?
    Application tuning in a PowerPath environment
        Channel groups
    PowerPath management tools
        PowerPath CLI

Chapter 3  PowerPath Configuration Requirements
    PowerPath connectivity
        HBA and transport considerations
        High availability
        Fibre Channel configuration requirements
        FCoE configuration requirements
        High availability configurations
        iSCSI configuration requirements
        Sample iSCSI configurations
    Storage configuration requirements and recommendations
        All storage systems
        Symmetrix and VMAX storage systems
        Supported Hitachi TagmaStore, HP StorageWorks XP, and IBM ESS storage systems
        Supported HP StorageWorks EVA storage systems
        VNX and CLARiiON storage systems
        Invista storage devices
        VPLEX storage devices
    Dynamic reconfiguration
        Hot swapping an HBA

Appendix A  PowerPath Standard Edition
    PowerPath SE functionality
    Installing PowerPath SE
    Using PowerPath SE

Appendix B  PowerPath Family Functionality Summary

Appendix C  PowerPath Family End-of-Life Summary

Glossary

Index

FIGURES

1   Without PowerPath: One path to each logical device
2   With PowerPath: Multiple paths to each logical device
3   Path sets
4   Native devices
5   Pseudo devices
6   I/O queuing without PowerPath
7   I/O queuing with PowerPath
8   Physical I/O path failure points
9   Channel groups
10  Highly available Fibre Channel configuration with PowerPath
11  High-availability (multiple-fabric) Fibre Channel configuration
12  Single-switch Fibre Channel configuration
13  High-availability (multiple-fabric) Fibre Channel over Ethernet configuration
14  High-availability (multiple-fabric) Fibre Channel over Ethernet to active-passive arrays
15  Single NIC/HBA configuration
16  Multiple NICs/HBAs to multiple subnets
17  Multiple NICs/HBAs to one subnet
18  PowerPath SE supported configuration


TABLES

1   PowerPath Multipathing licenses
2   PowerPath documentation set
3   Reconfiguring pseudo devices
4   PowerPath Family functionality summary by version and platform
5   PowerPath end-of-life summary


PREFACE

As part of an effort to improve its product lines, EMC periodically releases revisions of its software and hardware. Therefore, some functions described in this document might not be supported by all versions of the software or hardware currently in use. The product release notes provide the most up-to-date information on product features. Contact your EMC representative if a product does not function properly or does not function as described in this document.

Note: This document was accurate at publication time. New versions of this document might be released on the EMC online support website. Check the EMC online support website to ensure that you are using the latest version of this document.

Purpose and audience

This document is part of the PowerPath documentation set, and is intended for use by storage administrators and other information system professionals responsible for using, installing, and maintaining PowerPath. Readers of this manual are expected to be familiar with the host operating system on which PowerPath runs, storage system management, and the applications used with PowerPath. This Product Guide covers PowerPath version 5.7 and its updates.

Note: This manual applies to PowerPath on all supported platforms and storage systems, unless indicated otherwise in the text.
Related documentation

The PowerPath documentation set includes:

EMC PowerPath Family Product Guide (this document)
EMC PowerPath Family CLI and System Messages Reference Guide
EMC PowerPath for AIX Installation and Administration Guide
EMC PowerPath for HP-UX Installation and Administration Guide
EMC PowerPath for Linux Installation and Administration Guide
EMC PowerPath for Solaris Installation and Administration Guide
EMC PowerPath and PowerPath/VE for Windows Installation and Administration Guide
EMC PowerPath Family for AIX Release Notes
EMC PowerPath Family for HP-UX Release Notes
EMC PowerPath Family for Solaris Release Notes
EMC PowerPath Family for Linux Release Notes

EMC PowerPath and PowerPath/VE Family for Windows Release Notes
EMC PowerPath Encryption with RSA User Guide
EMC PowerPath Migration Enabler User Guide
EMC PowerPath Management Pack for Microsoft Operations Manager User Guide

These PowerPath manuals are updated periodically. Electronic versions of the updated manuals are available on the EMC Online Support site.

If your environment includes Symmetrix and VMAX storage systems, refer also to the EMC host connectivity guides, which are available on EMC Online Support. If your environment includes CLARiiON storage systems, refer also to the following:

EMC host connectivity guides
VNX Storage System Support website
CLARiiON Storage-System Support website

Revision history

The following table presents the revision history of this document.

Revision  Date               Description
04        January 21, 2014   Release of PowerPath 5.7 SP2 for Windows. Support for XtremIO firmware version 2.2 and later.
03        June 28, 2013      Addition of new Microsoft Cluster Server (MSCS) functionality with PowerPath Migration Enabler, beginning with PowerPath 5.7 SP1 for Windows. Modifications to the section PowerPath Migration Enabler.
02        February 22, 2013  Modifications to the following sections: About PowerPath Multipathing licenses on page 14; Virtual technologies on page 22; Remote monitoring and management on page 22; PowerPath Migration Enabler on page 23; PowerPath Family Functionality Summary on page 59; PowerPath Family End-of-Life Summary.
01        September 6, 2012  Release of the PowerPath Family 5.7 Product Guide.

Conventions used in this document

EMC uses the following conventions for special notices:

CAUTION, used with the safety alert symbol, indicates a hazardous situation which, if not avoided, could result in minor or moderate injury.

NOTICE is used to address practices not related to personal injury.

Note: A note presents information that is important, but not hazard-related.

IMPORTANT: An important notice contains information essential to software or hardware operation.

Typographical conventions

EMC uses the following type style conventions in this document:

Normal          Used in running (nonprocedural) text for:
                - Names of interface elements, such as names of windows, dialog boxes, buttons, fields, and menus
                - Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, and utilities
                - URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, and notifications

Bold            Used in running (nonprocedural) text for names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, and man pages
                Used in procedures for:
                - Names of interface elements, such as names of windows, dialog boxes, buttons, fields, and menus
                - What the user specifically selects, clicks, presses, or types

Italic          Used in all text (including procedures) for:
                - Full titles of publications referenced in text
                - Emphasis, for example, a new term
                - Variables

Courier         Used for:
                - System output, such as an error message or script
                - URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold    Used for specific user input, such as commands

Courier italic  Used in procedures for:
                - Variables on the command line
                - User input variables

< >             Angle brackets enclose parameter or variable values supplied by the user
[ ]             Square brackets enclose optional values
|               Vertical bar indicates alternate selections; the bar means "or"
{ }             Braces enclose content that the user must specify, such as x or y or z
...             Ellipses indicate nonessential information omitted from the example

Where to get help

EMC support, product, and licensing information can be obtained as follows:

Product information

For documentation, release notes, software updates, or information about EMC products, licensing, and service, go to the EMC online support site (registration required).

Technical support

For technical support, go to EMC online support and select Service Center. Note that to open a service request, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.

Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your opinions of this document to: [email protected]

CHAPTER 1
Introduction

This chapter introduces PowerPath and provides a road map of PowerPath and related documentation. Topics include:

About PowerPath Multipathing licenses
PowerPath and related documentation

About PowerPath Multipathing licenses

The EMC PowerPath Multipathing license type determines the PowerPath functionality available. Table 1 on page 14 summarizes the PowerPath licenses. The E-Lab Interoperability Navigator and PowerPath Family Release Notes provide supported storage system and OS details. As of PowerPath 5.7 for Windows and 5.7 SP1 for Linux, PowerPath Migration Enabler does not require a separate product license.

Table 1  PowerPath Multipathing licenses

                    PowerPath [a]     PowerPath SE [b]     PowerPath/VE

Supported storage systems (Fibre Channel, Fibre Channel over Ethernet, and iSCSI [c])
  Symmetrix         Yes               Yes                  Yes
  VNX               Yes               Yes                  Yes
  CLARiiON          Yes               Yes                  Yes
  Invista           Yes               Yes                  Yes
  VPLEX             Yes               Yes                  Yes
  XtremIO           Yes               Yes                  Yes
  Third-party       Yes               No                   Yes

Supported operating systems
  Linux             Yes               Yes                  N/A
  UNIX              Yes               Yes                  No
  VMware vSphere    No                No                   Yes [d]
  Windows           Yes [e]           Yes                  Yes [f]

Features
  Failover          End-to-end failover.          Backend failover only. [g]   End-to-end failover.
  Load balancing    Yes                           No                           Yes
  HBA support       Two or more HBAs.             Single HBA.                  Two or more HBAs.
  Path support      32 paths per logical device.  Two paths only.              32 paths per logical device.

a. Formerly called PowerPath Enterprise, PowerPath Enterprise Plus.
b. PowerPath SE, PowerPath Fabric Failover, and Utility Kit PowerPath refer to the same product.
c. The EMC Support Matrix PowerPath Family Protocol Support, available on EMC Online Support, provides protocol support information for the EMC PowerPath family of products.
d. PowerPath/VE for VMware vSphere. Linux and Windows are supported as guest OSs on VMware vSphere. See the PowerPath/VE for VMware vSphere Release Notes for supported OS versions. Additional versions are supported through RPQ.
e. Windows 2008 Hyper-V requires a separate license.
f. PowerPath/VE for Windows Hyper-V.
g. PowerPath Fabric Failover only for Symmetrix. PowerPath Storage Processor Failover only for VNX and CLARiiON.

CLARiiON AX-series support

Certain versions of PowerPath (as listed in the E-Lab Interoperability Navigator on EMC Online Support) provide full functionality with or without a license when the host is connected exclusively to CLARiiON AX-series storage systems. Note that AX models earlier than CLARiiON AX4-5 are not supported with PowerPath for AIX or HP-UX.

License key ordering and activation

Effective February 15, 2011, the default delivery method for PowerPath licenses is electronic rather than the physical Right To Use (RTU) card. An electronic License Authorization Code (LAC) is sent by email and is used to redeem the license key on the EMC Online Support Licensing Service Center. This does not affect upgrades, because PowerPath retains existing license information. Physical RTU cards are still available as an option. EMC Global Support can provide more information. The EMC PowerPath Family Electronic License Ordering Process Technical Notes, available on EMC Online Support, provides more information about the PowerPath license electronic ordering process.

About this document

This document provides general feature information about PowerPath version 5.7 on the supported platforms. The exceptions to 5.7 support information are PowerPath Family Functionality Summary on page 59 and PowerPath Family End-of-Life Summary on page 63, which contain feature information about previous PowerPath releases. PowerPath/VE for VMware vSphere information is not included in this document. Table 2, PowerPath documentation set, on page 16 provides information on PowerPath/VE for VMware vSphere documentation.

PowerPath and related documentation

This section includes information about the PowerPath documentation set and about other related documentation.

PowerPath documentation

Table 2 on page 16 shows the PowerPath documentation set.

Table 2  PowerPath documentation set (page 1 of 2)

PowerPath and PowerPath/VE for Windows Installation and Administration Guide
    Describes how to install and remove the PowerPath and PowerPath/VE for Microsoft Hyper-V software and install and configure PowerPath and PowerPath/VE in Microsoft cluster environments. Discusses other issues and administrative tasks specific to PowerPath and PowerPath/VE on a Windows host. [a]

PowerPath for AIX Installation and Administration Guide
    Describes how to install and remove the PowerPath software, install and configure PowerPath in AIX cluster environments, and configure a PowerPath device as the boot device. Discusses other issues and administrative tasks specific to PowerPath on an AIX host.

PowerPath for HP-UX Installation and Administration Guide
    Describes how to install and remove the PowerPath software, install and configure PowerPath in HP-UX cluster environments, and configure a PowerPath device as the boot device. Discusses other issues and administrative tasks specific to PowerPath on an HP-UX host.

PowerPath for Linux Installation and Administration Guide
    Describes how to install and remove PowerPath on a Linux host and configure a PowerPath device as the boot device. [a] Discusses other issues and administrative tasks specific to PowerPath on a Linux host.

PowerPath for Solaris Installation and Administration Guide
    Describes how to install and remove the PowerPath software, install and configure PowerPath in Solaris cluster environments, and configure a PowerPath device as the boot device. Discusses other issues and administrative tasks specific to PowerPath on a Solaris host.

PowerPath and PowerPath/VE for VMware vSphere Installation and Administration Guide
    Describes how to install and remove the PowerPath/VE software and configure a PowerPath/VE device as the boot device. Discusses other issues and administrative tasks specific to PowerPath/VE on a VMware vSphere host.

PowerPath Family Product Guide (this document)
    Describes the load-balancing and failover features and configuration requirements of PowerPath and PowerPath/VE.

PowerPath Family CLI and System Messages Reference Guide
    Describes the command line utility used to monitor and manage a PowerPath environment. Discusses messages returned by the PowerPath driver, PowerPath installation process, powermt utility, and other PowerPath utilities, and suggests how to respond to them.

PowerPath Family Release Notes
    Describes hardware and software requirements for the host and storage systems for your PowerPath physical and, where applicable, virtual multipathing environment; describes hardware and software requirements for Migration Enabler, as well as known issues and limitations; and describes new features, configuration information, and supplemental information about PowerPath Encryption with RSA.

    Note: PowerPath Encryption with RSA requires a separate license key.

Table 2  PowerPath documentation set (page 2 of 2)

PowerPath Migration Enabler User Guide
    Describes how to migrate data from one storage system to another with Migration Enabler and Open Replicator, TimeFinder/Clone, Host Copy, and Encapsulation.

PowerPath Encryption with RSA User Guide
    Describes the PowerPath Encryption with RSA product, how to configure a PowerPath Encryption with RSA environment, how to enable and disable encryption, and how to encrypt data. PowerPath Encryption with RSA requires a separate product license.

PowerPath Management Pack for Microsoft Operations Manager User Guide
    Contains information about EMC PowerPath Management Pack 2.0 for MOM (Microsoft Operations Manager) 2005, SCOM (System Center Operations Manager) 2007, and System Center Operations Manager. Provides installation and configuration procedures for the Windows SNMP Service, describes events supported by the management pack, and gives use cases and troubleshooting tips. This document is not available in the PowerPath documentation library on EMC Online Support; rather, it is packaged with the software and is available with the software on EMC Online Support.

a. The PowerPath installer has been localized for this platform. Localized versions of the Installation and Administration Guide are available in Brazilian Portuguese, French, German, Italian, Korean, Japanese, Latin American Spanish, and Simplified Chinese.
Storage system documentation

These manuals are updated periodically and posted on the EMC Online Support site. The following documentation provides information on setting up your storage systems for PowerPath:

EMC host connectivity guides (available on the EMC Online Support site)
VNX Storage System Support website
CLARiiON Storage-System Support website
The EMC Product Guide or Configuration Planning Guide for your storage-system model
The EMC Installation Guide and vendor documentation for your HBA (host bus adapter)

Other documentation

Some PowerPath features can be administered through other EMC applications:

EMC ControlCenter Overview provides information on using PowerPath with EMC ControlCenter.
A subset of PowerPath functions is available through the Unisphere application for VNX Operating Environment (OE) systems. Refer to the VNX OE Storage System website.

A subset of PowerPath functions is available through the Navisphere and Unisphere applications for CLARiiON systems. Refer to the CLARiiON Storage-System Support website.
EMC Invista documentation, available on the EMC Online Support site.
EMC VPLEX documentation, available on the EMC Online Support site.

CHAPTER 2
PowerPath Overview

This chapter is an overview of PowerPath. Topics include:

Introduction
Using multiple ports
Dynamic multipath load balancing
Automatic path failover
Proactive path testing and automatic path restoration
Application tuning in a PowerPath environment
PowerPath management tools

Introduction

PowerPath is host-based software that provides path management. PowerPath operates with several storage systems, on several operating systems, with Fibre Channel and iSCSI data channels, and, on Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 non-clustered hosts only, with parallel SCSI channels.

Path management

PowerPath works with the storage system to intelligently manage I/O paths. Path refers to the physical route between a host and a storage system logical unit (LU). This includes the host bus adapter (HBA) port, cables, a switch, a storage system interface and port, and an LU. LU refers to a physical or virtual device addressable as a single storage volume behind a storage system target. For the iSCSI standard, a path is the Initiator-Target-LUN (ITL) nexus and encompasses the connection between the HBA, storage port, and LUN.

Bus refers to two connected SAN edge points (for example, Fibre Channel fabric N-port addresses) in the storage configuration: an HBA port on the server on one end and an array port on the other. For the iSCSI standard, a bus is the Initiator-Target (IT) nexus. This differs from a storage path, which refers to a host's end-to-end storage connection with an LU. Typically, multiple storage paths traverse a single bus.

PowerPath supports multiple paths to a logical device, enabling PowerPath to provide:

Automatic failover in the event of a hardware failure. PowerPath automatically detects path failure and redirects I/O to another path.

Dynamic multipath load balancing. PowerPath distributes I/O requests to a logical device across all available paths, thus improving I/O performance and reducing management time and downtime by eliminating the need to configure paths statically across logical devices.

PowerPath path management features and functionality are described in this guide.
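The failover and load-balancing behavior described above can be pictured with a small model. This is an illustrative sketch only, not PowerPath code: the class names, the least-queued selection rule, and the failure simulation are all hypothetical.

```python
class Path:
    """One physical route (HBA port through to array port) to an LU."""
    def __init__(self, hba, array_port):
        self.hba = hba                # HBA port on the host
        self.array_port = array_port  # port on the storage system
        self.alive = True
        self.pending = 0              # I/Os currently queued on this path

class PathSet:
    """All paths from one host to one logical device."""
    def __init__(self, paths):
        self.paths = paths

    def select(self):
        # Load balancing: pick the live path with the fewest queued I/Os.
        live = [p for p in self.paths if p.alive]
        if not live:
            raise IOError("all paths to the logical device are dead")
        return min(live, key=lambda p: p.pending)

    def send_io(self):
        # Automatic failover: if the chosen path fails, mark it dead
        # and retry the same I/O on another live path.
        while True:
            path = self.select()
            try:
                return self._transmit(path)
            except IOError:
                path.alive = False

    def _transmit(self, path):
        if not path.alive:
            raise IOError("path failure")
        path.pending += 1
        # a real driver would issue the SCSI command here
        path.pending -= 1
        return path

ps = PathSet([Path("hba0", "spa0"), Path("hba1", "spb0")])
first = ps.send_io()       # balanced across both live paths
ps.paths[0].alive = False  # simulate a cable or switch failure on hba0
print(ps.send_io().hba)    # hba1: I/O is redirected to the surviving path
```

The point of the sketch is that path selection and failure handling live in one layer below the application, which is why no static per-device path configuration is needed.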
PowerPath features

PowerPath features include:

Multiple paths, for higher availability and performance. PowerPath supports multiple paths between a logical device and a host. Having multiple paths enables the host to access a logical device even if a specific path is unavailable. Also, multiple paths can share the I/O workload to a given logical device.

Path management insight capabilities. PowerPath characterizes I/O patterns and aids in diagnosing I/O problems due to flaky paths or unexpected latency values. Metrics are provided on:
  - Read and write MB/second per LUN
  - Latency distribution: high and low water marks per path
  - Retries: number of I/Os that did not succeed down a path

PowerPath also defines and measures performance on I/O throughput, fault detection, and path restore. Three new CLI commands (powermt set perfmon, powermt display perf, and powermt display perf bus) provide this information. The EMC PowerPath Family CLI and System Messages Reference Guide provides more information. The commands associated with path management insight may cause some performance overhead; however, EMC qualifications and in-house testing have shown no measurable impact on performance.

Expanded support for standby: autostandby. An autostandby feature has been added to the standby feature to automatically put paths that have intermittent I/O failures (also known as flaky paths) into autostandby, and/or to automatically select autostandby for high-latency paths in VPLEX cross-connected Metro configurations. The EMC PowerPath Family CLI and System Messages Reference Guide provides more information.

Automatic host-array registration. Host Registration for EMC arrays is a process that sends special I/O commands down each path to an EMC array to register the initiators. The information sent in the registration includes the initiator type, hostname, and IP address. This information is used by the array to determine the correct SCSI behavior to present to the host and to display hosts for connecting to storage. Host Registration is carried out automatically and simplifies the configuration process.

Installation and administration features:

  Unattended installation. Unattended PowerPath installation uses command-line parameters that do not require any user input. The PowerPath Installation and Administration Guide for your platform provides more information.

  NRU (no reboot upgrade). You need not restart the host after the upgrade, provided you close all applications that use PowerPath devices before you install PowerPath. The PowerPath Installation and Administration Guide for your platform provides platform-specific information on NRU.

  R1/R2 boot. If a storage system device corresponding to a bootable emcpower device is mirrored through SRDF, it is possible, in the event of a server failure at the local storage system, for PowerPath to fail over the boot disk to the remote mirror disk and then boot the server on an identical remote host. The PowerPath Installation and Administration Guide for your platform provides platform-specific information on R1/R2 boot.
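The per-path latency water marks and the flaky-path autostandby behavior described above can be modeled roughly as follows. This is a hedged sketch: the failure threshold and all names are invented for illustration and do not reflect PowerPath's actual heuristics or data structures.

```python
class PathStats:
    """Per-path performance bookkeeping (hypothetical model)."""

    FLAKY_THRESHOLD = 3   # assumed: failures before autostandby kicks in

    def __init__(self, name):
        self.name = name
        self.low_water = None    # fastest observed I/O latency (ms)
        self.high_water = None   # slowest observed I/O latency (ms)
        self.failures = 0        # I/Os that did not succeed down this path
        self.autostandby = False

    def record(self, latency_ms=None, failed=False):
        if failed:
            self.failures += 1
            if self.failures >= self.FLAKY_THRESHOLD:
                # An intermittently failing ("flaky") path is taken out
                # of normal rotation but kept available as a standby.
                self.autostandby = True
            return
        if self.low_water is None or latency_ms < self.low_water:
            self.low_water = latency_ms
        if self.high_water is None or latency_ms > self.high_water:
            self.high_water = latency_ms

p = PathStats("hba0:spa0")
for ms in (1.2, 0.8, 5.0):
    p.record(latency_ms=ms)
print(p.low_water, p.high_water)   # 0.8 5.0
for _ in range(3):
    p.record(failed=True)
print(p.autostandby)               # True
```

In real deployments these statistics are read with the powermt display perf commands named above rather than computed by hand.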

Virtual technologies

PowerPath supports the following virtual technologies:

Hyper-V. PowerPath supports Hyper-V with Windows Server 2012 and Windows 64-bit. Specifically:

  On the parent partition, all the features supported in physical environments are supported in the virtual environment:
  - Multipathing load balancing and failover functionality
  - Migrations
  - Encryption
  - MSCS clusters
  - iSCSI LUNs
  - Remote monitoring and management
  - Fibre Channel LUNs

  On the child partition, the following features are supported:
  - PowerPath installation for Windows operating systems supported by PowerPath, for iSCSI LUNs exposed through the Microsoft software iSCSI initiator
  - Multipathing functionality only for iSCSI LUNs exposed through the Microsoft software iSCSI initiator

  The E-Lab Interoperability Navigator provides updated Hyper-V support information. The PowerPath and PowerPath/VE Family for Windows Release Notes provides information on supported child partition operating systems.

Remote monitoring and management. PowerPath supports remote monitoring and management of the PowerPath environment through compatibility with the following software products:

  EMC PowerPath Viewer. EMC PowerPath Viewer is a centralized monitoring utility that provides a consolidated display of events and allows you to view and monitor up to 1000 PowerPath hosts through a graphical user interface (GUI). You can view hosts, host groups, LUNs, individual paths to each LUN, and buses. PowerPath Viewer also provides alerts to any changes in the status of PowerPath devices. PowerPath Viewer includes two components: the PowerPath Viewer Console, which is the interface that displays information and alerts about the remote PowerPath hosts that you are monitoring, and the PowerPath Management Component, which is the host-based software that monitors host, LUN, data path, and bus events and sends information and alerts to the Viewer Console over IP.
  PowerPath Viewer is available as a separate download in the PowerPath software downloads section of EMC Online Support. The PowerPath Viewer 1.0 and Minor Releases Release Notes and the PowerPath Viewer Installation and Administration Guide, available on EMC Online Support, provide more information.

  Systems Management Server (SMS). SMS is a systems management software product by Microsoft for managing large groups of Windows-based computer systems. Configuration Manager, a feature of SMS, provides remote control, patch management, software distribution, operating system deployment, and hardware and software inventory.

  The PowerPath and PowerPath/VE Family for Windows Release Notes provides more information on SMS.

  SNMP management daemon. PowerPath for Windows supports a management daemon that monitors PowerPath devices and alerts the administrator when access to devices is disrupted. This functionality is delivered through System Center Operations Manager (SCOM) 2007 and Microsoft Operations Manager (MOM) 2005 management packs. The PowerPath Event Monitoring feature issues MOM alerts and SNMP traps for specific PowerPath events. The MOM alerts are viewable using the respective version of the MOM consoles. The SNMP traps are viewable through an SNMP manager. These events are generally multipathing events, such as Path is Dead. Apart from event management, it also implements a couple of MOM tasks that can be run to retrieve the version and license capabilities of PowerPath installations. The PowerPath Management Pack for Microsoft Operations Manager User Guide provides more information.

  A similar SNMP management daemon is supported on Linux, Solaris, and AIX. The PowerPath for Linux Installation and Administration Guide provides more information on the SNMP management daemon.

PowerPath Migration Enabler

PowerPath Migration Enabler is a host-based migration tool that allows you to migrate data between storage systems. PowerPath Migration Enabler is independent of PowerPath Multipathing and does not require that you use PowerPath for multipathing. PowerPath Migration Enabler works in conjunction with other underlying technologies, such as Open Replicator (OR) or Invista. As of version 5.7 and later for Windows and 5.7 SP1 and later for Linux, PowerPath Migration Enabler does not require a separate product license. The PowerPath Multipathing license includes Migration Enabler functionality and allows you to migrate data with Open Replicator, TimeFinder/Clone, Host Copy, and Encapsulation.
The PowerPath Migration Enabler features include:

Virtual encapsulation: When using Migration Enabler with Invista, Invista encapsulates the source-device name. The original Symmetrix, VNX, or CLARiiON element is the source logical unit in the migration, and the Invista Virtual Volume is the target. The PowerPath Migration Enabler User Guide provides information on migrating data to an Invista Virtual Volume. The PowerPath Family Release Notes for your platform provides information on supported Invista versions for virtual encapsulation.

Support for multiple migration sessions: You can use the -file and -all options of the powermig commands with Open Replicator, Host Copy, and TimeFinder/Clone to manage simultaneous migration sessions. You can use a file of migration pairs with the powermig setup command (-file option), or manage all migration sessions that are in the state required for a given powermig command (-all option). The EMC PowerPath Family CLI and System Messages Reference Guide provides more information.

PowerPath Migration Enabler with Open Replicator

When using Migration Enabler with Open Replicator for Symmetrix, data is copied through the fabric from the source logical unit to the target logical unit; the data movement is controlled by the Symmetrix system where the target resides. Migration Enabler mirrors I/O to keep the source and target logical units synchronized throughout the migration process. The PowerPath Migration Enabler User Guide provides information on Open Replicator.

PowerPath Migration Enabler Host Copy

When using Migration Enabler with host-based copy (also called Host Copy), Migration Enabler works in conjunction with the host operating system to migrate data from the specified source logical unit to the target logical unit. A Host Copy migration does not use or require a direct connection between the arrays containing the source and target logical units. Host Copy can be used to migrate plaintext data, or to migrate data to or from an encrypted logical unit. Because Host Copy migrations consume host resources, Migration Enabler provides parameters that allow you to control the degree of host-resource usage. The PowerPath Migration Enabler User Guide provides information on Migration Enabler Host Copy.

You can pause and resume a Host Copy migration that is in the Syncing state. Pausing a migration allows host resources to be released for other operations; the synchronization can then be resumed at a convenient later time. The PowerPath Migration Enabler User Guide provides information on Migration Enabler pause and resume.

PowerPath Migration Enabler TimeFinder/Clone

TimeFinder/Clone is a Solutions Enabler technology that can be used with Migration Enabler to create a full-volume copy of a source device when the source and target devices are in the same Symmetrix. The PowerPath Migration Enabler User Guide provides information on PowerPath Migration Enabler with TimeFinder/Clone.
PowerPath Migration Enabler in an MSCS environment

The Microsoft Cluster Server (MSCS) provides clustering technology that keeps server-based applications highly available, regardless of individual component failures. When migrating devices in an MSCS environment with PowerPath Migration Enabler, failover groups need not be disabled, and you do not have to shut down any nodes in the cluster to perform a cluster migration. PowerPath Migration Enabler supports multiple cluster migrations from multiple cluster nodes. You can also perform non-cluster migrations between non-cluster disks on any cluster node. The PowerPath Migration Enabler User Guide provides information on migrating devices in an MSCS environment.

Using the powermigcl command and its associated arguments, you can configure cluster resources for physical disks. The PowerPath Family CLI and System Messages Reference Guide provides additional information on configuring PowerPath Migration Enabler cluster resources.

Thin device support

PowerPath Migration Enabler supports migrations with thin (virtually provisioned) devices running Symmetrix Enginuity 5773, VNX OE, and CLARiiON FLARE and later. Devices on these Symmetrix and CLARiiON versions are auto-detected by Migration Enabler Host Copy during powermig setup. The PowerPath Family Release Notes for your platform provides support information on the Symmetrix Enginuity 5773, VNX OE, and CLARiiON FLARE and later microcode versions required for thin device support. The Migration Enabler section of the PowerPath Family Release Notes for your platform provides more information on Migration Enabler support of thin devices. The EMC Host Connectivity Guide for your platform and the E-Lab Interoperability Navigator, available on EMC Online Support, provide more background on thin devices.

Host Copy Ceiling

Host Copy Ceiling lets you specify an upper limit on the aggregate rate of copying for all Host Copy migrations. A powermig option command is used to set the ceiling value. The EMC PowerPath Migration Enabler User Guide provides more information.

Remote management on PowerPath Migration Enabler

PowerPath Migration Enabler supports the following remote management tools:

Solutions Enabler (SE) Thin Client: Allows you to run PowerPath's migration features using a remotely installed Solutions Enabler package, instead of having to run PowerPath Migration Enabler and Solutions Enabler on the same host. The Migration Enabler section of the PowerPath Family Release Notes for your platform provides more information on remote SE.

The following documents, available on EMC Online Support, provide more information about PowerPath Migration Enabler:

EMC PowerPath Migration Enabler User Guide
EMC PowerPath Family Release Notes

PowerPath Encryption with RSA

PowerPath Encryption with RSA is host-based software distributed as part of the PowerPath package.
A separate product license is needed to use PowerPath Encryption. PowerPath Encryption provides the following security benefits:

Ensures the confidentiality of data on a disk drive that is physically removed from a data center.

Prevents anyone who gains unauthorized access to the disk from reading or using the data on that device.

PowerPath Encryption uses strong encryption protocols to safeguard sensitive data on disk devices. It transparently encrypts data written to a disk device and decrypts data read from it. Some features available with PowerPath Encryption with RSA are:

Interoperability with PowerPath Migration Enabler software

Interoperability with Migration Enabler provides data migration capabilities. In a PowerPath Encryption environment, Migration Enabler migrates:

Plaintext data on an unencrypted logical unit to an encrypted virtual logical unit.
Encrypted data on a virtual logical unit to plaintext data on an unencrypted logical unit.
Encrypted data on a virtual logical unit to a different virtual logical unit (rekeying).

The PowerPath Migration Enabler User Guide provides general information on performing migrations.

Encryption and volume managers

PowerPath Encryption supports the Veritas Volume Manager (VxVM) on Windows hosts. You can use PowerPath Encryption encrypted virtual logical units to create LVM and SVM disk sets and VxVM disk groups, and to allocate volumes within the disk set or group. All LVM, SVM, and VxVM operations work with PowerPath Encryption encrypted virtual logical units, subject to the best practices described in the PowerPath Encryption with RSA User Guide. The Encryption with RSA section of the PowerPath Family Release Notes for your platform provides information on supported LVM, SVM, and VxVM versions for your environment.

Thin device support

PowerPath Encryption supports thin devices as described in PowerPath Migration Enabler on page 23. The Encryption with RSA section of the PowerPath Family Release Notes for your platform provides more information on thin devices.

Host Client Configuration

For encryption configuration and enablement, a utility is required for the host configuration files; this helps prevent errors during installation. PowerPath Encryption supports this utility.

Encryption with DPM appliance support

The Encryption with RSA section of the PowerPath Family Release Notes for your platform provides information on supported DPM appliances for your environment. The PowerPath Encryption User Guide provides information on the DPM appliance.
The following documents, available on the EMC Online Support site, contain more information about PowerPath Encryption:

EMC PowerPath Encryption with RSA User Guide
EMC PowerPath Family Release Notes

Using multiple ports

PowerPath can use multiple ports to each logical device. You can configure a logical device as a shared device using two or more interfaces. (An interface is, for example, a Fibre Adapter [FA] on a Symmetrix system or a Storage Processor [SP] on a VNX or CLARiiON system.) In this way, all logical devices can be visible on all ports, which enhances availability.

Paths

In the configuration shown in Figure 1 on page 27, without PowerPath, there can be at most one path to each logical device. A physical path comprises a route between a host and a logical device: the host bus adapter (HBA, a port on the host computer through which the host can issue I/O), cables, a switch, a storage system interface and port, and the logical device. Since there are two logical devices in the figure, there can be at most two interface ports and two HBAs: one port and one HBA per logical device.

Figure 1 Without PowerPath: One path to each logical device

Without PowerPath, the host's SCSI driver cannot take advantage of multiple paths to a logical device. This is because most operating systems view each path as a unique logical device, even when multiple paths lead to the same logical device; this can result in data corruption or a system crash. PowerPath eliminates this restriction. With PowerPath, you can take advantage of the multiple paths to a logical device that shared host and storage ports provide. The number of shared paths possible with fabric configurations is even greater. For example, PowerPath manages 1600 paths on a host with 4 HBAs connected via a fabric to 4 ports on a storage system with 100 logical devices (4 HBAs x 4 FAs x 100 logical devices = 1600 paths). In contrast to Figure 1 on page 27, Figure 2 on page 27 shows multiple paths to each logical device in a configuration with PowerPath.

Figure 2 With PowerPath: Multiple paths to each logical device

With PowerPath, both logical devices are accessible through both interface ports. This allows I/O to a logical device to flow through multiple paths. As shown, two paths lead to logical device 0 and two lead to logical device 1. PowerPath exploits the multipathing capability of storage systems. Depending on the capabilities of the storage system, PowerPath provides load-balanced or failure-resistant paths between a host and a logical device.
This allows PowerPath to:

Increase I/O throughput, by sending I/O requests targeted to the same logical device over multiple paths.

Prevent loss of data access, by redirecting I/O requests from a failed path to a working path.

Active-active, active-passive, and ALUA storage systems

PowerPath works with three types of storage systems:

Active-active: For example, Symmetrix, Celerra, IBM TotalStorage Enterprise Storage Server (ESS), Hitachi TagmaStore, HP StorageWorks XP systems, and HP StorageWorks EVA systems.

Active-passive: For example, VNX and CLARiiON systems.

ALUA (asymmetric logical unit access): For example, VNX and CLARiiON CX3 systems with FLARE version.

Your platform release notes provide information on the supported storage systems for your PowerPath environment.

Active-active

Active-active means all interfaces to a device are active simultaneously. In an active-active storage system, if there are multiple interfaces to a logical device, they all provide equal access to the logical device.

Active-passive

Active-passive means only one interface to a device is active at a time; any others are passive with respect to that device and waiting to take over if needed. In an active-passive storage system, if there are multiple interfaces to a logical device, one of them is designated as the primary route to the device; the device is assigned to that interface card. Typically, assigned devices are distributed equally among interface cards. I/O is not directed to paths connected to a nonassigned interface. Normal access to a device through any interface card other than its assigned one is either impossible (for example, on VNX and CLARiiON systems) or possible but much slower than access through the assigned interface card. In the event of a failure of an interface card, or of all paths to an interface card, logical devices must be moved to another interface. If an interface card fails, logical devices are reassigned from the broken interface to another interface.
This reassignment is initiated by the other, functioning interface. If all paths from a host to an interface fail, logical devices accessed on those paths are reassigned to another interface with which the host can still communicate. This reassignment is initiated by PowerPath, which instructs the storage system to make the reassignment. The VNX and CLARiiON term for these reassignments is trespassing. Reassignment can take several seconds to complete; however, I/Os do not fail during this time. After devices are reassigned, PowerPath detects the changes and seamlessly routes data through the new route.
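As a rough illustration of the trespass behavior described above, the following Python sketch models reassigning a logical device to a surviving interface when every path to its assigned interface fails. The class and names are hypothetical, not PowerPath code or APIs.

```python
# Minimal model of active-passive trespass: a device is assigned to one
# interface (SP); if every path to that interface dies, the device is
# reassigned ("trespassed") to an interface the host can still reach.
# All names here are illustrative, not PowerPath internals.

class Device:
    def __init__(self, name, assigned_sp):
        self.name = name
        self.assigned_sp = assigned_sp

def trespass_if_needed(device, paths):
    """paths: list of (sp, alive) tuples for this device."""
    live_sps = {sp for sp, alive in paths if alive}
    if device.assigned_sp in live_sps:
        return False                           # assigned interface still reachable
    if not live_sps:
        raise RuntimeError("all paths dead")   # I/O error returned to the host
    device.assigned_sp = sorted(live_sps)[0]   # reassign to a surviving interface
    return True

dev = Device("lun0", assigned_sp="SPA")
changed = trespass_if_needed(dev, [("SPA", False), ("SPA", False), ("SPB", True)])
print(changed, dev.assigned_sp)  # True SPB
```

In the real product, the reassignment happens inside the array at PowerPath's request, and I/O is held rather than failed while it completes.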

After a reassignment, logical devices can be reassigned (trespassed back, in VNX and CLARiiON terminology) to their originally assigned interface. This occurs automatically if PowerPath's periodic autorestore feature is enabled. (See Periodic testing and autorestore of dead paths on page 36.) It occurs manually if powermt restore is run; this is the faster approach. (See PowerPath CLI on page 39 for information on powermt commands.) Periodic autorestore reassigns logical devices only when restoring paths from a failed state. If paths to the default interface are not marked dead, you must use powermt restore. The PowerPath Family CLI and System Messages Reference Guide provides more information on powermt commands.

ALUA

Asymmetric logical unit access (ALUA) is an array failover mode, available on VNX OE and CLARiiON systems with FLARE version or later, in which one array controller is designated as the active/optimized controller and the other array controller is designated as the active/non-optimized controller. As long as the active/optimized controller is viable, I/O is directed to it. If the active/optimized array controller becomes unavailable or fails, I/O is directed to the active/non-optimized array controller. With ALUA, PowerPath trespasses a volume when all optimized paths to the volume go dead, even if no I/O is flowing to the volume. PowerPath implements this behavior to avoid a performance penalty on I/O. The E-Lab Interoperability Navigator provides the most recent qualification information.

Path sets

PowerPath groups all paths to the same logical device into a path set. PowerPath creates a path set for each logical device, and then populates the path set with all usable paths to that logical device. For the configuration in Figure 2 on page 27, with two logical devices, PowerPath creates two path sets, as shown in Figure 3 on page 29.
Each path set contains the same two physical paths, for a total of four logical paths.

Figure 3 Path sets

In an active-active system, once PowerPath creates a path set, it can use any path in the set to service an I/O request. If a path fails, PowerPath can redirect an I/O request from that path to any other viable path in the set. This redirection is transparent to the application, which does not receive an error. In an active-passive system, path sets are divided into two load-balancing groups. The active group contains all paths to the interface to which the target logical device is assigned; the other group contains all paths to the other, nonassigned interface. Only one load-balancing group processes I/O requests at a time, and PowerPath load balances I/O across all paths in the active group. If a path in the active load-balancing group fails, PowerPath redirects the I/O request to another path in the active group. If all paths in the active load-balancing group fail, PowerPath reassigns the logical device to the other interface, and then redirects the I/O request to a path in the newly activated group.

From an application's perspective, a path set appears as a single, highly available path to storage. PowerPath hides the complexity of paths in the set, between the host and the storage system. With the logical concept of a path set, PowerPath hides multiple HBAs, cables, ports, hubs, and switches. Applications such as DBMSs get the benefits of multiple I/O paths (faster I/O throughput and highly available data access) without the complexity of multiple paths or the vulnerability of single paths.

Native devices

This section discusses native devices.

Note: This section does not apply to all PowerPath-supported platforms and versions. On Windows, native devices are not exposed to users. On Linux, native device support varies by version; the PowerPath Family for Linux Release Notes provides information on native device support.

The operating system creates native devices to represent and provide access to logical devices. The device is native in that it is provided by the operating system for use with applications. A native device is path specific (as opposed to path independent) and represents a single path to a logical device. Figure 4 shows PowerPath's view of native devices.

Figure 4 Native devices

In the figure, there is a native device for each path. The storage system in the figure is configured with two shared logical devices, each of which can be accessed by four paths.
There are eight native devices: four (in white, numbered 0, 2, 4, and 6), each representing a unique path to logical device 0, and four (in black, numbered 1, 3, 5, and 7), each representing a unique path to logical device 1. These are not shared logical devices: a shared device is accessed by multiple hosts simultaneously.
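The grouping of path-specific native devices into per-logical-device path sets, described above, can be sketched in a few lines of Python. This is a conceptual model only; the native device names are illustrative and are not generated by PowerPath.

```python
# Conceptual model: group path-specific native devices by the logical
# device they lead to, producing one "path set" per logical device.
# Device names are illustrative only.
from collections import defaultdict

# (native device, logical device it reaches), as in Figure 4
native_devices = [
    ("sdc", "lun0"), ("sde", "lun0"), ("sdg", "lun0"), ("sdi", "lun0"),
    ("sdd", "lun1"), ("sdf", "lun1"), ("sdh", "lun1"), ("sdj", "lun1"),
]

def build_path_sets(devices):
    path_sets = defaultdict(list)
    for native, logical in devices:
        path_sets[logical].append(native)   # all usable paths to that device
    return dict(path_sets)

sets = build_path_sets(native_devices)
print(len(sets["lun0"]), len(sets["lun1"]))  # 4 4
```

Eight per-path devices collapse into two path sets of four paths each, matching the figure.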

How applications access native devices

You need not reconfigure applications to use native devices; you simply use the existing disk devices created by the operating system. When using native devices, PowerPath is transparent to applications. PowerPath maintains the correspondence between an individual native device and the path set to which it belongs. On Linux, you have the option of either using native devices (with no conversion of applications) or converting to pseudo devices (see Pseudo devices on page 31).

Note: Native device support for Linux varies by version; the PowerPath Family for Linux Release Notes provides information on native device support.

Example: Suppose you have three native devices in a path set. PowerPath maintains the association among these paths. When an application writes to any one of them, PowerPath redirects the I/O to whichever native device in the path set will provide the best throughput. Also, a problem with one native device does not disrupt data access. Instead, PowerPath shifts I/O processing to another native device in the path set, allowing applications to continue reading from and writing to native devices in the same path set.

Pseudo devices

A PowerPath pseudo device represents a single logical device and the path set leading to it, which can contain any number of physical paths. There is one (and only one) pseudo device per path set. For example, in Figure 5 on page 31, logical devices 0 and 1 are referred to by the pseudo device names emcpower1c and emcpower2c, respectively. Each pseudo device represents the set of paths connected to its respective logical device: emcpower1c represents the set of paths connected to logical device 0, and emcpower2c represents the set of paths connected to logical device 1.

Figure 5 Pseudo devices

Table 3 describes whether applications need to be reconfigured to use pseudo devices.

Table 3 Reconfiguring pseudo devices

Platform    Must applications be reconfigured to use pseudo devices?
Windows     No
Linux       Yes (including filesystem mounting tables and volume managers)

Windows note

As shown in Figure 5 on page 31, Windows users see only pseudo devices, not native devices. On Windows, each logical device has one name. These names follow standard Windows naming conventions and appear like any other devices on a Windows system. These standard devices are pseudo because they are path independent. They are also native because, although created by PowerPath, they cannot be differentiated from devices created by the operating system.

Dynamic multipath load balancing

Without PowerPath, you must statically load balance paths to logical devices to improve performance. For example, based on current usage, you might configure three heavily used logical devices on one path, seven moderately used logical devices on a second path, and 20 lightly used logical devices on a third path. As usage changes, these statically configured paths may become unbalanced, causing performance to suffer. You must then reconfigure the paths, and continue to reconfigure them as I/O traffic between the host and the storage system shifts in response to usage changes.

PowerPath tries to maintain maximum performance and reduce management overhead through dynamic load balancing. PowerPath is designed to use all paths at all times. It distributes I/O requests to a logical device across all available paths, rather than requiring a single path to bear the entire I/O burden. (On active-passive storage systems, available paths are those leading to the active SP for each logical device.) PowerPath can distribute the I/O for all logical devices over all paths shared by those logical devices, so all paths are equally burdened. PowerPath load balances I/O on a host-by-host basis and maintains statistics on all I/O for all paths.
For each I/O request, PowerPath intelligently chooses the least-burdened available path, depending on the load-balancing and failover policy in effect. If an appropriate policy is specified, all paths in a PowerPath system carry approximately the same load. PowerPath uses all the I/O processing and bus capacity of all paths; a path need never be overloaded and slow while other paths are idle. In addition to improving I/O performance, dynamic load balancing reduces management time and downtime, because administrators no longer need to configure paths statically across logical devices. With PowerPath, no setup time is required, and paths are always configured for optimum performance.
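The least-burdened selection described above can be reduced to a toy model in Python. This is illustrative only; PowerPath's actual policies weigh more statistics than a simple outstanding-request count.

```python
# Toy model of least-burdened path selection: send each I/O down the
# available path with the fewest outstanding requests. Illustrative only;
# real load-balancing policies consider more than queue depth.

def choose_path(paths):
    """paths: dict mapping path name -> outstanding I/O count (available paths)."""
    return min(paths, key=paths.get)

outstanding = {"hba0:fa0": 3, "hba0:fa1": 1, "hba1:fa0": 5, "hba1:fa1": 1}
for _ in range(4):                     # dispatch four I/Os
    p = choose_path(outstanding)
    outstanding[p] += 1                # that path now carries one more request
print(outstanding)
```

Each new I/O lands on whichever path is currently lightest, so the counts converge instead of piling onto one path, which is the effect the static configuration in the preceding paragraph cannot achieve.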

Load balancing with and without PowerPath

Figure 6 on page 33 shows I/O queuing on a host without PowerPath installed. The paths are out of balance.

Figure 6 I/O queuing without PowerPath

Figure 7 on page 33 shows I/O queuing on a host with PowerPath installed. I/O is balanced across all available paths.

Figure 7 I/O queuing with PowerPath

Load balancing and failover policies

PowerPath confers the greatest benefit in environments that are pathbound. In a pathbound I/O environment, the time it takes to execute the I/O load from a particular job is limited by the bus capacity of a given path. In pathbound environments, enough I/O regularly queues up on a single path to overload it. By spreading the load evenly across the paths, PowerPath significantly improves I/O performance. PowerPath selects a path for each I/O request according to the load-balancing and failover policy set by the administrator for that logical device.

Note: Unlicensed versions of PowerPath support EMC arrays only. This configuration is supported only if the host has a single HBA. It is also referred to as PowerPath/SE (PowerPath Standard Edition on page 55 provides more information). With third-party arrays in an unlicensed PowerPath environment, unmanage the third-party array class (powermt unmanage class=class) or upgrade to a licensed version of PowerPath.

Table 1 on page 14 provides a summary of the platform, array, and feature support available with each type of PowerPath license. The PowerPath Family CLI and System Messages Reference Guide, available on EMC Online Support, provides more information on all load-balancing and failover policies.

Automatic path failover

PowerPath enhances application availability by eliminating the I/O path as a point of failure. Figure 8 on page 34 identifies points of failure in the I/O path:

HBA/NIC
Interconnect (cable and patch panel)
Switch
Interface
Interface port

Figure 8 Physical I/O path failure points

With the proper hardware configuration, PowerPath can compensate for the failure of any of these components. If a path fails, PowerPath redistributes I/O traffic from that path to functioning paths. PowerPath stops sending I/O to the failed path and checks for an active alternate path. If an active path is available, PowerPath redirects I/O along that path. If no active paths are available, alternate, standby paths (if available) are brought into service, and I/O is routed along them. On active-passive storage systems, all paths to the active SP are used before any paths to the passive SP. PowerPath continues testing the failed path. If the path passes the test, PowerPath resumes using it. This path failover and failure recovery process is transparent to applications. (Occasionally, however, there is a short delay.)

Proactive path testing and automatic path restoration

The PowerPath multipath module is responsible for selecting the best path to a logical device to optimize performance and for protecting applications from path failures. It detects failed paths and retries failed application I/O requests on other paths.

To determine whether a path is operational, PowerPath uses a path test. A path test is a sequence of I/Os PowerPath issues specifically to ascertain the viability of a path. If a path test fails, PowerPath disables the path and stops sending I/O to it. After a path fails, PowerPath continues testing it periodically, to determine whether it is fixed. If the path passes a test, PowerPath restores it to service and resumes sending I/O to it. The storage system, host, and application remain available while the path is restored. The time it takes to run a path test varies. Testing a working path takes milliseconds. Testing a failed path can take several seconds, depending on the type of failure.

Path states

PowerPath manages the state of each path to each logical device independently. From PowerPath's perspective, a path is alive or dead:

A path is alive if it is usable; PowerPath can direct I/O to this path.
A path is dead if it is not usable; PowerPath does not direct user I/O to this path.

PowerPath marks a path dead when it fails a path test; it marks the path alive again when it passes a path test.

When are path tests done?

The state of a path changes based on the result of the path test; for example, the state of the path changes if a live path fails the test or if a dead path passes the test. Path states are listed with the powermt display command. PowerPath CLI on page 39 provides more information on powermt commands. PowerPath tests a path under the following conditions:

A new path is added. Before any new path is brought into service, it must be tested. This is true for newly configured paths to both existing logical devices and newly configured logical devices.

PowerPath is servicing an I/O request and there are no more live paths to try. PowerPath always tries to issue application I/Os, even if all paths to the target logical device are dead when the I/O request is presented to PowerPath.
Before PowerPath returns the I/O with an error condition, it tests every path to the target logical device. Only if all these path tests fail does PowerPath return an I/O error.

You run powermt load, powermt restore, or powermt config. These commands issue many path tests, so the state of many paths may change as a result of running them. PowerPath CLI on page 39 provides more information on powermt commands.

An I/O error is returned by the HBA driver. In this case, PowerPath marks for testing both the path with the error and related paths (for example, paths that share an HBA and storage port with the failed path). Meanwhile, PowerPath reissues the failed I/O on another path. PowerPath avoids issuing I/Os on any path marked for a path test. Paths marked for testing are tested when the path test process next runs. Refer to How often are paths tested? on page 37.
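The marking behavior just described can be sketched roughly in Python: when an I/O error occurs on a path, that path and every path sharing its HBA or storage port is flagged for testing and skipped for new I/O. The classes and names are illustrative, not PowerPath internals.

```python
# Sketch: after an I/O error on one path, mark that path and all related
# paths (same HBA or same storage port) for testing, and avoid selecting
# marked paths for I/O until the test process clears them. Illustrative only.

class Path:
    def __init__(self, hba, port):
        self.hba, self.port = hba, port
        self.marked_for_test = False

def mark_related(failed, paths):
    for p in paths:
        if p.hba == failed.hba or p.port == failed.port:
            p.marked_for_test = True

def selectable(paths):
    return [p for p in paths if not p.marked_for_test]

paths = [Path("hba0", "fa0"), Path("hba0", "fa1"),
         Path("hba1", "fa0"), Path("hba1", "fa1")]
mark_related(paths[0], paths)          # error seen on hba0:fa0
print(len(selectable(paths)))          # only hba1:fa1 remains selectable
```

Marking related paths up front is what lets PowerPath avoid issuing application I/O into a suspect component while the failed I/O is retried on a clearly unrelated path.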

In addition, all paths, alive and dead, are tested periodically, as described in the sections that follow.

Path-testing optimization: Testing related paths

When a path fails due to an I/O error, PowerPath marks all related paths (for example, paths on the same bus) for testing. Until these related paths are tested, PowerPath avoids selecting them for I/O. This optimization avoids sending I/Os to a failed path, which in turn avoids timeout and retry delays throughout the entire I/O subsystem (application, operating system, fabric, and storage system). It is also important, however, to quickly identify paths that are still alive, so that overall I/O bandwidth is not reduced longer than necessary. PowerPath orders the testing of related paths to minimize the time live paths are unavailable. The ordering is done to minimize the number of path tests needed to identify which path components failed. In simple topologies, where an HBA and storage port are directly attached to each other, a failed HBA makes the storage port inaccessible, so all related paths are dead. In this case, test ordering is relatively unimportant. In complex fabric topologies, however, where multiple paths share components (ports, switches, and cables), a failed HBA does not necessarily make any storage port inaccessible. In this case, well-ordered path testing can substantially reduce the amount of time live paths are unavailable.

Periodic testing of live paths

PowerPath tests live paths periodically to identify failed paths, especially among those not used recently. This helps prevent application I/O from being issued on dead paths that PowerPath otherwise would not detect as dead, which in turn reduces timeout and retry delays. Periodic testing of live paths is a low-priority task.
It is not designed to test all paths within a specific time, but rather to test all paths within a reasonable time, without interfering with application I/O. Live paths are tested when the path test process runs, provided the paths:

Have not been tested for at least 1 hour.
Are idle. An idle path is one that was not used for I/O within the last minute.

Typically, all live, idle paths are tested at least hourly, although this is not guaranteed. In an active system, with few idle paths, live paths are rarely tested. Such testing is not necessary in an active system with application I/O on most paths, since path testing is triggered promptly by I/O failures.

Periodic testing and autorestore of dead paths

PowerPath also tests dead paths periodically and, if they pass the test, automatically restores them to service. Like periodic testing of live paths, periodic autorestore is a low-priority task. It is not designed to restore a path immediately after it is repaired, but rather to restore the path within a reasonable time after it is repaired.

When reactive autorestore is on, PowerPath reactively tests dead paths and, if they pass the test, restores them to service. The powermt set reactive_autorestore command enables or disables the reactive autorestore facility. Note that reactive autorestore is not supported on Windows. The EMC PowerPath Family Version 5.7 CLI and System Message Reference provides additional information.

Dead paths are tested when the path test process runs, provided the paths have not been tested for at least 1 hour. This frequency limits the number of I/Os that fail (the PowerPath test-path I/Os fail on dead paths), so the impact on normal operations is negligible. The time it takes for all paths to be restored varies greatly. In lightly loaded or small configurations, paths typically are restored within an hour after they are repaired (on average, much sooner). In heavily loaded or large configurations, it may take several hours for all paths to be restored after they are repaired, because periodic autorestore is preempted by higher-priority tasks. The fastest way to restore paths is to use powermt restore. PowerPath CLI on page 39 provides more information on powermt commands.

Windows note

Because PowerPath is tightly integrated with Windows Plug and Play, it very quickly detects when Plug and Play has brought a device online or taken a device offline. If you use the powermt set periodic_autorestore=off command to disable periodic autorestore (which is enabled by default), you may notice that paths continue to be restored automatically as a result of this tighter integration with Plug and Play. EMC recommends that you leave periodic autorestore enabled for cases where Plug and Play is not invoked when a path comes online.

Note: When a cable is pulled on a host with iSCSI connections, there is no immediate Plug and Play event. If there is no I/O on the affected paths, it may take up to 60 seconds for the paths to display as dead.
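Taken together, the periodic and reactive behaviors described above amount to a simple decision rule for dead paths. The sketch below is illustrative only; the names and the `io_pending` trigger are invented for this model, not PowerPath internals.

```python
# Hedged sketch of when a dead path is tested, per the text above:
# reactively, when reactive autorestore is on and application I/O needs
# the path; otherwise periodically, at most once an hour.
RETEST_INTERVAL_S = 3600  # dead paths are retested at most hourly

def should_test_dead_path(now, last_tested, reactive_on, io_pending):
    """Decide whether to test (and possibly restore) a dead path now."""
    if reactive_on and io_pending:
        return True                                   # reactive autorestore
    return (now - last_tested) >= RETEST_INTERVAL_S   # periodic autorestore
```

The hourly cap on the periodic branch models the guide's point that failed test I/Os are kept rare enough to have negligible impact on normal operations.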
How often are paths tested?

PowerPath periodically runs the path test process. This process sequentially visits each path and tests it if required:
- Live paths are tested periodically if they have not been tested for at least one hour and are idle.
- Dead paths are tested periodically if they were marked for testing at least one hour ago and are idle.
- Any paths marked for testing as a result of the conditions listed in When are path tests done? on page 35 are tested the next time the path test process runs.

Tests are spaced out so that at least one path on every HBA and every port is tested much more often than hourly. A path state change detected in this way is propagated quickly to all related paths.
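One pass of this process can be modeled as a loop over the path list, testing only the paths that meet the conditions above. This is an illustrative sketch, not PowerPath source; the data layout and helper names are invented, and the thresholds come from the text.

```python
# Illustrative model of one pass of the path test process: visit each
# path sequentially, test it only if required, and record the result.
HOUR_S, MINUTE_S = 3600, 60

def run_path_test_pass(paths, now, test_fn):
    """Visit every path once; return the names of the paths tested."""
    tested = []
    for p in paths:
        if p["marked_for_test"]:                       # marked by an I/O error
            due = True
        elif p["alive"]:                               # live: hourly, and idle
            due = (now - p["last_tested"] >= HOUR_S
                   and now - p["last_io"] >= MINUTE_S)
        else:                                          # dead: at most hourly
            due = now - p["last_tested"] >= HOUR_S
        if due:
            p["alive"] = test_fn(p)                    # path test updates state
            p["last_tested"] = now
            p["marked_for_test"] = False
            tested.append(p["name"])
    return tested
```

In the real process, as the guide notes, each pass is followed by a short sleep before the next pass begins.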

After all paths are visited and those marked for testing have completed their tests, the process sleeps for 10 seconds and then restarts. This 10-second period is a compromise between consuming system resources (CPU and I/O cycles) that applications could otherwise use, and keeping path state current so that the maximum number of paths is available for use. The more paths that need testing, the longer it takes the path test process to complete. As a result, it is hard to predict exactly when a given path will be tested.

Application tuning in a PowerPath environment

Channel groups

You can use PowerPath to tune application performance by manually balancing load with channel groups. PowerPath allows you to form a channel group of dedicated paths to a logical device to increase application performance. (Note, however, that reserving paths for one application makes those paths unavailable to other applications, potentially decreasing their performance.) Channel groups keep a second set of paths in reserve in case the first set fails. As a form of manual load balancing, channel groups reserve bandwidth more precisely than automatic means. Channel groups require at least two paths; they work best in environments with more than two paths and at least two separately managed applications on the same host that use different logical devices.

You create a channel group by using the powermt set mode command to label a group of paths to a logical device as active or standby. (PowerPath CLI on page 39 provides more information on powermt commands.) An application accessing one or more logical devices designates one group of paths as active and another group as standby. A second application, accessing different logical devices, designates the first group of paths as standby and the second group as active. Each application has its own dedicated group of active paths, while the overall configuration provides channel failover protection.
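The active/standby designation just described implies a simple selection order at I/O time: use the active group while any of its paths is live, and fall back to the standby group only when every active path has failed. A minimal, illustrative sketch with invented names (not PowerPath source):

```python
# Hedged sketch of path selection for a channel group: prefer live
# paths in the active group; use standby paths only when every active
# path is dead. Illustrative only.
def pick_path(active, standby):
    """Return a live path for the next I/O, or None if all paths are dead."""
    for group in (active, standby):
        live = [p for p in group if p["alive"]]
        if live:
            return live[0]
    return None
```

A real selector would also load balance within the chosen group; the point here is only the group preference order.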
If a path in an application's active group fails, the application's I/O is redirected automatically to other active paths in the group. If all paths in the active group fail, the application's I/O is redirected automatically to the standby paths.

Figure 9 shows an environment with two channel groups. The first channel group (everything in black) contains paths from HBAs a0 and a1, used by application 1 to access logical device 0. The second channel group (everything in dark gray) contains paths from HBAs a2 and a3, used by application 2 to access logical device 1. For application 1, the first channel group (black) is active and the second channel group (gray) is standby; for application 2, the first channel group is standby and the second channel group is active.

Figure 9 Channel groups

PowerPath management tools

This section describes the tools used to manage the PowerPath environment.

PowerPath CLI

The PowerPath environment is managed by a CLI consisting of several commands:
- powermt: Manages the PowerPath environment
- emcpadm: Lists or renames PowerPath pseudo devices; this command is periodically enhanced with additional functionality
- emchostid: Sets the host ID
- emcpreg: Manages the PowerPath license registration
- powermig: Manages migration operations (Migration Enabler)
- powervt: Manages encryption of virtual logical units via Encryption with RSA

The PowerPath Family CLI and System Messages Reference Guide provides more information on the PowerPath Family command line interface, including syntax and arguments, as applicable.

Windows note

On Windows, PowerPath Administrator is the easiest way to access powermt functions. PowerPath Administrator is a GUI that allows you to interactively manage PowerPath on Windows platforms. All powermt functions are accessible through PowerPath Administrator except powermt set write_throttle and powermt set write_throttle_queue. PowerPath Administrator is described in the EMC PowerPath and PowerPath/VE for Windows Installation and Administration Guide and the PowerPath Administrator online help.


CHAPTER 3
PowerPath Configuration Requirements

This chapter provides a high-level overview of configuring PowerPath in Fibre Channel, Fibre Channel over Ethernet, iSCSI, and SCSI environments. Topics include:
- PowerPath connectivity
- Fibre Channel configuration requirements
- FCoE configuration requirements
- iSCSI configuration requirements
- Storage configuration requirements and recommendations
- Dynamic reconfiguration

Note: The information in this chapter is not intended to be comprehensive. Consult the E-Lab Navigator, available on the EMC Online Support website, for configuration information for a specific environment.

PowerPath connectivity

PowerPath works with:
- Fibre Channel physical connections in UNIX, Linux, and Windows environments. Each Fibre Channel host bus adapter (HBA) connects to a port on a Fibre Channel interface on the storage system, or to a Fibre Channel hub or switch. Hubs are not supported on all storage systems.
- Fibre Channel over Ethernet (FCoE) physical connections in UNIX, Linux, and Windows environments. Each FCoE Converged Network Adapter (CNA) connects to an FCoE switch, which in turn connects to an Ethernet LAN and an FC SAN.
- iSCSI physical connections. Each iSCSI network interface card (NIC) or HBA connects to an iSCSI switch or router.

HBA and transport considerations

The E-Lab Interoperability Navigator on EMC Online Support provides detailed information on supported configurations. The EMC Support Matrix PowerPath Family Protocol Support, available on EMC Online Support, provides protocol support information for the EMC PowerPath family of products.

When mapping paths to logical devices, observe the following considerations for HBAs and transport protocols on Linux and Windows:
- PowerPath does not support a logical device that has paths mapped from two different HBA vendors. This includes cluster nodes that share logical devices.
- PowerPath does not support a logical device that has paths mapped using different transport protocols (iSCSI, FC, FCoE).
- Only paths from identical HBAs can be mapped to the same logical device. That is, the HBAs must be comparable in every way; they cannot even be different revisions of the same HBA.

High availability

An Enterprise Storage Network (ESN) provides high availability by configuring multiple paths between connections, configuring alternate paths to storage area network (SAN) components, and deploying redundant SAN components.
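The per-device HBA and transport constraints for Linux and Windows can be checked mechanically against a path inventory. The following is an illustrative sketch under invented names; it is not a PowerPath tool.

```python
# Illustrative check of the Linux/Windows path-mapping rules above:
# every path to a logical device must use an identical HBA (vendor,
# model, and revision) and the same transport protocol.
def validate_device_paths(paths):
    """paths: list of dicts describing each path; returns rule violations."""
    hbas = {(p["hba_vendor"], p["hba_model"], p["hba_rev"]) for p in paths}
    protocols = {p["protocol"] for p in paths}
    errors = []
    if len(hbas) > 1:
        errors.append("paths do not all use identical HBAs")
    if len(protocols) > 1:
        errors.append("paths mix transport protocols")
    return errors
```

Note that the rule compares HBA revisions as well as vendor and model, reflecting the guide's statement that even different revisions of the same HBA are not comparable.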
Some SAN switches (such as the EMC Connectrix) have redundant subsystems to ensure high availability and a reliable fabric. PowerPath supports multiple paths between an HBA and a logical device. This can offer higher availability or better performance, and it usually simplifies zoning.

Multiple connections between hosts and multiple fabrics can insulate I/Os from fabric-wide failure. PowerPath delivers maximum system-wide availability when dual HBAs in each server connect to separate fabrics, as in Figure 11 on page 45. An application that cannot use a path on one fabric can fail over to a different fabric, protecting the application from a fabric-wide outage.

PowerPath supports load balancing and redundancy across the storage ports in a fabric. Be aware, however, that the number and complexity of connection points in a fabric multiplies rapidly in a multipath configuration. The number of connections you create depends on the bandwidth you require.

Device configuration note

PowerPath does not alter the allowable access to storage system logical devices. Devices with the following access without PowerPath have the same access with PowerPath installed:
- Read/write
- Read-only
- Not-ready

On Symmetrix storage systems, PowerPath associates paths to a Business Continuance Volume (BCV) that is split from its standard logical device and has read/write access.

PowerPath controls:
- All Symmetrix logical devices except SAN Manager Volume Configuration Management Databases (VCMDBs).
- All PowerPath-enabled VNX and CLARiiON LUNs.
- Supported third-party storage system devices. Note: Third-party storage devices are not supported in iSCSI environments.

You can exclude devices from PowerPath control with the powermt unmanage command. The PowerPath Family CLI and System Messages Reference Guide provides more information on powermt commands.

Fibre Channel configuration requirements

This section provides high-level guidelines for configuring PowerPath Fibre Channel environments. For more information on Fibre Channel configuration requirements, refer to the E-Lab Navigator on EMC Online Support and the Configuration Planning Guide for the Fibre Channel storage system.
Observe the following guidelines when configuring Fibre Channel connections for a PowerPath environment.

EMC requires that no more than one HBA be configured in any zone. Figure 10 on page 44 shows a system in which both hosts (each with PowerPath installed) are connected to a storage system through two fabrics. There are four zones, each with one HBA:
- HBA 1, fabric 1, port X
- HBA 2, fabric 2, port Y

- HBA 3, fabric 1, port X
- HBA 4, fabric 2, port Y

Figure 10 Highly available Fibre Channel configuration with PowerPath

In Figure 10 on page 44 (and the other figures in this section), an interface is, for example, an FA on a Symmetrix or other active-active system, or an SP on a VNX or CLARiiON system. This configuration has several advantages:
- If either fabric fails, both hosts can still access both logical devices, through the other fabric.
- If either interface fails, both hosts can still access both logical devices, through the other interface.
- If either HBA in a host fails, that host can still access both logical devices, through the other HBA on that host.

If multiple hosts access the same storage-system ports, PowerPath does not add any access control. Normal sharing considerations apply, using products such as SAN Manager, Volume Logix, Access Logix, Oracle Parallel Server, and clustering software.

For redundancy, if multiple ports are used, they must be on multiple physical interfaces. In Figure 10 on page 44, the two ports are divided between two interfaces.

For maximum availability, configure at least two fully redundant paths from each host to each logical device. Two paths are fully redundant if they do not share HBAs, fabrics, switches or hubs, or storage interface cards. Figure 10 on page 44 shows two redundant paths from each host to each logical device. The redundant design ensures minimum application impact from component failures and microcode loads.

For maximum availability when designing a fabric topology, use dual fabrics. PowerPath insulates I/Os from a fabric-wide outage.

For optimum performance, especially in a degraded state, present each logical device to different HBAs from different interface boards. Although some interface boards contain two (or more) ports, the second connection is used for performance purposes and does not provide high availability.
The second connection may result in bandwidth loss if a back-end path (the part of a path between a port and a logical device) fails.
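The "fully redundant" rule stated above is easy to express as a check over the components two paths traverse. An illustrative sketch with invented field names follows; it is not a PowerPath utility.

```python
# Sketch of the full-redundancy test described above: two paths are
# fully redundant only if they share no HBA, fabric, switch or hub,
# or storage interface card. Illustrative; field names are invented.
COMPONENTS = ("hba", "fabric", "switch", "interface")

def fully_redundant(path_a, path_b):
    """True if the two paths share none of the listed components."""
    return all(path_a[c] != path_b[c] for c in COMPONENTS)
```

A configuration meets the maximum-availability guideline when each host has at least one pair of paths to each logical device for which this check holds.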

In Figure 11 on page 45, each HBA has two paths to each logical device through different interfaces. This configuration greatly improves performance if one fabric fails, as there are still two ports to handle the load.

Figure 11 High-availability (multiple-fabric) Fibre Channel configuration

PowerPath Base supports only configurations with one path to each interface. This includes direct-attached configurations as well as appropriately zoned switch (SAN) configurations. With a PowerPath license, some active-passive storage systems must be zoned so each host sees only one path to each interface. Refer to your storage-system documentation or contact EMC Customer Support.

Any port can be configured into multiple Fibre Channel zones. Frequently, multiple HBAs share a port, making multiple zones per port typical. For example, in Figure 11 on page 45, there are four zones altogether. Each zone consists of one HBA and two ports. Each port is in two zones.

In a single-fabric configuration, a host with one HBA can have multiple paths to a logical device. For example, in Figure 12 on page 46, each single-HBA host has four paths to each logical device. While this does not provide maximum availability as described previously, it offers back-end path redundancy and load balancing, reduces zoning complexity, and enables multiple hosts to share a storage-system port.

Figure 12 Single-switch Fibre Channel configuration

Only paths from identical HBAs can be mapped to the same logical device. That is, the HBAs must be comparable in every way; they cannot even be different revisions of the same HBA. For more information on Fibre Channel configuration requirements, refer to the Configuration Planning Guide for your storage system.

FCoE configuration requirements

This section provides high-level guidelines for configuring PowerPath FCoE environments. For definitive information on FCoE configuration requirements, refer to the EMC Networked Storage Topology Guide, available through the E-Lab Interoperability Navigator on EMC Online Support. Observe the following guidelines when configuring FCoE connections for a PowerPath environment. This section discusses high-availability FCoE configurations and zoning.

High availability configurations

In Figure 13 on page 46, an interface is an FA on a Symmetrix or other active-active system, or an SP on a VNX or CLARiiON system.

Figure 13 High-availability (multiple-fabric) Fibre Channel over Ethernet configuration

This configuration has several advantages:
- If either fabric fails, both hosts can still access both logical devices, through the other fabric.
- If either interface fails, both hosts can still access both logical devices, through the other interface.
- If either CNA in a host fails, that host can still access both logical devices, through the other CNA on that host.

- If either FCoE switch fails, that host can still access both logical devices, through the other FCoE switch.

For redundancy, if multiple ports are used, they must be on multiple physical interfaces. In Figure 13 on page 46, the two ports are divided between two interfaces. For maximum availability, configure at least two fully redundant paths from each host to each logical device. Two paths are fully redundant if they do not share CNAs, fabrics, switches, or hubs. Figure 13 on page 46 shows two redundant paths from each host to each logical device. The redundant design ensures minimum application impact from component failures. For maximum availability when designing a fabric topology, use dual fabrics. PowerPath insulates I/Os from a fabric-wide outage.

Zoning to active-passive arrays

For PowerPath in a cluster environment connected to active-passive arrays, such as VNX and CLARiiON arrays, all nodes in the cluster should be connected to the array through both fabrics, as depicted in Figure 14 on page 47.

Figure 14 High-availability (multiple-fabric) Fibre Channel over Ethernet to active-passive arrays

Additionally, each node should have a minimum of one path to each storage processor (SP). Any port from the same SP must be connected to the same fabric; this avoids multiple LUN trespasses in the event of an FC or FCoE switch or fabric failure. For higher availability and performance, EMC recommends mapping at least four paths from each cluster node, with two paths connected to each SP. In such a configuration, it is acceptable to have ports from the same SP connected to different fabrics, because each node continues to have access to both SPs in the case of a fabric failure.
For example:

Node 1:
- Node 1 CNA 1, fabric 1, SPA0
- Node 1 CNA 1, fabric 1, SPB0
- Node 1 CNA 2, fabric 2, SPA1
- Node 1 CNA 2, fabric 2, SPB1

Node 2:
- Node 2 CNA 1, fabric 1, SPA0

- Node 2 CNA 1, fabric 1, SPB0
- Node 2 CNA 2, fabric 2, SPA1
- Node 2 CNA 2, fabric 2, SPB1

In Figure 13 on page 46 and Figure 14 on page 47, each CNA has two paths to each logical device through different interfaces. This configuration greatly improves performance if one fabric fails, as there are still two ports to handle the load.

iSCSI configuration requirements

This section provides high-level guidelines for configuring PowerPath iSCSI environments. For definitive information on iSCSI configuration requirements, refer to the E-Lab Interoperability Navigator on EMC Online Support. Observe the following guidelines when configuring iSCSI physical connections for a PowerPath environment:
- A single host can attach to multiple storage systems: Fibre Channel storage systems (through Fibre Channel HBAs) and iSCSI storage systems (through iSCSI HBAs only or NICs only).
- You can connect hosts with all Fibre Channel HBAs and hosts with all iSCSI NICs to the same storage system.

Note: With VNX OE and CLARiiON FLARE operating environment version 03.26, PowerPath supports concurrent host access to Fibre Channel and iSCSI devices, as specified in the E-Lab Interoperability Navigator.

Sample iSCSI configurations

Three example PowerPath iSCSI configurations follow:
- Single NIC/HBA to one subnet
- Multiple NICs/HBAs to multiple subnets
- Multiple NICs/HBAs to one subnet (Windows only)

Note that only the data paths are represented in the following figures. The management ports are assumed to be connected to a separate sub-network. All of the following configurations support the management ports on the same subnet as the data ports. (Note that you can only manage an array through a NIC.) An interface is an FA on a Symmetrix or an SP on a VNX or CLARiiON system.

Single NIC/HBA to one subnet

Figure 15 on page 49 shows a single NIC/HBA connecting to a single subnet.

Figure 15 Single NIC/HBA configuration

When using one NIC or HBA, you can have one connection to each port on the storage system.

Multiple NICs/HBAs to multiple subnets

Figure 16 on page 49 shows multiple NICs/HBAs connecting to multiple subnets.

Figure 16 Multiple NICs/HBAs to multiple subnets

Note the following:
- When using multiple NICs, you can have one connection per host to each port on the storage system.
- When using multiple HBAs, you can have one connection per HBA to each port on the storage system.

Multiple NICs/HBAs to one subnet

Figure 17 on page 50 shows multiple NICs/HBAs connecting to a single subnet.

Figure 17 Multiple NICs/HBAs to one subnet

Note the following:
- This configuration is supported on Windows only.
- When using multiple NICs, you can make one connection per host to each port on the storage system.
- Multiple NICs on the same subnet are ignored by the Microsoft iSCSI Initiator default configuration. Using the Advanced button in the Log On to Target dialog box of the Microsoft iSCSI Initiator GUI allows a specific NIC to be associated with a specific port.

Storage configuration requirements and recommendations

This section includes high-level requirements and recommendations for configuring EMC and third-party storage systems. EMC Customer Support should configure the storage systems. The E-Lab Interoperability Navigator on EMC Online Support provides information on the storage systems that can act as boot devices on your platform.

All storage systems

- To enable PowerPath to load balance I/Os and provide path redundancy, configure each logical device to be used with PowerPath for access on two or more interface boards (array boards).
- To gain the PowerPath benefits of high availability, reliability, and performance for your storage network, configure multiple paths from hosts to storage devices.
- A host that is part of a cluster cannot have both active-active and active-passive storage devices in the same disk group. When not in a cluster, a host can be connected to both active-active and active-passive storage systems; however, specific hardware (such as HBAs and cables) and software may be required for this configuration. See the E-Lab Navigator on EMC Online Support.
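The cluster constraint above can be phrased as a one-line check. The following is illustrative only; the labels and function name are invented for this sketch.

```python
# Sketch of the rule above: a clustered host's disk group must not mix
# active-active and active-passive storage devices. Outside a cluster,
# mixing across storage systems is allowed (hardware permitting).
def disk_group_ok(device_types, host_in_cluster):
    """device_types: iterable of 'active-active'/'active-passive' labels."""
    if not host_in_cluster:
        return True
    return len(set(device_types)) <= 1
```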

Mixed storage environments

In a mixed storage environment, the fabric components (switches, directors, and HBAs), along with the operating system level, HBA models, drivers, and firmware, must all be at EMC-supported levels. A third-party storage system connecting to this fabric environment must also be supported through that system's OEM vendor in the stated environment. Deviations in any of the supported levels for any component can be handled through either EMC's RPQ process or that of the OEM vendor. This ensures that all storage arrays remain supported through their respective OEM vendors.

Regardless of the sharing model you choose (Shared HBA, Shared Server, or Shared SAN), EMC recommends that you limit the amount of possible interaction between the arrays. This assists in troubleshooting, maintenance, and management of the environment. To limit the interactions and dependencies, do not include storage array ports from different vendors in the same zone. Multiple zones can be created that use the same HBA, as long as the storage arrays are in separate zones with that common HBA. Zoning in this fashion ensures that there are no direct interactions between the different storage arrays.

Symmetrix and VMAX storage systems

To improve redundancy to logical devices, distribute HBA connections across as many Symmetrix interfaces as possible. To improve performance, distribute paths across Symmetrix FA ports and VMAX engines. For more information on Symmetrix and VMAX configuration requirements, refer to the EMC host connectivity guides, available on the EMC Online Support site.

Supported Hitachi TagmaStore, HP StorageWorks XP, and IBM ESS storage systems

To improve redundancy to logical devices, distribute HBA connections across as many channel host adapters (CHAs) and disk controllers (DKCs) as possible.
For more information on storage system configuration requirements, refer to the appropriate documentation from your vendor.

Supported HP StorageWorks EVA storage systems

For best performance, supported HP StorageWorks EVA LUNs accessible from any given host should be distributed over both controllers (A and B) of the array. This ensures that I/O load from the host is shared by both array controllers. For supported HP StorageWorks EVA arrays, every LUN should be set to Failover only or Failover/Failback on either controller A or controller B. The setting No preference does not allow for predictable load balancing over controllers.

In configurations where one or more StorageWorks LUNs are shared by several hosts, the powermt restore command produces lasting results only if every host sending I/O to the storage system maintains connectivity to both controllers. If even one of the hosts subsequently loses all of its paths to one controller, any subsequent I/O that host issues causes the affected LUN(s) to be reassigned to the other controller, regardless of the preferred settings. All other hosts then follow the LUN(s) over to the other controller as needed.

VNX and CLARiiON storage systems

With VNX and CLARiiON storage systems, the array failover mode must be the same for all paths that access a single LUN. If two paths access the same LUN, and one path is set to PNR (passive not ready) mode and one to ALUA (asymmetric logical unit access) mode, PowerPath behavior is undefined for that LUN. The array failover mode is set at the HBA level with VNX Unisphere and CLARiiON Navisphere commands. The EMC host connectivity guides, available on the EMC Online Support site, and the VNX and CLARiiON Storage-System Support websites provide more information.

Invista storage devices

EMC recommends configuring two switches per Invista instance. Single-switch configurations are supported for testing purposes only, because a single switch does not provide high availability. For more information on Invista configuration requirements, refer to the Invista documentation, available on the EMC Online Support site.

VPLEX storage devices

EMC recommends configuring two switches per VPLEX director. Single-switch configurations are supported for testing purposes only, because a single switch does not provide high availability. For more information on VPLEX configuration requirements, refer to the VPLEX documentation, available on the EMC Online Support site.

PowerPath Family End-of-Life Summary on page 63 provides information on deprecated EMC and third-party storage arrays.

Dynamic reconfiguration

PowerPath for Windows and Linux supports dynamic addition and removal of LUNs and paths in the PowerPath configuration. As you perform these procedures on your platform, keep the documentation for your platform available.
The PowerPath Installation and Administration Guide for your platform provides more information on dynamically adding and removing LUNs.

Hot swapping an HBA

PowerPath for Windows and Linux supports hot swapping an HBA. On Linux, support for hot swapping an HBA is provided through the Linux PCI hot plug feature, which allows you to hot swap an HBA card using Fujitsu hardware and drivers. The PowerPath Family for Linux Release Notes provide information on the supported Linux kernels. The PowerPath Installation and Administration Guide for your platform provides more information on this procedure.



APPENDIX A
PowerPath Standard Edition

This appendix describes PowerPath Standard Edition (PowerPath SE), a version of PowerPath that requires no license key and provides only basic failover functionality. The appendix contains the following information:
- PowerPath SE functionality
- Installing PowerPath SE
- Using PowerPath SE

Note: Older CLARiiON documents used the terms Utility Kit PowerPath and PowerPath Fabric Failover to refer to PowerPath SE. The CLARiiON document Important Information about PowerPath SE contains additional, important information for CLARiiON users only. Be sure to read that document before you install PowerPath SE in a CLARiiON environment.

PowerPath SE functionality

PowerPath SE is a server-based utility that provides basic failover for VNX, CLARiiON, Symmetrix, and VMAX storage systems. PowerPath SE is supported in single-HBA configurations where the same HBA is connected through a switch or fabric to each port on two separate Symmetrix or VMAX FAs, or to each VNX or CLARiiON SP. Figure 18 on page 56 illustrates the supported configuration on a Symmetrix system.

Figure 18 PowerPath SE supported configuration

PowerPath SE protects against VNX and CLARiiON SP failures, Symmetrix and VMAX FA port failures, and back-end storage-system failures, and supports non-disruptive upgrade (NDU) of storage system software. While a server is running normally, PowerPath SE takes no action. If a failure occurs in an SP or an FA port, PowerPath SE attempts to fail over (transfer) the I/Os to a different SP or FA port. PowerPath SE does not protect against HBA failures. To protect against such failures in storage systems with multiple HBAs connected to a storage system, you must have PowerPath and an accompanying license.

Installing PowerPath SE

Before you install PowerPath SE, read the PowerPath Family Release Notes for your platform. To install PowerPath SE, follow the installation procedure in the PowerPath Installation and Administration Guide for your platform. Note, however, that you do not need to register a license key when you install PowerPath SE. The most current versions of the release notes and installation and administration guides are available on the EMC Online Support site: from the Support by Product pages, search for PowerPath using Find a Product > Documentation.

Using PowerPath SE

To use PowerPath SE, refer to the PowerPath Installation and Administration Guide for your platform as well as this Product Guide.



APPENDIX B
PowerPath Family Functionality Summary

This appendix contains the PowerPath Family functionality summary by version and platform. It lists the functions and features supported by PowerPath Multipathing, Migration Enabler, and Encryption with RSA by PowerPath version, from version 5.5 and Service Pack releases through 5.7 and Service Pack releases, and by supported platform.

The PowerPath Family Product Guide versions 5.2, 5.3, 5.5, 5.6, and 5.7, available on EMC Online Support, provide information on features by platform for PowerPath versions earlier than 5.5. The PowerPath 5.0 Product Guide provides information on features by platform for PowerPath version 5.0. The Migration Enabler section of the PowerPath Family Release Notes for your platform provides information on thin device support. The Multipathing section of the PowerPath Family Release Notes for your platform and the E-Lab Interoperability Navigator provide information on native and third-party clustering.

Table 4  PowerPath Family functionality summary by version and platform (page 1 of 2)

Command/Feature (by PowerPath version and platform): Solaris / Windows / Linux / AIX / Linux / Windows / Linux a

MULTIPATHING
PowerPath Viewer  X X X X X X X
Path management insight capabilities  X X X
Autostandby  X X X
Automatic host-array registration  X X X
emcpadm enhancements  X X X X X X X
R1/R2 boot  X X X X X X
PowerPath no reboot upgrade (NRU)  X X X X X X X
Unattended installation  X X X X X X X
NPIV / Dynamic LUN addition/removal/online reconfiguring  X X X X X X X X
Hot swapping HBA  X X X X X X X

Table 4  PowerPath Family functionality summary by version and platform (page 2 of 2)

Command/Feature (by PowerPath version and platform): Solaris / Windows / Linux / AIX / Linux / Windows / Linux a

Adding new paths to PowerPath logical device w/o interruption  X X X X X X X
Coexistence with third-party management software  X X X X X X X
SNMP management daemon  X X X X X X X
ALUA  X X X X X X X
Management of >256 LUNs per VNX and CLARiiON Storage Group (CLARiiON 04.29) b / Audit logging for powermt commands  X X X X X X X X X X X X X
Root check  X X X X X X

MIGRATION ENABLER c
Remote Solutions Enabler Access  X X X X X X X
TimeFinder/Clone  X X X X X X X
Pause/resume on TimeFinder/Clone  X X X i X X i X X
Host Copy ceiling  X X X X X X
Pause/resume on Host Copy  X X X X X X X
Host Copy  X X X X X X X
Host Copy with encryption  X X X X X X X
Open Replicator  X X X X X X X
Virtual encapsulation  X X X d X X e X X

ENCRYPTION WITH RSA f
Third-party arrays with encryption  X X X g X h X X
Encryption with RSA  X X X i X h X X

a. The PowerPath Family 5.7 and Minor Releases for Linux Release Notes provide information on supported Linux OS variants.
b. Not applicable to Windows (OS limitation).
c. As of PowerPath 5.7 for Windows and 5.7 SP1 for Linux, a separate license is not required for Migration Enabler.
d. Virtual encapsulation is not supported on SLES 10 SP3.

e. Supported on RHEL 5.6 only.
f. Requires a separate license key.
g. Supported on SLES 10 SP3 (x86_64) and RHEL 5.5 (x86_64) only.
h. Supported on RHEL 5.6 (x86_64) and RHEL 6 (x86_64) only.
i. Supported on SLES 10 SP3 (x86_64) and RHEL 5.5 (x86_64) only.


APPENDIX C
PowerPath Family End-of-Life Summary

This appendix contains the PowerPath Family end-of-life summary. It lists the PowerPath family functions and features for which support is being phased out, the document in which end of life is announced, and the release at which end of life is effective.

Table 5  PowerPath end-of-life summary (page 1 of 5)

Feature/Function / Platform / Announced a / Effective

Feature/Function: Consistency Groups on PowerPath (Consistency Group support is included in Symmetrix Enginuity versions 5568 and later).
- AIX: Announced in PowerPath Family 5.3 for AIX Release Notes; effective in PowerPath 5.3 SP1 for AIX.
- Windows: Announced in PowerPath 5.2 SP1 for Windows Release Notes and PowerPath 4.5.x for Windows Release Notes; effective in PowerPath and PowerPath/VE Family 5.3 for Windows.
- HP-UX: Announced in PowerPath 5.1 SP2 for HP-UX Release Notes; effective in PowerPath 5.2 for HP-UX.
- Solaris: Announced in PowerPath 5.2 SP1 for Solaris Release Notes; effective in PowerPath 5.3 for Solaris.
- Linux b: N/A.

Feature/Function: BasicFailover (bf) and NoRedirect (nr) load-balancing policies in powermt CLI c
- AIX: Announced in PowerPath Family 5.3 and Service Pack Releases for AIX Release Notes; effective in PowerPath 5.5 for AIX.
- Windows: Announced in PowerPath and PowerPath/VE Family 5.3 and Service Pack Releases for Windows Release Notes; effective in PowerPath and PowerPath/VE 5.5 for Windows.
- HP-UX: Announced in PowerPath 5.1 and Service Pack Releases for HP-UX Release Notes; effective in PowerPath 5.2 for HP-UX.
- Solaris: Announced in PowerPath Family 5.3 and Minor Releases for Solaris Release Notes; effective in PowerPath 5.5 for Solaris.
- Linux: Announced in PowerPath Family 5.3 and Service Pack Releases for Linux Release Notes; effective in PowerPath 5.5 for Linux.

Table 5  PowerPath end-of-life summary (page 2 of 5)

Feature/Function: powermt set priority command in powermt CLI
- AIX: Announced in PowerPath Family 5.3 and Service Pack Releases for AIX Release Notes; effective in PowerPath 5.5 for AIX.
- Windows: Announced in PowerPath and PowerPath/VE Family 5.3 and Service Pack Releases for Windows Release Notes; effective in PowerPath and PowerPath/VE 5.5 for Windows.
- HP-UX: Announced in PowerPath 5.1 and Service Pack Releases for HP-UX Release Notes; effective in PowerPath 5.2 for HP-UX.
- Solaris: Announced in PowerPath Family 5.3 and Minor Releases for Solaris Release Notes; effective in PowerPath 5.5 for Solaris.
- Linux: Announced in PowerPath Family 5.3 and Service Pack Releases for Linux Release Notes; effective in PowerPath 5.5 for Linux.

Feature/Function: hphsx class option in powermt CLI
- AIX: Announced in PowerPath Family 5.3 and Service Pack Releases for AIX Release Notes; effective in PowerPath 5.5 for AIX.
- Windows: Announced in PowerPath and PowerPath/VE Family 5.3 and Service Pack Releases for Windows Release Notes; effective in PowerPath and PowerPath/VE 5.5 for Windows.
- HP-UX: Announced in PowerPath 5.1 and Service Pack Releases for HP-UX Release Notes; effective in PowerPath 5.2 for HP-UX.
- Solaris: Announced in PowerPath Family 5.3 and Minor Releases for Solaris Release Notes; effective in PowerPath 5.5 for Solaris.
- Linux: Announced in PowerPath Family 5.3 and Service Pack Releases for Linux Release Notes; effective in PowerPath 5.5 for Linux.

Table 5  PowerPath end-of-life summary (page 3 of 5)

Feature/Function: HP arrays: HP EVA 3000 with VCS3.x, HP EVA 5000 with VCS3.x, HP XP 48, HP XP 128, HP XP 512, HP XP 1024
- AIX: Announced in PowerPath Family 5.3 SP1 for AIX Release Notes; effective in PowerPath 5.5 for AIX.
- Windows: Announced in PowerPath and PowerPath/VE Family 5.3 for Windows Release Notes; effective in PowerPath and PowerPath/VE 5.5 for Windows.
- HP-UX: Announced in PowerPath 5.1 SP2 for HP-UX Release Notes; effective in PowerPath 5.2 for HP-UX.
- Solaris: Announced in PowerPath Family 5.3 for Solaris Release Notes; effective in PowerPath 5.5 for Solaris.
- Linux: Announced in PowerPath 5.3 SP1 for Linux Release Notes; effective in PowerPath 5.5 for Linux.

Feature/Function: IBM arrays: F10 F T
- Windows: Announced in PowerPath and PowerPath/VE Family 5.5 for Windows Release Notes; effective in PowerPath 5.7 for Windows.
- AIX: Announced in PowerPath Family 5.3 and Service Packs for AIX Release Notes; effective in PowerPath 5.5 for AIX.

Feature/Function: Hitachi Lightning array
- AIX: Announced in PowerPath Family 5.3 SP1 for AIX Release Notes; effective in PowerPath 5.5 for AIX.
- Windows: Announced in PowerPath and PowerPath/VE Family 5.3 for Windows Release Notes; effective in PowerPath and PowerPath/VE 5.5 for Windows.
- HP-UX: Announced in PowerPath 5.1 SP2 for HP-UX Release Notes; effective in PowerPath 5.2 for HP-UX.
- Solaris: Announced in PowerPath Family 5.3 for Solaris Release Notes; effective in PowerPath 5.5 for Solaris.
- Linux: Announced in PowerPath Family 5.3 SP1 for Linux Release Notes; effective in PowerPath 5.5 for Linux.

Table 5  PowerPath end-of-life summary (page 4 of 5)

Feature/Function: HP arrays: HP StorageWorks EMA, HP StorageWorks EMA, HP StorageWorks 8000
- AIX: Announced in PowerPath Family 5.3 for AIX Release Notes; effective in PowerPath 5.3 SP1 for AIX.
- Windows: Announced in PowerPath 4.5.x for Windows Release Notes; effective in PowerPath 4.5.x/5.3 for Windows d.
- HP-UX: Announced in PowerPath 5.1 SP1 for HP-UX Release Notes; effective in PowerPath 5.1 SP2 for HP-UX.
- Solaris: Announced in PowerPath 5.2 SP1 for Solaris Release Notes; effective in PowerPath 5.3 for Solaris.
- Linux: Announced in PowerPath Family 5.3 for Linux Release Notes; effective in PowerPath 5.3 SP1 for Linux.

Feature/Function: Native path support for SLES 11 and RHEL 6 e
- Linux: PowerPath 5.3 SP2 for Linux.

Feature/Function: IBM Power (IBM PPC)
- Linux: Announced in PowerPath Family 5.5 for Linux Release Notes; effective in PowerPath 5.6 for Linux.

Feature/Function: IA64 architecture
- Linux: Announced in PowerPath Family 5.5 for Linux Release Notes; effective in PowerPath 5.6 for Linux.

Feature/Function: 32-bit architecture
- Linux: Announced in PowerPath Family 5.6 for Linux Release Notes; effective in PowerPath 5.7 for Linux.

Feature/Function: VERITAS Volume Manager (VxVM) 4.1
- Linux: Announced in PowerPath Family 5.5 for Linux Release Notes; effective in PowerPath 5.6 for Linux.

Feature/Function: Windows 2003 Server
- Windows: Announced in PowerPath Family 5.7 for Windows Release Notes.

Feature/Function: PP_DEFAULT_STORAGE_SYS environment variable
- AIX: Announced in PowerPath Family 5.5 and Minor Releases for AIX Release Notes.
- Windows: Announced in PowerPath Family 5.5 and Minor Releases for Windows Release Notes; effective in PowerPath 5.7 for Windows.
- HP-UX: Announced in PowerPath 5.1 and Service Pack Releases for HP-UX Release Notes; effective in PowerPath 5.2 for HP-UX.
- Solaris: Announced in PowerPath Family 5.3 for Solaris Release Notes; effective in PowerPath 5.5 for Solaris.
- Linux: Announced in PowerPath Family 5.6 and Minor Releases for Linux Release Notes; effective in PowerPath 5.7 for Linux.

Feature/Function: Environment variables PP_SHOW_CLAR_LUN_NAMES, PP_SHOW_ALUA_FAILOVER_MODE, PP_SHOW_CLAR_OWNING_SP
- Solaris: Announced in PowerPath Family 5.5 for Solaris Release Notes.
- HP-UX: Announced in PowerPath Family 5.2 for HP-UX Release Notes.
- Linux: Announced in PowerPath Family 5.7 for Linux Release Notes.
- Windows: Announced in PowerPath Family 5.7 for Windows Release Notes.

Table 5  PowerPath end-of-life summary (page 5 of 5)

Feature/Function: The term paths on powermt display paths commands and usage; change to powermt display bus
- Solaris: Announced in PowerPath Family 5.5 for Solaris Release Notes.
- Linux: Announced in PowerPath Family 5.7 for Linux Release Notes.
- Windows: Announced in PowerPath Family 5.7 for Windows Release Notes.
- HP-UX: Announced in PowerPath Family 5.2 for HP-UX Release Notes.

a. Electronic versions of the documents indicated in this column are available on the EMC Online Support site.
b. Linux does not support Consistency Groups.
c. Deprecation of bf and nr is a two-phase deprecation. The release notes for your platform provide deprecation phase details.
d. PowerPath 5.3 for Windows supports third-party arrays, but this support does not include the HP arrays listed above.
e. Native path support will not change for existing versions of Linux (previous to SLES 11 and RHEL 6).


GLOSSARY

This glossary contains terms related to PowerPath and the management of disk storage subsystems. Many of these terms are used in this manual.

A

Access Logix
A software package that lets multiple hosts share storage on certain VNX and CLARiiON storage systems. Access Logix implements storage sharing using storage groups. See also Storage group.

Active (paths)
One of two modes for PowerPath I/O paths; the other mode is standby. An active path can accept I/O. The load-balancing and failover policy (set for the PowerPath device with the powermt set policy command) determines how loads are balanced over active paths. Load balancing is done for each device with more than one active path. See also Mode and Standby (paths).

Active-active (storage systems)
A type of storage system in which, if there are multiple interfaces to a logical device, they all provide equal access to the logical device. Active-active means all interfaces to a device are active simultaneously. For example, Symmetrix, Hitachi Lightning, Hitachi TagmaStore, HP StorageWorks XP, and IBM ESS storage systems are active-active. See also Active-passive (storage systems).

Active-passive (storage systems)
A type of storage system in which, if there are multiple interfaces to a logical device, one is designated as the primary route to the device. The device is assigned to that interface card. I/O is not directed to paths connected to a nonassigned interface. For example, VNX and CLARiiON storage systems are active-passive. If there is a failure of a device's assigned interface card, or of all paths to it, the device is reassigned automatically from the broken interface card to another interface card. Active-passive means only one interface to a device is active at a time; any others are passive with respect to that device, waiting to take over if needed. See also Active-active (storage systems) and Trespassing.

Adapter
A circuit board that enables a computer to use external devices such as a disk storage system or a high-speed network. See also Host bus adapter (HBA).

Adaptive (ad)
A load-balancing and failover policy for PowerPath devices in which I/O requests are assigned to paths based on an algorithm that takes into account path load and logical device priority.

Alive
One of two states for PowerPath paths and logical devices; the other state is dead. A live path is usable: PowerPath can direct I/O to it. A live logical device either was never marked dead by PowerPath or was marked dead but restored with the powermt restore command. See also Dead.

ALUA (Asymmetric Logical Unit Access)
An array failover mode available with VNX and CLARiiON arrays in which one array controller is designated as the active/optimized controller and the other array controller is designated as the active/non-optimized controller. As long as the active/optimized controller is viable, I/O is directed to this controller. Should the active/optimized array controller become unavailable or fail, I/O is directed to the active/non-optimized array controller until a trespass occurs.

Arbitrated loop
A Fibre Channel topology supported by PowerPath. An arbitrated loop topology requires a port to successfully negotiate to establish a circuit between itself and another port on the loop.

B

Basic failover (bf)
A failover policy that protects against VNX and CLARiiON SP failures, Symmetrix FA port failures, and back-end failures, and that allows non-disruptive upgrades to work when running PowerPath without a license key. It does not protect against HBA failures. Load balancing is not in effect with basic failover. I/O routing on failure is limited to one HBA and one port on each storage system interface. This policy is valid for VNX, CLARiiON, Symmetrix, Invista, VPLEX, and supported Celerra devices, and is the default policy for them on platforms without a valid PowerPath license; that is the only case in which a device is set to basic failover. PowerPath version 5.3 and its service packs are the last versions to support setting the Basic failover (bf) load-balancing and failover policy when a PowerPath license is present. As of PowerPath version 5.5, bf has been removed from the powermt set policy command usage; in subsequent releases you cannot manually set this policy. See also Load balancing.

Boot device
The device that contains a computer's startup code. Symmetrix logical devices managed by PowerPath can be configured as boot devices.

Bus
In a computer, a collection of signal lines that work together to connect one or more modules; for example, a disk controller and the central processor. A bus can also connect two cooperating controllers, such as a SCSI host adapter and a SCSI device controller. See also SCSI. In PowerPath, bus refers to two connected SAN edge points in the storage configuration: an HBA port on the server on one end and an array port on the other. For the iSCSI standard, bus is the Initiator-Target, or IT, nexus.
C

Channel
A point-to-point data transport link.

Channel group
PowerPath's name for a communication channel directed to only one logical device. Several paths make up a channel group. Channel groups can increase system performance and redundancy by dedicating a set of paths to a critical application component (for example, database log files), while maintaining access to a redundant set of paths to the application component in case the first set fails.

CLARiiON LUN name
See User-assignable LUN name.

CLARiiON optimization (co)
A load-balancing and failover policy for PowerPath devices, in which I/O requests are assigned to paths based on an algorithm that takes into account path load and the logical device priority you set with powermt set policy. This policy is valid for VNX and CLARiiON storage systems only and is the default policy for them on platforms with a valid PowerPath license. It is listed in powermt display output as CLAROpt. See also Load balancing.

Cluster
Two or more interconnected hosts sharing access to the same data storage resources. If one host fails, another host can continue to make data available to applications.

Consistency Group
A group of Symmetrix devices specially configured to act in unison, to maintain the integrity of a database distributed across multiple Symmetrix Remote Data Facility (SRDF) units. PowerPath can report SRDF consistency group status. See also SRDF (Symmetrix Remote Data Facility).

Controller
A device that controls and manages access to a part of a computer or computerized system. Examples include disk controllers on computers or similar controllers on disk storage systems.

D

Data availability
Access to any and all user data by an application.

Data channel
See Channel.

Dead
One of two states for paths and logical devices. A dead path is not usable: PowerPath will not direct user I/O to this path. PowerPath marks a path dead when it fails a path test; it marks a path alive again when it passes a path test. A dead logical device returned certain types of I/O errors to PowerPath and was judged unusable. Once a logical device is marked dead (and until it is restored with powermt restore), PowerPath returns subsequent I/O requests with a failure status, without forwarding them to the logical device. This prevents further, unrecoverable corruption and allows the user to perform data recovery if needed. Dead is an unusual condition for logical devices; HP-UX is the only platform that ever marks logical devices as dead. See also Alive, Path, and Logical device.

Default
An attribute, value, or option that is assumed when no other is specified explicitly.

Degraded
One of three statuses reported by PowerPath for an HBA. The other statuses are failed and optimal. Degraded means one or more (but not all) I/O paths connected to this HBA have failed. See also Failed and Optimal.

Device
An addressable part (physical or logical) of a host or storage device. For example, PowerPath represents a path set between a host and a logical device as a uniquely named pseudo device.

Device number
The value that logically identifies a device.

Device driver
Software that permits data transfer between a computer system and a device such as a disk. Typically, a device driver interacts directly with system hardware. Consequently, satisfactory operation requires device driver software that is compatible with a specific operating system and hardware model.

Disabled (HBA)
A user-defined HBA attribute, indicating the system administrator has made the HBA unavailable for use by PowerPath. Disabling an HBA tells PowerPath not to use any paths originating from this HBA for I/O. Disabling an HBA is done using operating-system-specific commands, not in PowerPath. See also Enabled (HBA).

E

Enabled (HBA)
A user-defined HBA attribute, indicating the system administrator considers the HBA available for use by PowerPath. Enabling an HBA is done using operating-system-specific commands, not in PowerPath. See also Disabled (HBA).

Encryption
See PowerPath Encryption.

E_Port
An expansion port on a Fibre Channel switch that links multiple switches into a fabric.
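The three HBA statuses defined in this glossary (optimal, degraded, failed) follow mechanically from the alive/dead states of the paths behind the HBA. A minimal sketch of that derivation, illustrative only and not PowerPath code:

```python
# Derive an HBA status from per-path alive/dead states, per the glossary
# definitions: optimal = all paths alive, failed = all paths dead,
# degraded = some but not all paths failed.  Illustrative sketch only.

def hba_status(paths_alive):
    """paths_alive: list of booleans, one per path behind the HBA."""
    if all(paths_alive):
        return "optimal"
    if not any(paths_alive):
        return "failed"
    return "degraded"

print(hba_status([True, True]))    # optimal
print(hba_status([True, False]))   # degraded
print(hba_status([False, False]))  # failed
```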

emcpower device
The name used by PowerPath (on some operating systems) for a pseudo device. See also Pseudo device.

ESN
Enterprise Storage Network. An ESN can provide high availability by configuring multiple paths between connections, configuring alternate paths to Storage Area Network (SAN) components, and deploying redundant SAN components.

F

Fabric
The facilities that link multiple Fibre Channel nodes.

Failed
One of three statuses reported by PowerPath for an HBA. The other statuses are degraded and optimal. Failed means all paths to this HBA are dead and no data is passing through the HBA. See also Degraded and Optimal.

Failover
In PowerPath, the process of detecting a failure on an active path and automatically sending data to another available path.

FC-AL
See Fibre Channel Arbitrated Loop (FC-AL).

Fibre
A general term for all physical media types supported by the Fibre Channel specification, such as optical fiber, twisted pair, and coaxial cable.

Fibre Channel
The general name of an integrated set of ANSI standards that define protocols for flexible information transfer. Fibre Channel is a high-performance serial data channel.

Fibre Channel Arbitrated Loop (FC-AL)
A standard for a shared access loop, in which several Fibre Channel devices are connected (as opposed to point-to-point transmissions). See also Arbitrated loop.

Firmware
Software, typically startup and I/O instructions, stored in an HBA's read-only memory. PowerPath installation requirements often specify both an HBA and a specific revision of that HBA's firmware.

G

GUI
The acronym for graphical user interface, which represents an application with icons, menus, and dialog boxes selectable by a user. Command-line interfaces are another major means of interacting with an application. PowerPath Administrator is a GUI that allows you to interactively manage PowerPath on Windows platforms.
H

Host
The generic name for a computer connected to a network or cluster system.

Host bus adapter (HBA)
A device through which a host can issue I/O requests. PowerPath reports the status of paths originating from HBAs as optimal, degraded, or failed.

Hub
A Fibre Channel device used to connect several devices (such as computer servers and storage systems) into a Fibre Channel Arbitrated Loop (FC-AL). See also Fibre Channel Arbitrated Loop (FC-AL).

Hw path
A path name assigned by the device file system; not to be confused with Path.

I

Identifier (ID)
A sequence of bits or characters that identifies a program, device, controller, or system.

Initiator
A SCSI or Fibre Channel device (usually a host system) that requests an operation to be performed by another device, called the target. See also Target.

Interface
The storage-system component through which a host accesses logical devices: for example, a Fibre Adapter (FA) on a Symmetrix storage system or a Storage Processor (SP) on a VNX or CLARiiON storage system. An array interface port is the front-end interface that connects to the SAN. An interface board (or array board) consists of the interface ports.

L

Least blocks (lb)
A load-balancing and failover policy for PowerPath devices, in which load balance is based on the number of blocks in pending I/Os. I/O requests are assigned to the path with the fewest queued blocks, regardless of the number of requests involved. See also Load balancing.

Least IOs (li)
A load-balancing and failover policy for PowerPath devices, in which load balance is based on the number of pending I/Os. I/O requests are assigned to the path with the fewest queued requests, regardless of total block volume. See also Load balancing.

Load balancing
The activity of distributing the I/O workload across two or more paths, according to a defined policy. See also Path and Policy.

Logical device
The smallest addressable storage unit. A logical device is an entity managed and presented by a storage system, which comprises one or more physical disks or sections of physical disks. Logical devices aggregated and managed at a higher level by a volume manager are referenced as logical volumes rather than logical devices.

Logical Volume Manager (LVM)
Software that manages logical storage devices. Logical volume managers typically reside under the computer server's filesystem.

Logical Unit Number (LUN)
An identifier for a physical or virtual device addressable through a target.

LUN name
See User-assignable LUN name.

LUNZ
A VNX and CLARiiON device used for a management program to communicate with the storage system. A LUNZ is used to tell the storage system that the host exists and what the WWN of the host is. (A WWN, or World Wide Name, uniquely identifies a device on a Fibre Channel network.) A LUNZ device is present when no storage has been assigned to the host. When Access Logix is used on a VNX or CLARiiON system, an agent runs on the host and communicates with the storage system through either the LUNZ or a storage device. On a VNX or CLARiiON system, the LUNZ device is replaced by the first storage device assigned to the host; the agent then communicates through the storage device. See also VCMDB (Volume Configuration Management Database).

M

Mirroring
Maintaining two or more identical copies of a designated volume on two or more disks. Each copy is updated automatically during a write operation. Mirroring improves data availability: if one disk device fails, storage devices automatically use the other disk device to access the data. In this way, the mirrored copies of a disk can be presented as a single, fault-tolerant, virtual disk.

Mirrored pair
A logical volume with all data recorded twice, once on each of two different physical devices.

Mode
An attribute of a PowerPath path. Path mode can be active or standby. See also Active (paths) and Standby (paths).

N

Native device
A device created by the operating system to represent and provide access to a logical device. Typically, a native device is path aware (as opposed to path independent) and represents a single path to a logical device. The device is native in that it is provided by the operating system for use with applications. PowerPath supports native devices on all platforms except AIX. See also PowerPath device and Pseudo device.

Nice name
See User-assignable LUN name.

No redirect (nr)
A load-balancing and failover policy for PowerPath devices, in which neither load balancing nor failover is in effect. If nr is set on a failed path and a native device is used, I/O errors will occur when I/O is directed to that path. If one or more paths have failed and nr is set, data I/O errors can occur. EMC does not recommend using this policy in production environments; use this policy only for diagnostic purposes. This policy is the default for Invista on platforms without a valid PowerPath license. PowerPath version 5.3 and its service packs are the last versions to support setting the NoRedirect (nr) load-balancing and failover policy when a PowerPath license is present. As of PowerPath version 5.5, nr has been removed from the powermt set policy command usage; in subsequent releases you cannot manually set this policy. See also Load balancing.

O

Operating system
Software that manages the use and allocation of computer resources; for example, memory, central processing unit (CPU), disk, and printer access. PowerPath runs on several operating systems. In PowerPath documentation, an operating system and the hardware it runs on are referred to as a platform.

Optimal
One of three statuses reported by PowerPath for an HBA; the others are degraded and failed. Optimal means all paths to this HBA are alive (usable). See also Degraded and Failed.

P

Parameter
A value given to a command variable. PowerPath powermt commands have parameters that users can specify to tailor the effects of the commands.

Path
Any route between nodes in a network. In PowerPath, path refers to the physical route between a host and a storage system Logical Unit (LU). This includes the HBA port, cables, a switch, a storage system interface and port, and an LU. For the iSCSI standard, path is the Initiator-Target-LUN, or ITL, nexus.

Path set
In PowerPath, the group of all paths that read data from and write data to the same logical device.
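The Least blocks (lb) and Least IOs (li) policies defined in this glossary differ only in what they count per path: pending blocks versus pending requests. A sketch of the two selections follows; the queue contents are hypothetical, and this is illustrative only, not PowerPath code.

```python
# Sketch of the two queue-depth policies: Least IOs counts pending
# requests per path; Least blocks counts pending blocks.  The queues
# here are hypothetical -- this is not PowerPath code.

def pick_least_ios(queues):
    """Least IOs (li): path with the fewest queued requests."""
    return min(queues, key=lambda p: len(queues[p]))

def pick_least_blocks(queues):
    """Least blocks (lb): path with the fewest queued blocks."""
    return min(queues, key=lambda p: sum(queues[p]))

# Path A has two small I/Os pending; path B has one large I/O pending.
pending = {"A": [8, 8], "B": [1024]}
print(pick_least_ios(pending))     # B  (1 request beats 2)
print(pick_least_blocks(pending))  # A  (16 blocks beat 1024)
```

The example shows why the glossary stresses "regardless of": the two policies can pick different paths for the same queue state.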

75 Glossary Physical volume Physical volume identifier (PVID) Platform Policy Port PowerPath device PowerPath Encryption Pseudo device In IBM AIX LVM terminology, each physical disk drive connected to the system. A physical volume is an addressable disk on the SCSI bus. By default, AIX refers to the physical volumes as hdisk0, hdisk1, hdisk2, and so on. See also SCSI. On AIX, a unique number written on the first block of the device. The Logical Volume Manager uses this number to identify specific disks. See also Logical Volume Manager (LVM). In PowerPath documentation, an operating system and the hardware it runs on. A load-balancing and failover algorithm for PowerPath devices. This can be changed with powermt set policy. See also Adaptive (ad), CLARiiON optimization (co), Least blocks (lb), Least IOs (li), Request (re), Round robin (rr), StreamIO (si), and Symmetrix optimization (so). PowerPath version 5.3 and service packs is the last version to include support for setting the Basic Failover (bf) and NoRedirect (nr) load-balancing and failover policies when there is a PowerPath license present. As of PowerPath version 5.5 bf and nr have been removed from the powermt set policy command usage. In subsequent releases you will not be able to manually set these policies. (1) An access point for data entry or exit. (2) A receptacle on a device, to which a cable for another device is attached. A device created by PowerPath for each logical device PowerPath discovers. There is a one-to-one relationship between PowerPath devices and logical devices. PowerPath presents PowerPath devices differently, depending on the platform. Much of this difference is due to the design of the host operating system. Depending on the platform, PowerPath may present PowerPath devices as native devices or pseudo devices. See also Logical device, Native device, and Pseudo device. 
PowerPath Encryption
PowerPath Encryption with RSA is host-based software that uses strong encryption to safeguard sensitive data on disk devices. PowerPath Encryption assures the confidentiality of data on a disk drive that is physically removed from a data center, and it prevents anyone who gains unauthorized access to the disk from reading or using the data on that device.

Pseudo device
A special kind of device (an operating system object used to access devices) created by PowerPath. Like native devices once PowerPath is installed, it is path independent. There is one (and only one) pseudo device per path set. See also emcpower device, Native device, Path set, and PowerPath device.

R

R1 (Source) and R2 (Target) devices
A Symmetrix source (R1) device participating in SRDF operations with a target (R2) device. All writes to the R1 device are mirrored to an R2 target device in a remote Symmetrix unit. On some platforms, PowerPath provides failover boot support to R1 and R2 devices. See also SRDF (Symmetrix Remote Data Facility) and Boot device.

Reassignment
On an active-passive storage system, the movement of logical devices from one storage system interface card to another. This occurs when a storage system interface card, or every path to an interface card, fails. If an interface card fails, logical devices are reassigned from the broken interface to another interface; this reassignment is initiated by the other, functioning interface. If all paths from a host to an interface fail, logical devices accessed on those paths are reassigned to another interface with which the host can still communicate; this reassignment is initiated by PowerPath, which instructs the storage system to make the reassignment. Reassignment can take several seconds to complete; however, I/Os do not fail during it. After devices are reassigned, PowerPath detects the changes and seamlessly routes data over the new route. The VNX and CLARiiON term for reassignment is trespassing. See also Active-passive (storage systems).

Redundant path
An independent communication channel between a host and a logical device that already share at least one channel. PowerPath allows you to create redundant paths to promote failover. See also Failover.

Request (re)
A load-balancing and failover policy for PowerPath devices. For native devices, it uses the path that would have been used if PowerPath were not installed. For pseudo devices, it uses one arbitrary path for all I/O. For all devices, path failover is in effect, but load balancing is not. See also Failover and Load balancing.

Round robin (rr)
A load-balancing and failover policy for PowerPath devices, in which I/O requests are assigned to each available path in rotation. See also Load balancing.

S

SAN
Storage Area Network. See also ESN.

SCSI
The acronym for Small Computer System Interface, the ANSI-standard set of protocols that defines connections between personal and other small computers and peripheral devices such as printers and disks. PowerPath supports SCSI standards; specific requirements apply to each supported operating system.

SCSI device
An HBA, peripheral controller, or intelligent peripheral that can attach to a SCSI bus.

Single point of failure (SPOF)
A hardware or software design or configuration that depends on one component for successful operation: if that component fails, the entire application fails. High-availability design tries to eliminate or minimize single points of failure through redundancy, recovery, and/or failover.

Standby (paths)
One of two modes for I/O paths. A standby path is held in reserve. Being set to standby does not mean a path will not be used; rather, the weight of the path is heavily adjusted to preclude its use in normal operations. A standby path can still be selected if it is the best path for a request. Path mode is set with powermt set mode. See also Active (paths).

Storage group
One or more LUNs within a storage system that are reserved for one or more hosts and are inaccessible to other hosts. Access Logix enforces the host-to-storage-group permissions and runs in the storage-system SPs.

StreamIO (si)
A load-balancing and failover policy for PowerPath devices. For each I/O to a particular volume, this policy selects the same path as was selected for the previous I/O to that volume, unless the volume's I/O count since the last path change exceeds the volume's threshold value. When the threshold is exceeded, the policy selects a new path based on the adaptive policy algorithm. The volume I/O count is re-zeroed on each path change. See also Adaptive (ad).
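The StreamIO selection rule can be sketched in Python. This is a hypothetical illustration: the class, path names, and threshold are invented, and the adaptive re-selection step is reduced to a simple stand-in that cycles to another path.

```python
# Hypothetical sketch of the StreamIO (si) selection rule (not PowerPath code).
import itertools

class StreamIOPolicy:
    """Keep issuing I/O on the current path until the per-volume I/O
    count exceeds the volume's threshold, then pick a new path and
    re-zero the count (stand-in for the adaptive algorithm)."""

    def __init__(self, paths, threshold):
        self.paths = list(paths)
        self.threshold = threshold
        self.current = self.paths[0]
        self.io_count = 0
        # Stand-in for adaptive re-selection: cycle through the other paths.
        self._fallback = itertools.cycle(self.paths[1:] + self.paths[:1])

    def select_path(self):
        self.io_count += 1
        if self.io_count > self.threshold:
            # Threshold exceeded: change path and re-zero the volume count
            # (the new I/O counts as the first on the new path).
            self.current = next(self._fallback)
            self.io_count = 1
        return self.current

p = StreamIOPolicy(["A", "B"], threshold=2)
selections = [p.select_path() for _ in range(5)]
print(selections)  # ['A', 'A', 'B', 'B', 'A']
```

The point of the rule is to keep a stream of I/O on one path (preserving any sequential-access benefit) while still rebalancing periodically.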

Striping
Segmenting logically sequential data and writing the segments to multiple physical disks. Placing data across multiple disks improves performance by aggregating the I/O performance of several disks. It also improves availability, as the combined striped data can be presented as a single, fault-tolerant, virtual disk.

Storage device
A physical device that can attach to a SCSI device, which in turn connects to the SCSI bus.

Switch
A Fibre Channel device used to connect other devices (for example, computer servers and storage systems) into a Fibre Channel fabric. In a switched topology, HBAs may be zoned to share storage-system ports. See also Fibre Channel.

Symmetrix optimization (so)
A load-balancing and failover policy for PowerPath devices, in which I/O requests are routed to paths based on an algorithm that takes into account path load and the logical device priority you set with powermt set policy. Load is a function of the number, size, priority, and type of I/O queued on each path. This policy is valid for Symmetrix storage systems only and is the default policy for them on platforms with a valid PowerPath license. It is listed in powermt display output as SymmOpt. See also Load balancing.

SRDF (Symmetrix Remote Data Facility)
The microcode and hardware required to support Symmetrix remote mirroring. See also Mirroring.

T

Target
A SCSI or Fibre Channel device that performs an I/O process requested by another device, called the initiator. See also Initiator.

Trespassing
The VNX and CLARiiON term for reassignment. See also Reassignment.

U

UNIX
An interactive, multitasking, multiuser operating system supported by PowerPath.

User-assignable LUN name
A character string that a user or system manager associates with a logical device on a VNX or CLARiiON array and assigns through Navisphere or Unisphere.
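The segment placement that striping performs can be sketched arithmetically: assuming a fixed stripe unit, a logical block address maps to a disk and an offset on that disk. A hypothetical Python illustration (the function name and parameters are invented for this sketch):

```python
# Hypothetical sketch of striping: map a logical block address (LBA) to a
# (disk, offset) pair, given a stripe unit size and a number of disks.
def stripe_location(lba, stripe_unit_blocks, num_disks):
    stripe_index = lba // stripe_unit_blocks   # which stripe unit overall
    disk = stripe_index % num_disks            # units rotate across the disks
    row = stripe_index // num_disks            # stripe row on that disk
    offset = row * stripe_unit_blocks + lba % stripe_unit_blocks
    return disk, offset

# With a 4-block stripe unit on 3 disks: blocks 0-3 land on disk 0,
# blocks 4-7 on disk 1, blocks 8-11 on disk 2, and block 12 wraps to disk 0.
print(stripe_location(0, 4, 3))   # (0, 0)
print(stripe_location(5, 4, 3))   # (1, 1)
print(stripe_location(12, 4, 3))  # (0, 4)
```

Because consecutive stripe units sit on different disks, a large sequential transfer engages several spindles at once, which is the performance aggregation the entry describes.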
V

VCMDB (Volume Configuration Management Database)
A Symmetrix device used by a management program to communicate with the storage system. A VCMDB is used to tell the storage system that the host exists and what the WWN of the host is. (A WWN, or World Wide Name, uniquely identifies a device on a Fibre Channel network.) A VCMDB is present when using Volume Logix to perform LUN masking. When storage is assigned to the host, the storage appears in addition to the VCMDB. See also LUNZ.

Volume
An abstracted, logical disk device. Volumes read and write data like other disk devices, but typically they do not support other operations. A Symmetrix volume may comprise storage on one or more Symmetrix devices, but it is presented to hosts as a single disk device. A volume can be a single disk partition or multiple disk partitions on one or more physical drives. A volume can coincide with a logical device, include multiple logical devices, or contain only a piece of a logical device. Applications that use volumes do not need to be aware of the underlying physical structure; software handles the mapping of virtual addresses to physical addresses. See also Logical device.

Volume group
A group of physical volumes.

Volume manager
Software that creates and manages logical volumes that span multiple physical disks, allowing greater flexibility and reliability for storing data.

W

Write throttling
If enabled, limits the number of queued writes to the common I/O queue in the HBA driver; the excess writes are queued in PowerPath instead. As a result, read requests do not get delayed behind a large number of write requests. Write throttling is disabled by default. See also Host bus adapter (HBA).

Z

Zone
A set of devices that can access one another. All devices connected to a Fibre Channel connectivity product (such as the ED-1032 Director) may be configured into one or more zones. Devices in the same zone can see each other, while those in different zones cannot. Zoning allows an administrator to group several devices by function or location. See also Fibre Channel.
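The Write throttling entry above amounts to capping the writes placed on the HBA driver's common I/O queue and holding the excess in the multipathing layer, so reads never sit behind a long write backlog. A hypothetical Python sketch (the class, queue names, and submit interface are invented for illustration; completion handling is omitted):

```python
# Hypothetical sketch of write throttling (not PowerPath code): cap the
# writes queued in the HBA driver; hold extra writes upstream so that
# reads are not delayed behind a large number of queued writes.
from collections import deque

class ThrottledHBAQueue:
    def __init__(self, write_limit):
        self.write_limit = write_limit
        self.hba_queue = deque()    # common I/O queue in the HBA driver
        self.held_writes = deque()  # excess writes held in the multipath layer

    def submit(self, io):
        kind = io[0]  # io is ("read", n) or ("write", n)
        queued_writes = sum(1 for k, _ in self.hba_queue if k == "write")
        if kind == "write" and queued_writes >= self.write_limit:
            # Throttle: hold the write upstream instead of the HBA queue.
            self.held_writes.append(io)
        else:
            # Reads, and writes under the limit, go to the HBA queue.
            self.hba_queue.append(io)

q = ThrottledHBAQueue(write_limit=2)
for io in [("write", 1), ("write", 2), ("write", 3), ("read", 1)]:
    q.submit(io)
print(len(q.hba_queue), len(q.held_writes))  # 3 1
```

In the sketch the read submitted last still reaches the HBA queue immediately, while the third write waits upstream, which is the behavior the entry describes.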

Index

A
Access Logix 69; Active-active storage systems 28; Adapter 69; Adaptive (ad) 69; Alive path state 35; Application performance tuning 38; Arbitrated loop 70; Autorestore. See Periodic autorestore

B
Boot device 70; Bus 70

C
Channel 70; Channel group 38; Cluster 70; Consistency group 70; Controller 71

D
Data availability 71; Data channel 71; Dead path state 35; Default 71; Device: definition 71, driver 71, native 30, number 71, pseudo 31; Disabled HBA status 71; documentation, related 9

E
E_Port 71; EMC online support website 9; emcpower device 72; Enabled HBA status 71; Encryption. See PowerPath Encryption; ESN 72

F
FA 26; Fabric 72; FC-AL 72; Fibre 72; Fibre Channel: configuration requirements 43, definition 72; Fibre Channel Arbitrated Loop 72; Firmware 72

G
GUI 72

H
HBA. See Host bus adapter; Host 72; Host bus adapter (HBA): definition 27, disabled status 71, enabled status 71; Hub 72

I
Identifier (ID) 73; Initiator 73; Interface 26, 73; Invista storage devices 52

L
Load balancing 20, 32; Load balancing group 29; Logical Unit Number (LUN) 73; Logical Volume Manager (LVM) 73; LUNZ 73

M
Mirrored pair 74; Mirroring 73

N
Native devices 30

O
Operating system 74

P
Parameter 74; Path set 29; Paths: about 27, load balancing 32, path set 29, state 35, testing 34, 35; Performance, tuning applications 38; Periodic autorestore 36; Physical volume 75; Physical Volume Identifier (PVID) 75; Platform 75; Ports, using multiple 26; powermt set mode 38; powermt utility commands 39; PowerPath Encryption 25; PowerPath Fabric Failover 55; PowerPath SE 55; Pseudo devices 31

R
R1 (source) and R2 (target) devices 75; Reassignment 28, 75; Redundant path 76

S
SAN 76; SCSI: configuration requirements 50, definition 76, device 76; Single point of failure (SPOF) 76; SP (Storage Processor) 26; State, path 35; Storage 76; Storage device 77; Storage group 76; Stream I/O (si) 76; Striping 77; Switch 77; Symmetrix Remote Data Facility (SRDF) 77

T
Target 77; Testing paths 34, 35; Trespassing 28, 76; Tuning application performance 38

U
UNIX 77; Utility Kit, PowerPath 55

V
VCMDB (Volume Configuration Management Database) 77; Volume 77; Volume group 78; Volume manager 78

Z
Zone

EMC PowerPath Family Version 5.7 Product Guide

TECHNICAL NOTES. Celerra Physical to Virtual IP Address Migration Utility Technical Notes P/N 300-012-104 REV A03. EMC Ionix ControlCenter 6.

TECHNICAL NOTES. Celerra Physical to Virtual IP Address Migration Utility Technical Notes P/N 300-012-104 REV A03. EMC Ionix ControlCenter 6. TECHNICAL NOTES EMC Ionix ControlCenter 6.1 Celerra Physical to Virtual IP Address Migration Utility Technical Notes P/N 300-012-104 REV A03 August 2011 These release notes contain supplemental information

More information

TECHNICAL NOTES. Technical Notes P/N 302-000-535 REV 02

TECHNICAL NOTES. Technical Notes P/N 302-000-535 REV 02 TECHNICAL NOTES EMC NetWorker Module for Microsoft: Performing Exchange Server Granular Recovery by using EMC NetWorker Module for Microsoft with Ontrack PowerControls Release 3.0 SP1 Technical Notes P/N

More information

EMC Support Matrix. Interoperability Results. P/N 300 000 166 ECO 36106 Rev B30

EMC Support Matrix. Interoperability Results. P/N 300 000 166 ECO 36106 Rev B30 EMC Support Matrix Interoperability Results P/N 300 000 166 ECO 36106 Rev B30 Table of Contents Copyright EMC Corporation 2006...1 EMC's Policies and Requirements for EMC Support Matrix...2 Selections...4

More information

EMC NetWorker. Server Disaster Recovery and Availability Best Practices Guide. Release 8.0 P/N 300-014-176 REV 01

EMC NetWorker. Server Disaster Recovery and Availability Best Practices Guide. Release 8.0 P/N 300-014-176 REV 01 EMC NetWorker Release 8.0 Server Disaster Recovery and Availability Best Practices Guide P/N 300-014-176 REV 01 Copyright 1990-2012 EMC Corporation. All rights reserved. Published in the USA. Published

More information

EMC Perspective. Application Discovery and Automatic Mapping for an On-Demand Infrastructure

EMC Perspective. Application Discovery and Automatic Mapping for an On-Demand Infrastructure EMC Perspective Application Discovery and Automatic Mapping for an On-Demand Infrastructure Table of Contents Introduction......................................................3 Challenges Facing an On-Demand

More information

All other trademarks used herein are the property of their respective owners.

All other trademarks used herein are the property of their respective owners. Welcome to Atmos 2.1 System Administration course. Click the Notes tab to view text that corresponds to the audio recording. Click the Supporting Materials tab to download a PDF version of this elearning.

More information

EMC NetWorker Module for Microsoft Applications Release 2.3. Application Guide P/N 300-011-105 REV A02

EMC NetWorker Module for Microsoft Applications Release 2.3. Application Guide P/N 300-011-105 REV A02 EMC NetWorker Module for Microsoft Applications Release 2.3 Application Guide P/N 300-011-105 REV A02 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright

More information

Table 1 on page 4 presents the revision history of this document: Revision Date Description. A01 March 30, 2012 Initial release of this document.

Table 1 on page 4 presents the revision history of this document: Revision Date Description. A01 March 30, 2012 Initial release of this document. TECHNICAL NOTES Backup and Recovery of EMC Documentum Content Server by Using CYA HOTBackup and EMC NetWorker Technical Notes P/N 300-013-730 REV A01 March 30, 2012 These technical notes contain information

More information

EMC PowerPath and PowerPath/VE for Microsoft Windows Version 5.5 and Minor Releases

EMC PowerPath and PowerPath/VE for Microsoft Windows Version 5.5 and Minor Releases EMC PowerPath and PowerPath/VE for Microsoft Windows Version 5.5 and Minor Releases Installation and Administration Guide P/N 300-010-646 REV A05 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103

More information

A Guide to. Server Virtualization. Block Storage Virtualization. File Storage Virtualization. Infrastructure Virtualization Services

A Guide to. Server Virtualization. Block Storage Virtualization. File Storage Virtualization. Infrastructure Virtualization Services A Guide to Virtualizing Your Information Infrastructure Server Virtualization Block Storage Virtualization File Storage Virtualization Infrastructure Virtualization Services Table of Contents Virtualizing

More information

How To Use A Microsoft Networker Module For Windows 8.2.2 (Windows) And Windows 8 (Windows 8) (Windows 7) (For Windows) (Powerbook) (Msa) (Program) (Network

How To Use A Microsoft Networker Module For Windows 8.2.2 (Windows) And Windows 8 (Windows 8) (Windows 7) (For Windows) (Powerbook) (Msa) (Program) (Network EMC NetWorker Module for Microsoft Applications Release 2.3 Application Guide P/N 300-011-105 REV A03 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright

More information

VNX Unified Storage Management Lab Guide

VNX Unified Storage Management Lab Guide VNX Unified Storage Management Lab Guide January 2014 EMC Education Service Copyright Copyright 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 2013, 2014 EMC Corporation.

More information

EMC Atmos Virtual Edition with EMC VNX Series

EMC Atmos Virtual Edition with EMC VNX Series EMC Atmos Virtual Edition with EMC VNX Series h8281.2 Copyright, 2011 EMC Corporation. All rights reserved. Published October, 2011 EMC believes the information in this publication is accurate as of its

More information

Microsoft SQL Server 2008 on EMC VNXe Series

Microsoft SQL Server 2008 on EMC VNXe Series Microsoft SQL Server 2008 on EMC VNXe Series h8286 Copyright 2011 EMC Corporation. All rights reserved. Published September, 2011 EMC believes the information in this publication is accurate as of its

More information

NetWorker Module for Microsoft SQL Server INSTALLATION GUIDE. Release 5.0 P/N E6-1799-01

NetWorker Module for Microsoft SQL Server INSTALLATION GUIDE. Release 5.0 P/N E6-1799-01 NetWorker Module for Microsoft SQL Server Release 5.0 INSTALLATION GUIDE P/N E6-1799-01 Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2005 Corporation.

More information

How To Use An Uniden Vnx5300 (Vx53I) With A Power Supply (Sps) And Power Supply Power Supply For A Power Unit (Sse) (Power Supply) (Sus) (Dae

How To Use An Uniden Vnx5300 (Vx53I) With A Power Supply (Sps) And Power Supply Power Supply For A Power Unit (Sse) (Power Supply) (Sus) (Dae EMC VNX VNX5300 Block Installation Guide P/N 300-012-924 REV 04 Copyright 2012 EMC Corporation. All rights reserved. Published in the USA. Published June, 2012 EMC believes the information in this publication

More information

EMC NetWorker Module for Microsoft Exchange Server Release 5.1

EMC NetWorker Module for Microsoft Exchange Server Release 5.1 EMC NetWorker Module for Microsoft Exchange Server Release 5.1 Installation Guide P/N 300-004-750 REV A02 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright

More information

EMC PRODUCT W ARRANTY AND M AINTENANCE T ABLE

EMC PRODUCT W ARRANTY AND M AINTENANCE T ABLE EMC PRODUCT W ARRANTY AND M AINTENANCE T ABLE The table below sets forth EMC product-specific warranty and maintenance terms and information. Each product identified as equipment also includes its related

More information

EMC CLARiiON Asymmetric Active/Active Feature

EMC CLARiiON Asymmetric Active/Active Feature EMC CLARiiON Asymmetric Active/Active Feature A Detailed Review Abstract This white paper provides an overview of the EMC CLARiiON Asymmetric Active/Active feature. It highlights the configuration, best

More information

EMC Smarts Application Discovery Manager and Multisite Data Aggregation

EMC Smarts Application Discovery Manager and Multisite Data Aggregation EMC Smarts Application Discovery Manager and Multisite Data Aggregation Abstract: Without a complete overview, which includes a detailed as well as global representation, CIOs and their IT team may not

More information

EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution

EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution Release 3.0 User Guide P/N 300-999-671 REV 02 Copyright 2007-2013 EMC Corporation. All rights reserved. Published in the USA.

More information

EMC PowerPath Family

EMC PowerPath Family DATA SHEET EMC PowerPath Family PowerPath Multipathing PowerPath Migration Enabler PowerPath Encryption with RSA The enabler for EMC host-based solutions The Big Picture Intelligent high-performance path

More information

EMC NetWorker. Licensing Guide. Release 8.0 P/N 300-013-596 REV A01

EMC NetWorker. Licensing Guide. Release 8.0 P/N 300-013-596 REV A01 EMC NetWorker Release 8.0 Licensing Guide P/N 300-013-596 REV A01 Copyright (2011-2012) EMC Corporation. All rights reserved. Published in the USA. Published June, 2012 EMC believes the information in

More information

RELEASE NOTES. Release Notes P/N 300-999-667 REV 08. EMC PowerPath and PowerPath/VE Family for Windows Version 5.7 and Minor Releases.

RELEASE NOTES. Release Notes P/N 300-999-667 REV 08. EMC PowerPath and PowerPath/VE Family for Windows Version 5.7 and Minor Releases. RELEASE NOTES EMC PowerPath and PowerPath/VE Family for Windows Version 5.7 and Minor Releases Release Notes P/N 300-999-667 REV 08 May 21, 2014 These release notes contain information about features,

More information

EMC NetWorker VSS Client for Microsoft Windows Server 2003 First Edition

EMC NetWorker VSS Client for Microsoft Windows Server 2003 First Edition EMC NetWorker VSS Client for Microsoft Windows Server 2003 First Edition Installation Guide P/N 300-003-994 REV A01 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com

More information

EMC SourceOne for Microsoft SharePoint Storage Management Version 7.1

EMC SourceOne for Microsoft SharePoint Storage Management Version 7.1 EMC SourceOne for Microsoft SharePoint Storage Management Version 7.1 Installation Guide 302-000-227 REV 01 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright

More information

AX4 5 Series Software Overview

AX4 5 Series Software Overview AX4 5 Series Software Overview March 6, 2008 This document presents an overview of all software you need to configure and monitor any AX4 5 series storage system running the Navisphere Express management

More information

IMPROVING VMWARE DISASTER RECOVERY WITH EMC RECOVERPOINT Applied Technology

IMPROVING VMWARE DISASTER RECOVERY WITH EMC RECOVERPOINT Applied Technology White Paper IMPROVING VMWARE DISASTER RECOVERY WITH EMC RECOVERPOINT Applied Technology Abstract EMC RecoverPoint provides full support for data replication and disaster recovery for VMware ESX Server

More information

EMC Solutions at Microsoft: Optimizing Exchange Backup and Recovery with VSS (Volume Shadowcopy Service) Technology Integration

EMC Solutions at Microsoft: Optimizing Exchange Backup and Recovery with VSS (Volume Shadowcopy Service) Technology Integration EMC Perspective : Optimizing Exchange Backup and Recovery with VSS (Volume Shadowcopy Service) Technology Integration EMC CLARiiON, SnapView, and EMC Replication Manager/SE Best Practices Situation Microsoft

More information

EMC VSI for VMware vsphere: Storage Viewer

EMC VSI for VMware vsphere: Storage Viewer EMC VSI for VMware vsphere: Storage Viewer Version 5.6 Product Guide P/N 300-013-072 REV 07 Copyright 2010 2013 EMC Corporation. All rights reserved. Published in the USA. Published September 2013 EMC

More information

EMC Data Domain Management Center

EMC Data Domain Management Center EMC Data Domain Management Center Version 1.1 Initial Configuration Guide 302-000-071 REV 04 Copyright 2012-2015 EMC Corporation. All rights reserved. Published in USA. Published June, 2015 EMC believes

More information

EMC NetWorker. Server Disaster Recovery and Availability Best Practices Guide. Release 8.0 Service Pack 1 P/N 300-999-723 REV 01

EMC NetWorker. Server Disaster Recovery and Availability Best Practices Guide. Release 8.0 Service Pack 1 P/N 300-999-723 REV 01 EMC NetWorker Release 8.0 Service Pack 1 Server Disaster Recovery and Availability Best Practices Guide P/N 300-999-723 REV 01 Copyright 1990-2012 EMC Corporation. All rights reserved. Published in the

More information

Windows Host Utilities 6.0.2 Installation and Setup Guide

Windows Host Utilities 6.0.2 Installation and Setup Guide Windows Host Utilities 6.0.2 Installation and Setup Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S.A. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone: +1 (888) 463-8277

More information

Today s Choices for. Compliance. Regulatory Requirements E-Mail and Content Archiving Records Management ediscovery

Today s Choices for. Compliance. Regulatory Requirements E-Mail and Content Archiving Records Management ediscovery Today s Choices for Compliance Regulatory Requirements E-Mail and Content Archiving Records Management ediscovery Today s Choices for Compliance Regulatory Requirements Enables IT to provide for the confidentiality,

More information

Installing Management Applications on VNX for File

Installing Management Applications on VNX for File EMC VNX Series Release 8.1 Installing Management Applications on VNX for File P/N 300-015-111 Rev 01 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright

More information

EMC DiskXtender File System Manager for UNIX/Linux Release 3.5

EMC DiskXtender File System Manager for UNIX/Linux Release 3.5 EMC DiskXtender File System Manager for UNIX/Linux Release 3.5 Administrator s Guide P/N 300-009-573 REV. A01 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com

More information

Release Notes P/N 300-011-782 Rev A07

Release Notes P/N 300-011-782 Rev A07 EMC PowerPath Family for Linux Version 5.6 and Minor Releases Release Notes P/N 300-011-782 Rev A07 March 07, 2012 These release notes contain information about features, system requirements, and limitations

More information

EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution

EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution Release 8.2 User Guide P/N 302-000-658 REV 01 Copyright 2007-2014 EMC Corporation. All rights reserved. Published in the USA.

More information

EMC VNXe Series Using a VNXe System with CIFS Shared Folders

EMC VNXe Series Using a VNXe System with CIFS Shared Folders EMC VNXe Series Using a VNXe System with CIFS Shared Folders VNXe Operating Environment Version 2.4 P/N 300-010-548 REV 04 Connect to Storage Copyright 2013 EMC Corporation. All rights reserved. Published

More information

EMC ViPR Controller. User Interface Virtual Data Center Configuration Guide. Version 2.4 302-002-416 REV 01

EMC ViPR Controller. User Interface Virtual Data Center Configuration Guide. Version 2.4 302-002-416 REV 01 EMC ViPR Controller Version 2.4 User Interface Virtual Data Center Configuration Guide 302-002-416 REV 01 Copyright 2014-2015 EMC Corporation. All rights reserved. Published in USA. Published November,

More information

EMC DiskXtender File System Manager for NAS MICROSOFT WINDOWS INSTALLATION GUIDE. Release 2.0 (Beta Version) P/N E6-1789-01

EMC DiskXtender File System Manager for NAS MICROSOFT WINDOWS INSTALLATION GUIDE. Release 2.0 (Beta Version) P/N E6-1789-01 EMC DiskXtender File System Manager for NAS Release 2.0 (Beta Version) MICROSOFT WINDOWS INSTALLATION GUIDE P/N E6-1789-01 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000

More information

Setup for Failover Clustering and Microsoft Cluster Service

Setup for Failover Clustering and Microsoft Cluster Service Setup for Failover Clustering and Microsoft Cluster Service ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until the

More information

EMC NetWorker. Licensing Process Guide SECOND EDITION P/N 300-007-566 REV A02. EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103

EMC NetWorker. Licensing Process Guide SECOND EDITION P/N 300-007-566 REV A02. EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 EMC NetWorker Licensing Process Guide SECOND EDITION P/N 300-007-566 REV A02 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2009 EMC Corporation.

More information

Validating Host Multipathing with EMC VPLEX Technical Notes P/N 300-012-789 REV A01 June 1, 2011

Validating Host Multipathing with EMC VPLEX Technical Notes P/N 300-012-789 REV A01 June 1, 2011 Validating Host Multipathing with EMC VPLEX Technical Notes P/N 300-012-789 REV A01 June 1, 2011 This technical notes document contains information on these topics: Introduction... 2 EMC VPLEX overview...

More information

Next-Generation Backup, Recovery, and Archive

Next-Generation Backup, Recovery, and Archive A Guide to Next-Generation Backup, Recovery, and Archive Integrated Backup-to-Disk Solutions File System and E-mail Archiving Enhancing Tivoli Storage Manager (TSM) Backup Environments Remote and Branch

More information

EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution

EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution Version 8.2 Service Pack 1 User Guide 302-001-235 REV 01 Copyright 2007-2015 EMC Corporation. All rights reserved. Published

More information

EMC Avamar 7.0 and EMC Data Domain System

EMC Avamar 7.0 and EMC Data Domain System EMC Avamar 7.0 and EMC Data Domain System Integration Guide P/N 300-015-224 REV 02 Copyright 2001-2013 EMC Corporation. All rights reserved. Published in the USA. Published July, 2013 EMC believes the

More information

EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution

EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution EMC NetWorker Module for Microsoft for Windows Bare Metal Recovery Solution Version 9.0 User Guide 302-001-755 REV 01 Copyright 2007-2015 EMC Corporation. All rights reserved. Published in USA. Published

More information

EMC POWERPATH LOAD BALANCING AND FAILOVER

EMC POWERPATH LOAD BALANCING AND FAILOVER White Paper EMC POWERPATH LOAD BALANCING AND FAILOVER Comparison with native MPIO operating system solutions Abstract EMC PowerPath and PowerPath/VE provide intelligent load balancing and failover. This

More information

EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, Symmetrix Management Console, and VMware vcenter Converter

EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, Symmetrix Management Console, and VMware vcenter Converter EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, VMware vcenter Converter A Detailed Review EMC Information Infrastructure Solutions Abstract This white paper

More information

Veritas Cluster Server from Symantec

Veritas Cluster Server from Symantec Delivers high availability and disaster recovery for your critical applications Data Sheet: High Availability Overview protects your most important applications from planned and unplanned downtime. Cluster

More information

Windows Host Utilities 6.0 Installation and Setup Guide

Windows Host Utilities 6.0 Installation and Setup Guide Windows Host Utilities 6.0 Installation and Setup Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S.A. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone: +1 (888) 4-NETAPP

More information

Veritas Storage Foundation High Availability for Windows by Symantec

Veritas Storage Foundation High Availability for Windows by Symantec Veritas Storage Foundation High Availability for Windows by Symantec Simple-to-use solution for high availability and disaster recovery of businesscritical Windows applications Data Sheet: High Availability

More information

EMC Navisphere Manager ADMINISTRATOR S GUIDE P/N 069001125 REV A12

EMC Navisphere Manager ADMINISTRATOR S GUIDE P/N 069001125 REV A12 EMC Navisphere Manager ADMINISTRATOR S GUIDE P/N 069001125 REV A12 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508 -435-1000 www.emc.com Copyright 2003-2005 EMC Corporation. All

More information

IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE

IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE White Paper IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE Abstract This white paper focuses on recovery of an IBM Tivoli Storage Manager (TSM) server and explores

More information

Dell High Availability Solutions Guide for Microsoft Hyper-V

Dell High Availability Solutions Guide for Microsoft Hyper-V Dell High Availability Solutions Guide for Microsoft Hyper-V www.dell.com support.dell.com Notes and Cautions NOTE: A NOTE indicates important information that helps you make better use of your computer.

More information

How To Write An Emma Document On A Microsoft Server On A Windows Server On An Ubuntu 2.5 (Windows) Or Windows 2 (Windows 8) On A Pc Or Macbook (Windows 2) On An Unidenor

How To Write An Emma Document On A Microsoft Server On A Windows Server On An Ubuntu 2.5 (Windows) Or Windows 2 (Windows 8) On A Pc Or Macbook (Windows 2) On An Unidenor EMC Avamar 7.0 for Windows Server User Guide P/N 300-015-229 REV 04 Copyright 2001-2014 EMC Corporation. All rights reserved. Published in the USA. Published May, 2014 EMC believes the information in this

More information

Best Practices for Installing and Configuring the Hyper-V Role on the LSI CTS2600 Storage System for Windows 2008

Best Practices for Installing and Configuring the Hyper-V Role on the LSI CTS2600 Storage System for Windows 2008 Best Practices Best Practices for Installing and Configuring the Hyper-V Role on the LSI CTS2600 Storage System for Windows 2008 Installation and Configuration Guide 2010 LSI Corporation August 13, 2010

More information

EMC Data Protection Search
Version 1.0 Security Configuration Guide, 302-001-611 REV 01. Copyright 2014-2015 EMC Corporation. Published April 20, 2015.

IMPLEMENTING EMC FEDERATED LIVE MIGRATION WITH MICROSOFT WINDOWS SERVER FAILOVER CLUSTERING SUPPORT
White paper. This white paper examines deployment and integration of Federated Live Migration…

CONFIGURATION BEST PRACTICES FOR MICROSOFT SQL SERVER AND EMC SYMMETRIX VMAXe
White paper. Simplified configuration, deployment, and management for Microsoft SQL Server on Symmetrix VMAXe.

EMC SourceOne Email Management Version 7.1
Installation Guide, 302-000-174 REV 02. Copyright 2006-2013 EMC Corporation, Hopkinton, MA.

EMC SourceOne Products Compatibility Guide
300-008-041 REV 54. Copyright 2005-2016 EMC Corporation. Published February 23, 2016.

EMC DiskXtender MediaStor Release 6.2, Microsoft Windows Version
Administrator's Guide, P/N 300-003-810 A01. EMC Corporation, Hopkinton, MA.

Setup for Microsoft Cluster Service: ESX Server 3.0.1 and VirtualCenter 2.0.1
Revision 20060818. The most up-to-date technical documentation is available at http://www.vmware.com/support/.

StarWind Virtual SAN: Installation and Configuration of Hyper-Converged 2 Nodes with Hyper-V Cluster
Technical paper, March 2015. StarWind Software.

Setup for Failover Clustering and Microsoft Cluster Service
Update 1. ESX 4.0, ESXi 4.0, vCenter Server 4.0. This document supports the listed version of each product and all subsequent versions.

SAN Conceptual and Design Basics
VMware Infrastructure 3 technical note. VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high-speed network that connects computer…

Building the Virtual Information Infrastructure
Technology Concepts and Business Considerations. A virtual information infrastructure allows organizations to make the most of their data center environment by sharing computing, network, and storage…

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage
Applied Technology. This white paper describes various backup and recovery solutions available for SQL…

EMC Backup and Recovery for SAP with IBM DB2 on IBM AIX
Enabled by EMC Symmetrix DMX-4, EMC CLARiiON CX3, EMC Replication Manager, IBM Tivoli Storage Manager, and EMC NetWorker. Reference Architecture, EMC Global Solutions, Hopkinton, MA.

Setup for Failover Clustering and Microsoft Cluster Service
Update 1. ESXi 5.1, vCenter Server 5.1. This document supports the listed version of each product and all subsequent versions.

N_Port ID Virtualization
A Detailed Review. This white paper provides a consolidated study of the N_Port ID Virtualization (NPIV) feature and its usage on different platforms, and of NPIV integration with EMC PowerPath on the AIX platform.

Storage Pool Management Feature in EMC Virtual Storage Integrator
Version 4.0. Installation and configuration of SPM, detailed use cases, and a customer example. Drew Tonnesen, Lee McColgan, Bill Stronge.

EMC Ionix MPLS Manager
Version 9.0 Discovery Guide, P/N 300-013-009 REV A01. Copyright 2004-2011 EMC Corporation. Published December 2011.

EMC Invista: The Easy to Use Storage Manager
EMC's Invista SAN virtualization system, tested February 2006. Invista delivers centrally managed LUN virtualization, data mobility, and copy services…

EMC NetWorker Module for Microsoft Exchange Server Release 5.1
Administration Guide, P/N 300-004-749 REV A02. EMC Corporation, Hopkinton, MA.

MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION
Reference architecture guide. EMC VNX, EMC VMAX, EMC ViPR, and EMC VPLEX with Microsoft Windows Hyper-V, Microsoft Windows Azure Pack, and Microsoft System…

Virtualized Exchange 2007 Archiving with EMC EmailXtender/DiskXtender to EMC Centera
EMC Solutions for Microsoft Exchange 2007. EMC Commercial Solutions Group, Hopkinton, MA.

EMC NetWorker Release 7.4 Service Pack 1, Multiplatform Version
Cluster Installation Guide, P/N 300-005-509 REV A01. EMC Corporation, Hopkinton, MA.

MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX
White paper. This white paper highlights EMC's Hyper-V scalability test, in which one of the largest Hyper-V environments in the world was created.

EMC Symmetrix V-Max and Microsoft SQL Server
Applied Technology. This white paper examines deployment and integration of Microsoft SQL Server solutions on the EMC Symmetrix V-Max Series with Enginuity.

EMC Solutions for Disaster Recovery: EMC RecoverPoint
Daniel Golic, Technology Consultant. Banja Luka, May 27, 2008. Includes a RecoverPoint 3.0 overview and the CIO's information storage and management requirements.

Virtual Fibre Channel for Hyper-V
Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest…

EMC Replication Manager for Virtualized Environments
A Detailed Review. Today's IT organization is constantly looking for ways to increase the efficiency of valuable computing resources…

Fibre Channel SAN Configuration Guide
ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5. This document supports the listed version of each product and all subsequent versions.

EMC Data Protection Advisor 6.0
White paper. EMC Data Protection Advisor provides a comprehensive set of features to reduce the complexity of managing data protection environments and improve compliance…

VMware Site Recovery Manager with EMC RecoverPoint
Implementation Guide. EMC Global Solutions Centers, Hopkinton, MA.

HP SCOM Management Packs User Guide
This guide describes the HP extensions for Microsoft System Center Operations Manager that are provided as part of HP Insight Control for Microsoft System Center.

Dell PowerVault MD3400 and MD3420 Series Storage Arrays Deployment Guide
Dell.

Using Windows Administrative Tools on VNX
EMC VNX Series Release 7.0, P/N 300-011-833 REV A01. EMC Corporation, Hopkinton, MA.

HP EVA to 3PAR Online Import for EVA-to-3PAR StoreServ Migration
Technology Insight Paper by Leah Schoeb, December 3, 2012.

Domain Management with EMC Unisphere for VNX
White paper, EMC Unified Storage Solutions. EMC Unisphere software manages EMC VNX, EMC Celerra, and EMC CLARiiON storage systems…

HP StoreVirtual DSM for Microsoft MPIO Deployment Guide
HP Part Number AX696-96254. Published March 2013, Edition 3. Copyright 2011, 2013 Hewlett-Packard Development Company, L.P.

EMC SourceOne Offline Access
Version 7.2 User Guide, 302-000-963 REV 01. Copyright 2005-2015 EMC Corporation. Published April 30, 2015.

HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010
White paper. This white paper demonstrates key functionality validated in a lab environment…