Fibre Channel and iSCSI Configuration Guide
Fibre Channel and iSCSI Configuration Guide for the Data ONTAP 7.3 Release Family

NetApp, Inc.
495 East Java Drive
Sunnyvale, CA U.S.A.
Telephone: +1 (408)
Fax: +1 (408)
Support telephone: +1 (888) 4-NETAPP
Documentation comments:
Information Web:
Part number _A0
December 2009
Contents

Copyright information
Trademark information
About this guide
    Audience
    Terminology
    Keyboard and formatting conventions
    Special messages
    How to send your comments
iSCSI topologies
    Single-network active/active configuration in an iSCSI SAN
    Multinetwork active/active configuration in an iSCSI SAN
    Direct-attached single-controller configurations in an iSCSI SAN
    VLANs
        Static VLANs
        Dynamic VLANs
Fibre Channel topologies
    FC onboard and expansion port combinations
    Fibre Channel supported hop count
    Fibre Channel switch configuration best practices
    The cfmode setting
    Host multipathing software requirements
    60xx supported topologies
        60xx target port configuration recommendations
        60xx: Single-fabric single-controller configuration
        60xx: Single-fabric active/active configuration
        60xx: Multifabric active/active configuration
        60xx: Direct-attached single-controller configuration
        60xx: Direct-attached active/active configuration
    31xx supported topologies
        31xx target port configuration recommendations
        31xx: Single-fabric single-controller configuration
        31xx: Single-fabric active/active configuration
        31xx: Multifabric active/active configuration
        31xx: Direct-attached single-controller configurations
        31xx: Direct-attached active/active configuration
    30xx supported topologies
        30xx target port configuration recommendations
        3040 and 3070 supported topologies
        3020 and 3050 supported topologies
    FAS20xx supported topologies
        FAS20xx: Single-fabric single-controller configuration
        FAS20xx: Single-fabric active/active configuration
        FAS20xx: Multifabric single-controller configuration
        FAS20xx: Multifabric active/active configuration
        FAS20xx: Direct-attached single-controller configurations
        FAS20xx: Direct-attached active/active configuration
    FAS270/GF270c supported topologies
        FAS270/GF270c: Single-fabric active/active configuration
        FAS270/GF270c: Multifabric active/active configuration
        FAS270/GF270c: Direct-attached configurations
    Other Fibre Channel topologies
        R200 and 900 series supported topologies
Fibre Channel over Ethernet overview
    FCoE initiator and target combinations
    Fibre Channel over Ethernet supported topologies
        FCoE: FCoE initiator to FC target configuration
        FCoE: FCoE end-to-end configuration
        FCoE: FCoE mixed with FC
        FCoE: FCoE mixed with IP storage protocols
Fibre Channel and FCoE zoning
    Port zoning
    World Wide Name based zoning
    Individual zones
    Single-fabric zoning
    Dual-fabric active/active configuration zoning
Shared SAN configurations
ALUA configurations
    AIX (Native OS, FC) Host Utilities configurations that support ALUA
    ESX configurations that support ALUA
    HP-UX configurations that support ALUA
    Linux configurations that support ALUA
    Solaris (MPxIO/FC) Host Utilities configurations that support ALUA
    Windows configurations that support ALUA
Configuration limits
    Configuration limit parameters and definitions
    Host operating system configuration limits for iSCSI and FC
    60xx and 31xx single-controller limits
    60xx and 31xx active/active configuration limits
    30xx single-controller limits
    30xx active/active configuration limits
    FAS20xx single-controller limits
    FAS20xx active/active configuration limits
    FAS270/GF270, 900 series, and R200 single-controller limits
    FAS270c/GF270c and 900 series active/active configuration limits
Index
7 Copyright information 7 Copyright information Copyright NetApp, Inc. All rights reserved. Printed in the U.S.A. No part of this document covered by copyright may be reproduced in any form or by any means graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system without prior written permission of the copyright owner. Software derived from copyrighted NetApp material is subject to the following license and disclaimer: THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp. The product described in this manual may be protected by one or more U.S.A. patents, foreign patents, or pending applications. RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS (October 1988) and FAR (June 1987).
9 Trademark information 9 Trademark information NetApp, the Network Appliance logo, the bolt design, NetApp-the Network Appliance Company, Cryptainer, Cryptoshred, DataFabric, DataFort, Data ONTAP, Decru, FAServer, FilerView, FlexClone, FlexVol, Manage ONTAP, MultiStore, NearStore, NetCache, NOW NetApp on the Web, SANscreen, SecureShare, SnapDrive, SnapLock, SnapManager, SnapMirror, SnapMover, SnapRestore, SnapValidator, SnapVault, Spinnaker Networks, SpinCluster, SpinFS, SpinHA, SpinMove, SpinServer, StoreVault, SyncMirror, Topio, VFM, VFM Virtual File Manager, and WAFL are registered trademarks of NetApp, Inc. in the U.S.A. and/or other countries. gfiler, Network Appliance, SnapCopy, Snapshot, and The evolution of storage are trademarks of NetApp, Inc. in the U.S.A. and/or other countries and registered trademarks in some other countries. The NetApp arch logo; the StoreVault logo; ApplianceWatch; BareMetal; Camera-to-Viewer; ComplianceClock; ComplianceJournal; ContentDirector; ContentFabric; Data Motion; EdgeFiler; FlexShare; FPolicy; Go Further, Faster; HyperSAN; InfoFabric; Lifetime Key Management, LockVault; NOW; ONTAPI; OpenKey, RAID-DP; ReplicatorX; RoboCache; RoboFiler; SecureAdmin; SecureView; Serving Data by Design; Shadow Tape; SharedStorage; Simplicore; Simulate ONTAP; Smart SAN; SnapCache; SnapDirector; SnapFilter; SnapMigrator; SnapSuite; SohoFiler; SpinMirror; SpinRestore; SpinShot; SpinStor; vfiler; VPolicy; and Web Filer are trademarks of NetApp, Inc. in the U.S.A. and other countries. NetApp Availability Assurance and NetApp ProTech Expert are service marks of NetApp, Inc. in the U.S.A. IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. A complete and current list of other IBM trademarks is available on the Web at Apple is a registered trademark and QuickTime is a trademark of Apple, Inc. in the U.S.A. and/or other countries. Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the U.S.A. and/or other countries. RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the U.S.A. and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. NetApp, Inc. is a licensee of the CompactFlash and CF Logo trademarks. NetApp, Inc. NetCache is certified RealSystem compatible.
About this guide

You can use your product more effectively when you understand this document's intended audience and the conventions that this document uses to present information.

This document describes the configuration of fabric-attached, network-attached, and direct-attached storage systems in Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI environments. It explains the various topologies that are supported and describes the relevant SAN configuration limits for each controller model. The configurations apply to controllers with their own disks and to V-Series configurations.

Next topics
Audience on page 11
Terminology on page 11
Keyboard and formatting conventions on page 12
Special messages on page 13
How to send your comments on page 14

Audience

This document is written with certain assumptions about your technical knowledge and experience. It is intended for system administrators who are familiar with host operating systems connecting to storage systems using the FC, FCoE, and iSCSI protocols, and it assumes that you are familiar with basic FC, FCoE, and iSCSI solutions and terminology. This guide does not cover basic system or network administration topics, such as IP addressing, routing, and network topology; it emphasizes the characteristics of the storage system.

Terminology

To understand the concepts in this document, you might need to know how certain terms are used.

Storage terms
array LUN: Refers to storage that third-party storage arrays provide to storage systems running Data ONTAP software. One array LUN is the equivalent of one disk on a native disk shelf.
LUN (Logical Unit Number): Refers to a logical unit of storage identified by a number.
native disk: Refers to a disk that is sold as local storage for storage systems that run Data ONTAP software.
native disk shelf: Refers to a disk shelf that is sold as local storage for storage systems that run Data ONTAP software.
storage controller: Refers to the component of a storage system that runs the Data ONTAP operating system and controls its disk subsystem. Storage controllers are also sometimes called controllers, storage appliances, appliances, storage engines, heads, CPU modules, or controller modules.
storage system: Refers to the hardware device running Data ONTAP that receives data from and sends data to native disk shelves, third-party storage, or both. Storage systems that run Data ONTAP are sometimes referred to as filers, appliances, storage appliances, V-Series systems, or systems.
third-party storage: Refers to back-end storage arrays, such as IBM, Hitachi Data Systems, and HP, that provide storage for storage systems running Data ONTAP.

Cluster and high-availability terms
active/active configuration: In the Data ONTAP 7.2 and 7.3 release families, refers to a pair of storage systems (sometimes called nodes) configured to serve data for each other if one of the two systems stops functioning. Also sometimes referred to as active/active pairs. In the Data ONTAP 7.1 release family and earlier releases, this functionality is referred to as a cluster.
cluster: In the Data ONTAP 7.1 release family and earlier releases, refers to a pair of storage systems (sometimes called nodes) configured to serve data for each other if one of the two systems stops functioning. In the Data ONTAP 7.3 and 7.2 release families, this functionality is referred to as an active/active configuration.

Keyboard and formatting conventions

You can use your product more effectively when you understand how this document uses keyboard and formatting conventions to present information.

Keyboard conventions
The NOW site: Refers to NetApp On the Web at
Enter, enter: Used to refer to the key that generates a carriage return; the key is named Return on some keyboards. Also used to mean pressing one or more keys on the keyboard and then pressing the Enter key, or clicking in a field in a graphical interface and then typing information into the field.
hyphen (-): Used to separate individual keys. For example, Ctrl-D means holding down the Ctrl key while pressing the D key.
type: Used to mean pressing one or more keys on the keyboard.

Formatting conventions
Italic font: Words or characters that require special attention. Placeholders for information that you must supply. For example, if the guide says to enter the arp -d hostname command, you enter the characters "arp -d" followed by the actual name of the host. Book titles in cross-references.
Monospaced font: Command names, option names, keywords, and daemon names. Information displayed on the system console or other computer monitors. Contents of files. File, path, and directory names.
Bold monospaced font: Words or characters you type. What you type is always shown in lowercase letters, unless your program is case-sensitive and uppercase letters are necessary for it to work properly.

Special messages

This document might contain the following types of messages to alert you to conditions that you need to be aware of.
Note: A note contains important information that helps you install or operate the system efficiently.
Attention: An attention notice contains instructions that you must follow to avoid a system crash, loss of data, or damage to the equipment.

How to send your comments

You can help us to improve the quality of our documentation by sending us your feedback. Your feedback is important in helping us to provide the most accurate and high-quality information. If you have suggestions for improving this document, send us your comments by email to [email protected]. To help us direct your comments to the correct division, include in the subject line the name of your product and the applicable operating system. For example, FAS6070 Data ONTAP 7.3, or Host Utilities Solaris, or Operations Manager 3.8 Windows.
iSCSI topologies

Supported iSCSI configurations include direct-attached and network-attached topologies. Both single-controller and active/active configurations are supported.
In an iSCSI environment, all methods of connecting Ethernet switches to a network that are approved by the switch vendor are supported. Ethernet switch counts are not a limitation in Ethernet iSCSI topologies. For specific recommendations and best practices, see the Ethernet switch vendor's documentation. For Windows iSCSI multipathing options, see Technical Report 3441.

Next topics
Single-network active/active configuration in an iSCSI SAN on page 15
Multinetwork active/active configuration in an iSCSI SAN on page 17
Direct-attached single-controller configurations in an iSCSI SAN on page 18
VLANs on page 19

Related information
NetApp Interoperability Matrix - now.netapp.com/now/products/interoperability/
Technical Report 3441: iSCSI multipathing possibilities on Windows with Data ONTAP - media.netapp.com/documents/tr-3441.pdf
16 16 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Figure 1: iscsi single network active/active configuration Attribute Fully redundant Type of network Different host operating systems Multipathing required Type of configuration Value No, due to the single network Single network Yes, with multiple-host configurations Yes Active/active configuration
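The controller-side setup for a configuration like Figure 1 is covered in the Data ONTAP Block Access Management Guide for iSCSI and FC; the following is only a rough console sketch of the first steps on one controller, using Data ONTAP 7-Mode commands as remembered, so verify the names and options against your release. The interface names e0a and e0b are illustrative assumptions, and the text after each # is explanatory only.

    controller1> iscsi start                  # start the iSCSI service (an iSCSI license must already be installed)
    controller1> iscsi status                 # confirm that the service is running
    controller1> iscsi interface enable e0a   # accept iSCSI logins on the first cabled Ethernet port
    controller1> iscsi interface enable e0b   # accept iSCSI logins on the second cabled Ethernet port
    controller1> iscsi interface show         # list which interfaces currently accept iSCSI connections

The same steps are repeated on the partner controller so that both nodes of the active/active configuration present iSCSI targets on the network.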
17 iscsi topologies 17 Multinetwork active/active configuration in an iscsi SAN You can connect hosts using iscsi to active/active configuration controllers using multiple IP networks. To be fully redundant, a minimum of two connections to separate networks per controller is necessary to protect against NIC, network, or cabling failure. Figure 2: iscsi multinetwork active/active configuration Attribute Fully redundant Type of network Different host operating systems Multipathing required Type of configuration Value Yes Multinetwork Yes, with multiple-host configurations Yes Active/active configuration
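To make the "two connections to separate networks per controller" requirement concrete, the sketch below places each controller's two iSCSI ports on different subnets, so that a NIC, switch, or cable failure on one network still leaves a path through the other. All interface names and addresses are invented for illustration; on a real system the same settings are normally also added to /etc/rc so that they persist across reboots.

    controller1> ifconfig e0a 192.168.10.11 netmask 255.255.255.0   # Network 1
    controller1> ifconfig e0b 192.168.20.11 netmask 255.255.255.0   # Network 2
    controller2> ifconfig e0a 192.168.10.12 netmask 255.255.255.0   # Network 1
    controller2> ifconfig e0b 192.168.20.12 netmask 255.255.255.0   # Network 2

Each host then establishes one iSCSI session per network and relies on its multipathing software to manage the redundant paths.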
18 18 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Direct-attached single-controller configurations in an iscsi SAN You can connect hosts using iscsi directly to controllers. The number of hosts that can be directly connected to a controller or pair of controllers depends on the number of available Ethernet ports. Note: Direct-attached configurations are not supported in active/active configurations. Figure 3: iscsi direct-attached single-controller configurations Attribute Fully redundant Type of network Different host operating systems Multipathing required Type of configuration Value No, due to the single controller None, direct-attached Yes, with multiple-host configurations Yes Single controller
19 iscsi topologies 19 VLANs A VLAN consists of a group of switch ports, optionally across multiple switch chassis, grouped together into a broadcast domain. Static and dynamic VLANs enable you to increase security, isolate problems, and limit available paths within your IP network infrastructure. Reasons for implementing VLANs Implementing VLANs in larger IP network infrastructures has the following benefits. VLANs provide increased security because they limit access between different nodes of an Ethernet network or an IP SAN. VLANs enable you to leverage existing infrastructure while still providing enhanced security. VLANs improve Ethernet network and IP SAN reliability by isolating problems. VLANs can also help reduce problem resolution time by limiting the problem space. VLANs enable you to reduce the number of available paths to a particular iscsi target port. VLANs enable you to reduce the maximum number of paths to a manageable number. You need to verify that only one path to a LUN is visible if a host does not have a multipathing solution available. Next topics Static VLANs on page 19 Dynamic VLANs on page 19 Static VLANs Static VLANs are port-based. The switch and switch port are used to define the VLAN and its members. Static VLANs offer improved security because it is not possible to breach VLANs using media access control (MAC) spoofing. However, if someone has physical access to the switch, replacing a cable and reconfiguring the network address can allow access. In some environments, static VLANs are also easier to create and manage because only the switch and port identifier need to be specified, instead of the 48-bit MAC address. In addition, you can label switch port ranges with the VLAN identifier. Dynamic VLANs Dynamic VLANs are MAC address based. You can define a VLAN by specifying the MAC address of the members you want to include. Dynamic VLANs provide flexibility and do not require mapping to the physical ports where the device is physically connected to the switch. You can move a cable from one port to another without reconfiguring the VLAN.
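VLAN membership itself is defined on the Ethernet switches, but when a storage controller must participate in a tagged (802.1Q) VLAN, Data ONTAP can create VLAN interfaces on top of a physical port. The following is a minimal sketch using the 7-Mode vlan create command as remembered; the port name, VLAN ID, and address are examples only, and the connected switch port must be configured as a trunk carrying the same VLAN ID.

    controller1> vlan create e0a 192                                  # create the tagged interface e0a-192 on physical port e0a
    controller1> ifconfig e0a-192 192.168.192.11 netmask 255.255.255.0
    controller1> iscsi interface enable e0a-192                       # allow iSCSI logins only through the VLAN interface

Restricting iSCSI to the VLAN interface is one way to obtain the path-limiting and isolation benefits described above.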
20
21 Fibre Channel topologies 21 Fibre Channel topologies Supported FC configurations include single-fabric, multifabric, and direct-attached topologies. Both single-controller and active/active configurations are supported. For multiple-host configurations, hosts can use different operating systems, such as Windows or UNIX. Active/active configurations with multiple, physically independent storage fabrics (minimum of two) are recommended for SAN solutions. This provides redundancy at the fabric and storage system layers, which is particularly important because these layers typically support many hosts. The use of heterogeneous FC switch fabrics is not supported, except in the case of embedded blade switches. For specific exceptions, see the Interoperability Matrix on the NOW site. Cascade, mesh, and core-edge fabrics are all industry-accepted methods of connecting FC switches to a fabric, and all are supported. A fabric can consist of one or multiple switches, and the storage arrays can be connected to multiple switches. Note: The following sections show detailed SAN configuration diagrams for each type of storage system. For simplicity, the diagrams show only a single fabric or, in the case of the dual-fabric configurations, two fabrics. However, it is possible to have multiple fabrics connected to a single storage system. In the case of dual-fabric configurations, even multiples of fabrics are supported. This is true for both active/active configurations and single-controller configurations. Next topics FC onboard and expansion port combinations on page 22 Fibre Channel supported hop count on page 23 Fibre Channel switch configuration best practices on page 23 The cfmode setting on page 23 Host multipathing software requirements on page 24 60xx supported topologies on page 24 31xx supported topologies on page 32 30xx supported topologies on page 38 FAS20xx supported topologies on page 51 FAS270/GF270c supported topologies on page 57 Other Fibre Channel topologies on page 60 Related information NetApp Interoperability Matrix - now.netapp.com/now/products/interoperability/
FC onboard and expansion port combinations

You can use storage controller onboard FC ports as both initiators and targets. You can also add storage controller FC ports on expansion adapters and use them as initiators and targets.
The following table lists FC port combinations and specifies which combinations are supported. All expansion adapters should be the same speed (2 Gb, 4 Gb, or 8 Gb); you can configure 4-Gb or 8-Gb ports to run at a lower speed if needed for the connected device. A console sketch for changing a port's initiator or target role appears at the end of this section.

Onboard ports      | Expansion ports    | Supported?
Initiator + Target | None               | Yes
Initiator + Target | Target only        | Yes with Data ONTAP and later
Initiator + Target | Initiator only     | Yes
Initiator + Target | Initiator + Target | Yes with Data ONTAP and later
Initiator only     | Target only        | Yes
Initiator only     | Initiator + Target | Yes
Initiator only     | Initiator only     | Yes, but no FC SAN support
Initiator only     | None               | Yes, but no FC SAN support
Target only        | Initiator only     | Yes
Target only        | Initiator + Target | Yes with Data ONTAP and later
Target only        | Target only        | Yes with Data ONTAP and later, but no FC disk shelf or V-Series configurations or tape support
Target only        | None               | Yes, but no FC disk shelf or V-Series configurations or tape support

Related concepts
Configuration limits on page 89

Related references
FCoE initiator and target combinations on page 67
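The following is a minimal console sketch of changing an onboard port's role, as referenced in the table above. The fcadmin config syntax is given from memory of Data ONTAP 7.3 7-Mode and should be checked against the command reference for your release; port 0d is only an example, and the change normally requires a reboot to take effect.

    controller1> fcadmin config                 # list onboard FC ports and their current initiator or target role
    controller1> fcadmin config -t target 0d    # change onboard port 0d to an FC target (takes effect after reboot)
    controller1> fcp config                     # after the reboot, confirm that 0d appears as a target port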
Fibre Channel supported hop count

The maximum supported FC hop count, or the number of inter-switch links (ISLs) crossed between a particular host and storage system, depends on the hop count that the switch supplier and storage system support for FC configurations. The following table shows the supported hop count for each switch supplier.

Switch supplier | Supported hop count
Brocade         | 6
Cisco           | 5
McData          | 3
QLogic          | 4

Fibre Channel switch configuration best practices

A fixed link speed setting is highly recommended, especially for large fabrics, because it provides the best performance for fabric rebuild times; in large fabrics, this can create significant time savings. Although autonegotiation provides the greatest flexibility, it does not always perform as expected, and it adds time to the overall fabric-build sequence because the FC port has to autonegotiate.
Note: Where supported, it is recommended to set the switch port topology to F (point-to-point).

The cfmode setting

The cfmode setting controls how the FC adapters of a storage system in an active/active configuration log in to the fabric, handle local and partner traffic in normal operation and during takeover, and provide access to local and partner LUNs. The cfmode setting of your storage system and the number of paths available to the storage system must align with cabling, configuration limits, and zoning requirements.
Both controllers in an active/active configuration must have the same cfmode setting. A cfmode setting is not available on single-controller configurations.
You can change the cfmode setting from the storage system console by setting privileges to advanced and then using the fcp set command, as shown in the sketch that follows.
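The sequence is roughly as follows, assuming the single_image mode that this release family supports. Command names are given as remembered from Data ONTAP 7.3 7-Mode, exact prompts and confirmation messages vary by release, and the setting must match on both controllers of the active/active configuration.

    controller1> priv set advanced              # cfmode changes require the advanced privilege level
    controller1*> fcp set cfmode single_image   # set the cfmode (repeat on the partner controller)
    controller1*> fcp show cfmode               # verify the current cfmode setting
    controller1*> priv set admin                # return to the normal privilege level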
24 24 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family The Data ONTAP 7.3 release family only supports single_image cfmode, unless you are upgrading from an earlier release. The mixed cfmode is not supported even when upgrading; you must change from mixed to single_image. Detailed descriptions of port behavior with each cfmode are available in the Data ONTAP Block Access Management Guide for iscsi and FC. For details about migrating to single_image cfmode and reconfiguring hosts, see Changing the Cluster cfmode Setting in Fibre Channel SAN Configurations. Related information Data ONTAP Blocks Access Management Guide for iscsi and FC - now.netapp.com/now/ knowledge/docs/ontap/ontap_index.shtml Changing the Cluster cfmode Setting in Fibre Channel SAN Configurations - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/ Host multipathing software requirements Multipathing software is required on a host computer any time it can access a LUN through more than one path. The multipathing software presents a single disk to the operating system for all paths to a LUN. Without multipathing software, the operating system could see each path as a separate disk, which can lead to data corruption. Multipathing software is also known as MPIO (multipath I/O) software. Supported multipathing software for an operating system is listed in the Interoperability Matrix. For single-fabric single-controller configurations, multipathing software is not required if you have a single path from the host to the controller. You can use zoning to limit paths. For an active/active configuration in single_image cfmode, host multipathing software is required unless you use zoning to limit the host to a single path. 60xx supported topologies 60xx controllers are available in single-controller and active/active configurations. The 6030 and 6070 systems have eight onboard 2-Gb FC ports per controller and each one can be configured as either a target or initiator FC port. 2-Gb target connections are supported with the onboard 2-Gb ports. 4-Gb target connections are supported with 4-Gb target expansion adapters. If you use 4-Gb target expansion adapters, then you can only configure the onboard ports as initiators. You cannot use both 2-Gb and 4-Gb targets on the same controller or on two different controllers in an active/active configuration. The 6030 and 6070 systems are supported by single_image cfmode.
25 Fibre Channel topologies 25 The 6040 and 6080 systems have eight onboard 4-Gb FC ports per controller and each one can be configured as either a target or initiator FC port. 4-Gb target connections are supported with the onboard 4-Gb ports configured as targets. Additional target connections can be supported using 4-Gb target expansion adapters with Data ONTAP 7.3 and later. The 6040 and 6080 systems are only supported by single_image cfmode. Note: The 60xx systems support the use of 8-Gb target expansion adapters beginning with Data ONTAP version While 8-Gb and 4-Gb target expansion adapters function similarly, 8-Gb targets cannot be combined with 2-Gb or 4-Gb targets (whether using expansion adapters or onboard). Next topics 60xx target port configuration recommendations on page 25 60xx : Single-fabric single-controller configuration on page 26 60xx : Single-fabric active/active configuration on page 27 60xx : Multifabric active/active configuration on page 28 60xx : Direct-attached single-controller configuration on page 30 60xx : Direct-attached active/active configuration on page 31 60xx target port configuration recommendations For best performance and highest availability, use the recommended FC target port configuration. The port pairs on a 60xx controller that share an ASIC are 0a+0b, 0c+0d, 0e+0f, and 0g+0h. The following table shows the preferred port usage order for onboard FC target ports. For target expansion adapters, the preferred slot order is given in the System Configuration Guide for the version of Data ONTAP software being used by the controllers. Number of target ports Ports 1 0h 2 0h, 0d 3 0h, 0d, 0f 4 0h, 0d, 0f, 0b 5 0h, 0d, 0f, 0b, 0g 6 0h, 0d, 0f, 0b, 0g, 0c 7 0h, 0d, 0f, 0b, 0g, 0c, 0e
26 26 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Number of target ports Ports 8 0h, 0d, 0f, 0b, 0g, 0c, 0e, 0a 60xx: Single-fabric single-controller configuration You can connect hosts to single controllers using a single FC switch. If you use multiple paths, multipathing software is required on the host. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 4: 60xx single-fabric single-controller configuration Attribute Value Fully redundant No, due to the single fabric and single controller Type of fabric Single fabric Different host operating systems Yes, with multiple-host configurations
27 Fibre Channel topologies 27 Attribute FC ports or adapters Type of configuration Value One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters Single-controller configuration Related references 60xx target port configuration recommendations on page 25 60xx: Single-fabric active/active configuration You can connect hosts to both controllers in an active/active configuration using a single FC switch. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 5: 60xx single-fabric active/active configuration
28 28 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Attribute Fully redundant Type of fabric Value No, due to the single fabric Single fabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC ports using target expansion adapters per controller Active/active configuration Related references 60xx target port configuration recommendations on page 25 60xx: Multifabric active/active configuration You can connect hosts to both controllers in an active/active configuration using two or more FC switch fabrics for redundancy. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.
29 Fibre Channel topologies 29 Figure 6: 60xx multifabric active/active configuration Attribute Fully redundant Type of fabric Value Yes Multifabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC ports using target expansion adapters per controller Active/active configuration Related references 60xx target port configuration recommendations on page 25
30 30 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family 60xx: Direct-attached single-controller configuration You can connect hosts directly to FC target ports on a single controller. Each host can connect to one port, or to two ports for redundancy. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Figure 7: 60xx direct-attached single-controller configuration Attribute Fully redundant Type of fabric Value No, due to the single controller None Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters Single-controller configuration
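A hedged sketch of the fcp config mediatype step mentioned in this section follows. The adapter name 0b is an example, the command syntax is from memory of Data ONTAP 7-Mode, and the correct media type ultimately depends on the host HBA settings, so follow the host vendor's recommendations.

    controller1> fcp config 0b down             # take the target port offline before changing its media type
    controller1> fcp config 0b mediatype loop   # set the port to loop mode for direct attachment
    controller1> fcp config 0b up               # bring the port back online
    controller1> fcp config                     # confirm the port state and media type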
31 Fibre Channel topologies 31 Related references 60xx target port configuration recommendations on page 25 60xx: Direct-attached active/active configuration You can connect hosts directly to FC target ports on both controllers in an active/active configuration. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 8: 60xx direct-attached active/active configuration Attribute Value Fully redundant Yes Type of fabric None Different host operating systems Yes, with multiple-host configurations
32 32 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Attribute FC ports or adapters Type of configuration Value One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters Active/active configuration Related references 60xx target port configuration recommendations on page 25 31xx supported topologies 31xx systems are available in single-controller and active/active configurations. The 31xx systems have four onboard 4-Gb FC ports per controller and each port can be configured as either an FC target port or an initiator port. For example, you can configure two ports as SAN targets and two ports as initiators for disk shelves. Each 31xx controller supports 4-Gb FC target expansion adapters. The 31xx systems are only supported by single_image cfmode. Note: 31xx controllers support the use of 8-Gb target expansion adapters beginning with Data ONTAP However, the 8-Gb expansion adapters cannot be combined with 4-Gb targets (whether using expansion adapters or onboard). Next topics 31xx target port configuration recommendations on page 32 31xx : Single-fabric single-controller configuration on page 33 31xx : Single-fabric active/active configuration on page 34 31xx : Multifabric active/active configuration on page 35 31xx : Direct-attached single-controller configurations on page 36 31xx : Direct-attached active/active configuration on page 37 31xx target port configuration recommendations For best performance and highest availability, use the recommended FC target port configuration. The port pairs on a 31xx controller that share an ASIC are 0a+0b and 0c+0d. The following table shows the preferred port usage order for onboard FC target ports. For target expansion adapters, the preferred slot order is given in the System Configuration Guide for the version of Data ONTAP software being used by the controllers.
33 Fibre Channel topologies 33 Number of target ports Ports 1 0d 2 0d, 0b 3 0d, 0b, 0c 4 0d, 0b, 0c, 0a 31xx: Single-fabric single-controller configuration You can connect hosts to single controllers using a single FC switch. If you use multiple paths, multipathing software is required on the host. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 9: 31xx single-fabric single-controller configuration Attribute Value Fully redundant No, due to the single fabric and single controller Type of fabric Single fabric Different host operating systems Yes, with multiple-host configurations
34 34 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Attribute FC ports or adapters Type of configuration Value One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters Single-controller configuration Related references 31xx target port configuration recommendations on page 32 31xx: Single-fabric active/active configuration You can connect hosts to both controllers in an active/active configuration using a single FC switch. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 10: 31xx single-fabric active/active configuration Attribute Fully redundant Value No, due to the single fabric
35 Fibre Channel topologies 35 Attribute Type of fabric Value Single fabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters Active/active configuration Related references 31xx target port configuration recommendations on page 32 31xx: Multifabric active/active configuration You can connect hosts to both controllers in an active/active configuration using two or more FC switch fabrics for redundancy. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 11: 31xx multifabric active/active configuration
36 36 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Attribute Fully redundant Type of fabric Value Yes Multifabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters Active/active configuration Related references 31xx target port configuration recommendations on page 32 31xx: Direct-attached single-controller configurations You can connect hosts directly to FC target ports on a single controller. Each host can connect to one port, or to two ports for redundancy. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.
37 Fibre Channel topologies 37 Figure 12: 31xx direct-attached single-controller configurations Attribute Fully redundant Type of fabric Value No, due to the single controller None Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters Single-controller configuration Related references 31xx target port configuration recommendations on page 32 31xx: Direct-attached active/active configuration You can connect hosts directly to FC target ports on both controllers in an active/active configuration. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If
38 38 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 13: 31xx direct-attached active/active configuration Attribute Fully redundant Type of fabric FC ports or adapters Value Yes None One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters Type of configuration Active/active configuration Related references 31xx target port configuration recommendations on page 32 30xx supported topologies 30xx systems are available in single-controller and active/active configurations. Note: 3040 and 3070 controllers support the use of 8-Gb target expansion adapters beginning with Data ONTAP While 8-Gb and 4-Gb target expansion adapters function similarly, please note that the 8-Gb target expansion adapters cannot be combined with 4-Gb targets (expansion adapters or onboard) and 3050 controllers do not support the use of 8-Gb target expansion adapters.
39 Fibre Channel topologies and 3050 controllers support 2-Gb or 4-Gb FC target connections, but you cannot use both on the same controller or on two different controllers in an active/active configuration. If you use target expansion adapters, then you can only use onboard adapters as initiators. Only single_image cfmode is supported with new installations of the Data ONTAP 7.3 release family software. For 3020 and 3050 controllers running partner or standby cfmode with earlier versions of Data ONTAP, those cfmodes continue to be supported when upgrading the controllers to Data ONTAP 7.3. However, converting to single_image cfmode is recommended. Next topics 30xx target port configuration recommendations on page and 3070 supported topologies on page and 3050 supported topologies on page 45 30xx target port configuration recommendations For best performance and highest availability, use the recommended FC target port configuration. The port pairs on a 30xx controller that share an ASIC are 0a+0b, 0c+0d. The following table shows the preferred port usage order for onboard FC target ports. For target expansion adapters, the preferred slot order is given in the System Configuration Guide for the version of Data ONTAP software being used by the controllers. Number of target ports Ports 1 0d 2 0d, 0b 3 0d, 0b, 0c 4 0d, 0b, 0c, 0a 3040 and 3070 supported topologies 3040 and 3070 systems are available in single-controller and active/active configurations. The 3040 and 3070 controllers have four onboard 4-Gb FC ports per controller and each port can be configured as either an FC target port or an initiator port. For example, you can configure two ports as SAN targets and two ports as initiators for disk shelves. Next topics 3040 and 3070 : Single-fabric single-controller configuration on page and 3070 : Single-fabric active/active configuration on page and 3070 : Multifabric active/active configuration on page 42
40 40 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family 3040 and 3070 : Direct-attached single-controller configurations on page and 3070 : Direct-attached active/active configuration on page and 3070: Single-fabric single-controller configuration You can connect hosts to single controllers using a single FC switch. If you use multiple paths, multipathing software is required on the host. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 14: 3040 and 3070 single-fabric single-controller configuration Attribute Fully redundant Type of fabric Value No, due to the single fabric and single controller Single fabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters Single-controller configuration
41 Fibre Channel topologies 41 Related references 30xx target port configuration recommendations on page and 3070: Single-fabric active/active configuration You can connect hosts to both controllers in an active/active configuration using a single FC switch. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 15: 3040 and 3070 single-fabric active/active configuration Attribute Fully redundant Type of fabric Value No, due to the single fabric Single fabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC ports using target expansion adapters per controller Active/active configuration
42 42 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Related references 30xx target port configuration recommendations on page and 3070: Multifabric active/active configuration You can connect hosts to both controllers in an active/active configuration using two or more FC switch fabrics for redundancy. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 16: 3040 and 3070 multifabric active/active configuration Attribute Fully redundant Type of fabric Value Yes Multifabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC ports using target expansion adapters per controller
43 Fibre Channel topologies 43 Attribute Type of configuration Value Active/active configuration Related references 30xx target port configuration recommendations on page and 3070: Direct-attached single-controller configurations You can connect hosts directly to FC target ports on a single controller. Each host can connect to one port, or to two ports for redundancy. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Figure 17: 3040 and 3070 direct-attached single-controller configurations Attribute Fully redundant Type of fabric Value No, due to the single controller None Different host operating systems Yes, with multiple-host configurations FC ports or adapters One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
44 44 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Attribute Type of configuration Value Single-controller configuration Related references 30xx target port configuration recommendations on page and 3070: Direct-attached active/active configuration You can connect hosts directly to FC target ports on both controllers in an active/active configuration. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 18: 3040 and 3070 direct-attached active/active configuration Attribute Fully redundant Type of fabric FC ports or adapters Value Yes None One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
45 Fibre Channel topologies 45 Attribute Value Type of configuration Active/active configuration Related references 30xx target port configuration recommendations on page and 3050 supported topologies 3020 and 3050 systems are available in single-controller and active/active configurations. The 3020 and 3050 controllers have four onboard 2-Gb FC ports per controller and each port can be configured as either an FC target port or an initiator port. 2-Gb FC target ports are supported with the onboard 2-Gb FC ports on the 3020 and 3050 controllers. 4-Gb FC target connections are supported with 4-Gb FC target HBAs. Each 3020 and 3050 controller supports 2-Gb or 4-Gb FC target HBAs, but you cannot use both on the same controller or on two different controllers in an active/active configuration. If you use target expansion HBAs, then you can only use onboard ports as initiators. Next topics 3020 and 3050 : Single-fabric single-controller configuration on page and 3050 : Single-fabric active/active configuration on page and 3050 : Multifabric active/active configuration on page and 3050 : Direct-attached single-controller configurations on page and 3050 : Direct-attached active/active configuration on page and 3050: Single-fabric single-controller configuration You can connect hosts to single controllers using a single FC switch. If you use multiple paths, multipathing software is required on the host. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.
46 46 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Figure 19: 3020 and 3050 single-fabric single-controller configuration Attribute Fully redundant Type of fabric FC ports or adapters Value No, due to the single fabric and single controller Single fabric One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 2-Gb or 4-Gb FC target expansion adapters Type of configuration Single-controller configuration Related references 30xx target port configuration recommendations on page and 3050: Single-fabric active/active configuration You can connect hosts to both controllers in an active/active configuration using a single FC switch. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.
47 Fibre Channel topologies 47 Figure 20: 3020 and 3050 single-fabric active/active configuration Attribute Fully redundant Type of fabric Value No, due to the single fabric Single fabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 2-Gb or 4-Gb FC ports using target expansion adapters per controller Active/active configuration Related references 30xx target port configuration recommendations on page and 3050: Multifabric active/active configuration You can connect hosts to both controllers in an active/active configuration using two or more FC switch fabrics for redundancy. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If
48 48 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 21: 3020 and 3050 multifabric active/active configuration Attribute Fully redundant Type of fabric Value Yes Multifabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 2-Gb or 4-Gb FC ports using target expansion adapters per controller Active/active configuration Related references 30xx target port configuration recommendations on page 39
49 Fibre Channel topologies and 3050: Direct-attached single-controller configurations You can connect hosts directly to FC target ports on a single controller. Each host can connect to one port, or to two ports for redundancy. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Figure 22: 3020 and 3050 direct-attached single-controller configurations Attribute Fully redundant Type of fabric Value No, due to the single controller None Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 2-Gb or 4-Gb FC target expansion adapters Single-controller configuration Related references 30xx target port configuration recommendations on page 39
50 50 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family 3020 and 3050: Direct-attached active/active configuration You can connect hosts directly to FC target ports on both controllers in an active/active configuration. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed. Figure 23: 3020 and 3050 direct-attached active/active configuration Attribute Fully redundant Type of fabric FC ports or adapters Value Yes, if configured with multipathing software None One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 2-Gb or 4-Gb FC target expansion adapters Type of configuration Active/active configuration Related references 30xx target port configuration recommendations on page 39
51 Fibre Channel topologies 51 FAS20xx supported topologies FAS20xx systems are available in single-controller and active/active configurations and are supported by single_image cfmode only. The FAS20xx have two onboard 4-Gb FC ports per controller. You can configure these ports as either target ports for FC SANs or initiator ports for connecting to disk shelves. Next topics FAS20xx : Single-fabric single-controller configuration on page 51 FAS20xx : Single-fabric active/active configuration on page 52 FAS20xx : Multifabric single-controller configuration on page 53 FAS20xx : Multifabric active/active configuration on page 54 FAS20xx : Direct-attached single-controller configurations on page 55 FAS20xx : Direct-attached active/active configuration on page 56 FAS20xx: Single-fabric single-controller configuration You can connect hosts to single controllers using a single FC switch. If you use multiple paths, multipathing software is required on the host. Note: The FC target port numbers in the following illustration are examples. The actual port numbers might vary depending on whether you are using onboard ports or an FC target expansion adapter. The FC target expansion adapter is supported only for the FAS2050 controller. Figure 24: FAS20xx single-fabric single-controller configuration
52 52 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Attribute Fully redundant Type of fabric Value No, due to the single fabric and single controller Single fabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller For FAS2050 only, one supported 4-Gb or 8-Gb FC target expansion adapter Single-controller configuration FAS20xx: Single-fabric active/active configuration You can connect hosts to both controllers in an active/active configuration using a single FC switch. Note: The FC target port numbers in the following illustration are examples. The actual port numbers might vary depending on whether you are using onboard ports or an FC target expansion adapter. The FC target expansion adapter is supported only for the FAS2050 controller. Figure 25: FAS20xx single-fabric active/active configuration
53 Fibre Channel topologies 53 Attribute Fully redundant Type of fabric Value No, due to the single fabric Single fabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller For FAS2050 only, one supported 4-Gb or 8-Gb FC target expansion adapter Active/active configuration FAS20xx: Multifabric single-controller configuration You can connect hosts to one controller using two or more FC switch fabrics for redundancy. Note: The FC target port numbers in the following illustration are examples. The actual port numbers might vary depending on whether you are using onboard ports or an FC target expansion adapter. The FC target expansion adapter is supported only for the FAS2050 controller. Figure 26: FAS20xx multifabric single-controller configuration Attribute Fully redundant Type of fabric Value No, due to the single controller Multifabric
54 54 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Attribute Value Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller For FAS2050 only, one supported 4-Gb or 8-Gb FC target expansion adapter Single-controller configuration FAS20xx: Multifabric active/active configuration You can connect hosts to both controllers in an active/active configuration using two or more FC switch fabrics for redundancy. Note: The FC target port numbers in the following illustration are examples. The actual port numbers might vary depending on whether you are using onboard ports or an FC target expansion adapter. The FC target expansion adapter is supported only for the FAS2050 controller. Figure 27: FAS20xx multifabric active/active configuration Attribute Fully redundant Type of fabric Value Yes Multifabric
55 Fibre Channel topologies 55 Attribute Value Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to the maximum number of supported onboard FC ports per controller For FAS2050 only, one supported 4-Gb or 8-Gb FC target expansion adapter Active/active configuration FAS20xx: Direct-attached single-controller configurations You can connect hosts directly to FC target ports on a single controller. Each host can connect to one port, or to two ports for redundancy. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Note: The FC target port numbers in the following illustration are examples. The actual port numbers might vary depending on whether you are using onboard ports or an FC target expansion adapter. The FC target expansion adapter is supported only for the FAS2050 controller. Figure 28: FAS20xx direct-attached single-controller configurations Attribute Value Fully redundant No, due to the single controller Type of fabric None Different host operating systems Yes, with multiple-host configurations
56 56 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Attribute FC ports or adapters Type of configuration Value One to the maximum number of supported onboard FC ports per controller For FAS2050 only, one supported 4-Gb or 8-Gb FC target expansion adapter Single-controller configuration FAS20xx: Direct-attached active/active configuration You can connect hosts directly to FC target ports on both controllers in an active/active configuration. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Note: The FC target port numbers in the following illustration are examples. The actual port numbers might vary depending on whether you are using onboard ports or an FC target expansion adapter. The FC target expansion adapter is supported only for the FAS2050 controller. Figure 29: FAS20xx direct-attached active/active configuration Attribute Value Fully redundant Yes Type of fabric None Different host operating systems Yes, with multiple-host configurations
57 Fibre Channel topologies 57 Attribute FC ports or adapters Type of configuration Value One to the maximum number of supported onboard FC ports per controller For FAS2050 only, one supported 4-Gb or 8-Gb FC target expansion adapter Active/active configuration FAS270/GF270c supported topologies FAS270/GF270c systems are available in active/active configurations. Next topics FAS270/GF270c : Single-fabric active/active configuration on page 57 FAS270/GF270c : Multifabric active/active configuration on page 58 FAS270/GF270c : Direct-attached configurations on page 59 FAS270/GF270c: Single-fabric active/active configuration You can connect hosts to both controllers in an active/active configuration using a single FC switch. Host 1 Host 2 Host N FC Fabric 1 Controller 1 / Controller 2 Figure 30: FAS270/GF270c single-fabric active/active configuration
58 58 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Attribute Fully Redundant Type of fabric Different host operating systems Type of configuration Value No, due to the single fabric Single fabric Yes, with multiple-host configurations Active/active configuration FAS270/GF270c: Multifabric active/active configuration You can connect hosts to both controllers in an active/active configuration using two or more FC switch fabrics for redundancy. Host 1 Host 2 Host 3 Host N FC Fabric 1 FC Fabric 2 Controller 1 and Controller 2 Figure 31: FAS270/GF270c multifabric active/active configuration Attribute Fully redundant Type of fabric Different host operating systems Type of configuration Value Yes, if a host is dual-attached No, if a host is single-attached Multifabric Yes, with multiple-host configurations Active/active configuration
59 Fibre Channel topologies 59 FAS270/GF270c: Direct-attached configurations You can connect hosts directly to FC target ports on a single controller or an Active/active configuration. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Host 1 Host 1 Host 1 Host 2 Controller 1 Controller 1 / Controller 2 Controller 1 / Controller 2 Figure 32: FAS270/GF270c direct-attached configurations Attribute Fully Redundant Type of fabric Different host operating systems Type of configuration Value First configuration: No, due to the single controller Second configuration: Yes Third configuration: No, due to a single connection from storage system to hosts None Yes, with multiple-host configurations First configuration: Single controller configuration Second configuration: Active/active configuration Third configuration: Active/active configuration
60 60 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Other Fibre Channel topologies Other FC systems, such as the 900 series and R200, are no longer sold, but are still supported. R200 and 900 series supported topologies R200 and 900 series systems are available in single controller and active/active configurations. Next topics R200 and 900 series: Single-fabric single-controller configuration on page series: Single-fabric active/active configuration on page series: Multifabric active/active configuration, one dual-ported FC target expansion adapter on page series: Multifabric active/active configuration, two dual-ported FC target expansion adapters on page series: Multifabric active/active configuration, four dual-ported FC target expansion adapters on page 65 R200 and 900 series: Direct-attached configurations on page 66
61 Fibre Channel topologies 61 R200 and 900 series: Single-fabric single-controller configuration You can connect hosts to single controllers using a single FC switch. If you use multiple paths, multipathing software is required on the host. Host 1 Host 2 Host N FC Fabric 1 Controller 1 Figure 33: R200 and 900 series single-fabric single-controller configuration Attribute Fully redundant Type of fabric Value No, due to the single fabric and single controller Single fabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One to four connections to the fabric, depending on the number of target expansion adapters installed in each controller Single controller configuration
62 62 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family 900 series: Single-fabric active/active configuration You can connect hosts to both controllers in an active/active configuration using a single FC switch. The following diagram shows the minimum FC cabling configuration for connecting an active/active configuration to a single fabric. Figure 34: 900 series single-fabric active/active configuration Attribute Fully redundant Type of fabric Value No, due to the single fabric Single fabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration Two to 16 connections to the fabric, depending on the number of target expansion adapters connected to the system Active/active configuration
63 Fibre Channel topologies series: Multifabric active/active configuration, one dual-ported FC target expansion adapter You can connect hosts to both controllers in an active/active configuration using two or more FC switch fabrics and a single FC target expansion adapter in each controller. Figure 35: 900 series multifabric active/active configuration Attribute Fully redundant Type of fabric Value Yes, when the host has multipathing software properly configured Multifabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration One dual-ported FC target expansion adapter per controller Active/active configuration
64 64 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family 900 series: Multifabric active/active configuration, two dual-ported FC target expansion adapters You can connect hosts to both controllers in an active/active configuration using two or more FC switch fabrics and two FC target expansion adapters in each controller. Note: The port numbers used in the following figure (7a, 7b, 9a, and 9b) are examples. The actual port numbers might vary, depending on the expansion slot in which the FC target expansion adapters are installed. Figure 36: 900 series multifabric active/active configuration Attribute Fully redundant Type of fabric Value Yes, when the host is dually attached to two physically separate fabrics Multifabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Type of configuration Two dual-ported FC target expansion adapters per controller Active/active configuration
65 Fibre Channel topologies series: Multifabric active/active configuration, four dual-ported FC target expansion adapters You can connect hosts to both controllers in an active/active configuration using two or more FC switch fabrics and four FC target expansion adapters in each controller. This configuration is used in combination with larger, more complex fabrics where the eight connections between the controller and the fabric are not attached to a single FC switch. Figure 37: 900 series multifabric active/active configuration Attribute Fully redundant Type of fabric Different host operating systems FC ports or adapters Type of configuration Value Yes Multifabric Yes, with multiple-host configurations Four dual-ported FC target expansion adapters per controller Active/active configuration
66 66 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family R200 and 900 series: Direct-attached configurations You can connect hosts directly to FC target ports on a single controller. Each host can connect to one port, or to two ports for redundancy. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports. Host 2 Host 1 Host 1 Host 3 Host 1 Host 2 Host N Controller 1 Controller 1 Controller 1 Figure 38: 900 series, or R200 direct-attached single-controller configurations Attribute Fully redundant Type of fabric Different host operating systems Type of configuration Value No, due to the single controller None Yes, with multiple-host configurations Single controller configuration
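Several of the preceding platforms (the FAS20xx, for example) allow onboard FC ports to be used either as targets for host connections or as initiators for disk shelf connections. The following is a minimal sketch of checking and changing that port personality from the storage system console; the port name 0c is an example, and you should confirm the syntax in the fcadmin man page for your Data ONTAP release:

fcadmin config
fcadmin config -t target 0c

The first command lists the onboard adapters and shows whether each is currently configured as an initiator or a target; the second marks port 0c for target use. A reboot is typically required before the change takes effect, after which the port is managed with the fcp config command like any other target port.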
Fibre Channel over Ethernet overview
Fibre Channel over Ethernet (FCoE) is a new model for connecting hosts to storage systems. FCoE is very similar to traditional Fibre Channel (FC), as it maintains existing FC management and controls, but the hardware transport is a lossless 10-gigabit Ethernet network.
Setting up an FCoE connection requires one or more supported converged network adapters (CNAs) in the host, connected to a supported data center bridging (DCB) Ethernet switch. The CNA is a consolidation point and effectively serves as both an HBA and an Ethernet adapter. As an HBA, the CNA presents FC targets to the host, and all FC traffic is sent as FC frames mapped into Ethernet packets (FC over Ethernet). The 10-gigabit Ethernet adapter is also used for host IP traffic, such as iSCSI, NFS, and HTTP. Both FCoE and IP communications through the CNA run over the same 10-gigabit Ethernet port, which connects to the DCB switch.
Note: Using the FCoE target adapter in the storage controller for non-FCoE IP traffic such as NFS or iSCSI is NOT currently supported.
In general, you configure and use FCoE connections just like traditional FC connections.
Note: For detailed information about how to set up and configure your host to run FCoE, see your appropriate host documentation.
Next topics FCoE initiator and target combinations on page 67 Fibre Channel over Ethernet supported topologies on page 68
FCoE initiator and target combinations
Certain combinations of FCoE and traditional FC initiators and targets are supported.
FCoE initiators
You can use FCoE initiators in host computers with both FCoE and traditional FC targets in storage controllers. The FCoE initiator must connect to an FCoE DCB (data center bridging) switch; direct connection to a target is not supported. The following combinations are supported:
FC initiator to FC target: Yes
FC initiator to FCoE target: No
FCoE initiator to FC target: Yes
FCoE initiator to FCoE target: Yes with Data ONTAP and later
FCoE targets
You can mix FCoE target ports with 4-Gb or 8-Gb FC ports on the storage controller regardless of whether the FC ports are add-in target adapters or onboard ports. You can have both FCoE and FC target adapters in the same storage controller.
Note: Using the FCoE target adapter for non-FCoE IP traffic such as NFS or iSCSI is NOT currently supported.
Note: The rules for combining onboard and expansion FC ports still apply.
Related references FC onboard and expansion port combinations on page 22
Fibre Channel over Ethernet supported topologies
Supported FCoE native configurations include single-fabric and multifabric topologies. Both single-controller and active/active configurations are supported. Supported storage systems with native FCoE target expansion adapters are the FAS60xx series, the FAS31xx series, and the FAS3040 and FAS3070. In active/active configurations, only single_image cfmode is supported. Native FCoE configurations using an FCoE target adapter are supported only in the Data ONTAP 7.3 release family. The FCoE initiator with FC target configuration is also supported on FAS60xx, FAS31xx, FAS30xx, FAS20xx, FAS270, and FAS900 series storage systems in Data ONTAP and later using an FCoE/DCB switch.
Note: The following configuration diagrams are examples only. Most supported FC and iSCSI configurations on supported storage systems can be substituted for the example FC or iSCSI configurations in the following diagrams. However, direct-attached configurations are not supported in FCoE.
Note: While iSCSI configurations allow any number of Ethernet switches, there must be no additional Ethernet switches in FCoE configurations. The CNA must connect directly to the FCoE switch.
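Once an FCoE target expansion adapter is installed, its CNA target ports are managed with the same FCP commands as traditional FC target ports. As an illustration only (command output varies by model and Data ONTAP release):

fcp status
fcp show cfmode
fcp config

The first command verifies that the FCP target service is running, the second confirms that an active/active configuration is using single_image cfmode as required for FCoE, and the third lists the target ports, including the CNA ports (for example, 2a and 2b in the following diagrams).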
69 Fibre Channel over Ethernet overview 69 Next topics FCoE: FCoE initiator to FC target configuration on page 69 FCoE: FCoE end-to-end configuration on page 70 FCoE: FCoE mixed with FC on page 71 FCoE: FCoE mixed with IP storage protocols on page 73 FCoE: FCoE initiator to FC target configuration You can connect hosts to both controllers in an active/active configuration using FCoE initiators through data center bridging (DCB) Ethernet switches to FC target ports. The FCoE initiator always connects to a supported DCB switch. The DCB switch can connect directly to an FC target, or can connect through FC switches to the FC target. Note: The FC target expansion adapter port numbers (2a and 2b) in the following figure are examples. The actual port numbers might vary, depending on the expansion slot in which the FC target expansion adapter is installed. Host 1 Host 2 Host N CNA Ports CNA Ports CNA Ports DCB Ports DCB Ports IP Network IP Network FCoE Switch FC Ports FC Ports FCoE Switch Switch/Fabric 1 Switch/Fabric 2 Controller 1 0b 0d 0b 0d Controller 2 Figure 39: FCoE initiator to FC dual-fabric active/active configuration
70 70 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Attribute Fully redundant Type of fabric Value Yes Dual fabric Different host operating systems Yes, with multiple-host configurations FC ports or adapters Multipathing required Type of configuration One to the maximum number of supported onboard FC ports per controller One to the maximum number of supported 4-Gb or 8-Gb FC ports per controller using FC target expansion adapters Yes Active/active configuration FCoE: FCoE end-to-end configuration You can connect hosts to both controllers in an active/active configuration using FCoE initiators through DCB switches to FCoE target ports. The FCoE initiator and FCoE target must connect to the same supported DCB switch. You can use multiple switches for redundant paths, but only one DCB switch in a given path. Note: The FCoE target expansion adapter port numbers (2a and 2b) in the following figure are examples. The actual port numbers might vary, depending on the expansion slot in which the FCoE target expansion adapter is installed.
71 Fibre Channel over Ethernet overview 71 Host 1 Host 2 Host N CNA Ports CNA Ports CNA Ports DCB Ports DCB Ports IP Network IP Network FCoE Switch DCB Ports DCB Ports FCoE Switch Controller 1 CNA Ports 2a 2b 2a 2b CNA Ports Controller 2 Figure 40: FCoE end-to-end Attribute Fully redundant Type of fabric Different host operating systems FCoE ports or adapters Multipathing required Type of configuration Value Yes Dual fabric Yes, with multiple host-configurations One or more FCoE target expansion adapters per controller Yes Active/active configuration FCoE: FCoE mixed with FC You can connect hosts to both controllers in an active/active configuration using FCoE initiators through data center bridging (DCB) Ethernet switches to FCoE and FC mixed target ports. The FCoE initiator and FCoE target must connect to the same supported DCB switch. The DCB switch can connect directly to the FC target, or can connect through FC switches to the FC target. You can use multiple DCB switches for redundant paths, but only one DCB switch in a given path.
72 72 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Note: The FCoE target expansion adapter port numbers (2a and 2b) and FC target port numbers (4a and 4b) are examples. The actual port numbers might vary, depending on the expansion slots in which the FCoE target expansion adapter and FC target expansion adapter are installed. Host 1 Host 2 Host N CNA Ports CNA Ports CNA Ports DCB Ports DCB Ports IP Network IP Network FCoE Switch DCB Ports FC Ports DCB Ports FC Ports FCoE Switch Switch/Fabric 1 Switch/Fabric 2 0b 0d Controller 1 CNA Ports 2b 2a 2a 2b CNA Ports 0b 0d Controller 2 Figure 41: FCoE mixed with FC Attribute Value Fully redundant Yes Type of fabric Dual fabric Different host operating systems Yes, with multiple-host configurations
73 Fibre Channel over Ethernet overview 73 Attribute FC/FCoE ports or adapters Multipathing required Type of configuration Value One to the maximum number of supported onboard FC ports per controller One or more FCoE target expansion adapters per controller At least one 4-Gb or 8-Gb FC target expansion adapter per controller Yes Active/active configuration FCoE: FCoE mixed with IP storage protocols You can connect hosts to both controllers in an active/active configuration using FCoE initiators through data center bridging (DCB) Ethernet switches to FCoE target ports. You can also run non- FCoE IP traffic through the same switches. The FCoE initiator and FCoE target must connect to the same supported DCB switch. You can use multiple switches for redundant paths, but only one DCB switch in a given path. Note: The FCoE ports are connected to DCB ports on the DCB switches. Ports used to connect iscsi are not required to be DCB ports; they can also be regular (non-dcb) Ethernet ports. These ports can also be used to carry NFS, CIFS, or other IP traffic. The use of FCoE does not add any additional restrictions or limitations on the configuration or use of iscsi, NFS, CIFS, or other IP traffic. Note: Using the FCoE target adapter for non-fcoe IP traffic such as NFS or iscsi is NOT currently supported. Note: The FCoE target expansion adapter port numbers (2a and 2b) and the Ethernet port numbers (e0a and e0b) in the following figure are examples. The actual port numbers might vary, depending on the expansion slots in which the FCoE target expansion adapters are installed.
74 74 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Host 1 Host 2 Host N CNA Ports CNA Ports CNA Ports DCB Ports DCB Ports IP Network IP Network FCoE Switch DCB/ Ethernet Ports DCB Ports DCB/ Ethernet Ports DCB Ports FCoE Switch Controller 1 e0a e0b CNA Ports 2a 2b 2a 2b CNA Ports Controller 2 e0a e0b Figure 42: FCoE mixed with IP storage protocols Attribute Fully redundant Type of fabric Different host operating systems FCoE ports or adapters Multipathing required Type of configuration Value Yes Dual fabric Yes, with multiple-host configurations One or more FCoE target expansion adapters per controller Yes Active/active configuration
75 Fibre Channel and FCoE zoning 75 Fibre Channel and FCoE zoning An FC or FCoE zone is a subset of the fabric that consists of a group of FC or FCoE ports or nodes that can communicate with each other. You must contain the nodes within the same zone to allow communication. Reasons for zoning Zoning reduces or eliminates cross talk between initiator HBAs. This occurs even in small environments and is one of the best arguments for implementing zoning. The logical fabric subsets created by zoning eliminate cross-talk problems. Zoning reduces the number of available paths to a particular FC or FCoE port and reduces the number of paths between a host and a particular LUN that is visible. For example, some host OS multipathing solutions have a limit on the number of paths they can manage. Zoning can reduce the number of paths that an OS multipathing driver sees. If a host does not have a multipathing solution installed, you need to verify that only one path to a LUN is visible. Zoning increases security because there is limited access between different nodes of a SAN. Zoning improves SAN reliability by isolating problems that occur and helps to reduce problem resolution time by limiting the problem space. Recommendations for zoning You should implement zoning anytime four or more hosts are connected to a SAN. Although World Wide Node Name zoning is possible with some switch vendors, World Wide Port Name zoning is recommended. You should limit the zone size while still maintaining manageability. Multiple zones can overlap to limit size. Ideally, a zone is defined for each host or host cluster. You should use single-initiator zoning to eliminate crosstalk between initiator HBAs. Next topics Port zoning on page 76 World Wide Name based zoning on page 76 Individual zones on page 76 Single-fabric zoning on page 77 Dual-fabric active/active configuration zoning on page 78
76 76 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Port zoning Port zoning, also referred to as hard zoning, specifies the unique fabric N_port IDs of the ports to be included within the zone. The switch and switch port are used to define the zone members. Port zoning provides the following advantages: Port zoning offers improved security because it is not possible to breach the zoning by using WWN spoofing. However, if someone has physical access to the switch, replacing a cable can allow access. In some environments, port zoning is easier to create and manage because you only work with the switch or switch domain and port number. World Wide Name based zoning World Wide Name based zoning (WWN) specifies the WWN of the members to be included within the zone. Depending on the switch vendor, either World Wide Node Name or World Wide Port Names can be used. You should use World Wide Port Name zoning when possible. WWN zoning provides flexibility because access is not determined by where the device is physically connected to the fabric. You can move a cable from one port to another without reconfiguring zones. Individual zones In the standard zoning configuration for a simple environment where each host is shown in a separate zone, the zones overlap because the storage ports are included in each zone to allow each host to access the storage. Each host can see all of the FC target ports but cannot see or interact with the other host ports. Using port zoning, you can do this zoning configuration in advance even if all of the hosts are not present. You can define each zone to contain a single switch port for the host and switch ports one through four for the storage system. For example, Zone 1 would consist of switch ports 1, 2, 3, 4 (storage ports) and 5 (Host1 port). Zone 2 would consist of switch ports 1, 2, 3, 4 (storage ports) and 6 (Host2 port), and so forth. This diagram shows only a single fabric, but multiple fabrics are supported. Each subsequent fabric has the same zone structure.
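As an illustration of the port-based zones described above, the following Brocade-style Fabric OS commands create the two zones and activate them in a zoning configuration. This is a sketch only: the zone and configuration names and the switch domain number (1) are hypothetical, the "domain,port" members correspond to switch ports 1 through 6 in the example, and the commands differ on other switch vendors' products:

zonecreate "zone_host1", "1,1; 1,2; 1,3; 1,4; 1,5"
zonecreate "zone_host2", "1,1; 1,2; 1,3; 1,4; 1,6"
cfgcreate "san_config", "zone_host1; zone_host2"
cfgenable "san_config"

With WWN-based zoning, the members would instead be the WWPNs of the host HBA port and the storage target ports, so the zone definition does not change if a cable is moved to a different switch port.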
Figure 43: Hosts in individual zones
Single-fabric zoning
When zoning and multipathing software are used together, hosts in a single-fabric environment are protected against a possible controller failure. Without multipathing software in a single-fabric environment, hosts are not protected from a possible controller failure. In the following figure, Host1 and Host2 do not have multipathing software and are zoned so that there is only one path to each LUN (Zone 1). Therefore, Zone 1 contains only one of the two storage ports. Even though the host has only one HBA, both storage ports are included in Zone 2. The LUNs are visible through two different paths, one going from the host FC port to storage port 0 and the other going from the host FC port to storage port 1. Because this figure contains only a single fabric, it is not fully redundant. However, as shown, Host3 and Host4 have multipathing software that protects against a possible controller failure. They are zoned so that a path to the LUNs is available through each of the controllers.
78 78 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Figure 44: Single-fabric zoning Dual-fabric active/active configuration zoning Zoning can separate hosts in a topology to eliminate HBA cross talk. Zoning can also prevent a host from accessing LUNs from a storage system in a different zone. The following figure shows a configuration where Host1 accesses LUNs from storage system 1 and Host2 accesses LUNs from storage system 2. Each storage system is an active/active configuration and both are fully redundant. Multiple FAS270c storage systems are shown in this figure, but they are not necessary for redundancy.
Figure 45: Dual-fabric zoning
Shared SAN configurations
Shared SAN configurations are defined as hosts that are attached to both NetApp and non-NetApp storage arrays. Accessing NetApp arrays and other vendors' arrays from a single host is supported as long as several requirements are met.
The following requirements must be met to support accessing NetApp arrays and other vendors' arrays from a single host:
Native Host OS multipathing or VERITAS DMP is used for multipathing (see the exception for EMC PowerPath co-existence below)
NetApp configuration requirements (such as timeout settings) as specified in the appropriate NetApp Host Utilities documents are met
single_image cfmode is used
Support for Native Host OS multipathing in combination with EMC PowerPath is supported for the following configurations. For configurations that do not meet these requirements, a PVR is required to determine supportability.
Windows: EMC CLARiiON CX3-20, CX3-40, CX3-80 w/ PowerPath 4.5+ and connected to a NetApp storage system using Data ONTAP DSM for Windows MPIO
Solaris: EMC CLARiiON CX3-20, CX3-40, CX3-80 w/ PowerPath 5+ and connected to a NetApp storage system using SUN Traffic Manager (MPxIO)
AIX: EMC CLARiiON CX3-20, CX3-40, CX3-80 w/ PowerPath 5+ and connected to a NetApp storage system using AIX MPIO
83 ALUA configurations 83 ALUA configurations ALUA (asymmetric logical unit access) is supported for certain combinations of host operating systems and Data ONTAP software. ALUA is an industry standard protocol for identifying optimized paths between a storage system and a host computer. The administrator of the host computer does not need to manually select the paths to use. ALUA is enabled or disabled on the igroup mapped to a NetApp LUN. The default ALUA setting in Data ONTAP is disabled. For information about using ALUA on a host, see the Host Utilities Installation and Setup Guide for your host operating system. For information about enabling ALUA on the storage system, see the Block Access Management Guide for iscsi and FC for your version of Data ONTAP software. Next topics (Native OS, FC) AIX Host Utilities configurations that support ALUA on page 83 ESX configurations that support ALUA on page 85 HP-UX configurations that support ALUA on page 85 Linux configurations that support ALUA on page 86 (MPxIO/FC) Solaris Host Utilities configurations that support ALUA on page 86 Windows configurations that support ALUA on page 87 (Native OS, FC) AIX Host Utilities configurations that support ALUA The Native OS environment of the AIX Host Utilities supports ALUA on hosts using MPIO and the FC protocol. The following AIX Native OS configurations support ALUA when you are using the FC protocol:
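The following is a minimal sketch of enabling ALUA from the storage system console. The igroup name aix_group is hypothetical, and you should confirm the exact syntax in the Block Access Management Guide for your Data ONTAP version:

igroup show -v aix_group
igroup set aix_group alua yes
igroup show -v aix_group

The first command displays the igroup's current settings, including its ALUA state; the second enables ALUA for all initiators in that igroup; the third confirms the change. The host typically must rediscover its paths (or be rebooted) before the new path states take effect.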
84 84 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Host Utilities version Host requirements Data ONTAP version Host Utilities 4.0, 4.1, and TL8 5.3 TL9 SP4 with APAR IZ TL10 SP1 with APAR IZ TL2 SP4 with APAR IZ TL3 SP1 with APAR IZ53160 Note: It is strongly recommended that, if you want to use ALUA, you use the latest levels of 5.3 TL9 or 6.1 TL2 listed in the support matrix. ALUA is supported on all AIX Service Streams that have the corresponding APAR (authorized program analysis report) installed. At the time this document was prepared, the Host Utilities supported AIX Service Streams with the APARs listed above as well as with APARs IZ53718, IZ53730, IZ53856, IZ54130, IZ57806, and IZ If an APAR listed here has not been publicly released, contact IBM and request a copy and later Note: The Host Utilities do not support ALUA with AIX environments using iscsi or Veritas. If you have a Native OS environment and do not want to use ALUA, you can use the dotpaths utility to specify path priorities. The Host Utilities provide dotpaths as part of the SAN Toolkit.
85 ALUA configurations 85 ESX configurations that support ALUA ESX hosts support ALUA with certain combinations of ESX, Data ONTAP, and guest operating system configurations. The following table lists which configurations support ALUA (asymmetric logical unit access). Use the Interoperability Matrix to determine a supported combination of ESX, Data ONTAP, and Host Utilities software. Then enable or disable ALUA based on the information in the table. ESX version Minimum Data ONTAP Windows guest in Microsoft cluster Supported? 4.0 or later with single_image cfmode No Yes 4.0 or later with single_image cfmode Yes No 3.5 and earlier any any No Using ALUA is strongly recommend, but not required, for configurations that support ALUA. If you do not use ALUA, be sure to set an optimized path using the tools supplied with ESX Host Utilities or Virtual Storage Console. HP-UX configurations that support ALUA The HP-UX Host Utilities support asymmetric logical unit access (ALUA). ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on FC and iscsi SANs. You should enable ALUA when your HP-UX configuration supports it. ALUA is enabled on the igroup mapped to NetApp LUNs used by the HP-UX host. Currently, the default setting in Data ONTAP software for ALUA is disabled. You can use the NetApp Interoperability Matrix to determine a supported combination of HP-UX, Data ONTAP, Host Utilities, and Native MPIO software. You can then enable or disable ALUA based on the information in the following table: HP-UX version Native MPIO software Minimum Data ONTAP Supported HP-UX 11iv3 ALUA or later Yes HP-UX 11iv2 ALUA None No Note: ALUA is mandatory and is supported with HP UX 11iv3 September 2007 and later. Related information NetApp Interoperability Matrix -
86 86 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Linux configurations that support ALUA The Linux Host Utilities supports asymmetric logical unit access (ALUA) on hosts running Red Hat Enterprise Linux or SUSE Linux Enterprise Server. ALUA is also known as Target Port Group Support (TPGS). DM-Multipath works with ALUA to determine which paths are primary paths and which paths are secondary or partner paths to be used for failover. ALUA is automatically enabled for Linux operating system. The following configurations support ALUA: Host Utilities Version Host requirements Data ONTAP versions Host Utilities 4.0 and later Red Hat Enterprise Linux 5 Update 1 and later SUSE Linux Enterprise Server 10 SP1 and later and later Note: The Host Utilities do not support ALUA with both iscsi and Veritas environments. (MPxIO/FC) Solaris Host Utilities configurations that support ALUA The MPxIO environment of the Solaris Host Utilities supports ALUA on hosts running either the SPARC processor or the x86 processor and using the FC protocol. If you are using MPxIO with FC and active/active storage controllers with any of the following configurations, you must have ALUA enabled: Host Utilities version Host requirements Data ONTAP version Host Utilities 4.1 through 5.1 Solaris 10 update 3 and later and later Host Utilities 4.0 Solaris 10 update 2 only with QLogic drivers and SPARC processors and later iscsi Support Kit 3.0 Solaris 10 update 2 only and later Note: The Host Utilities do not support ALUA with iscsi except with the 3.0 Support Kit. The Host Utilities do not support ALUA in Veritas environments.
87 ALUA configurations 87 Windows configurations that support ALUA Windows hosts support ALUA with certain combinations of Windows, Data ONTAP, Host Utilities, and MPIO software. The following table lists configurations that support ALUA (asymmetric logical unit access). Use the Interoperability Matrix to determine a supported combination of Windows, Data ONTAP, Host Utilities, and MPIO software. Then enable or disable ALUA based on the information in the table. Windows version MPIO software Minimum Data ONTAP Supported? Server 2008 Microsoft DSM (msdsm) Yes Server 2008 Data ONTAP DSM none No Server 2008 Veritas DSM none No Server 2003 all none No ALUA is required when using the Microsoft DSM (msdsm).
89 Configuration limits 89 Configuration limits Configuration limits are available for FC, FCoE, and iscsi topologies. In some cases, limits might be theoretically higher, but the published limits are tested and supported. Next topics Configuration limit parameters and definitions on page 89 Host operating system configuration limits for iscsi and FC on page 91 60xx and 31xx single-controller limits on page 92 60xx and 31xx active/active configuration limits on page 93 30xx single-controller limits on page 95 30xx active/active configuration limits on page 96 FAS20xx single-controller limits on page 97 FAS20xx active/active configuration limits on page 98 FAS270/GF270, 900 series, and R200 single-controller limits on page 100 FAS270c/GF270c and 900 series active/active configuration limits on page 101 Configuration limit parameters and definitions There are a number of parameters and definitions related to FC, FCoE, and iscsi configuration limits. Parameter Visible target ports per host (iscsi) Visible target ports per host (FC) LUNs per host Paths per LUN Maximum LUN size Definition The maximum number of target iscsi Ethernet ports that a host can see or access on iscsi attached controllers. The maximum number of FC adapters that a host can see or access on the attached Fibre Channel controllers. The maximum number of LUNs that you can map from the controllers to a single host. The maximum number of accessible paths that a host has to a LUN. Note: Using the maximum number of paths is not recommended. The maximum size of an individual LUN on the respective operating system.
90 90 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Parameter LUNs per controller LUNs per volume FC port fan-in FC port fan-out Hosts per controller (iscsi) Hosts per controller (FC) Definition The maximum number of LUNs that you can configure per controller, including cloned LUNs and LUNs contained within cloned volumes. LUNs contained in Snapshot copies do not count in this limit and there is no limit on the number of LUNs that can be contained within Snapshot copies. The maximum number of LUNs that you can configure within a single volume. LUNs contained in Snapshot copies do not count in this limit and there is no limit on the number of LUNs that can be contained within Snapshot copies. The maximum number of hosts that can connect to a single FC port on a controller. Connecting the maximum number of hosts is generally not recommended and you might need to tune the FC queue depths on the host to achieve this maximum value. The maximum number of LUNs mapped to a host through a FC target port on a controller. The recommended maximum number of iscsi hosts that you can connect to a single controller. The general formula to calculate this is as follows: Maximum hosts = 8 * System Memory divided by 512 MB. The maximum number of hosts that you can connect to a controller. Connecting the maximum number of hosts is generally not recommended and you might need to tune the FC queue depths on the host to achieve this maximum value. igroups per controller The maximum number of initiator groups that you can configure per controller. Initiators per igroup LUN mappings per controller LUN path name length LUN size FC queue depth available per port FC target ports per controller The maximum number of FC initiators (HBA WWNs) or iscsi initiators (host iqn/eui node names) that you can include in a single igroup. The maximum number of LUN mappings per controller. For example, a LUN mapped to two igroups counts as two mappings. The maximum number of characters in a full LUN name. For example, /vol/ abc/def has 12 characters. The maximum capacity of an individual LUN on a controller. The usable queue depth capacity of each FC target port. The number of LUNs is limited by available FC queue depth. The maximum number of supported FC target ports per controller. FC initiator ports used for back-end disk connections, for example, connections to disk shelves, are not included in this number.
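To illustrate the hosts-per-controller formula for iSCSI (the memory size here is an arbitrary example, not a statement about any particular model): a controller with 4 GB (4,096 MB) of system memory yields 8 * 4,096 MB / 512 MB = 64 recommended iSCSI hosts.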
91 Configuration limits 91 Related information Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf Host operating system configuration limits for iscsi and FC Each host operating system has host-based configuration limits for FC, FCoE, and iscsi. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iscsi unless noted. Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values. Parameter Windows Linux HP-UX Solaris AIX ESX Visible target ports per host LUNs per host 64 (Windows 2000) FC, 8 paths per LUN: 64 11iv2: iv3: x=128 3.x= (Windows 2003) FC, 4 paths per LUN: (Windows 2008) iscsi, 8 paths per LUN: 32 (RHEL4, OEL4 and SLES9 series); 64 (all other series) iscsi, 4 paths per LUN: 64 (RHEL4, OEL4 and SLES9 series); 128 (all other series)
92 92 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Parameter Windows Linux HP-UX Solaris AIX ESX Paths per LUN 8 (max of 1024 per host) 4 (FC Native Multipath without ALUA) 11iv2: 8 11iv3: x=4 3.x=8 8 (all others, FC and iscsi) Max LUN size 2 TB 2 TB 2 TB 1023 GB 1 TB 2 TB 16 TB (Windows 2003 and Windows 2008) 16 TB with Solaris 9+, VxVM, EFI, and appropriate patches 16 TB with AIX 5.2ML7 or later and AIX 5.3ML3 or later Related references Configuration limit parameters and definitions on page 89 60xx and 31xx single-controller limits Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iscsi unless noted. Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values. The maximum number of LUNs and the number of HBAs that can connect to an FC port is limited by the available queue depth on the FC target ports. Parameter 31xx 6030 or or 6080 LUNs per controller 2,048 2,048 2,048 FC queue depth available per port LUNs per volume 2,048 2,048 2,048 Port fan-in
93 Configuration limits 93 Parameter 31xx 6030 or or 6080 Connected hosts per storage controller (FC) Connected hosts per controller (iscsi) igroups per controller Initiators per igroup LUN mappings per controller 4,096 8,192 8,192 LUN path name length LUN size 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) FC target ports per controller Data ONTAP 7.3.0: and later: 16 Data ONTAP 7.3.0: and later: 16 Data ONTAP 7.3.0: and later: 16 Related references Configuration limit parameters and definitions on page 89 Related information Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf 60xx and 31xx active/active configuration limits Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iscsi unless noted. Limits for active/active configuration systems are NOT double the limits for single-controller systems. This is because one controller in the active/active configuration must be able to handle the entire system load during failover. Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values. The maximum number of LUNs and the number of HBAs that can connect to an FC port is limited by the available queue depth on the FC target ports.
94 94 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Parameter 31xx 6030 or or 6080 LUNs per active/active configuration 2,048 4,096 (available on the 3160A and 3170A with PVR approval) 2,048 4,096 (with PVR approval) 2,048 4,096 (with PVR approval) FC queue depth available per port 1,720 1,720 1,720 LUNs per volume 2,048 2,048 2,048 FC port fan-in Connected hosts per active/active configuration (FC) (available on the 3160A and 3170A with PVR approval) (with PVR approval) (with PVR approval) Maximum connected hosts per active/active configuration (iscsi) ,024 igroups per active/active configuration (available on the 3160A and 3170A with PVR approval) (with PVR approval) (with PVR approval) Initiators per igroup LUN mappings per active/active configuration 4,096 8,192 (available on the 3160A and 3170A with PVR approval) 8,192 8,192 LUN path name length LUN size 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) FC target ports per active/ active configuration Data ONTAP 7.3.0: and later: 32 Data ONTAP 7.3.0: and later: 32 16Data ONTAP 7.3.0: and later: 32 Related references Configuration limit parameters and definitions on page 89
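The FC queue depth available per port bounds how many LUN paths a target port can service. As a rough, illustrative calculation only (the per-LUN host queue depth of 32 is an assumed value; actual host HBA settings vary), a port with 1,720 available queue depth can service approximately 1,720 / 32 = 53 LUN paths before its queue is exhausted, which is why these sections recommend tuning host queue depths when connecting large numbers of hosts or LUNs.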
95 Configuration limits 95 Related information Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf 30xx single-controller limits Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iscsi unless noted. Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values. The maximum number of LUNs and the number of HBAs that can connect to an FC port is limited by the available queue depth on the FC target ports. Parameter and 3070 LUNs per controller 1,024 1,024 2,048 FC queue depth available per port 1,720 1,720 1,720 LUNs per volume 1,024 1,024 2,048 Port fan-in Connected hosts per storage controller (FC) Connected hosts per controller (iscsi) igroups per controller Initiators per igroup LUN mappings per controller 4,096 4,096 4,096 LUN path name length LUN size 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) FC target ports per controller 4 4 Data ONTAP 7.3.0: and later: 12
96 96 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Related references Configuration limit parameters and definitions on page 89 Related information Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf 30xx active/active configuration limits Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iscsi unless noted. Limits for active/active configuration systems are NOT double the limits for single-controller systems. This is because one controller in the active/active configuration must be able to handle the entire system load during failover. Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values. The maximum number of LUNs and the number of HBAs that can connect to an FC port is limited by the available queue depth on the FC target ports. Parameter 3020A 3050A 3040A and 3070A LUNs per active/active configuration FC queue depth available per port 1,024 1,024 2,048 1,720 1,720 1,720 LUNs per volume 1,024 1,024 2,048 FC port fan-in Connected hosts per active/active configuration (FC) Connected hosts per active/active configuration (iscsi) igroups per active/active configuration Initiators per igroup
97 Configuration limits 97 Parameter 3020A 3050A 3040A and 3070A LUN mappings per active/active configuration 4,096 4,096 4,096 LUN path name length LUN size 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) FC target ports per active/ active configuration 8 8 Data ONTAP 7.3.0: : 24 Related references Configuration limit parameters and definitions on page 89 Related information Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf FAS20xx single-controller limits Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iscsi unless noted. Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values. The maximum number of LUNs and the number of HBAs that can connect to an FC port is limited by the available queue depth on the FC target ports. Parameter FAS2020 FAS2040 FAS2050 LUNs per controller 1,024 1,024 1,024 FC queue depth available per port LUNs per volume 1,024 1,024 1,024 FC port fan-in Connected hosts per controller (FC)
98 98 Fibre Channel and iscsi Configuration Guide for the Data ONTAP 7.3 Release Family Parameter FAS2020 FAS2040 FAS2050 Connected hosts per controller (iscsi) igroups per controller Initiators per igroup LUN mappings per controller 4,096 4,096 4,096 LUN path name length LUN size 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) FC target ports per controller Related references Configuration limit parameters and definitions on page 89 Related information Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf FAS20xx active/active configuration limits Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iscsi unless noted. Limits for active/active configuration systems are NOT double the limits for single-controller systems. This is because one controller in the active/active configuration must be able to handle the entire system load during failover. Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values. The maximum number of LUNs and the number of HBAs that can connect to an FC port is limited by the available queue depth on the FC target ports.
99 Configuration limits 99 Parameter FAS2020A FAS2040A FAS2050A LUNs per active/active configuration FC queue depth available per port 1,024 1,024 1, LUNs per volume 1,024 1,024 1,024 FC port fan-in Connected hosts per active/active configuration (FC) Connected hosts per active/active configuration (iscsi) igroups per active/active configuration Initiators per igroup LUN mappings per active/active configuration 4,096 4,096 4,096 LUN path name length LUN size 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) 16 TB (might require deduplication and thin provisioning) FC target ports per active/ active configuration Related references Configuration limit parameters and definitions on page 89 Related information Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf
FAS270/GF270, 900 series, and R200 single-controller limits

Each system model has configuration limits for reliable operation. Do not exceed the tested limits.

The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iSCSI unless noted.

Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values. The maximum number of LUNs and the number of HBAs that can connect to an FC port are limited by the available queue depth on the FC target ports.

Parameter | FAS270 and GF270 | 920 | 940 | 960 | 980 | R200
LUNs per controller | 1,024 | 2,048 | 2,048 | 2,048 | 2,048 | 2,
FC queue depth available per port | ,966 | 1,966 | 1,966 | 1, | |
LUNs per volume | ,024 | 2,048 | 2,048 | 2,048 | 2,048 | 2,
FC port fan-in | | | | | |
Connected hosts per controller (FC) | | | | | |
Connected hosts per controller (iSCSI) | | | | | |
igroups per controller | | | | | |
Initiators per igroup | | | | | |
LUN mappings per controller | 4,096 | 8,192 | 8,192 | 8,192 | 8,192 | 8,
LUN path name length | | | | | |
LUN size | 6 TB | 4 TB | 6 TB | 12 TB | 12 TB | 12 TB
FC target ports per controller | | | | | |

Related references
Configuration limit parameters and definitions on page 89

Related information
Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf

FAS270c/GF270c and 900 series active/active configuration limits

Each system model has configuration limits for reliable operation. Do not exceed the tested limits.

The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iSCSI unless noted. Limits for active/active configuration systems are NOT double the limits for single-controller systems. This is because one controller in the active/active configuration must be able to handle the entire system load during failover; a simple planning check based on this rule follows this section.

Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values. The maximum number of LUNs and the number of HBAs that can connect to an FC port are limited by the available queue depth on the FC target ports.

Parameter | FAS270c and GF270c | 920 | 940 | 960 | 980
LUNs per active/active configuration | 1,024 | 2,048 | 2,048 | 2,048 | 2,
FC queue depth available per port | ,966 | 1,966 | 1,966 | 1,966 |
LUNs per volume | 1,024 | 2,048 | 2,048 | 2,048 | 2,048
FC port fan-in | | | | |
Connected hosts per active/active configuration (FC) | | | | |
Connected hosts per active/active configuration (iSCSI) | | | | |
igroups per active/active configuration | | | | |
Initiators per igroup | | | | |
LUN mappings per active/active configuration | ,096 | 8,192 | 8,192 | 8,192 | 8,
LUN path name length | | | | |
LUN size | 6 TB | 4 TB | 6 TB | 12 TB | 12 TB
FC target ports per active/active configuration | | | | |

Related references
Configuration limit parameters and definitions on page 89

Related information
Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf
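The failover rule stated above suggests a straightforward planning check: size each controller so that the combined object counts of both nodes stay within the single-controller limit, not within twice that limit. The short Python sketch below illustrates the idea for LUN counts; the per-node counts and the 1,024-LUN limit are assumed example figures used only for illustration, not recommendations from this guide.

    def fits_after_takeover(luns_node_a, luns_node_b, single_controller_limit):
        # During takeover one surviving controller owns both workloads, so the
        # combined count must stay within what a single controller supports.
        return (luns_node_a + luns_node_b) <= single_controller_limit

    # Assumed example: 600 LUNs on each node against an assumed 1,024-LUN
    # single-controller limit; 1,200 exceeds 1,024, so this plan does not fit.
    print(fits_after_takeover(600, 600, 1024))   # prints False

The same style of check can be applied to the other per-configuration parameters in the tables above, such as igroups, initiators, and connected hosts.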
Index

20xx
  active/active configuration limits 98
  single-controller limits 97
3020 and 3050
  direct-attached active/active configuration FC topologies 50
  direct-attached single-controller FC topologies 49
  multifabric active/active configuration FC topologies 47
  single-fabric active/active configuration FC topologies 46
  single-fabric single-controller FC topologies 45
3040 and 3070
  direct-attached active/active configuration FC topologies 44
  direct-attached single-controller FC topologies 43
  multifabric active/active configuration FC topologies 42
  single-fabric active/active configuration FC topologies 41
  single-fabric single-controller FC topologies 40
30xx
  FC topologies 38
  active/active configuration limits 96
  single-controller configuration limits 95
  target port configuration 39
31xx
  FC topologies 32
  active/active configuration limits 93
  direct-attached active/active configuration FC topologies 37
  direct-attached single-controller FC topologies 36
  multifabric active/active configuration FC topologies 35
  single-controller configuration limits 92
  single-fabric active/active configuration FC topologies 34
  single-fabric single-controller FC topologies 33
  target port configuration 32
60xx
  FC topologies 24
  active/active configuration limits 93
  direct-attached active/active configuration FC topologies 31
  direct-attached single-controller FC topologies 30
  multifabric active/active configuration FC topologies 28
  single-controller configuration limits 92
  single-fabric active/active configuration FC topologies 27
  single-fabric single-controller FC topologies 26
  target port configuration 25
900 series
  FC topologies 60
  active/active configuration limits 101
  direct-attached FC topologies 66
  multifabric active/active configuration FC topologies
  single-controller limits 100
  single-fabric active/active configuration FC topologies 62
  single-fabric single-controller FC topologies 61

A
active/active configuration
  iSCSI direct-attached configuration 18
  iSCSI multinetwork configuration 17
  iSCSI single-network configuration 15
AIX
  host configuration limits 91
ALUA
  ESX configurations supported 85
  supported AIX configurations 83
  supported configurations 86
  Windows configurations supported 87
ALUA configurations 83
asymmetric logical unit access (ALUA)
  configurations 83

C
cfmode
  overview 23
configuration limits
  20xx active/active configuration storage systems 98
  20xx single-controller storage systems 97
  30xx active/active configuration storage systems 96
  30xx single-controller storage systems 95
  31xx active/active configuration storage systems 93
  31xx single-controller storage systems 92
  60xx active/active configuration storage systems 93
  60xx single-controller storage systems 92
  900 series active/active configuration storage systems 101
  900 series single-controller storage systems 100
  by host operating system 91
  FAS270 active/active configuration storage systems 101
  FAS270 single-controller storage systems 100
  parameters defined 89
  R200 single-controller storage systems 100

D
DCB (data center bridging)
  switch for FCoE 67
direct-attached active/active configuration FC topologies
  3020 and 3050 50
  3040 and 3070 44
  31xx 37
  60xx 31
  FAS20xx 56
direct-attached configuration
  iSCSI 18
direct-attached FC topologies
  900 series 66
  FAS270/GF270c 59
  R200 66
direct-attached single-controller FC topologies
  3020 and 3050 49
  3040 and 3070 43
  31xx 36
  60xx 30
  FAS20xx 55
dynamic VLANs 19

E
EMC CLARiiON
  shared configurations 81
ESX
  host configuration limits 91
  supported ALUA configurations 85
expansion FC ports
  usage rules 22

F
FAS20xx
  FC topologies 51
  direct-attached active/active configuration FC topologies 56
  direct-attached single-controller FC topologies 55
  multifabric active/active configuration FC topologies 54
  multifabric single-controller FC topologies 53
  single-fabric active/active configuration FC topologies 52
  single-fabric single-controller FC topologies 51
FAS270
  active/active configuration limits 101
  single-controller limits 100
FAS270/GF270c
  FC topologies 57
  direct-attached FC topologies 59
  multifabric active/active configuration FC topologies 58
  single-fabric active/active configuration FC topologies 57
FC
  30xx target port configuration 39
  30xx topologies 38
  31xx target port configuration 32
  31xx topologies 32
  60xx target port configuration 25
  60xx topologies 24
  900 series topologies 60
  FAS20xx topologies 51
  FAS270/GF270c topologies 57
  multifabric switch zoning 78
  onboard and expansion port usage rules 22
  R200 topologies 60
  single-fabric switch zoning 77
  supported cfmode settings 23
  switch configuration 23
  switch hop count 23
  switch port zoning 76
  switch WWN zoning 76
  switch zoning 75
  switch zoning with individual zones 76
  topologies overview 21
FC protocol
  ALUA configurations 83, 86
FCoE
  initiator and target combinations 67, 68
  supported configurations 68
  switch zoning 75
FCoE topologies
  FCoE initiator to FC target 69
  FCoE initiator to FCoE and FC mixed target 71
  FCoE initiator to FCoE target 70
  FCoE initiator to FCoE target mixed with IP traffic 73
Fibre Channel over Ethernet (FCoE)
  overview 67

H
hard zoning
  FC switch 76
heterogeneous SAN
  using VSAN 21
hop count
  for FC switches 23
host multipathing software
  when required 24
HP-UX
  host configuration limits 91

I
initiator FC ports
  onboard and expansion usage rules 22
initiators
  FCoE and FC combinations 67, 68
inter-switch links (ISLs)
  supported hop count 23
IP traffic
  in FCoE configurations 73
iSCSI
  direct-attached configuration 18
  dynamic VLANs 19
  multinetwork configuration 17
  single-network configuration 15
  static VLANs 19
  topologies 15
  using VLANs 19

L
Linux
  host configuration limits 91
Linux configurations
  ALUA support automatically enabled 86
  asymmetric logical unit access Target Port Group Support 86

M
MPIO
  ALUA configurations 83
MPIO software
  when required 24
MPxIO
  ALUA configurations 86
multifabric active/active configuration FC topologies
  3020 and 3050 47
  3040 and 3070 42
  31xx 35
  60xx 28
  900 series
  FAS20xx 54
  FAS270/GF270c 58
multifabric single-controller FC topologies
  FAS20xx 53
multipathing software
  when required 24

N
Native OS
  ALUA configurations 83

O
onboard FC ports
  usage rules 22

P
parameters
  configuration limit definitions 89
point-to-point
  FC switch port topology 23
port topology
  FC switch 23
port zoning
  FC switch 76
PowerPath
  with shared configurations 81

R
R200
  FC topologies 60
  direct-attached FC topologies 66
  single-controller limits 100
  single-fabric single-controller FC topologies 61

S
shared SAN configurations 81
single-fabric active/active configuration FC topologies
  3020 and 3050 46
  3040 and 3070 41
  31xx 34
  60xx 27
  900 series 62
  FAS20xx 52
  FAS270/GF270c 57
single-fabric single-controller FC topologies
  3020 and 3050 45
  3040 and 3070 40
  31xx 33
  60xx 26
  900 series 61
  FAS20xx 51
  R200 61
soft zoning
  FC switch 76
Solaris
  host configuration limits 91
static VLANs 19
switch
  FC configuration 23
  FC hop count 23
  FC multifabric zoning 78
  FC port zoning 76
  FC single-fabric zoning 77
  FC WWN zoning 76
  FC zoning 75
  FC zoning with individual zones 76
  FCoE zoning 75

T
target FC ports
  onboard and expansion usage rules 22
target port configurations
  30xx 39
  31xx 32
  60xx 25
targets
  FCoE and FC combinations 67, 68
topologies
  30xx FC topologies 38
  31xx FC topologies 32
  60xx FC topologies 24
  900 series FC topologies 60
  FAS20xx FC topologies 51
  FAS270/GF270c FC topologies 57
  FC 21
  FCoE initiator to FC target 69
  FCoE initiator to FCoE and FC mixed target 71
  FCoE initiator to FCoE target 70
  FCoE initiator to FCoE target mixed with IP traffic 73
  iSCSI 15
  R200 FC topologies 60
topologies, 3020 and 3050
  direct-attached active/active FC configuration 50
  direct-attached single-controller FC topologies 49
  multifabric active/active FC configuration 47
  single-fabric active/active FC configuration 46
  single-fabric single-controller FC topologies 45
topologies, 3040 and 3070
  direct-attached active/active FC configuration 44
  direct-attached single-controller FC topologies 43
  multifabric active/active FC configuration 42
  single-fabric active/active FC configuration 41
  single-fabric single-controller FC topologies 40
topologies, 31xx
  direct-attached active/active FC configuration 37
  direct-attached single-controller FC topologies 36
  multifabric active/active FC configuration 35
  single-fabric active/active FC configuration 34
  single-fabric single-controller FC topologies 33
topologies, 60xx
  direct-attached active/active FC configuration 31
  direct-attached single-controller FC topologies 30
  multifabric active/active FC configuration 28
  single-fabric active/active FC configuration 27
  single-fabric single-controller FC topologies 26
topologies, 900 series
  direct-attached FC topologies 66
  multifabric active/active FC configuration
  single-fabric active/active FC configuration 62
  single-fabric single-controller FC topologies 61
topologies, FAS20xx
  direct-attached active/active FC configuration 56
  direct-attached single-controller FC topologies 55
  multifabric active/active FC configuration 54
  multifabric single-controller FC topologies 53
  single-fabric active/active FC configuration 52
  single-fabric single-controller FC topologies 51
topologies, FAS270/GF270c
  direct-attached FC topologies 59
  multifabric active/active FC configuration 58
  single-fabric active/active FC configuration 57
topologies, R200
  direct-attached FC topologies 66
  single-fabric single-controller FC topologies 61

V
virtual LANs
  reasons for using 19
VLANs
  dynamic 19
  reasons for using 19
  static 19
VSAN
  for heterogeneous SAN 21

W
Windows
  host configuration limits 91
  supported ALUA configurations 87
WWN zoning
  FC switch 76

Z
zoning
  FC switch 75
  FC switch by port 76
  FC switch by WWN 76
  FC switch multifabric 78
  FC switch single-fabric 77
  FC switch with individual zones 76
  FCoE switch 75