Cisco Unified Computing System (UCS) Storage Connectivity Options and Best Practices with NetApp Storage
White Paper

Chris Naddeo, Cisco UCS Technical Marketing Engineer
Patrick Strick, Technical Marketing Engineer, Datacenter Platforms, NetApp

Contents
About the Authors
Introduction
UCS Storage Overview
Fibre Channel and FCoE
FC and FCoE Direct Connect FC Switch Mode
FC Adapter Policies
iSCSI
iSCSI with Data ONTAP
UCS and iSCSI Overview
UCS iSCSI Boot Caveats with the 2.0(1) Release
UCS iSCSI Boot and Fabric Failover
Typical iSCSI SAN Deployment Upstream in End Host Mode
Highlights of Configuring iSCSI Boot with UCS Manager
iSCSI SAN Direct Connect Appliance Port in End Host Mode
NAS and UCS Appliance Ports
Failure Scenario 1: UCS FI Fails (Simplest Failure Case)
Recovery from Failure Scenario 1: FI Is Repaired, Rebooted
Failure Scenario 2: UCS Failure of Chassis IOM or All Uplinks to FI
Failure Scenario 3: Underlying Multimode VIF Failure (Appliance Port)
Failure Scenario 4: Last Uplink on UCS FI Fails
Failure Scenario 5: NetApp Controller Failure
Conclusion
References
Appendix I: FC Adapter Policies Explained
  Scope
  Purpose of the FC Adapter Policies
  Finding and Creating New FC Adapter Policies
  Default Settings
  Exposed Parameters Detailed Review
  Hardcoded Parameters
  Array Vendor Considerations
Appendix II: Cisco and NetApp Support Matrices and References
  Cisco HCLs
  NetApp Interoperability Matrix Tool
  Process Flow to Check Your Configuration
About the Authors

Chris Naddeo is a Technical Marketing Engineer for the Cisco Unified Computing System (UCS). His focus is on the storage connectivity aspects of UCS, working closely with the storage partner ecosystem on joint projects, Cisco Validated Designs, and certifications. He is also responsible for meeting with potential and existing UCS customers and educating Cisco field sales on the UCS platform. Prior to Cisco, Chris worked at NetApp, where he was a field sales consulting systems engineer focusing on Oracle on NetApp best practices as well as the Data ONTAP GX platform. Prior to NetApp he spent over 8 years at VERITAS/Symantec in a variety of roles including field sales specialist, Technical Product Manager, and Sr. Manager of Product Management. His focus while at VERITAS was on current and next-generation volume management and file systems. Chris is based out of the Philadelphia, PA area.

Patrick Strick is a Technical Marketing Engineer in NetApp's Datacenter Platforms group. He is currently focusing on ways to integrate NetApp and partner technology through joint best practices and product development. Previously, he led the technical marketing effort for Fibre Channel over Ethernet at NetApp and has over 10 years of experience in IT and storage administration and support.

Introduction

This paper provides an overview of the various storage features, connectivity options, and best practices when using the Unified Computing System (UCS) with NetApp storage. It covers storage in detail, both block and file protocols, and the best practices for using these features as exposed in UCS with NetApp storage. There is no application or specific use case focus to this paper. There are existing Cisco Validated Designs that should be referenced for a deeper understanding of how to configure the UCS and NetApp systems in detail for various application-centric use cases. Those documents treat the combination of UCS and NetApp from a more holistic, end-to-end approach and include the design details and options for the various elements of UCS and NetApp systems. The reader is encouraged to review these documents, which are referenced below.

Designing Secure Multi-Tenancy into Virtualized Data Centers
Deploying Enhanced Secure Multi-Tenancy into Virtualized Data Centers
Several FlexPod Cisco Validated Designs can be found at this URL

The Cisco Unified Computing System (UCS) provides a flexible environment to host your virtualized applications. It also provides flexibility in the way enterprise storage can be provisioned to it. With this flexibility come options, and with options come questions. This paper addresses many of those questions by outlining each of the ways storage can be attached and the recommended way it should be configured. The following examples use NetApp FAS storage appliances, which can provide iSCSI, Fibre Channel, and FCoE SAN as well as CIFS and NFS NAS simultaneously from one unified storage system.
UCS Storage Overview

UCS is designed as a revolutionary compute blade architecture, but from a storage perspective the entire UCS system appears to the outside world as a collection of Network Interface Cards (NICs) and Fibre Channel Host Bus Adapters (FC HBAs). This is the default mode of operation for UCS and the one that should be employed, with the exception of those cases outlined in this paper. The UCS architecture is shown in the diagram below, but it is important to realize that the Fabric Interconnects at the top of the UCS architecture are not general purpose Nexus FC or Ethernet switches; rather, they are repurposed with special hardware and software to serve as controllers for the array of compute nodes behind them. As such they run in an End Host Mode of operation for both Ethernet and FC storage options. In the FC domain this is called N-Port Virtualization (NPV) mode. This mode allows the UCS Fabric Interconnects to act like a server, allowing multiple hosts to log in to the upstream fabric on the same number of FC uplinks.

Figure 1. UCS and External Storage Connectivity Default Mode of Operation

Storage I/O enters and departs the UCS system on the Fabric Interconnect via uplink ports. There are different types of uplink ports and, as we shall see, special port types when using UCS in a direct attach configuration. The paper will discuss direct connect in detail later; for our current purposes these are the uplink types described and then shown below in Figure 2.
Figure 2. UCS Uplink Ports

Fibre Channel and FCoE

The use of FCoE in UCS is completely transparent to the host operating system. The OS simply sees 10G Ethernet and FC PCIe device handles and the associated device drivers. It is possible for UCS to directly address FCoE storage targets via the direct connect topology, which is discussed later. Many customers have deployed FCoE storage with NetApp and UCS using a Nexus 5000 family switch as an FC to FCoE bridge. A future version of UCS will support FCoE northbound out of the standard Ethernet uplinks. A requirement for the upstream FC switch is that it is running in NPIV mode. Most modern FC switches on the market support this mode of operation. Note that UCS does not require the upstream FC switches to be Cisco, as NPIV is an industry standard. Configured as shown here, all of the blades in the UCS chassis appear to the upstream fabric as several HBAs with unique WWPNs.

FC and FCoE Direct Connect FC Switch Mode

UCS versions 1.4 and 2.0 do support directly connecting arrays to the Fabric Interconnects so long as an upstream Cisco MDS or Nexus 5000 FC switch is connected via one of the FC uplinks. This upstream FC switch (Cisco only) is necessary because these releases of UCS Manager do not support creating or managing zones on the Fabric Interconnects. The zones need to be created on the upstream MDS/N5K and are downloaded automatically to the UCS FIs once the FIs are put into FC switching mode. A hedged configuration sketch of the upstream switch side follows, and Figure 3 shows a currently supported direct connect topology.
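The snippet below is a minimal sketch of the upstream NX-OS/MDS configuration described above: enabling NPIV for the default end-host (NPV) topology, and defining a zone that the FIs then inherit in the direct connect case. The VSAN number, zone names, and WWPNs are hypothetical example values; substitute the vHBA WWPN from the service profile and the WWPN of the NetApp target port for your environment.

configure terminal
 ! required on the upstream switch because the FIs log in as N-Ports in end host (NPV) mode
 feature npiv
 ! zone defined on the upstream MDS/N5K; pushed down to the FIs when they are in FC switch mode
 zone name ucs-esx01-netapp vsan 100
   member pwwn 20:00:00:25:b5:0a:00:01
   member pwwn 50:0a:09:81:00:11:22:33
 zoneset name fabric-a vsan 100
   member ucs-esx01-netapp
 zoneset activate name fabric-a vsan 100

The first WWPN would come from the blade's vHBA (typically drawn from a UCS WWPN pool) and the second from the NetApp FC or FCoE target port.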
Figure 3. Supported Direct Connect FC and FCoE Topology

FC Adapter Policies

UCS Manager instruments policies to program the Cisco VIC for key FC parameters. Many customers use host based utilities for such tasks with Emulex or QLogic based FC adapters. Appendix I discusses the FC adapter policies in detail, with each attribute explained as to its function as well as guidance on when to change it. It is important to note that in general one does not need to change the default settings unless following the instructions indicated in Appendix I.

iSCSI

iSCSI is a protocol that enables the transport of SCSI commands over TCP/IP. It was ratified in 2004 and is documented in RFC 3720. iSCSI is similar to Fibre Channel in that it allows multiple hosts to connect to a storage system for disk resources. The biggest difference is that iSCSI runs at the application layer of the networking stack on top of TCP/IP, whereas FC has its own entire stack and FCoE implements the upper layers of Fibre Channel on top of Ethernet at the network layer. Because iSCSI runs on top of TCP/IP it is routable and does not require the administrator to have knowledge of zoning, WWPNs, fabric services, or other Fibre Channel SAN specifics. It only requires a stable Ethernet network and knowledge of host and storage system management.
To provide security and supportability, iSCSI traffic should be isolated on its own VLAN or physical network. Without this separation, not only is the iSCSI data potentially visible to other devices on the network, but if an issue arises it will be more difficult to isolate the cause. Additionally, if more than one path from the host to the target is configured (MPIO), each path should be on its own network subnet (they can share the same VLAN). This will prevent potential routing issues.

iSCSI with Data ONTAP

For the iSCSI protocol in Data ONTAP 7.x and 8.x 7-Mode, each storage controller in an HA pair services only its own LUNs unless in a controller failover. That means during normal operations the host will only access a LUN through the owning controller's network ports. This is different from Fibre Channel, where the host can access the LUN through the ports on both controllers in the HA pair. With iSCSI, in the case of a storage failover, the partner node assumes ownership of the LUNs and the identity of the network ports. Once the takeover is complete, the host will reconnect its iSCSI session and any outstanding SCSI requests will be resent.

UCS and iSCSI Overview

UCS has supported iSCSI data LUN access via a variety of adapter models since UCS 1.0. This access is via the operating system iSCSI software initiators when the Cisco Virtual Interface Card (VIC) is used. Cisco optionally offers the Broadcom adapter, which can be configured for hardware iSCSI outside of UCS using the host utilities provided by Broadcom. iSCSI hardware offload is not a requirement to support booting; rather, the adapter only needs to support configuring the iSCSI Boot Firmware Table (iBFT) in the OptionROM of the adapter. This allows the booting feature to be supported on the Cisco VIC even though this adapter does not have any hardware iSCSI features. The need for hardware iSCSI on an adapter has been overcome in recent years by the power of the Intel processors. There are more than enough cores and GHz available to process the iSCSI traffic in addition to the application workload. Many customers struggle to drive CPU utilization to reasonable levels, thus fueling the trend toward high density virtualized environments. UCS 2.0 introduced the ability to boot from iSCSI targets. This is done via a new object in UCS called an iSCSI vNIC, which is an overlay or child object of the normal UCS vNIC. The iSCSI vNIC has a variety of properties and consumes various pools and policies, all of which are there to allow UCS to properly configure the OptionROM memory of the adapter. Once the server has completed the boot cycle, the iSCSI vNIC is no longer in the I/O path and all I/O is done via the normal vNICs and the host iSCSI initiator if using the VIC adapter. UCS 2.0 supports the Cisco VIC and the Broadcom adapter models for this feature, but NetApp has only formally tested and supports the VIC adapter. Broadcom will only be considered if a customer submits a product variance request (PVR) to NetApp. The support of iSCSI boot now allows customers to deploy UCS with all NAS storage protocols and still retain the service profile mobility use case, vs. requiring the OS to be installed on local drives. There are some notable caveats that need mentioning for using this feature with UCS 2.0(1).

UCS iSCSI Boot Caveats with the 2.0(1) Release

Not all adapters instantiate iSCSI HBAs to the operating system identically, or at all.
Contrast this with vNICs and vHBAs, which are standard across adapters in terms of what the OS sees and what UCSM shows in the Equipment tab. The net effect is that the operating system view of iSCSI HBAs (PCIe devices) can diverge from what is shown in the UCS Equipment tab. This has been addressed in the UCSM interface via a properties attribute (the host visibility field) that shows the distinction.
- VIC: iSCSI vNICs are not visible to the operating system.
- Broadcom: iSCSI vNICs are visible to the operating system.

Other caveats of note:
- iSCSI boot using Appliance ports with C-Series servers under UCSM control is not supported until the 2.0(2) release of UCSM. The reader should not deploy this until NetApp completes qualification of this feature and topology.
- iSCSI Qualified Names (IQNs) are not pooled; they must be manually entered for each initiator. Note that with software iSCSI initiators there is typically a single IQN per host. The Cisco VIC is not a hardware iSCSI implementation, and as such, after booting is complete it plays no role in the iSCSI I/O stack as the host initiator takes over. The user may want to use a single iSCSI vNIC, or rename the individual iSCSI vNICs to match, to accommodate this approach. IQNs are pooled in the UCS 2.0(2) release available in March 2012.
- The Cisco VIC does not support non-native VLAN boot. The native VLAN must be selected during configuration.
- The configured boot order shown in the UCSM screens will show "storage" for the iSCSI targets when using the Cisco VIC. This is due to the SCSI interrupt vector that the VIC employs.
- There is no failover support for the iSCSI vNICs during the boot phase itself. If there is a path failure during the actual boot sequence, the blade will have to be rebooted to use the remaining path. There are several ways to accomplish this, which are discussed in a later section on UCS iSCSI and fabric failover.
- You can configure one or two iSCSI vNICs per service profile. However, in the 2.0(1) release there are some points to consider when choosing which to use. When you configure two iSCSI vNICs on two separate physical adapters (full width blade scenario), if the secondary iSCSI vNIC is discovered first and the primary fails, boot will not succeed. The workaround is to change the boot policy by swapping the primary/secondary roles and re-associate.
- If the primary boot path fails, even with a secondary configured on a separate adapter, the boot will fail. The workaround is to configure primary and secondary on the same adapter or use the process mentioned in the preceding bullet.
- EUI formatting for initiators is not supported in this release.
- There are no statistics for iSCSI vNICs.
- Some operating systems require a pingable gateway IP address during the iSCSI booting phase or the boot will fail. If the topology is using Appliance ports and there is no upstream network to supply this address, the solution is to make the gateway IP address the same as the storage controller IP.
- The Windows 2K8 WFP Lightweight Filter Driver must be disabled manually on the secondary vNIC (not used for iSCSI boot) after the OS is up. If this is not done and the primary path fails, a subsequent reboot using the secondary path will fail. This is a known Microsoft issue.
- Remove local drives in blades for Windows iSCSI boot.

UCS iSCSI Boot Changes with the 2.0(2) Release (Available March 2012)

This maintenance release adds some enhancements to the iSCSI feature set. Some of the caveats listed above have been addressed:
- The primary and secondary iSCSI vNICs can now span adapters, and in all failure cases the iBFT will still get posted without service profile reconfigurations or reboots.
- IQNs are now pooled, and as such, uniqueness of IQN names is enforced. If the user still desires one IQN per host, the recommendation is to configure only one iSCSI vNIC.

UCS iSCSI Boot and Fabric Failover

The UCS feature of Fabric Failover (FF) is commonly used for vNICs that carry Ethernet traffic in UCS. However, with iSCSI the vNICs should not use the FF feature, as the best practice is to use the host based multipathing drivers for all load balancing and failure handling. This is consistent with the FC world, where no FF feature exists for UCS vHBAs. Additionally, during boot or install time the FF feature is not even active or possible. The temporary management NIC interface created to program the OptionROM with the boot parameters does not support the FF feature. The best practice is to use the host multipathing driver to handle iSCSI traffic load balancing and failover (analogous to the FC protocol) and thus not use the FF feature. A common deployment would be to have two vNICs used for iSCSI traffic and then two or more additional vNICs used for standard Ethernet traffic. One could enable the FF feature on the standard Ethernet traffic vNICs, just not the ones used for iSCSI traffic. Remember, all the iSCSI vNIC does is program the VIC to support booting; then it is out of the way and the host is just using operating system iSCSI initiators to access iSCSI LUNs.

Typical iSCSI SAN Deployment Upstream in End Host Mode

The standard method for connecting storage to a Cisco UCS is with the Fabric Interconnect in Ethernet End Host mode and the storage target connected through an upstream switch or switched network. Although the Cisco UCS uses 10 GbE internally, the storage target does not have to, because it is connected through your existing network. However, it certainly can if both the network and target support 10 GbE. Whether the upstream network is 1 GbE or 10 GbE, using jumbo frames (e.g., an MTU size of 9000) will improve performance by reducing the number of individual frames that must be sent for a given amount of data and by reducing the need to break SCSI data blocks up over multiple Ethernet frames. These in turn also lower host and storage CPU utilization. If jumbo frames are used, you must be sure that all network equipment between and including the UCS and the storage target is able and configured to support the larger frame size. These considerations can be discussed with the network administrator. The host's configuration will depend on which CNA is installed in the UCS blade. The Cisco M81KR Virtual Interface Card (VIC) has the ability to present many virtual interfaces to the host from a single card. With this capability, vNICs can be created specifically for the iSCSI traffic. In UCS Manager, when creating the vNIC or vNIC template, select the iSCSI VLAN and assign it as the native VLAN. Note that the VIC does not support non-native VLAN booting with iSCSI.

Highlights of Configuring iSCSI Boot with UCS Manager

We will take this from the perspective of creating a UCS service profile configured for iSCSI boot. This shows the overall flow as well as pointing out some necessary or optional steps. A later section covers the caveats one needs to be aware of. The overall steps are the following:
- Create vNICs using expert mode.
- Add iSCSI vNICs. The overlay vNIC is the parent to the iSCSI vNIC you are creating.
- iSCSI Adapter Policy. If you are using a Broadcom adapter then you will need one policy for install and one for boot, or make changes to the same policy. This is not necessary with the Cisco VIC.
- VLAN selection (from the overlay/parent vNIC VLANs).
- iSCSI MAC address assignment. For the VIC this must be set to None; for Broadcom use a standard MAC pool, a user-defined value, etc.
- Boot policy: add the iSCSI vNICs. The IQN is manually entered here; IQNs are not pooled until the 2.0(2) release.

Choosing the expert mode for Networking will expose the bottom of the GUI screen and the ability to create the iSCSI vNIC objects. A detailed review of the iSCSI vNIC is key to understanding this feature.
1. Object name: user preference.
2. One of the vNICs created in the previous step. This is the child object, and it inherits some attributes from the parent vNIC you have previously created.
3. Adapter policy: typically this is created before you create the service profile, but optionally it can be done here.
4. VLAN: inherited from the overlay vNIC; a drop-down list of the available VLANs assigned to the overlay vNIC. Note: the Cisco VIC only supports native VLAN boot.
5. MAC address assignment: for the VIC set to None; for Broadcom you can use the standard MAC pool.

The following shows the relationship between the vNIC and the iSCSI vNICs after you have created all the objects.
Placement policies are the next step, but unlike traditional vNICs there is no ability to configure placement policies for iSCSI vNICs; they simply inherit the parent vNIC policy. However, always put the primary iSCSI vNIC on the lowest vNIC PCIe device so it is discovered first. This is discussed in the caveats section of this document as well. The way to handle this is to assign the primary iSCSI vNIC to the lowest numbered vNIC name, assuming you will then use a placement policy to ensure this vNIC is discovered first. If no placement policy is used, the first vNIC is usually discovered first. Now we must configure the boot policy and the correct parameters that UCS will use to properly program the OptionROM, post the iBFT, and then boot successfully. Note that in the releases of UCS 2.0(2) and earlier you cannot create a separate iSCSI boot policy outside of the service profile. This FC-like capability is coming in a future release of UCS.

A note about templates and iSCSI boot policies: a service profile template can be created as it normally would be, but you must then create the iSCSI boot policy within the context of the template vs. creating an iSCSI boot policy outside of the template creation. This capability will be introduced in a later release.

Now you can either manually populate the initiator IQN name in the 2.0(1) release or use the new pool feature in the 2.0(2) software. The naming convention is enforced to some degree, but the user should check the NetApp documentation on the specifics of formatting. The screen shot below shows how to configure IQN pools in the 2.0(2) release.
Once you select OK, you will see a block of IQNs created, with a numerical suffix appended to the name you specified, as shown here.
Now, when you are modifying the boot policy for the iSCSI target, there are a few steps. The Modify Boot Parameters option is grayed out until you click the Create iSCSI vNIC button at the bottom, as shown here.
Clicking Create iSCSI vNIC really just allows UCSM to choose the overlay vNIC at this step. You need not create a new iSCSI vNIC, but this step is necessary in case the user wants to change the assignment of the iSCSI vNIC to the (overlay) vNIC. So, as shown here, you must choose the overlay vNIC to use.
Now the boot parameters button is active, and upon clicking it you are presented with the following screen, where you can manually enter an IQN or just use a previously created pool.
Next we will need parameters for the target that the initiator is logging into.
Manually populate the target IQN name, which is obtained from the NetApp controller management software. The authentication profile here is for the target and cannot be the same as the initiator profile; this allows mutual CHAP. The authentication credentials set on a target profile are used by the initiator to verify the challenge string sent to it by that target. The target IP address is manually entered. When the service profile is complete you will see the iSCSI vNIC objects in the UCSM object tree hierarchy. However, with the Cisco VIC the OS does not see these additional interfaces, as we discussed in the caveats section. The host visibility field was created to address any confusion.
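For reference, the target-side values entered above can be read from, and a boot LUN presented by, a Data ONTAP 7-Mode controller with commands along the following lines. This is a minimal sketch: the volume path, igroup name, LUN size, ostype, and initiator IQN are hypothetical example values.

FAS3240-A> iscsi nodename
iSCSI target nodename: iqn.1992-08.com.netapp:sn.123456789
FAS3240-A> igroup create -i -t vmware esx01_boot_ig iqn.1992-08.com.cisco:ucs-esx01
FAS3240-A> lun create -s 20g -t vmware /vol/boot_vol/esx01_boot
FAS3240-A> lun map /vol/boot_vol/esx01_boot esx01_boot_ig 0

The nodename output supplies the target IQN for the UCS boot parameters, and the igroup ties the boot LUN to the initiator IQN configured in the service profile (or drawn from the IQN pool).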
The diagram below shows a typical iSCSI configuration with a NetApp FAS 3200 series HA pair. The blue lines represent the logical layout of the iSCSI connections. Each UCS blade in this example has two vNICs for iSCSI, one for each UCS fabric, with failover disabled. The Fabric Interconnects use virtual PortChannels to connect to the upstream top of rack switches, which in turn connect to the rest of the LAN. It should be noted that the storage controllers could also be connected directly to the upstream top of rack switches instead of through the LAN cloud. Here, each storage controller is connected to its own pair of top of rack switches with four 1 GbE links. 1 GbE is used to show a common example; two alternatives would be using 10 GbE instead of 1 GbE, or using only a single link per switch instead of a PortChannel. In any of these cases, the initiator is configured with two paths to the target, one on each subnet, with both on the iSCSI VLAN. How many Ethernet links you use from your storage to the switches will depend on the bandwidth requirements, whether they are being shared with other applications, and the speed of the storage network interfaces.
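If jumbo frames are used end to end, as recommended earlier, the upstream switches in this topology must also carry the larger MTU. The following is a minimal sketch for a Nexus 5000 top of rack switch, where MTU is set through a network-qos policy rather than per interface; the policy name is an example value, and other switch families configure MTU differently.

configure terminal
 ! allow jumbo frames system wide on the Nexus 5000
 policy-map type network-qos jumbo
   class type network-qos class-default
     mtu 9216
 system qos
   service-policy type network-qos jumbo

On the UCS side, the QoS system class and the vNIC MTU must be raised to match, and the NetApp interfaces set to mtusize 9000 as shown in the Appliance port example later in this paper.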
Figure 4. Flow Chart for Using Interface Groups on NetApp Controllers
Figure 5. Reference or Typical iSCSI Configuration

iSCSI SAN Direct Connect Appliance Port in End Host Mode

With Cisco UCS firmware version 1.4 came the ability to directly connect storage to the Fabric Interconnect. This removes the need for an intermediate network and can be useful in a pod-like scenario. There are specific requirements and considerations for directly connected IP storage that must be observed.
Figure 6. Direct Connect iSCSI Topology with UCS

When using an Appliance port to connect iSCSI storage, VLANs must be configured on each Appliance port. Unless the storage controller is only going to be serving iSCSI on the network interfaces connected to the UCS, the Appliance ports should be configured as trunk ports with all desired VLANs available. This allows the storage controller's network adapter to serve iSCSI traffic on one VLAN and other data on separate VLANs. The default VLAN, VLAN 1, is also the default native VLAN.
On the storage controller, the network ports connected to the Fabric Interconnect need to have VLAN interfaces configured for each non-native VLAN, including the one used for iSCSI. An example of this is shown below.

FAS3240-A> vlan create e1a 100
Ethernet e1a: Link being reconfigured.
vlan: e1a-100 has been created
Ethernet e1a: Link up.
FAS3240-A> vlan create e1b 100
Ethernet e1b: Link being reconfigured.
vlan: e1b-100 has been created
Ethernet e1b: Link up.
FAS3240-A> ifconfig e1a netmask mtusize 9000 partner e1a-100
FAS3240-A> ifconfig e1b netmask mtusize 9000 partner e1b-100
FAS3240-A> ifconfig -a
e1a: flags=0x80f0c867<broadcast,running,multicast,tcpcksum,vlan> mtu 9000
 ether 00:c0:dd:11:40:2c (auto-10g_twinax-fd-up) flowcontrol full
e1b: flags=0x80f0c867<broadcast,running,multicast,tcpcksum,vlan> mtu 9000
 ether 00:c0:dd:11:40:2e (auto-10g_twinax-fd-up) flowcontrol full
e1a-100: flags=0xb4c867<up,broadcast,running,multicast,tcpcksum> mtu 9000
 inet netmask 0xffffff00 broadcast
 ether 00:c0:dd:11:40:2c (auto-10g_twinax-fd-up) flowcontrol full
e1b-100: flags=0xb4c867<up,broadcast,running,multicast,tcpcksum> mtu 9000
 inet netmask 0xffffff00 broadcast
 ether 00:c0:dd:11:40:2e (auto-10g_twinax-fd-up) flowcontrol full

In the image below, a NetApp FAS 3200 series HA pair is connected to the Fabric Interconnect with one 10 GbE link to each fabric. The blue lines represent the logical layout of the iSCSI connections. Each UCS blade has two vNICs with the iSCSI VLAN set as native. One vNIC goes to Fabric A and one to Fabric B; failover is disabled on both.

Figure 7. NetApp Direct Connect iSCSI

vNIC fabric failover is not needed and should not be used with iSCSI, because the host OS will use MPIO to determine which paths to storage are active and to handle any path failures. The best practice is not to use FF at all with iSCSI traffic in UCS unless a host multipathing driver cannot be used for some reason.

NAS and UCS Appliance Ports

When using NFS or CIFS with Appliance ports, it is essential to have upstream L2 Ethernet switching to allow traffic to flow in certain failure scenarios. This is not required for iSCSI-only traffic, given the use of the host MPIO stack, which manages path failures and recovery end to end. Appliance ports were introduced in UCS 1.4 and were designed to allow a direct connection between the UCS Fabric Interconnect and the NetApp storage controller. An Appliance port is essentially a server port under the covers that also does MAC learning. Given that the Appliance port is a server port, the same policies apply to it in terms of uplink or border ports. Appliance ports have border ports associated with them, and there is a network control policy that determines what to do in the event the last available border or uplink port goes down.
Appliance ports, like server ports, have an uplink or border port assigned either via static or dynamic pinning. By default, the loss of the last uplink port will result in the Appliance port being taken down. One can change the network control policy under the Appliance cloud to have this be a warning only. For NFS configurations the default should be used, which will down the Appliance port if the last uplink port is lost. This ensures more deterministic failover in the topology. UCS Fabric Interconnects cannot run as vPC peers, so from a UCS FI standpoint there is an active/passive data path for NFS traffic to the same IP address. You can of course do active/active I/O from both FIs to different backend NetApp volumes and controllers. One should utilize the Interface Group (ifgrp) feature to manage Ethernet ports on the NetApp controller. This feature was called Virtual Interface (VIF) in earlier releases of Data ONTAP. There are single-mode ifgrps and multimode ifgrps. Single-mode ifgrps are active-standby, while multimode ifgrps can be static or dynamic, supporting LACP port channels. A detailed description of this feature can be found in the Data ONTAP Network Management Guide.

Figure 8 shows the best practice topology for using UCS Appliance ports with NFS/CIFS traffic. This is the steady state or reference topology from which we will examine various failure cases in subsequent sections. There are some key principles one must understand when using Appliance ports in such a configuration. The second-level VIF provides further grouping of multiple multimode VIFs, and it provides a standby path in the event the primary multimode VIF fails. The primary multimode VIF will be constructed of an LACP port channel, so this complete failure is unlikely. Failure processing within UCS and within the NetApp controllers is not coordinated. This means that what one tier might see as a failure, the other will not. This is the fundamental reason for the upstream L2 switching infrastructure shown in subsequent diagrams. NetApp VIF failovers are based on link state only. This means that if something happens within the UCS system and it does not translate to a link down being sent to the NetApp controller, then no failure processing will occur on the NetApp controller. Conversely, a NetApp interface group port migration may not translate to any failure on the UCS tier that would trigger a vNIC migration. Now let us walk through a few failure cases and diagrams where these principles will be evident. Here is the steady state traffic flow between the UCS and NetApp systems. The red arrows show the traffic path from the UCS vNICs to the NetApp exported volumes.
Figure 8. Steady State Direct Connect Appliance Ports with NFS

Failure Scenario 1: UCS FI Fails (Simplest Failure Case)

In this case the UCS FI on the left fails. UCS will see this as a situation where the vNIC is moved from the A side to the B side by the Fabric Failover feature. There are no issues here, as the traffic fails over to the standby VIF: NetApp sees the FI failure as a link down event on the primary interface and thus expects to start seeing, and accepting, traffic on the standby.
Figure 9. Failure Case 1: FI Is Lost

Recovery from Failure Scenario 1: FI Is Repaired, Rebooted

When the FI on the left comes back online, the traffic will not automatically go back to the steady state unless one uses the NetApp favor option when configuring the VIF interfaces. This is considered a best practice and should always be done. If it is not done, the traffic will flow through the upstream L2 infrastructure, as shown in Figure 10 below. Recall that Appliance ports do MAC learning and have associated uplink ports, which is how this traffic pattern is able to flow. A sketch of the ifgrp configuration with the favor option follows.
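The following is a minimal Data ONTAP 7-Mode sketch of the ifgrp layout described above: two first-level multimode (LACP) ifgrps, one toward each fabric, grouped under a second-level single-mode ifgrp, with the favor option selecting the preferred first-level ifgrp. The port names, ifgrp names, VLAN, and IP address are hypothetical example values, and in practice these commands would also be added to /etc/rc so they persist across reboots.

FAS3240-A> ifgrp create lacp ifgrp-a -b ip e1a e1b
FAS3240-A> ifgrp create lacp ifgrp-b -b ip e2a e2b
FAS3240-A> ifgrp create single ifgrp-top ifgrp-a ifgrp-b
FAS3240-A> ifgrp favor ifgrp-a
FAS3240-A> vlan create ifgrp-top 100
FAS3240-A> ifconfig ifgrp-top-100 192.168.100.11 netmask 255.255.255.0 mtusize 9000 partner ifgrp-top-100

With the favor option set, traffic returns to the Fabric A path on its own once the repaired FI comes back online rather than continuing through the upstream L2 network.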
Figure 10. Data Path Upon Recovery of FI

Failure Scenario 2: UCS Failure of Chassis IOM or All Uplinks to FI

This is the most complex and confusing failure condition. UCS will see this as an event upon which to move the vNIC to Fabric B; however, this will NOT appear as a link down event from the NetApp perspective. Thus, in this scenario, to allow traffic to continue to flow an upstream L2 device must be present, as shown in the following diagram. Traffic will not be able to reach the NetApp volumes without the use of the upstream L2 network.
Figure 11. Failure Case 2: Loss of IOM

A natural question to ask at this point is the following: if you need to have upstream L2 devices anyway, then why use Appliance ports in the first place? The answer is that the upstream network is only used in this very rare failure condition, and the remainder of the time your storage traffic is going directly from UCS to the array. When the IOM or links are restored in the diagram above, traffic flows normally as in the steady state, since the NetApp controller never made any interface primary changes.

Failure Scenario 3: Underlying Multimode VIF Failure (Appliance Port)

This scenario shows a case where the Appliance port link or port channel fails. NetApp sees this as a link down event and now expects to see traffic on the standby link. However, UCS does not see this as a link down event for the vNIC and would thus keep the vNIC assigned to Fabric A. The uplink port on the FI enables the traffic to go out the FI to the upstream L2 network, back to Fabric B, and then to the standby VIF on the NetApp controller. This is shown below.
Figure 12. Loss of Appliance Port Links

Recovery from this is similar to what has been discussed before. Be sure to use the favor option in the VIF configuration to allow the steady state to return.

Failure Scenario 4: Last Uplink on UCS FI Fails

As of the UCS 1.4(3) release, this problem is identical to the failure of an FI itself, which we saw in Failure Scenario 1. The default policy is to bring down the Appliance ports if the last uplink is down.
Figure 13. Loss of Last Uplink Port on the FI

Failure Scenario 5: NetApp Controller Failure

This case examines what happens when one of the two NetApp controllers fails. A NetApp Controller Failover (CFO) event will take place, assigning ownership of the volumes to the remaining controller. However, from a UCS perspective nothing happened, so the vNIC will stay on FI-A. Again, the only way traffic can continue to flow is through the upstream L2 network, since the failures are not coordinated. The following diagram shows this.
Figure 14. Loss of NetApp Controller

Recovery from this event is a manual process, as a user must initiate the controller giveback command to return everything to the steady state. Here is a summary of the key principles when using Appliance ports for NFS traffic with NetApp arrays:
- You must have upstream L2 Ethernet for the failure cases; one cannot just deploy a storage array and UCS hardware.
- Always enable the IP storage VLAN on the FI uplinks for the data traffic failover scenarios.
- Provision the uplink bandwidth with consideration of the different failure cases discussed. A minimum of two uplink ports per FI is required; a Port Channel is the ideal configuration.
- Do not deploy 1 Gig uplinks with IP storage unless the user community can accept a sudden performance decrease during a failure event. At a minimum, a pair of 10 Gig links per multimode VIF is recommended.

Conclusion

There are a wide variety of ways to connect external storage to the UCS system. The long standing industry best practices for FC, FCoE, iSCSI, and NFS/CIFS apply with no changes for UCS unless the user wishes to deploy a
direct connect topology with UCS and the storage. Using both NetApp and UCS in a combined best practices manner results in reliable, flexible, and scalable compute and storage infrastructures.

References

Cisco UCS Manager GUI Configuration Guides
Cisco UCS B-Series OS Installation Guides
NetApp Interoperability Matrix (Requires NOW account)
NetApp Fibre Channel and iSCSI Configuration Guide (Requires NOW account)
NetApp Data ONTAP Documentation (Requires NOW account)

Appendix I: FC Adapter Policies Explained

Scope

This appendix explains the exposed settings for the Unified Computing System Manager (UCSM) FC adapter policies and outlines those parameters which are hardcoded and not configurable by the user. It also outlines which settings are applicable to which adapter model. It is important to note that the default values shown in this document and in UCSM have been carefully chosen by Cisco engineering and should only be changed after careful consideration, following the guidance for that particular parameter.

Purpose of the FC Adapter Policies

UCS Manager provides the ability to create specific adapter policies for different combinations of adapters, operating systems, and storage arrays. The policies should be viewed as a way of ensuring the different error handling parameters are configured correctly for the previously mentioned combinations; that is the main objective of the policy. There are very few performance based parameters exposed in the FC adapter policies as of the 2.0(1q) release of UCS.

Finding and Creating New FC Adapter Policies

The operational steps to view and create adapter policies are shown below.
Default Settings
Exposed Parameters Detailed Review

Transmit Queues
- Description: Number of transmit queue resources to allocate.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), all supported operating systems.
- Default value: 1. Valid range of values: 1.
- Recommendations on changing the default: No need to change.

Transmit Queue Ring Size
- Description: Number of descriptors in each transmit queue. This has to do with Extended Link Services (ELS) and Common Transport (CT) FC frames for generic services. This will not affect performance at all.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), all supported operating systems.
- Default value: 64.
- Recommendations on changing the default: No need to change.

Receive Queues
- Description: Number of receive queue resources to allocate.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), all supported operating systems.
- Default value: 1. Valid range of values: 1.
- Recommendations on changing the default: No need to change.

Receive Queue Ring Size
- Description: Number of descriptors in each receive queue. This has to do with Extended Link Services (ELS) and Common Transport (CT) FC frames for generic services. This will not affect performance at all.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), all supported operating systems.
- Default value: 64.
- Recommendations on changing the default: No need to change.

SCSI I/O Queues
- Description: Number of SCSI I/O queue resources to allocate.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), all supported operating systems.
- Default value: 1. Valid range of values: 1 (the GUI is misleading in showing that this can go to 8).
- Recommendations on changing the default: Do not change.

SCSI I/O Queue Ring Size
- Description: Number of descriptors in each SCSI I/O queue. This value can affect performance.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), all supported operating systems.
- Default value: 512.
- Recommendations on changing the default: This value is already maximized by default, thus yielding the optimal performance.

FCP Error Recovery
- Description: Enables or disables the FC-TAPE protocol for sequence level error recovery with tape devices. This enables or disables the Read Exchange Concise (REC) and Sequence Retransmission Request (SRR) functions in the VIC firmware.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), all supported operating systems.
- Default value: Disabled. Valid range of values: Enabled or Disabled.
- Recommendations on changing the default: Change to Enabled when connecting to tape drive libraries.

Flogi Retries
- Description: Number of times the FLOGI is retried before the operation is aborted.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), M71KR-E/Q, M72KR-E/Q on all supported operating systems.
- Default value: 8. Valid range of values: any; -1 means an infinite number of retries.
- Recommendations on changing the default: Check the storage array vendor's documentation for a recommended value for this parameter.

Flogi Timeout (ms)
- Description: Number of milliseconds to wait before timing out the FLOGI exchange.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), M71KR-E/Q, M72KR-E/Q on all supported operating systems.
- Default value: 4000.
- Recommendations on changing the default: Check the storage array vendor's documentation for a recommended value for this parameter.

Plogi Retries
- Description: Number of times the PLOGI is retried before the operation is aborted.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC) on all supported operating systems.
- Default value: 8.
- Recommendations on changing the default: Check the storage array vendor's documentation for a recommended value for this parameter.

Plogi Timeout (ms)
- Description: Number of milliseconds to wait before timing out the PLOGI exchange.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC) on all supported operating systems.
- Recommendations on changing the default: Check the storage array vendor's documentation for a recommended value for this parameter.

Error Detect Timeout (ms)
- Description: Retry interval (how many milliseconds to wait) for various FC commands; part of the FC standard that is being deprecated moving forward.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), M71KR-E/Q, M72KR-E/Q on all supported operating systems.
- Default value: 2000.
- Recommendations on changing the default: Check the storage array vendor's documentation for a recommended value for this parameter.

Port Down Timeout (ms)
- Description: Number of milliseconds a remote FC port should be offline before informing the SCSI upper layer that the port has failed. This is important for host multipathing drivers, and it is one of the key indicators used for error processing.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC) on all supported operating systems.
- Default value: (NOTE: the ESX recommended value is 10000).
- Recommendations on changing the default: Check the storage array vendor's documentation for a recommended value for this parameter.

Port Down I/O Retry
- Description: Number of times I/O is sent back to the upper SCSI layer when a remote port is down before failing the I/O.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC) on Windows only.
- Default value: 8.
- Recommendations on changing the default: Check the storage array vendor's documentation for a recommended value for this parameter.

Link Down Timeout (ms)
- Description: Number of milliseconds the uplink should stay offline before informing the SCSI layer that the uplink is down and fabric connectivity is lost.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC) on Windows only.
- Recommendations on changing the default: Check the storage array vendor's documentation for a recommended value for this parameter.

Resource Allocation Timeout (ms)
- Description: Resource allocation timeout value, as part of the general FC specification.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), M71KR-E/Q, M72KR-E/Q on all supported operating systems.
- Default value: 10,000. This is a hardcoded value.
- Recommendations on changing the default: Check the storage array vendor's documentation for a recommended value for this parameter.

IO Throttle Count
- Description: Maximum number of outstanding I/O requests, data or control, per HBA. Note that this is *not* the LUN queue depth, which is discussed later in this document. If this value is exhausted then I/O will wait in the queue for servicing.
- Valid adapter models and operating systems: Emulex and QLogic CNA models; currently the Cisco Virtual Interface Card (VIC) and its driver ignore this parameter.
- Default value: 16.
- Recommendations on changing the default: Check the storage array vendor's documentation for a recommended value for this parameter.

Max LUNs Per Target
- Description: Number of LUNs behind a target controller that the FC driver will export/show.
- Valid adapter models and operating systems: VIC, QLogic, and Emulex adapters on all supported operating systems.
- Default value: 256. Note: the recommended value for ESX and Linux is 1024.
- Recommendations on changing the default: This value will vary with each operating system, and the user should consult the latest documentation from their respective OS vendor.

Interrupt Mode
- Description: Method used to send interrupts to the operating system from the driver. Message Signaled Interrupts - Extended (MSI-X) is part of the PCI 3.0 specification and results in better performance.
- Valid adapter models and operating systems: Cisco Virtual Interface Card (VIC), M71KR/M72KR E/Q on all supported operating systems except Windows, where this is ignored.
- Default value: MSI-X. Valid range of values: MSI-X, MSI, INTX.
- Recommendations on changing the default: Do not change unless the operating system cannot support this method.
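Not part of the adapter policy itself, but a quick way to confirm from a Linux host what the fnic driver actually applied is to read the standard SCSI and FC host attributes in sysfs. This is a minimal sketch assuming a typical sysfs layout; the device names (host0, sdb) are hypothetical examples, and the exact attributes available vary by driver and kernel version.

# WWPN of the vHBA as seen by the OS; it should match the WWPN assigned in the service profile
cat /sys/class/fc_host/host0/port_name

# per-LUN queue depth and SCSI command timeout for one block device
cat /sys/block/sdb/device/queue_depth
cat /sys/block/sdb/device/timeout

The first value ties the OS-visible HBA back to the UCS service profile, and the queue_depth value is where the hardcoded LUN queue depth discussed in the next section shows up on the host.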
Hardcoded Parameters

LUN Queue Depth

This value affects performance in an FC environment when the host throughput is limited by the various queues that exist in the FC driver and SCSI layer of the operating system. The Cisco VIC adapter sets this value to 32 per LUN on ESX and Linux and 255 on Windows, and it does not expose this parameter in the FC adapter policy. Emulex and QLogic expose this setting using their host based utilities. Many customers have asked how to change this value using the Cisco VIC adapter. Cisco is considering this request as an enhancement for a future release. However, FC performance with the VIC adapter has been excellent, and there are no cases in evidence (that the author is aware of) indicating that this setting is not optimal at its current value. It should be noted that this is the default value recommended by VMware for ESX and by other operating system vendors.

Array Vendor Considerations

Most storage array vendors will document the key error handling values they require for their controllers for a specific adapter model and operating system. These values have been determined after significant testing of the performance aspects as well as the various failure scenarios. These values should be consulted in the documentation and support matrices of the respective storage vendor.

Appendix II: Cisco and NetApp Support Matrices and References

Many customers ask both Cisco and NetApp engineers how to navigate the various support matrices the two companies maintain. This section attempts to provide links to the key resources as well as describe how a combination gets officially supported by both companies.

Cisco HCLs

The following is a public link to the Cisco server HCL. These documents cover operating system versions, blade models, adapter models, and the necessary Ethernet and FC driver versions for these combinations. A common error is that the customer does not verify that the operating system driver is at the correct level for a given OS version. The Cisco VIC drivers (enic and fnic) are the ONLY UCS host components that exist, but their versions must be carefully adhered to. Cisco also maintains a public link showing which FC switches and storage arrays have been tested and supported against different UCS versions. This document will show the FC switch code levels required for a given release of UCS and, generally, whether or not an array vendor is supported on this combination. For the details from NetApp on what exactly they support with a given UCS combination, one must consult the NetApp Interoperability Matrix website and associated tools. As of the UCS 2.0(2) release, the information in this matrix will be moved into the UCS HCL referenced above.

NetApp Interoperability Matrix Tool

NetApp's interoperability group runs hosts, adapters, and switches at varying firmware and OS revisions through a rigorous set of tests to see that all of the components in a customer's SAN will work together correctly. For this reason it is important to ensure that your configuration and firmware revisions match one of the rows in the NetApp IMT.
Process Flow to Check Your Configuration

1. Check the Cisco Server HCL to ensure your blade/adapter/OS combination is supported.
2. Use the Cisco Server HCL to ensure you have the right driver versions in your OS image.
3. Check the Cisco Storage HCL (the second URL provided above) to make sure the UCS version you want to deploy is supported against the version of your FC switching infrastructure. You should also check here that the array vendor is supported against that version.
4. Use the NetApp IMT to confirm the ONTAP and UCS versions and to review any caveats for a UCS + switch + NetApp array combination, which may be listed as a note or alert on the IMT row matching your configuration.
vsphere Networking vsphere 6.0 ESXi 6.0 vcenter Server 6.0 EN-001391-01
vsphere 6.0 ESXi 6.0 vcenter Server 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more
Configuration Maximums
Topic Configuration s VMware vsphere 5.1 When you select and configure your virtual and physical equipment, you must stay at or below the maximums supported by vsphere 5.1. The limits presented in the
HP Virtual Connect Ethernet Cookbook: Single and Multi Enclosure Domain (Stacked) Scenarios
HP Virtual Connect Ethernet Cookbook: Single and Multi Enclosure Domain (Stacked) Scenarios Part number 603028-003 Third edition August 2010 Copyright 2009,2010 Hewlett-Packard Development Company, L.P.
Fibre Channel and iscsi Configuration Guide
Fibre Channel and iscsi Configuration Guide for the Data ONTAP 8.0 Release Family NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone:
Introduction to MPIO, MCS, Trunking, and LACP
Introduction to MPIO, MCS, Trunking, and LACP Sam Lee Version 1.0 (JAN, 2010) - 1 - QSAN Technology, Inc. http://www.qsantechnology.com White Paper# QWP201002-P210C lntroduction Many users confuse the
Dell PowerEdge Blades Outperform Cisco UCS in East-West Network Performance
Dell PowerEdge Blades Outperform Cisco UCS in East-West Network Performance This white paper compares the performance of blade-to-blade network traffic between two enterprise blade solutions: the Dell
IP SAN Best Practices
IP SAN Best Practices A Dell Technical White Paper PowerVault MD3200i Storage Arrays THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.
N_Port ID Virtualization
A Detailed Review Abstract This white paper provides a consolidated study on the (NPIV) feature and usage in different platforms and on NPIV integration with the EMC PowerPath on AIX platform. February
Implementing Storage Concentrator FailOver Clusters
Implementing Concentrator FailOver Clusters Technical Brief All trademark names are the property of their respective companies. This publication contains opinions of StoneFly, Inc. which are subject to
Cisco Unified Computing System with Microsoft Hyper-V Recommended Practices
Cisco Unified Computing System with Microsoft Hyper-V Recommended Practices White Paper January 2014 Preface As both Cisco Unified Computing System (Cisco UCS ) and Microsoft Hyper-V continue to gain market
UCS Network Utilization Monitoring: Configuration and Best Practice
UCS Network Utilization Monitoring: Configuration and Best Practice Steve McQuerry Technical Marketing Engineer Unified Computing Systems Cisco Systems, Inc. Document Version 1.0 1 Copyright 2013 Cisco
Citrix XenDesktop: Best Practices with Cisco UCS
Global Alliance Architects Citrix XenDesktop: Best Practices with Cisco UCS 2 Contents Overview...3 An Overview of Cisco UCS...3 Design Considerations...5 Prerequisites...6 Pool Design...7 Management IP
A Platform Built for Server Virtualization: Cisco Unified Computing System
A Platform Built for Server Virtualization: Cisco Unified Computing System What You Will Learn This document discusses how the core features of the Cisco Unified Computing System contribute to the ease
Compellent Storage Center
Compellent Storage Center Microsoft Multipath IO (MPIO) Best Practices Guide Dell Compellent Technical Solutions Group October 2012 THIS BEST PRACTICES GUIDE IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY
Setup for Failover Clustering and Microsoft Cluster Service
Setup for Failover Clustering and Microsoft Cluster Service ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until the
Unified Computing Systems
Unified Computing Systems Cisco Unified Computing Systems simplify your data center architecture; reduce the number of devices to purchase, deploy, and maintain; and improve speed and agility. Cisco Unified
Data ONTAP DSM 4.1 For Windows MPIO
Data ONTAP DSM 4.1 For Windows MPIO Installation and Administration Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone:
The virtualization of SAP environments to accommodate standardization and easier management is gaining momentum in data centers.
White Paper Virtualized SAP: Optimize Performance with Cisco Data Center Virtual Machine Fabric Extender and Red Hat Enterprise Linux and Kernel-Based Virtual Machine What You Will Learn The virtualization
ADVANCED NETWORK CONFIGURATION GUIDE
White Paper ADVANCED NETWORK CONFIGURATION GUIDE CONTENTS Introduction 1 Terminology 1 VLAN configuration 2 NIC Bonding configuration 3 Jumbo frame configuration 4 Other I/O high availability options 4
hp ProLiant network adapter teaming
hp networking june 2003 hp ProLiant network adapter teaming technical white paper table of contents introduction 2 executive summary 2 overview of network addressing 2 layer 2 vs. layer 3 addressing 2
Netapp Interoperability Matrix
Netapp Interoperability Matrix FC, FCoE, iscsi and NAS (NFS) Storage Interoperability The Interoperability Matrix (IMT) defines the components and versions that have been qualified and which can be used
UCS Manager Configuration Best Practices (and Quick Start)
UCS Manager Configuration Best Practices (and Quick Start) UCS TME White Paper by Jeff Silberman Version 1.1 What You Will Learn The UCS Manager (UCSM) presents a new paradigm for server management. Among
VMware vsphere 4.1 with ESXi and vcenter
VMware vsphere 4.1 with ESXi and vcenter This powerful 5-day class is an intense introduction to virtualization using VMware s vsphere 4.1 including VMware ESX 4.1 and vcenter. Assuming no prior virtualization
Cisco, Citrix, Microsoft, and NetApp Deliver Simplified High-Performance Infrastructure for Virtual Desktops
Cisco, Citrix, Microsoft, and NetApp Deliver Simplified High-Performance Infrastructure for Virtual Desktops Greater Efficiency and Performance from the Industry Leaders Citrix XenDesktop with Microsoft
Next Generation Data Center Networking.
Next Generation Data Center Networking. Intelligent Information Network. עמי בן-עמרם, יועץ להנדסת מערכות [email protected] Cisco Israel. 1 Transparency in the Eye of the Beholder With virtualization, s
Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4
Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4 Application Note Abstract This application note explains the configure details of using Infortrend FC-host storage systems
High Availability and MetroCluster Configuration Guide For 7-Mode
Data ONTAP 8.2 High Availability and MetroCluster Configuration Guide For 7-Mode NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1(408) 822-6000 Fax: +1(408) 822-4501 Support telephone:
Deliver Fabric-Based Infrastructure for Virtualization and Cloud Computing
White Paper Deliver Fabric-Based Infrastructure for Virtualization and Cloud Computing What You Will Learn The data center infrastructure is critical to the evolution of IT from a cost center to a business
PassTest. Bessere Qualität, bessere Dienstleistungen!
PassTest Bessere Qualität, bessere Dienstleistungen! Q&A Exam : 642-999 Title : Implementing Cisco Data Center Unified Computing Version : Demo 1 / 5 1.When upgrading a standalone Cisco UCS C-Series server,
Cisco Nexus 5548UP. Switch Configuration Guide for Dell PS Series SANs. A Dell Deployment and Configuration Guide
Cisco Nexus 5548UP Switch Configuration Guide for Dell PS Series SANs Dell Storage Engineering October 2015 A Dell Deployment and Configuration Guide Revisions Date February 2013 October 2013 March 2014
Fibre Channel over Ethernet in the Data Center: An Introduction
Fibre Channel over Ethernet in the Data Center: An Introduction Introduction Fibre Channel over Ethernet (FCoE) is a newly proposed standard that is being developed by INCITS T11. The FCoE protocol specification
Server and Storage Consolidation with iscsi Arrays. David Dale, NetApp
Server and Consolidation with iscsi Arrays David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individual members may use this
Cisco Data Center 3.0 Roadmap for Data Center Infrastructure Transformation
Cisco Data Center 3.0 Roadmap for Data Center Infrastructure Transformation Cisco Nexus Family Provides a Granular, Cost-Effective Path for Data Center Evolution What You Will Learn As businesses move
Data Center Networking Designing Today s Data Center
Data Center Networking Designing Today s Data Center There is nothing more important than our customers. Data Center Networking Designing Today s Data Center Executive Summary Demand for application availability
Configuration Maximums
Configuration s vsphere 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions
Using Multipathing Technology to Achieve a High Availability Solution
Using Multipathing Technology to Achieve a High Availability Solution Table of Contents Introduction...3 Multipathing Technology...3 Multipathing I/O Implementations...5 Storage Redundancy...5 Infortrend
How To Design A Data Centre
DATA CENTRE TECHNOLOGIES & SERVICES RE-Solution Data Ltd Reach Recruit Resolve Refine 170 Greenford Road Harrow Middlesex HA1 3QX T +44 (0) 8450 031323 EXECUTIVE SUMMARY The purpose of a data centre is
Windows Server 2008 R2 Hyper-V Live Migration
Windows Server 2008 R2 Hyper-V Live Migration Table of Contents Overview of Windows Server 2008 R2 Hyper-V Features... 3 Dynamic VM storage... 3 Enhanced Processor Support... 3 Enhanced Networking Support...
Redefine Network Visibility in the Data Center with the Cisco NetFlow Generation Appliance
White Paper Redefine Network Visibility in the Data Center with the Cisco NetFlow Generation Appliance What You Will Learn Modern data centers power businesses through a new generation of applications,
Citrix XenDesktop: Best Practices with Cisco UCS
Global Alliance Architects Citrix XenDesktop: Best Practices with Cisco UCS 2 Contents Overview...3 An Overview of Cisco UCS...3 Design Considerations...5 Prerequisites...6 Pool Design...7 Management IP
FCoE Deployment in a Virtualized Data Center
FCoE Deployment in a irtualized Data Center Satheesh Nanniyur ([email protected]) Sr. Staff Product arketing anager QLogic Corporation All opinions expressed in this presentation are that of
Preparation Guide. How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment.
Preparation Guide v3.0 BETA How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment. Document version 1.0 Document release date 25 th September 2012 document revisions 1 Contents 1. Overview...
VMware vsphere 5.0 Evaluation Guide
VMware vsphere 5.0 Evaluation Guide Advanced Networking Features TECHNICAL WHITE PAPER Table of Contents About This Guide.... 4 System Requirements... 4 Hardware Requirements.... 4 Servers.... 4 Storage....
Implementing Cisco Data Center Unified Computing (DCUCI)
Certification CCNP Data Center Implementing Cisco Data Center Unified Computing (DCUCI) 5 days Implementing Cisco Data Center Unified Computing (DCUCI) is designed to serve the needs of engineers who implement
Brocade One Data Center Cloud-Optimized Networks
POSITION PAPER Brocade One Data Center Cloud-Optimized Networks Brocade s vision, captured in the Brocade One strategy, is a smooth transition to a world where information and applications reside anywhere
Part 1 - What s New in Hyper-V 2012 R2. [email protected] Datacenter Specialist
Part 1 - What s New in Hyper-V 2012 R2 [email protected] Datacenter Specialist Microsoft Cloud OS Vision Public Cloud Azure Virtual Machines Windows Azure Pack 1 Consistent Platform Windows Azure
IOmark- VDI. Nimbus Data Gemini Test Report: VDI- 130906- a Test Report Date: 6, September 2013. www.iomark.org
IOmark- VDI Nimbus Data Gemini Test Report: VDI- 130906- a Test Copyright 2010-2013 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VDI, VDI- IOmark, and IOmark are trademarks of Evaluator
SAP Solutions on VMware Infrastructure 3: Customer Implementation - Technical Case Study
SAP Solutions on VMware Infrastructure 3: Table of Contents Introduction... 1 SAP Solutions Based Landscape... 1 Logical Architecture... 2 Storage Configuration... 3 Oracle Database LUN Layout... 3 Operations...
Linux Host Utilities 6.1 Installation and Setup Guide
Linux Host Utilities 6.1 Installation and Setup Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S.A. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone: +1 (888) 4-NETAPP
Feature Comparison. Windows Server 2008 R2 Hyper-V and Windows Server 2012 Hyper-V
Comparison and Contents Introduction... 4 More Secure Multitenancy... 5 Flexible Infrastructure... 9 Scale, Performance, and Density... 13 High Availability... 18 Processor and Memory Support... 24 Network...
Deployment Guide. How to prepare your environment for an OnApp Cloud deployment.
Deployment Guide How to prepare your environment for an OnApp Cloud deployment. Document version 1.07 Document release date 28 th November 2011 document revisions 1 Contents 1. Overview... 3 2. Network
Customer Education Services Course Overview
Customer Education Services Course Overview Accelerated SAN Essentials (UC434S) This five-day course provides a comprehensive and accelerated understanding of SAN technologies and concepts. Students will
Dell High Availability Solutions Guide for Microsoft Hyper-V R2. A Dell Technical White Paper
Dell High Availability Solutions Guide for Microsoft Hyper-V R2 A Dell Technical White Paper THIS WHITE PAPER IS FOR INFORMATIONAL PURPOPERATING SYSTEMS ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL
Dell EqualLogic Red Hat Enterprise Linux 6.2 Boot from SAN
Dell EqualLogic Red Hat Enterprise Linux 6.2 Boot from SAN A Dell EqualLogic best practices technical white paper Storage Infrastructure and Solutions Engineering Dell Product Group November 2012 2012
SanDisk ION Accelerator High Availability
WHITE PAPER SanDisk ION Accelerator High Availability 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com Table of Contents Introduction 3 Basics of SanDisk ION Accelerator High Availability 3 ALUA Multipathing
iscsi: Accelerating the Transition to Network Storage
iscsi: Accelerating the Transition to Network Storage David Dale April 2003 TR-3241 WHITE PAPER Network Appliance technology and expertise solve a wide range of data storage challenges for organizations,
Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center
Expert Reference Series of White Papers Planning for the Redeployment of Technical Personnel in the Modern Data Center [email protected] www.globalknowledge.net Planning for the Redeployment of
Emulex OneConnect NIC Teaming and Multiple VLAN Driver and Application Release Notes
Emulex OneConnect NIC Teaming and Multiple VLAN Driver and Application Release Notes Version (Kit): 2.5.25-2 Date: August 2011 Purpose and Contact Information These release notes describe the new features,
Hardware and Software Interoperability Matrix
Hardware and Software Interoperability Matrix This document provides interoperability matrices for Cisco Unified Computing components and configurations that have been tested and validated by Cisco, by
vrealize Operations Manager Customization and Administration Guide
vrealize Operations Manager Customization and Administration Guide vrealize Operations Manager 6.0.1 This document supports the version of each product listed and supports all subsequent versions until
Virtual SAN Design and Deployment Guide
Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore
EMC ViPR Controller. User Interface Virtual Data Center Configuration Guide. Version 2.4 302-002-416 REV 01
EMC ViPR Controller Version 2.4 User Interface Virtual Data Center Configuration Guide 302-002-416 REV 01 Copyright 2014-2015 EMC Corporation. All rights reserved. Published in USA. Published November,
vsphere Networking ESXi 5.0 vcenter Server 5.0 EN-000599-01
ESXi 5.0 vcenter Server 5.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions
VERITAS Backup Exec 9.0 for Windows Servers
WHITE PAPER Data Protection Solutions for Network Attached Storage VERITAS Backup Exec 9.0 for Windows Servers VERSION INCLUDES TABLE OF CONTENTS STYLES 1 TABLE OF CONTENTS Background...3 Why Use a NAS
Installing Intercloud Fabric Firewall
This chapter contains the following sections: Information About the Intercloud Fabric Firewall, page 1 Prerequisites, page 1 Guidelines and Limitations, page 2 Basic Topology, page 2 Intercloud Fabric
N5 NETWORKING BEST PRACTICES
N5 NETWORKING BEST PRACTICES Table of Contents Nexgen N5 Networking... 2 Overview of Storage Networking Best Practices... 2 Recommended Switch features for an iscsi Network... 2 Setting up the iscsi Network
StorSimple Appliance Quick Start Guide
StorSimple Appliance Quick Start Guide 5000 and 7000 Series Appliance Software Version 2.1.1 (2.1.1-267) Exported from Online Help on September 15, 2012 Contents Getting Started... 3 Power and Cabling...
SAN Implementation Course SANIW; 3 Days, Instructor-led
SAN Implementation Course SANIW; 3 Days, Instructor-led Course Description In this workshop course, you learn how to connect Windows, vsphere, and Linux hosts via Fibre Channel (FC) and iscsi protocols
FlexArray Virtualization
Updated for 8.2.1 FlexArray Virtualization Installation Requirements and Reference Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support
IP SAN Fundamentals: An Introduction to IP SANs and iscsi
IP SAN Fundamentals: An Introduction to IP SANs and iscsi Updated April 2007 Sun Microsystems, Inc. 2007 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, CA 95054 USA All rights reserved. This
Deploying Windows Streaming Media Servers NLB Cluster and metasan
Deploying Windows Streaming Media Servers NLB Cluster and metasan Introduction...................................................... 2 Objectives.......................................................
Citrix XenServer Design: Designing XenServer Network Configurations
Citrix XenServer Design: Designing XenServer Network Configurations www.citrix.com Contents About... 5 Audience... 5 Purpose of the Guide... 6 Finding Configuration Instructions... 6 Visual Legend... 7
Dell High Availability Solutions Guide for Microsoft Hyper-V
Dell High Availability Solutions Guide for Microsoft Hyper-V www.dell.com support.dell.com Notes and Cautions NOTE: A NOTE indicates important information that helps you make better use of your computer.
Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com
Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R O r a c l e V i r t u a l N e t w o r k i n g D e l i v e r i n g F a b r i c
Overview and Frequently Asked Questions Sun Storage 10GbE FCoE PCIe CNA
Overview and Frequently Asked Questions Sun Storage 10GbE FCoE PCIe CNA Overview Oracle s Fibre Channel over Ethernet (FCoE technology provides an opportunity to reduce data center costs by converging
Cloud Ready: Architectural Integration into FlexPod with Microsoft Private Cloud Solution
Cloud Ready: Architectural Integration into FlexPod with Microsoft Private Cloud Solution What You Will Learn Industry trends indicate a dramatic data center transformation to shared infrastructure. To
