Brocade VCS Fabric Technology and NAS with NFS Validation Test


NetApp / VMware vSphere 5.0 / Red Hat Enterprise Linux

This material outlines sample configurations and associated test results of Brocade VCS Fabric technology with NFS file servers.

CONTENTS

Preface
    Overview
    Purpose of This Document
    Audience
    Objectives
    Brocade VCS Features
    Summary
    Related Documents
    About Brocade
Test Case #1: NetApp FAS3050
    Test Case #1 Summary
    Topology
    Hardware Resources
    Compute Resources
    Software Resources
    Test 1: I/O Verification
    Test 2: Link Failure
    Test 3: Active Path Failure
    Test 4: Switch Failure
    Test 5: vMotion with Link/Path Failure
Test Case #2: Red Hat Enterprise Linux NFS Server
    Test Case #2 Results
    Topology
    Hardware Resources
    Compute Resources
    Software Resources
    Test 1: I/O Verification
    Test 2: Link Failure
    Test 3: Switch Failure
        Test Procedure
Appendix A: Test Case #1
    Brocade VCS Deployment Considerations: Dynamic LACP vLAG with NetApp
    Brocade VDX Deployment Considerations: Static LACP vLAG with ESXi Server and Brocade 1020 CNA
    Brocade VCS Deployment Considerations: Enable Brocade VDX Jumbo Frame Support
    Brocade VCS Deployment Considerations: Enable Ethernet Pause/Flow Control Support
    NetApp FAS3050 Deployment Procedure: Volume and VIF Creation
        NetApp Volume Creation
        NetApp VIF Creation
    VMware vSphere Client Deployment Procedure: ESXi Datastore Creation
        ESX Datastore Creation
    VMware vSphere Client Deployment Procedure: Virtual Machine Creation
        VM Creation
    VMware ESXi Deployment Procedure: NIC Teaming for vSwitch
        VMware ESXi NIC Teaming for vSwitch
Appendix B: Test Case #2
    Brocade VCS Fabric Configuration: Static LACP vLAG with Red Hat Enterprise Linux NFS Server and Brocade 1020 CNA
    Red Hat Enterprise Linux NFS Server: NIC Bonding Configuration
Appendix C: References

Brocade VCS Fabric Technology and NAS with NFS Validation Test 2 of 52

1 PREFACE

1.1 Overview

As per the Gartner 2011 NAS Magic Quadrant report, the midrange and high-end Network-Attached Storage (NAS) market for 2010 experienced a growth rate of 33 percent over 2009 in terms of hardware vendor revenue. This favorable growth rate was a result of several factors: fast-growing unstructured file data, widespread availability of data de-duplication/compression in NAS storage solutions, ease of management, support of virtualized environments such as VMware, and the flexibility of unified storage. [1] Additionally, IDC predicts that by 2014 more than 83 percent of enterprise storage system capacity will be shipped for file-based data, taking the Compound Annual Growth Rate (CAGR) for file serving storage capacity to 2.5 times the CAGR for block storage capacity. [2] As also discussed in the Gartner 2011 NAS Magic Quadrant report, NAS support of the VMware environment has become more prominent in the past year, as more and more NAS vendors have invested in this area to increase the appeal of their products. Additionally, NAS products use industry-standard remote file protocols, including Network File System (NFS). The Gartner report also observes that because some applications, such as Oracle database applications and VMware, are built on files instead of blocks, NAS is increasingly used as application storage for those environments, providing ease-of-use benefits to users as compared with storage arrays that use block protocols, which may offer higher performance than NAS. As a result, many midrange and high-end NAS products are used to consolidate storage for both server applications and home directories for PC clients. Given the tremendous growth rates in NAS and the large number of existing deployments, this presents an opportunity to demonstrate that Brocade VCS Fabric technology interoperates with the underlying NFS protocol used by these systems.
1.2 Purpose of This Document

This document provides the validation of Brocade VCS Fabric technology with two implementations of the Network File System (NFS) protocol: the NetApp FAS3050 NFS filer and Red Hat Enterprise Linux configured as an NFS server. This validation demonstrates that existing deployments using NFS will interoperate with Brocade VCS fabrics and exhibit resiliency to failure scenarios. This ensures that inputs/outputs (I/Os) between clients and servers operate in a non-disruptive manner. The testing demonstrates NFS interoperability with Brocade VCS Fabric technology, while providing sample configurations and test results associated with fabric failover scenarios. This document should provide peace of mind for both network and storage administrators and architects who are already using NFS and are considering the use of Brocade VCS Fabric technology.

1.3 Audience

The content in this document is written for a technical audience, including solution architects, solutioneers, system engineers, and technical development representatives. This document assumes the audience is familiar with Brocade VCS Fabric technology.

1.4 Objectives

The objectives of this document are to evaluate NFS protocol interoperability with Brocade VCS Fabric technology in the following two test cases:

Test Case #1 with the NetApp FAS3050: This test consists of a 6-node Brocade VCS fabric with 2 ESXi hosts using Brocade 1020 FCoE (Fibre Channel over Ethernet) CNAs and the NetApp FAS3050. For this test, the Virtual Machine (VM) datastore associated with the ESXi cluster resides on a volume of the NetApp FAS3050. Iometer is used as a measurement and characterization tool for this test.

Test Case #2 with the Red Hat Enterprise Linux NFS server: This test consists of a 4-node Brocade VCS fabric with a Red Hat Enterprise Linux NFS server. For this test, Spirent is used as a characterization tool for the NFS clients. Spirent emulates many NFS clients accessing the Red Hat Enterprise Linux NFS server share; each of the emulated NFS clients mounts the NFS share of the Red Hat Enterprise Linux NFS server.

Both tests demonstrate that the topology of the network can change, with limited implication to the application, using NFS as an underlying protocol. Please note that this does not include a full configuration or validation of specific features on ESXi, Red Hat, NetApp, and VMware.

1.5 Brocade VCS Features

The following Brocade VCS features are used in the validation testing for both test cases. These features are considered best practices when utilizing NFS over a Brocade VCS fabric. Please refer to Appendices A and B for the actual configuration procedures for these features.

1.5.1 Brocade Inter-Switch Link (ISL) Trunks

For both Test Case #1 and Test Case #2, Brocade Inter-Switch Link (ISL) Trunking is used within the Brocade VCS fabric to provide additional redundancy and load balancing between the NFS clients and NFS server. Typically, multiple links between two switches are bundled together in a Link Aggregation Group (LAG) to provide redundancy and load balancing. Setting up a LAG requires configuration on the switches, including selecting a hash-based load balancing algorithm based on source-destination IP or MAC addresses. All flows with the same hash traverse the same link, regardless of the total number of links in a LAG. This might result in some links within a LAG, such as those carrying flows to a storage target, being overutilized and packets being dropped, while other links in the LAG remain underutilized.
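The contrast between this hash-pinned LAG balancing and the frame-by-frame distribution described below can be illustrated with a short sketch. The flow names, frame counts, and CRC-based hash are illustrative assumptions, not Brocade's actual algorithms:

```python
import zlib

FRAME = 1500          # bytes per frame (illustrative)
LINKS = 2             # member links in the LAG/trunk

def hash_lag(flows):
    """Hash-based LAG: every frame of a flow pins to one member link."""
    load = [0] * LINKS
    for src, dst, frames in flows:
        link = zlib.crc32(f"{src}->{dst}".encode()) % LINKS
        load[link] += frames * FRAME
    return load

def frame_spray(flows):
    """ISL-trunk-style frame-by-frame balancing: frames alternate links."""
    load, n = [0] * LINKS, 0
    for _, _, frames in flows:
        for _ in range(frames):
            load[n % LINKS] += FRAME
            n += 1
    return load

# One heavy flow and one light flow toward the NAS (hypothetical names).
flows = [("esx-1", "nas", 9000), ("esx-2", "nas", 1000)]
print(hash_lag(flows))     # a heavy flow can overload one member link
print(frame_spray(flows))  # [7500000, 7500000] -- always even
```

The hash-based result depends on which link each flow happens to hash to; the per-frame spray is even by construction, which is the property the ISL trunk section below relies on.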
Instead of LAG-based switch interconnects, Brocade VCS Ethernet fabrics automatically form ISL trunks when multiple connections are added between two Brocade VDX switches. Simply adding another cable increases bandwidth, providing linear scalability of switch-to-switch traffic, and this does not require any configuration on the switch. In addition, ISL trunks use a frame-by-frame load balancing technique, which evenly balances traffic across all members of the ISL trunk group.

1.5.2 Equal-Cost Multipath (ECMP)

A standard link-state routing protocol that runs at Layer 2 determines if there are Equal-Cost Multipaths (ECMPs) between RBridges in an Ethernet fabric and load balances the traffic to make use of all available ECMPs. If a neighbor switch is reachable via several interfaces with different bandwidths, all of them are treated as equal-cost paths. While it is possible to set the link cost based on the link speed, such an algorithm complicates the operation of the fabric. Simplicity is a key value of Brocade VCS Fabric technology, so an implementation is chosen in the test cases that does not consider the bandwidth of the interface when selecting equal-cost paths. This is a key feature needed to expand network capacity, to keep ahead of customer bandwidth requirements.

1.5.3 Virtual Link Aggregation Group (vLAG)

For both Test Case #1 and Test Case #2, Virtual Link Aggregation Groups (vLAGs) are used for the ESXi hosts, the NetApp FAS3050, and the Red Hat Enterprise Linux NFS server. In the case of the NetApp FAS3050, a dynamic Link Aggregation Control Protocol (LACP) vLAG is used. In the case of both ESXi hosts and the Red Hat Enterprise Linux NFS server, static LACP vLAGs are used. While Brocade ISLs are used as interconnects between Brocade VDX switches within a Brocade VCS fabric, industry-standard LACP LAGs are supported for connecting to other network devices outside the Brocade VCS fabric.
Typically, LACP LAGs can only be created using ports from a single physical switch to a second physical switch. In a Brocade VCS fabric, a vLAG can be created using ports from two Brocade VDX switches to a device to which both VDX switches are connected. This provides an additional degree of device-level redundancy, while providing active-active link-level load balancing. For additional configuration details, please refer to Appendices A and B.

1.5.4 Pause Flow Control

For these test cases, Pause Flow Control is enabled on the vLAG-facing interfaces connected to the ESXi hosts, the NetApp FAS3050, and the Red Hat Enterprise Linux NFS server. Brocade VDX Series switches support the Pause Flow Control feature. IEEE 802.3x Ethernet pause and Ethernet Priority-based Flow Control (PFC) are used to prevent dropped frames by slowing traffic at the source end of a link. When a port on a switch or host is not ready to receive more traffic from the source, perhaps due to congestion, it sends pause frames to the source to pause the traffic flow. When the congestion is cleared, the port stops requesting the source to pause traffic flow, and traffic resumes without any frame drop. When Ethernet pause is enabled, pause frames are sent to the traffic source. Similarly, when PFC is enabled, there is no frame drop; pause frames are sent to the source switch. For configuration details on Ethernet pause and PFC on Brocade VDX Series switches, please refer to Appendix A.

1.5.5 Ultra-Low Latency

The Brocade VDX series of switches provides industry-leading performance and ultra-low latency through wire-speed ports with 600-nanosecond port-to-port latency and hardware-based Brocade ISL Trunking. This is helpful for environments that require high availability, such as providing Ethernet storage connectivity for FCoE, Internet Small Computer Systems Interface (iSCSI), and NAS.

1.5.6 Jumbo Frames

Brocade VDX Series switches support the transport of jumbo frames. Jumbo frames are enabled by default on the Brocade ISL trunks. However, to accommodate end-to-end jumbo frame support on the network for the edge hosts, this feature can be enabled under the vLAG interfaces connected to the ESXi hosts, the NetApp FAS3050, and the Red Hat Enterprise Linux NFS server. On these interfaces, the Maximum Transmission Unit (MTU) is raised from its default to 9216 to optimize the network for jumbo frame support.
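The benefit of the larger MTU can be sketched with simple per-frame overhead arithmetic. The header sizes below are the standard Ethernet/IP/TCP values, and the 1 GiB transfer size is an illustrative assumption:

```python
# Sketch: per-frame overhead for standard vs. jumbo frames when moving 1 GiB.
# Header sizes are the usual ones (20 B IP, 20 B TCP); a 9216-byte interface
# MTU leaves room for a 9000-byte host MTU plus encapsulation.
IP, TCP = 20, 20
PAYLOAD = 1 << 30                       # 1 GiB of NFS data to move

def frames_needed(mtu: int) -> int:
    tcp_payload = mtu - IP - TCP        # usable bytes per frame
    return -(-PAYLOAD // tcp_payload)   # ceiling division

std, jumbo = frames_needed(1500), frames_needed(9000)
print(std, jumbo, round(std / jumbo, 1))   # 735440 119838 6.1
```

Roughly a 6x reduction in frame count means proportionally fewer headers to build and fewer interrupts to service on the hosts, which is why jumbo frames are a common NFS best practice.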
For additional configuration details of jumbo frames on Brocade VDX Series switches, please refer to Appendix A. For additional details on best practices for enabling jumbo frames on the host devices that transmit the frames, please reference the NetApp and VMware vSphere Storage Best Practices guide.

1.6 Summary

These results support the validation that Brocade VCS Fabric technology interoperates with two implementations of the Network File System (NFS) protocol: the NetApp FAS3050 NFS filer and the Red Hat Enterprise Linux NFS server. This validation demonstrates that existing deployments using NFS interoperate with Brocade VCS fabrics and exhibit resiliency to failure scenarios. This ensures that I/Os between clients and servers operate in a non-disruptive manner.

1.7 Related Documents

For more information about Brocade VCS Fabric technology, please see the Brocade VCS Fabric Technical Architecture brief, as well as the Brocade Network OS (NOS) Admin Guide, the NOS Command Reference, and the Brocade NOS Release Notes.

For more information about the Brocade VDX Series of switches, please see the product data sheets for the Brocade VDX 6710, Brocade VDX 6720, and Brocade VDX 6730 Data Center Switches.

1.8 About Brocade

As information becomes increasingly mobile and distributed across the enterprise, organizations are transitioning to a highly virtualized infrastructure, which often increases overall IT complexity. To simplify this process, organizations must have reliable, flexible network solutions that utilize IT resources whenever and wherever needed, enabling the full advantages of virtualization and cloud computing. As a global provider of comprehensive networking solutions, Brocade has more than 15 years of experience in delivering Ethernet, storage, and converged networking technologies that are used in the world's most mission-critical environments. Based on the Brocade One strategy, this unique approach reduces complexity and disruption by removing network layers, simplifying management, and protecting existing technology investments. As a result, organizations can utilize cloud-optimized networks to achieve their goals of non-stop operations in highly virtualized infrastructures where information and applications are available anywhere.

2. TEST CASE #1: NETAPP FAS3050

For this test, one type of NAS storage is used: the NetApp FAS3050 serves as the NAS server. This storage option is commonly deployed when using NAS storage pools with VMware vSphere. Test Case #1 with the NetApp FAS3050 consists of a 6-node Brocade VCS fabric with 2 ESXi hosts using Brocade 1020 CNAs and the NetApp FAS3050. For this test, the VM datastore associated with the ESXi cluster resides on a volume of the NetApp FAS3050. Iometer is used as a measurement and characterization tool for this test. The demonstration shows that the topology of the network can change with limited implication to the application using NFS as an underlying protocol. Please note that this does not include a full configuration or validation of specific features on ESXi, Red Hat, NetApp, and VMware. Lastly, please note that for this test case, the NetApp FAS3050 used is limited to Gigabit Ethernet (GbE) interfaces. Therefore, a Brocade VDX 6710 was used to extend Gigabit Ethernet to both RB1 and RB2. Had the NetApp FAS3050 included 10 GbE interfaces, it would simply be connected to RB1 and RB2, without the need for RB21 and RB22.

2.1 Test Case #1 Summary

All tests for Test Case #1 were performed successfully, with no issues. This validation demonstrates that existing deployments using NFS do interoperate with Brocade VCS fabrics and exhibit resiliency to failure scenarios, with I/Os between clients and servers continuing in a non-disruptive manner. The following tests were conducted:

1. Test 1 validated baseline I/O using the shortest equal-cost paths in the fabric, with NFS clients on RB3 and RB4 accessing the NetApp FAS3050 on the vLAG interface on RB21 and RB22.
2. Test 2 validated that I/O continues between a VM using an NFS client accessing a storage pool on a NetApp FAS3050 when failing a link in the Brocade ISL trunk.
3. Test 3 validated that I/O continues between a VM using an NFS client accessing a storage pool on a NetApp FAS3050 when a complete path fails.
4. Test 4 validated that I/O flows are not impacted between a VM using an NFS client accessing a storage pool on a NetApp FAS3050 when a switch in the Brocade VCS fabric fails.
5. Test 5 validated that there was no impact throughout the duration of a successful vMotion between ESX servers acting as NFS clients with a link/path failure.

The following table summarizes the test results for Test Case #1.

Test  Description                                                              Results
1     Baseline of shortest paths and traffic distribution in the fabric        Pass
2     Perform link failure within a Brocade ISL trunk                          Pass
3     Perform a complete active path failure within the Brocade VCS fabric     Pass
4     Perform multiple switch failures within the Brocade VCS fabric           Pass
5     Perform vMotion between ESX servers as NFS clients with switch failure   Pass

2.2 Topology

Figure 1. NetApp FAS3050 NFS validation topology and components for Test Case #1.

2.3 Hardware Resources

The following equipment was used in this configuration:

Description        Quantity  Revisions
Brocade VDX 6710   2         Brocade NOS
Brocade VDX 6720   3         Brocade NOS
Brocade VDX 6730   1         Brocade NOS
NetApp FAS3050     1         Release 7.3.6; NFS

2.4 Compute Resources

The following equipment was used in this configuration:

Description                Quantity  Revisions
VMware ESX                 2         Intel Xeon X5670, 2.93 GHz, 2-socket, 6 cores, 32 GB RAM; 2 x Brocade 1020 CNA
VMware vSphere Management  1         Intel Xeon X3430, 2.4 GHz, quad-core, 8 GB RAM

2.5 Software Resources

The following software was used in this configuration:

Description        Revision
Brocade NOS
NetApp Data ONTAP  7.3.6
VMware vSphere     5.0
VMware ESX

2.6 Test 1: I/O Verification

The purpose of test 1 is to validate that baseline I/O uses the shortest equal-cost paths in the fabric, with the NFS clients on RB3 and RB4 accessing the NetApp FAS3050 on the vLAG interface on RB21 and RB22.

Figure 2. NetApp FAS3050 NFS validation topology for test 1.

2.6.1 Test Procedure

Step 1: Run Iometer on the Windows VM. Note that NIC teaming for the vSwitch was configured with the Route based on IP hash policy. Please see Appendix A for details. Note that the I/O path shown with a green line shows valid ECMPs in the fabric between ESXi and the NetApp FAS3050. The actual flows that traverse these links vary, depending on the number of hosts, MAC addresses, and flows.

Step 2: Execute the following command on all Brocade VDX ISLs and VDX vLAG interfaces to confirm that traffic flow and I/O path are evenly distributed as expected. The following is an example:

VDX6710-RB21# do show interface port-channel 33 | in rate

2.6.2 Expected Results

The expected results for test 1 are to show distributed I/O due to an active/active NIC teaming vLAG end-to-end from the ESX VM NFS clients to the NetApp server datastore.

2.6.3 Actual Results

The actual results for test 1 confirm evenly distributed I/O due to an active/active NIC teaming vLAG end-to-end from the ESX VM NFS clients to the NetApp server datastore. Performance from Iometer on the Windows VM showed a maximum throughput of approximately 119 MB/sec.
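VMware describes the Route based on IP hash policy as an XOR of the last octets of the source and destination IP addresses, modulo the number of active uplinks. A simplified sketch of that selection follows; the IP addresses are illustrative, not the test bed's:

```python
# Sketch of ESXi "Route based on IP hash" uplink selection, per VMware's
# description: XOR of the last octets of src/dst IP, modulo active uplinks.
# Addresses below are hypothetical examples.
def uplink_for(src_ip: str, dst_ip: str, uplinks: int = 2) -> int:
    last_octet = lambda ip: int(ip.rsplit(".", 1)[1])
    return (last_octet(src_ip) ^ last_octet(dst_ip)) % uplinks

# A given client/server IP pair always maps to the same uplink, so a single
# datastore connection cannot exceed one physical link's bandwidth. That is
# consistent with the ~119 MB/sec ceiling (roughly GbE line rate) observed,
# since the NetApp side of this topology is attached via GbE.
print(uplink_for("10.0.0.21", "10.0.0.33"))   # 0
print(uplink_for("10.0.0.22", "10.0.0.33"))   # 1
```

Different IP pairs can land on different uplinks, which is how the active/active teaming spreads multiple clients across both vLAG members.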

2.7 Test 2: Link Failure

The purpose of test 2 is to validate that I/O flows are not impacted between the NFS client VM and the NetApp FAS3050 while introducing a link failure on one link in the Brocade trunk, which is an active shortest path in the fabric. The links that are failed as part of this test are highlighted below in red.

Figure 3. NetApp FAS3050 NFS validation topology for test 2.
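The behavior this test expects (even distribution across trunk members, collapse onto the surviving link, rebalance on recovery) can be modeled as a minimal sketch. The frame counters are illustrative, assuming an even per-frame spray, not switch telemetry:

```python
# Sketch: trunk member failure and recovery for a 2-link ISL trunk.
# Frames spray evenly over whichever members are up.
def spray(frames: int, up: list[bool]) -> list[int]:
    active = [i for i, ok in enumerate(up) if ok]
    counts = [0] * len(up)
    for n in range(frames):
        counts[active[n % len(active)]] += 1
    return counts

print(spray(1000, [True, True]))    # [500, 500] -- balanced
print(spray(1000, [True, False]))   # [1000, 0]  -- all on the survivor
print(spray(1000, [True, True]))    # [500, 500] -- rebalanced on recovery

# The offered load also fits the survivor: ~119 MB/sec of NFS traffic is
# well under a single 10 Gbps (~1250 MB/sec) member link.
assert 119 < 10_000 / 8
```

This mirrors the pass criteria below: no throughput loss unless offered load exceeds the surviving member's capacity.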

The NFS clients on RB3 and RB4 continue to access the NetApp FAS3050 on the active/active vLAG interface on RB21 and RB22.

2.7.1 Test Procedure

Step 1: Run Iometer on the Windows VM for the duration of the test.

Step 2: Execute the following commands on the desired Brocade VDX switches to validate baseline traffic prior to failure. The following is an example:

VDX6730-RB3# show in te 3/0/1 | in rate
VDX6730-RB3# show in te 3/0/2 | in rate

Step 3: Now that active traffic on both Brocade ISLs is confirmed, an ISL in the trunk is failed. In this example, interface TenGbE 3/0/1 is failed, which is one of the active ISLs in the trunk between RB3 and RB1.

VDX6730-RB3(config)# in te 3/0/1
VDX6730-RB3(conf-if-te-3/0/1)# shut

Step 4: Execute the following commands on the desired Brocade VDX switches to observe traffic levels after the interface failure. The following is an example:

VDX6730-RB3# show in te 3/0/1 | in rate
VDX6730-RB3# show in te 3/0/2 | in rate

Step 5: Execute the following commands on the desired Brocade VDX switches to re-enable the failed ISL:

VDX6730-RB3# conf
Entering configuration mode terminal
VDX6730-RB3(config)# in te 3/0/1
VDX6730-RB3(conf-if-te-3/0/1)# no shut

Step 6: Execute the following commands on the desired Brocade VDX switches to observe traffic levels after re-enabling the interface. The following is an example:

VDX6730-RB3# show in te 3/0/1 | in rate
VDX6730-RB3# show in te 3/0/2 | in rate

Repeat Steps 1-6: Steps 1-6 were also performed for the other Brocade ISLs that are highlighted in red under section 2.7.

2.7.2 Expected Results

The expected results for test 2 are to confirm that distributed I/O is not impacted while failing an active Brocade member ISL within the fabric. The NFS clients on RB3 and RB4 continue to access the NetApp FAS3050 on the active/active vLAG interface on RB21 and RB22.

All I/Os from the NFS clients on RB3 and RB4 fail over to the remaining link on the Brocade trunks going into RB21 and RB22, where the active/active vLAG is located on the NFS server. Since the trunks have 20 Gbps of aggregated bandwidth, there should be no loss of throughput, unless there is more than 10 Gbps of I/O going through the fabric when a single 10 Gbps link is failed during the test. When the links in the trunk are re-enabled, the I/Os get evenly redistributed to all of the active links in the trunk.

2.7.3 Actual Results

The actual results for test 2 confirm that distributed I/O is not impacted while failing an active Brocade member ISL within the fabric. During the link failure in the Brocade trunk, while I/O was evenly distributed between the two 10 GbE links on the 20 GbE trunk, all I/O was immediately switched to the remaining 10 GbE link. When the link was re-enabled, all I/O was immediately rebalanced between both links again. The same results were observed for the other Brocade ISL trunk link failures in the fabric; these failures were performed for the other Brocade ISLs highlighted in red under section 2.7. Again, there was no disruption of I/O from the perspective of the Windows VM/NFS client. The maximum throughput of 119 MB/sec was the same before, during, and after the link failure.

2.8 Test 3: Active Path Failure

The purpose of test 3 is to validate that I/O flows are not impacted between the NFS client VM and the NetApp FAS3050 while introducing a complete active path failure. This is done by failing both interfaces in the Brocade trunk, which is an active shortest path in the fabric. The links that are failed as part of this test are highlighted below in red. Observe that all I/Os get rerouted via fabric switching onto the remaining paths.

Figure 4. NetApp FAS3050 NFS validation topology for test 3.

2.8.1 Test Procedure

Step 1: Run Iometer on the Windows VM for the duration of the test.

Step 2: Execute the following command on RB3 to validate the fabric topology prior to the path failure.

VDX6730-RB3# show fabric route topology
Total Path Count: 8
Src   Dst   Out   Out                     Nbr   Nbr
RB-ID RB-ID Index Interface   Hops  Cost  Index Interface  BW   Trunk
            Te 3/0/                       Te 1/0/7         20G  Yes
2     6     Te 3/0/                       Te 2/0/7         20G  Yes
4     2     Te 3/0/                       Te 1/0/7         20G  Yes
4     6     Te 3/0/                       Te 2/0/7         20G  Yes
21    2     Te 3/0/                       Te 1/0/7         20G  Yes
21    6     Te 3/0/                       Te 2/0/7         20G  Yes
22    6     Te 3/0/                       Te 2/0/7         20G  Yes
22    2     Te 3/0/                       Te 1/0/7         20G  Yes

In the base testing, it is known that the NFS clients on RB3 and RB4 take the shortest path to the NFS server. In the fabric topology prior to the path failure, note that the total path count is 8 and that, from RB3 to RB1, the hop count is 1 and the cost is 500.

Step 3: In this example, interfaces TenGigE 3/0/1 and TenGigE 3/0/2 are failed, in order to fail the active path that includes both active ISLs in the trunk between RB3 and RB1.

VDX6730-RB3(config)# in te 3/0/1
VDX6730-RB3(conf-if-te-3/0/1)# shut
VDX6730-RB3(conf-if-te-3/0/1)# in te 3/0/2
VDX6730-RB3(conf-if-te-3/0/2)# shut

Step 4: Execute the following command on RB3 to observe the fabric topology after the path failure:

VDX6730-RB3# show fabric route topology
Total Path Count: 5
Src   Dst   Out   Out                     Nbr   Nbr
RB-ID RB-ID Index Interface   Hops  Cost  Index Interface  BW   Trunk
            Te 3/0/                       Te 2/0/7         20G  Yes
2     6     Te 3/0/                       Te 2/0/7         20G  Yes
4     6     Te 3/0/                       Te 2/0/7         20G  Yes
21    6     Te 3/0/                       Te 2/0/7         20G  Yes
22    6     Te 3/0/                       Te 2/0/7         20G  Yes

Step 5: Execute the following command on the desired Brocade VDX switches to observe traffic levels after the interface failure. The following is an example:

VDX6710-RB21# show interface port-channel 33 | in rate

Step 6: Execute the following commands on the desired Brocade VDX switches to re-enable the failed ISLs:

VDX6730-RB3# conf
Entering configuration mode terminal
VDX6730-RB3(config)# in te 3/0/1
VDX6730-RB3(conf-if-te-3/0/1)# no shut
VDX6730-RB3(config)# in te 3/0/2
VDX6730-RB3(conf-if-te-3/0/2)# no shut
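The reroute exercised in Steps 3 and 4 can be sketched as a shortest-path recomputation over the fabric graph. The link list below is inferred from the figures and route-topology output, and the breadth-first search is a simplified stand-in for the fabric's actual link-state protocol:

```python
from collections import deque

# Inferred fabric links; each ISL trunk is one equal-cost edge.
edges = {("RB1", "RB2"), ("RB1", "RB3"), ("RB2", "RB3"),
         ("RB1", "RB4"), ("RB2", "RB4"), ("RB1", "RB21"), ("RB2", "RB22")}

def shortest_path(src, dst, down=frozenset()):
    """BFS shortest path, ignoring links listed in `down`."""
    adj = {}
    for a, b in edges - set(down):
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in adj.get(path[-1], []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])

print(shortest_path("RB3", "RB21"))                    # direct via RB1
print(shortest_path("RB3", "RB21", {("RB1", "RB3")}))  # reroutes via RB2
```

With the RB3-RB1 trunk down, the only shortest path runs through RB2 to RB1 and on to RB21, which matches the expected results described for this test.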

Step 7: Execute the following command on the desired Brocade VDX switches to observe traffic levels after re-enabling the interfaces. The following is an example:

VDX6710-RB22# show interface port-channel 33 | in rate

2.8.2 Expected Results

The expected results for test 3 are to confirm that distributed I/O is not impacted while introducing a complete active path failure. It should be observed that all I/Os are rerouted via fabric switching onto the remaining paths. The NFS clients on RB3 and RB4 should continue to access the NetApp FAS3050 on the active vLAG interface on RB21 and RB22. This test will show the resiliency of the fabric. In this example, the shortest path on the left side from RB3 to RB21 will go down between RB1 and RB3. Through Brocade VCS Ethernet fabric switching, all of the I/Os are redirected through RB2 to RB1, in order to get to RB21.

2.8.3 Actual Results

The actual results for test 3 confirm that distributed I/O is not impacted while introducing a complete active path failure. It is observed that all I/Os are rerouted via fabric switching onto the remaining paths. As expected, during the shortest path failure on the left side from the NFS client to the NetApp FAS3050, all I/Os were immediately switched over to the alternate path. Again, there was no disruption of I/O from the perspective of the Windows VM/NFS client. From the perspective of the Iometer application, there were no errors, and all I/Os continued to flow through the fabric. Upon re-enabling the shortest path, all I/Os failed back to the shortest path between RB3 and RB1 without any disruption.

2.9 Test 4: Switch Failure

The purpose of test 4 is to validate that I/O flows are not impacted between the NFS client VM and the NetApp FAS3050 while introducing multiple switch failures in the active path of the Brocade VCS fabric. The first switch to be reloaded, RB2, is the principal switch in the Brocade VCS fabric.
The second switch to be reloaded, RB3, is a switch directly connected to an active vLAG interface of the NFS client. For this test, all MACs were hashed onto the GbE 0/c interface of the NetApp FAS3050.

Figure 5. NetApp FAS3050 NFS validation topology for test 4.

2.9.1 Test Procedure

Step 1: Validate that all switches in the Brocade VCS fabric are up, and that the shortest paths between the NFS client and the server are being used, as depicted in test 1.

Step 2: Execute the following command on the desired Brocade VDX switches to validate the fabric topology prior to failure:

VDX6720-RB4# show fabric route topology
Total Path Count: 8
Src   Dst   Out   Out                     Nbr   Nbr
RB-ID RB-ID Index Interface   Hops  Cost  Index Interface  BW   Trunk
            Te 4/0/                       Te 1/0/10        20G  Yes
2     5     Te 4/0/                       Te 2/0/11        20G  Yes
3     5     Te 4/0/                       Te 2/0/11        20G  Yes
3     2     Te 4/0/                       Te 1/0/10        20G  Yes

21    2     Te 4/0/                       Te 1/0/10        20G  Yes
21    5     Te 4/0/                       Te 2/0/11        20G  Yes
22    5     Te 4/0/                       Te 2/0/11        20G  Yes
22    2     Te 4/0/                       Te 1/0/10        20G  Yes

Step 3: Execute the following command on any Brocade VDX switch to validate the existing principal switch in the fabric:

VDX6720-RB2# show fabric all
VCS Id: 1
Config Mode: Local-Only
Rbridge-id  WWN                       IP Address  Name
1           10:00:00:05:33:46:6A:8D               VDX6720-RB1
2           10:00:00:05:33:67:FE:A              > VDX6720-RB2 *
3           10:00:00:05:33:91:3A:E                VDX6730-RB3
4           10:00:00:05:33:67:CA:BC               VDX6720-RB4
21          10:00:00:05:33:8C:AD:                 VDX6710-RB21
22          10:00:00:05:33:8C:D3:FD               VDX6710-RB22
The Fabric has 6 Rbridge(s)

Step 4: Run Iometer on the Windows VM for the duration of the test.

Step 5: Now that it is confirmed that the principal switch is RB2, the disabling of the RB2 switch can proceed.

VDX6720-RB2# reload
Warning: Unsaved configuration will be lost. Are you sure you want to reload the switch? [y/n]:y
The system is going down for reload NOW!!
Broadcast message from root Wed Jan 18 18:44:
The system is going down for reboot NOW!!

Step 6: Execute the following command on the desired Brocade VDX switches to validate the fabric topology after the RB2 failure:

VDX6720-RB4# show fabric all
VCS Id: 1
Config Mode: Local-Only
Rbridge-id  WWN                       IP Address  Name
1           10:00:00:05:33:46:6A:8D             > VDX6720-RB1
3           10:00:00:05:33:91:3A:E                VDX6730-RB3
4           10:00:00:05:33:67:CA:BC               VDX6720-RB4 *

21          10:00:00:05:33:8C:AD:                 VDX6710-RB21
The Fabric has 4 Rbridge(s)

Step 7: Verify that I/Os are still running after the principal switch RB2 failure.

Step 8: Disable the RB3 switch.

VDX6730-RB3# reload
Warning: Unsaved configuration will be lost. Are you sure you want to reload the switch? [y/n]:y
The system is going down for reload NOW!!
Broadcast message from root Wed Jan 18 18:51:
The system is going down for reboot NOW!!

Step 9: Execute the following command on the desired Brocade VDX switches to validate the fabric topology after the RB3 failure:

VDX6720-RB4# show fabric all
VCS Id: 1
Config Mode: Local-Only
Rbridge-id  WWN                       IP Address  Name
1           10:00:00:05:33:46:6A:8D             > VDX6720-RB1
4           10:00:00:05:33:67:CA:BC               VDX6720-RB4 *
21          10:00:00:05:33:8C:AD:                 VDX6710-RB21

The Fabric has 3 Rbridge(s)

Step 10: Verify that I/Os are still running after the RB3 switch failure.

Step 11: Execute the following command on the desired Brocade VDX switches to observe traffic levels after the switch failures. The following is an example:

VDX6710-RB21# show interface port-channel 33 | in rate

2.9.2 Expected Results

The expected results for test 4 are to confirm that distributed I/O is not impacted while introducing multiple switch failures within the active path of the Brocade VCS fabric. It should be observed that all I/Os are rerouted via fabric switching onto the remaining paths. During the failure of the switches in the fabric impacting both end-device vLAGs, all Iometer I/Os will continue to flow. This will show little to no disruption of I/Os during the failure of the RB2 principal switch, as well as RB3. Since the NFS client has redundant connections on RB3 and RB4 on its vLAG, it retains continuous access to the NetApp FAS3050. In addition, the NetApp FAS3050 also has redundant connections on RB21 and RB22 on its vLAG. This test shows the resiliency of Brocade VCS Fabric technology and its end devices when vLAGs are used. Note that in this scenario, when the principal switch RB2 is rebooted, RB22 should also exit the fabric. When RB3 is then rebooted, a total of three switches will have left the fabric, and I/O will continue to flow from the NFS client to the server.

2.9.3 Actual Results

The actual results for test 4 confirm that distributed I/O is not impacted while introducing multiple switch failures in the active path of the Brocade VCS fabric. It is observed that all I/Os are rerouted via fabric switching onto the remaining paths. As expected, during the failure of switches RB2 and RB3, all Iometer traffic continued to flow between the NFS client and server. Brocade VCS Ethernet fabric switching utilized all available paths between the end devices and failed over I/O within sub-second convergence times.
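The fabric-membership changes observed in this test can be sketched as a simple connectivity check over the same inferred topology (a model of which RBridges remain reachable, not of the actual fabric protocol):

```python
# Sketch: which RBridges remain in the fabric after switch failures.
# Links are inferred from the route tables and figures; illustrative only.
links = {("RB1", "RB2"), ("RB1", "RB3"), ("RB2", "RB3"), ("RB1", "RB4"),
         ("RB2", "RB4"), ("RB1", "RB21"), ("RB2", "RB22")}

def fabric_members(failed: set) -> set:
    """Nodes still connected to RB1, taken as a surviving anchor switch."""
    adj = {}
    for a, b in links:
        if a not in failed and b not in failed:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    seen, stack = set(), ["RB1"]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adj.get(node, ()))
    return seen

print(sorted(fabric_members({"RB2"})))          # RB22 leaves too: 4 RBridges
print(sorted(fabric_members({"RB2", "RB3"})))   # 3 RBridges remain
```

Failing RB2 also isolates RB22 (its only fabric attachment), leaving 4 RBridges, and then failing RB3 leaves 3, matching the `show fabric all` outputs captured during the test.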
The vLAGs between the NFS server and NFS client to the fabric had redundant connections without having to connect to the same physical switch. This resulted in high availability of application traffic between the NFS client and server, even during switch failures.

Brocade VCS Fabric Technology and NAS with NFS Validation Test 21 of 52

Test 5: vMotion with Link/Path Failure

The purpose of test 5 is to validate that I/O flows are not impacted throughout the duration of a successful vMotion between ESX servers acting as NFS clients during a link/path failure. In this test scenario, two 10 GbE ESX servers are dual-connected to RBridges 3 and 4 of the Brocade VCS fabric with NIC Teaming/vLAG, using LACP/Port-Channels 44 and 55. There are two virtual machines between the ESX servers, one Red Hat Enterprise Linux 5 and the other a Windows 2008 Server, serving as NFS clients using the NetApp FAS3050 as the VM datastore. For the NFS server, there is 1 GbE NetApp storage that is dual-connected to the Brocade VCS fabric with NIC Teaming/vLAG using LACP/Port-Channel 33. See Figure 6.

Figure 6. NetApp FAS3050 NFS validation topology for test 5.

Test Procedure

Step 1: Validate that all switches in the Brocade VCS fabric are up and that the shortest paths between the NFS client and server are being used, as depicted in test 1.

Step 2: Execute the following command on the desired Brocade VDX switches to validate fabric topology prior to failure:

VDX6730-RB3# show fabric route topology
Total Path Count: 8
Src    Dst    Out    Out              Hops  Cost  Nbr    Nbr
RB-ID  RB-ID  Index  Interface                    Index  Interface  BW   Trunk
              Te 3/0/                             Te 1/0/7          20G  Yes
2      6      Te 3/0/                             Te 2/0/7          20G  Yes
4      6      Te 3/0/                             Te 2/0/7          20G  Yes
4      2      Te 3/0/                             Te 1/0/7          20G  Yes
21     6      Te 3/0/                             Te 2/0/7          20G  Yes
21     2      Te 3/0/                             Te 1/0/7          20G  Yes
22     6      Te 3/0/                             Te 2/0/7          20G  Yes
22     2      Te 3/0/                             Te 1/0/7          20G  Yes

Step 3: Execute the following command on any Brocade VDX switch to validate the member RBridges in the fabric:

VDX6730-RB3# show fabric all
VCS Id: 1
Config Mode: Local-Only
Rbridge-id  WWN                       IP Address  Name
1           10:00:00:05:33:46:6A:8D               >VDX6720-RB1
2           10:00:00:05:33:67:FE:A                VDX6720-RB2
3           10:00:00:05:33:91:3A:E                VDX6730-RB3 *
4           10:00:00:05:33:67:CA:BC               VDX6720-RB4
21          10:00:00:05:33:8C:AD:                 VDX6710-RB21
22          10:00:00:05:33:8C:D3:FD               VDX6710-RB22
The Fabric has 6 Rbridge(s)

Step 4: Validate the vSphere client view of the VM NFS clients and NetApp FAS3050 datastore while on ESX server #2 (on the right of the figure) before vMotion and prior to RB4 failure. Note that from ESX server #2 on the right, vmnic2 will be disabled when RB4 is failed. This is the vSphere view prior to RB4 failure:

Step 5: On the Windows VM, send I/Os throughout the duration of the vMotion and prior to RB4 failure, using Iometer to read/write to the NFS server.

Step 6: On the Linux VM, send I/Os before the vMotion and prior to RB4 failure, using dd to read/write to the NFS server.

Step 7: Begin vMotion.
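The dd read/write load in Step 6 can be sketched as follows. The target path, block size, and transfer count are illustrative assumptions; in the actual test the file would live on the NFS-mounted datastore rather than a local directory.

```shell
# Hypothetical stand-in for the dd traffic in Step 6. In the lab the
# target would sit on the NFS-backed datastore (e.g. a path under the
# datastore mount); /tmp keeps this sketch self-contained.
TARGET="${TARGET:-/tmp/ddtest.bin}"

# Write 64 MB of zeros to the target.
dd if=/dev/zero of="$TARGET" bs=1M count=64 2>/dev/null

# Read the file back to exercise the read path as well.
dd if="$TARGET" of=/dev/null bs=1M 2>/dev/null

# Print the resulting size so the run can be verified.
stat -c '%s' "$TARGET"
```

Running the write/read pair in a loop for the duration of the vMotion approximates the sustained I/O the procedure calls for.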

Step 8: Proceed with disabling the RB4 switch.

VDX6720-RB4# reload
Warning: Unsaved configuration will be lost. Are you sure you want to reload the switch? [y/n]:y
The system is going down for reload NOW!!
Broadcast message from root Tue Jan 23 14:22:
The system is going down for reboot NOW!!

Step 9: Execute the following command on any Brocade VDX switch to validate the member RBridges in the fabric after RB4 failure:

VDX6730-RB3# show fabric all
VCS Id: 1
Config Mode: Local-Only
Rbridge-id  WWN                       IP Address  Name
1           10:00:00:05:33:46:6A:8D               >VDX6720-RB1
2           10:00:00:05:33:67:FE:A                VDX6720-RB2
3           10:00:00:05:33:91:3A:E                VDX6730-RB3 *
21          10:00:00:05:33:8C:AD:                 VDX6710-RB21
22          10:00:00:05:33:8C:D3:FD               VDX6710-RB22
The Fabric has 5 Rbridge(s)

Step 10: Execute the following command on the desired Brocade VDX switches to validate fabric topology after RB4 failure:

VDX6730-RB3# show fabric route topology
Total Path Count: 6
Src    Dst    Out    Out              Hops  Cost  Nbr    Nbr
RB-ID  RB-ID  Index  Interface                    Index  Interface  BW   Trunk
              Te 3/0/                             Te 1/0/7          20G  Yes
2      6      Te 3/0/                             Te 2/0/7          20G  Yes
21     6      Te 3/0/                             Te 2/0/7          20G  Yes
21     2      Te 3/0/                             Te 1/0/7          20G  Yes
22     6      Te 3/0/                             Te 2/0/7          20G  Yes
22     2      Te 3/0/                             Te 1/0/7          20G  Yes
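The membership check in Steps 9 and 10 lends itself to simple automation. The sketch below parses captured `show fabric all` output and verifies the RBridge count; the capture text is an abbreviated stand-in, not the switch's exact formatting.

```shell
# Hypothetical post-failure check: extract the RBridge count from captured
# "show fabric all" output (e.g. gathered over an SSH session) and compare
# it with the expected value, 5 after RB4 leaves the fabric. The capture
# below is an abbreviated stand-in for the real CLI output.
CAPTURE='VCS Id: 1
Config Mode: Local-Only
The Fabric has 5 Rbridge(s)'

# Pull the count out of the summary line.
count=$(printf '%s\n' "$CAPTURE" | sed -n 's/^The Fabric has \([0-9]*\) Rbridge(s)$/\1/p')

if [ "$count" -eq 5 ]; then
    echo "fabric membership OK after RB4 failure"
else
    echo "unexpected RBridge count: $count"
fi
# prints "fabric membership OK after RB4 failure"
```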

Step 11: Validate the vSphere client view of the VM NFS clients and NetApp FAS3050 datastore during RB4 failure and vMotion:

Step 12: Validate the vSphere client view of the VM NFS clients and NetApp FAS3050 datastore during RB4 failure and after successful vMotion:

Step 13: Verify that I/Os are still running after switch RB4 failure: the Windows VM continues sending I/Os during vMotion and RB4 failure, using Iometer to read/write to the NFS server.

Step 14: Verify that I/Os are still running after switch RB4 failure: the Linux VM continues sending I/Os during vMotion and RB4 failure, using dd to write to the NFS server.

Expected Results

The expected results for test 5 are to confirm that distributed I/O is not impacted while introducing a switch failure within the active path of a Brocade VCS fabric while VM clients sending I/Os to the NFS server are moved to another ESX server using vMotion. During the vMotion, there will be a failure on RB4, but there should be no disruption of I/Os.

Actual Results

The actual results for test 5 confirm that distributed I/O is not impacted while introducing a switch failure in the active path of the Brocade VCS fabric while VM clients sending I/Os to the NFS server are moved to another ESX server using vMotion. Before, during, and after the RB4 failure and the vMotion of both VMs, while they were accessing the NFS server, there was no disruption of I/O. Note that one ping from the Windows VM was lost due to an in-flight frame dropped during the failure. It was observed that all I/Os were rerouted via fabric switching in the remaining paths.
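The sub-second convergence described above can be quantified with a simple probe. This sketch timestamps small synchronous writes so that any gap between consecutive timestamps bounds the I/O disruption window; the paths and probe interval are assumptions, and in the test the target would sit on the NFS mount.

```shell
# Hypothetical disruption probe: one small fsync'd write every 100 ms,
# each followed by a timestamp. A gap between consecutive timestamps
# noticeably larger than the probe interval bounds the I/O outage during
# failover. In the lab OUTFILE would live on the NFS mount; /tmp keeps
# the sketch self-contained.
OUTFILE="${OUTFILE:-/tmp/probe.bin}"
LOG="${LOG:-/tmp/probe.log}"
: > "$LOG"

for i in $(seq 1 20); do
    dd if=/dev/zero of="$OUTFILE" bs=4k count=1 conv=fsync 2>/dev/null
    date '+%s.%N' >> "$LOG"
    sleep 0.1
done

# Number of probes that completed (the gaps, not the count, reveal the
# length of any outage).
wc -l < "$LOG"
```

Running the probe across the switch reload and inspecting the timestamp gaps gives a concrete upper bound on the failover time.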

3. TEST CASE #2: RED HAT ENTERPRISE LINUX NFS SERVER

The topology used for validation is shown below. For this test, one type of NAS storage was used, with Red Hat Enterprise Linux configured as an NFS server.

Test Case #2 with Red Hat Enterprise Linux NFS server: This test consists of a 4-node Brocade VCS fabric with a Red Hat Enterprise Linux NFS server. For this test, Spirent is used as a characterization tool for NFS clients. Spirent emulates many NFS clients accessing the Red Hat Enterprise Linux NFS server share. Each of the NFS clients mounted the NFS share of the Red Hat Enterprise Linux NFS server. The purpose is to demonstrate that the topology of the network can change with limited impact on the application using NFS as the underlying protocol.

Please note that this does not include a full configuration or validation of specific features on ESXi, Red Hat, NetApp, and VMware. Also, please note that for this test case, the NetApp FAS3050 has Gigabit Ethernet interfaces. Therefore, a Brocade VDX 6710 was used to extend Gigabit Ethernet to both RB1 and RB2. However, for most cases, it is recommended that you use redundant links for all fabric-facing interfaces.

3.1 Test Case #2 Results

Test Case #2 was performed successfully, with no issues. The Brocade VCS fabric interoperates with the Network File System (NFS) protocol using the Red Hat Enterprise Linux NFS server. This validation demonstrates that existing deployments using NFS will interoperate with Brocade VCS fabrics and exhibit resiliency of input/output (I/O) between clients and servers in a non-disruptive manner.

This is a summary of the tests conducted for Test Case #2:

1. Test 1 validated that baseline I/O uses the shortest equal-cost paths in the fabric. NFS clients on RB3 and RB4 access the Linux NFS server on the active vLAG interface on RB1.
2. Test 2 validated that I/O flows are not impacted between the NFS clients emulated by Spirent Avalanche and the Red Hat Enterprise Linux NFS server while introducing a link failure on one link in the Brocade trunk, which is an active shortest path in the fabric.
3. Test 3 validated that I/O flows are not impacted between the NFS clients emulated by Spirent Avalanche and the Red Hat Enterprise Linux NFS server while introducing a switch failure in the active path of the Brocade VCS fabric.

This summarizes the test results for Test Case #2.

Test  Description                                                                     Results
1     Baseline of shortest paths and traffic distribution in the Brocade VCS fabric   Pass
2     Perform link failure within a Brocade ISL trunk                                 Pass
3     Perform multiple switch failures within the Brocade VCS fabric                  Pass
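For reference, a Red Hat Enterprise Linux 5 export of the share that the emulated clients mount might look as follows. The export path, client subnet, and options are assumptions; the document does not record the server's actual export configuration.

```shell
# Hypothetical /etc/exports entry on the RHEL 5 NFS server; the path,
# subnet, and options are illustrative, not taken from the test setup.
echo '/export/nfs_share 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports

# Re-read the export table and restart the NFS service, RHEL 5 style.
exportfs -ra
service nfs restart
```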

3.2 Topology

Figure 7. Red Hat Enterprise Linux NFS server validation topology and components for Test Case #2.

3.3 Hardware Resources

The following equipment was used in this configuration:

Description        Quantity  Revisions
Brocade VDX                  Brocade NOS
Brocade VDX                  Brocade NOS
Linux NFS Server   1         Red Hat Enterprise Linux Server Release 5.2 (Tikanga)
Spirent Avalanche

3.4 Compute Resources

The following equipment was used in this configuration:

Description       Quantity  Revisions
Linux NFS Server  1         HP ProLiant DL380G5p, 320 GB HD, 4 GB RAM & 1020 CNA

3.5 Software Resources

The following software was used in this configuration:

Description       Revision
Brocade NOS
Brocade NOS
Linux NFS Server  Red Hat Enterprise Linux Server release 5.2 (Tikanga)

3.6 Test 1: I/O Verification

The purpose of test 1 is to validate that baseline I/O uses the shortest equal-cost paths in the fabric. NFS clients on RB3 and RB4 access the Linux NFS server on the active vLAG interface on RB1.

Figure 8. Red Hat Enterprise Linux NFS server validation topology for test 1.

3.6.1 Test Procedure

Step 1: Execute the following command on all Brocade VDX ISLs and VDX vLAG interfaces to confirm traffic flow and that the I/O path is distributed as expected. The following is an example:

VDX6730-RB3# show in te 3/0/24 in rate

Step 2: Run Spirent Test Center, which emulates the NFS clients on RB3/RB4.
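Each emulated client effectively performs an NFS mount of the server's share. A manual equivalent on a single Linux host might look like the following; the server address, export path, and mount options are assumptions for illustration only.

```shell
# Hypothetical manual equivalent of one emulated NFS client. The server
# IP, export path, and NFSv3/TCP mount options are illustrative only.
mkdir -p /mnt/nfs_share
mount -t nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 \
    192.168.10.50:/export/nfs_share /mnt/nfs_share
```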

3.6.2 Expected Results

The expected results for test 1 are that all I/Os from each of the NFS clients on RB3 and RB4 will be spread evenly across the Brocade trunks going into RB1, where the active vLAG on the server is located.

3.6.3 Actual Results

The actual results for test 1 confirm that all I/Os from each of the NFS clients on RB3 and RB4 are spread evenly across the Brocade trunks going into RB1, where the active vLAG on the server is located. The total bandwidth from the 95 Spirent clients on RB3 (3/0/24) and RB4 (4/0/24) is received on the egress port going to the NFS server (1/0/23).

3.7 Test 2: Link Failure

The purpose of test 2 is to validate that I/O flows are not impacted between the NFS clients emulated by Spirent Avalanche and the Red Hat Enterprise Linux NFS server while introducing a link failure on one link in the Brocade trunk, which is an active shortest path in the fabric. The links that are failed as part of this test are highlighted below in red.

Figure 9. Red Hat Enterprise Linux NFS server validation topology for test 2.

The NFS clients on RB3 and RB4 continue to access the Red Hat Enterprise Linux NFS server on the active vLAG interface on RB1.

3.7.1 Test Procedure

Step 1: Execute the following command on the desired Brocade VDX switches to validate baseline traffic prior to failure. The following is an example:

VDX6730-RB3# show in te 3/0/1 in rate
VDX6730-RB3# show in te 3/0/2 in rate

Step 2: Run Spirent Avalanche for the duration of the test to emulate the NFS clients.

Step 3: Now that active traffic on both Brocade ISLs is confirmed, an ISL in the trunk is failed. In this example, interface TenGigE 3/0/1 on RB3 is failed, which is one of the active ISLs in the trunk between RB3 and RB1.

VDX6730-RB3# conf t
Entering configuration mode terminal
VDX6730-RB3(config)# in te 3/0/1
VDX6730-RB3(conf-if-te-3/0/1)# shut

Step 4: Execute the following command on the desired Brocade VDX switches to observe traffic levels after interface failure. The following is an example:

VDX6730-RB3# show in te 3/0/1 in rate
VDX6730-RB3# show in te 3/0/2 in rate

Step 5: Execute the following commands on the desired Brocade VDX switches to re-enable the failed ISL between RB3 and RB1.

VDX6730-RB3# conf t
Entering configuration mode terminal
VDX6730-RB3(config)# in te 3/0/1
VDX6730-RB3(conf-if-te-3/0/1)# no shut

Step 6: Execute the following command on the desired Brocade VDX switches to observe traffic levels after interface re-enabling. The following is an example:

VDX6730-RB3# show in te 3/0/1 in rate
VDX6730-RB3# show in te 3/0/2 in rate

Repeat Steps 1-6: Steps 1-6 were also performed for the other Brocade ISLs, highlighted in red under section 3.7.

3.7.2 Expected Results

The expected results for test 2 are to confirm that distributed I/O is not impacted while failing an active Brocade member ISL within the fabric. The NFS clients on RB3 and RB4 continue to access the Red Hat Enterprise Linux NFS server on the active vLAG interface on RB1. All I/Os from each of the NFS clients on RB3 and RB4 fail over to the remaining link on the Brocade trunk that goes into RB1, where the active vLAG on the server is located. The total bandwidth from the clients on RB3 (3/0/24) and RB4 (4/0/24) is received on the egress port going to the NFS server (1/0/23). There should be no loss in I/O, with the exception of in-flight frames during the failure of the link. When the links in the trunk are re-enabled, the I/Os should be evenly redistributed to all of the active links in the trunk. The same results should be observed for other Brocade ISL trunk link failures in the fabric. These failures were performed for the other Brocade ISLs highlighted in red under section 3.7.

3.7.3 Actual Results

The actual results for test 2 confirm that distributed I/O is not impacted while failing an active Brocade member ISL within the fabric.
During the link failure in the Brocade trunk, while I/O was evenly distributed between the two 10 GbE links of the 20 GbE trunk, all I/O was immediately switched to the remaining 10 GbE link. When the link was re-enabled, all I/O was immediately rebalanced between both links. The same results were observed for other Brocade ISL trunk link failures in the fabric. These failures were performed for the other Brocade ISLs highlighted in red under section 3.7. Again, there was no disruption of I/O from the perspective of the NFS clients.
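The before/after rate comparison used throughout this test can also be scripted. The sketch below checks that traffic rebalanced across both trunk members after re-enabling the link; the sampled rate values are illustrative stand-ins, since the NOS rate output is not reproduced verbatim in this document.

```shell
# Hypothetical balance check across the two trunk members after re-enable.
# The sampled rates stand in for values read from "show in te 3/0/1 in
# rate" and "show in te 3/0/2 in rate"; both numbers are illustrative.
RATE1=4900   # Mbps observed on te 3/0/1 (assumed sample)
RATE2=5100   # Mbps observed on te 3/0/2 (assumed sample)

TOTAL=$((RATE1 + RATE2))
DIFF=$((RATE1 - RATE2))
[ "$DIFF" -lt 0 ] && DIFF=$((-DIFF))

# Flag imbalance when the members differ by more than 10% of the total.
if [ $((DIFF * 10)) -le "$TOTAL" ]; then
    echo "trunk members balanced: ${RATE1}/${RATE2} Mbps"
else
    echo "trunk imbalance: ${RATE1}/${RATE2} Mbps"
fi
```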


More information

Virtualizing the SAN with Software Defined Storage Networks

Virtualizing the SAN with Software Defined Storage Networks Software Defined Storage Networks Virtualizing the SAN with Software Defined Storage Networks Introduction Data Center architects continue to face many challenges as they respond to increasing demands

More information

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R O r a c l e V i r t u a l N e t w o r k i n g D e l i v e r i n g F a b r i c

More information

Using NetApp Unified Connect to Create a Converged Data Center

Using NetApp Unified Connect to Create a Converged Data Center Technical Report Using NetApp Unified Connect to Create a Converged Data Center Freddy Grahn, Chris Lemmons, NetApp November 2010 TR-3875 EXECUTIVE SUMMARY NetApp extends its leadership in Ethernet storage

More information

What s New in VMware vsphere 5.5 Networking

What s New in VMware vsphere 5.5 Networking VMware vsphere 5.5 TECHNICAL MARKETING DOCUMENTATION Table of Contents Introduction.................................................................. 3 VMware vsphere Distributed Switch Enhancements..............................

More information

Networking Topology For Your System

Networking Topology For Your System This chapter describes the different networking topologies supported for this product, including the advantages and disadvantages of each. Select the one that best meets your needs and your network deployment.

More information

An Introduction to Brocade VCS Fabric Technology

An Introduction to Brocade VCS Fabric Technology WHITE PAPER www.brocade.com DATA CENTER An Introduction to Brocade VCS Fabric Technology Brocade VCS Fabric technology, which provides advanced Ethernet fabric capabilities, enables you to transition gracefully

More information

VMDC 3.0 Design Overview

VMDC 3.0 Design Overview CHAPTER 2 The Virtual Multiservice Data Center architecture is based on foundation principles of design in modularity, high availability, differentiated service support, secure multi-tenancy, and automated

More information

VXLAN: Scaling Data Center Capacity. White Paper

VXLAN: Scaling Data Center Capacity. White Paper VXLAN: Scaling Data Center Capacity White Paper Virtual Extensible LAN (VXLAN) Overview This document provides an overview of how VXLAN works. It also provides criteria to help determine when and where

More information

A High-Performance Storage and Ultra-High-Speed File Transfer Solution

A High-Performance Storage and Ultra-High-Speed File Transfer Solution A High-Performance Storage and Ultra-High-Speed File Transfer Solution Storage Platforms with Aspera Abstract A growing number of organizations in media and entertainment, life sciences, high-performance

More information

Ethernet Storage Best Practices

Ethernet Storage Best Practices Technical Report Ethernet Storage Best Practices David Klem, Trey Layton, Frank Pleshe, NetApp January 2010 TR-3802 TABLE OF CONTENTS 1 INTRODUCTION... 3 2 USING VLANS FOR TRAFFIC SEPARATION... 3 2.1 VLAN

More information

vsphere Networking ESXi 5.0 vcenter Server 5.0 EN-000599-01

vsphere Networking ESXi 5.0 vcenter Server 5.0 EN-000599-01 ESXi 5.0 vcenter Server 5.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions

More information

Where IT perceptions are reality. Test Report. OCe14000 Performance. Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine

Where IT perceptions are reality. Test Report. OCe14000 Performance. Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine Where IT perceptions are reality Test Report OCe14000 Performance Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine Document # TEST2014001 v9, October 2014 Copyright 2014 IT Brand

More information

> Resilient Data Center Solutions for VMware ESX Servers Technical Configuration Guide

> Resilient Data Center Solutions for VMware ESX Servers Technical Configuration Guide > Resilient Data Center Solutions for VMware ESX Servers Technical Configuration Guide Enterprise Networking Solutions Document Date: September 2009 Document Number: NN48500-542 Document Version: 2.0 Nortel

More information

Scalable Approaches for Multitenant Cloud Data Centers

Scalable Approaches for Multitenant Cloud Data Centers WHITE PAPER www.brocade.com DATA CENTER Scalable Approaches for Multitenant Cloud Data Centers Brocade VCS Fabric technology is the ideal Ethernet infrastructure for cloud computing. It is manageable,

More information

iscsi: Accelerating the Transition to Network Storage

iscsi: Accelerating the Transition to Network Storage iscsi: Accelerating the Transition to Network Storage David Dale April 2003 TR-3241 WHITE PAPER Network Appliance technology and expertise solve a wide range of data storage challenges for organizations,

More information

Best Practice of Server Virtualization Using Qsan SAN Storage System. F300Q / F400Q / F600Q Series P300Q / P400Q / P500Q / P600Q Series

Best Practice of Server Virtualization Using Qsan SAN Storage System. F300Q / F400Q / F600Q Series P300Q / P400Q / P500Q / P600Q Series Best Practice of Server Virtualization Using Qsan SAN Storage System F300Q / F400Q / F600Q Series P300Q / P400Q / P500Q / P600Q Series Version 1.0 July 2011 Copyright Copyright@2011, Qsan Technology, Inc.

More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

IP ETHERNET STORAGE CHALLENGES

IP ETHERNET STORAGE CHALLENGES ARISTA SOLUTION GUIDE IP ETHERNET STORAGE INSIDE Oveview IP Ethernet Storage Challenges Need for Massive East to West Scalability TCP Incast Storage and Compute Devices Interconnecting at Different Speeds

More information

SUN DUAL PORT 10GBase-T ETHERNET NETWORKING CARDS

SUN DUAL PORT 10GBase-T ETHERNET NETWORKING CARDS SUN DUAL PORT 10GBase-T ETHERNET NETWORKING CARDS ADVANCED PCIE 2.0 10GBASE-T ETHERNET NETWORKING FOR SUN BLADE AND RACK SERVERS KEY FEATURES Low profile adapter and ExpressModule form factors for Oracle

More information

Energy-Efficient Unified Fabrics: Transform the Data Center Infrastructure with Cisco Nexus Series

Energy-Efficient Unified Fabrics: Transform the Data Center Infrastructure with Cisco Nexus Series . White Paper Energy-Efficient Unified Fabrics: Transform the Data Center Infrastructure with Cisco Nexus Series What You Will Learn The Cisco Nexus family of products gives data center designers the opportunity

More information

Brocade VCS Fabric Technology with the NetApp 3170 NAS Storage Array

Brocade VCS Fabric Technology with the NetApp 3170 NAS Storage Array November 2015 Brocade VCS Fabric Technology with the NetApp 3170 NAS Storage Array Validation Test Report Supporting Network OS 6.0.1 and 4.1.2 2015, Brocade Communications Systems, Inc. All Rights Reserved.

More information

Dell PowerEdge Blades Outperform Cisco UCS in East-West Network Performance

Dell PowerEdge Blades Outperform Cisco UCS in East-West Network Performance Dell PowerEdge Blades Outperform Cisco UCS in East-West Network Performance This white paper compares the performance of blade-to-blade network traffic between two enterprise blade solutions: the Dell

More information

TRILL for Service Provider Data Center and IXP. Francois Tallet, Cisco Systems

TRILL for Service Provider Data Center and IXP. Francois Tallet, Cisco Systems for Service Provider Data Center and IXP Francois Tallet, Cisco Systems 1 : Transparent Interconnection of Lots of Links overview How works designs Conclusion 2 IETF standard for Layer 2 multipathing Driven

More information

Juniper Networks EX Series/ Cisco Catalyst Interoperability Test Results. May 1, 2009

Juniper Networks EX Series/ Cisco Catalyst Interoperability Test Results. May 1, 2009 Juniper Networks EX Series/ Cisco Catalyst Interoperability Test Results May 1, 2009 Executive Summary Juniper Networks commissioned Network Test to assess interoperability between its EX4200 and EX8208

More information

FIBRE CHANNEL OVER ETHERNET

FIBRE CHANNEL OVER ETHERNET FIBRE CHANNEL OVER ETHERNET A Review of FCoE Today ABSTRACT Fibre Channel over Ethernet (FcoE) is a storage networking option, based on industry standards. This white paper provides an overview of FCoE,

More information

Configuring iscsi Multipath

Configuring iscsi Multipath CHAPTER 13 Revised: April 27, 2011, OL-20458-01 This chapter describes how to configure iscsi multipath for multiple routes between a server and its storage devices. This chapter includes the following

More information

Broadcom Ethernet Network Controller Enhanced Virtualization Functionality

Broadcom Ethernet Network Controller Enhanced Virtualization Functionality White Paper Broadcom Ethernet Network Controller Enhanced Virtualization Functionality Advancements in VMware virtualization technology coupled with the increasing processing capability of hardware platforms

More information

Isilon IQ Network Configuration Guide

Isilon IQ Network Configuration Guide Isilon IQ Network Configuration Guide An Isilon Systems Best Practice Paper August 2008 ISILON SYSTEMS Table of Contents Cluster Networking Introduction...3 Assumptions...3 Cluster Networking Features...3

More information

JOB ORIENTED VMWARE TRAINING INSTITUTE IN CHENNAI

JOB ORIENTED VMWARE TRAINING INSTITUTE IN CHENNAI JOB ORIENTED VMWARE TRAINING INSTITUTE IN CHENNAI Job oriented VMWARE training is offered by Peridot Systems in Chennai. Training in our institute gives you strong foundation on cloud computing by incrementing

More information

Enabling Technologies for Distributed Computing

Enabling Technologies for Distributed Computing Enabling Technologies for Distributed Computing Dr. Sanjay P. Ahuja, Ph.D. Fidelity National Financial Distinguished Professor of CIS School of Computing, UNF Multi-core CPUs and Multithreading Technologies

More information

Best practices when deploying VMware vsphere 5.0 connected to HP Networking Switches

Best practices when deploying VMware vsphere 5.0 connected to HP Networking Switches Technical white paper Best practices when deploying VMware vsphere 5.0 connected to HP Networking Switches Table of contents Executive summary 2 Overview 2 VMware ESXi 5 2 HP Networking 3 Link aggregation

More information

Validating Long-distance VMware vmotion

Validating Long-distance VMware vmotion Technical Brief Validating Long-distance VMware vmotion with NetApp FlexCache and F5 BIG-IP F5 BIG-IP enables long distance VMware vmotion live migration and optimizes NetApp FlexCache replication. Key

More information

Cisco Datacenter 3.0. Datacenter Trends. David Gonzalez Consulting Systems Engineer Cisco

Cisco Datacenter 3.0. Datacenter Trends. David Gonzalez Consulting Systems Engineer Cisco Cisco Datacenter 3.0 Datacenter Trends David Gonzalez Consulting Systems Engineer Cisco 2009 Cisco Systems, Inc. All rights reserved. Cisco Public 1 Agenda Data Center Ethernet (DCE) Fiber Channel over

More information

Simplify VMware vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters

Simplify VMware vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters WHITE PAPER Intel Ethernet 10 Gigabit Server Adapters vsphere* 4 Simplify vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters Today s Intel Ethernet 10 Gigabit Server Adapters can greatly

More information

VMware vsphere on NetApp. Course: 5 Day Hands-On Lab & Lecture Course. Duration: Price: $ 4,500.00. Description:

VMware vsphere on NetApp. Course: 5 Day Hands-On Lab & Lecture Course. Duration: Price: $ 4,500.00. Description: Course: VMware vsphere on NetApp Duration: 5 Day Hands-On Lab & Lecture Course Price: $ 4,500.00 Description: Managing a vsphere storage virtualization environment requires knowledge of the features that

More information

EMC SYNCPLICITY FILE SYNC AND SHARE SOLUTION

EMC SYNCPLICITY FILE SYNC AND SHARE SOLUTION EMC SYNCPLICITY FILE SYNC AND SHARE SOLUTION Automated file synchronization Flexible, cloud-based administration Secure, on-premises storage EMC Solutions January 2015 Copyright 2014 EMC Corporation. All

More information

Best Practices for Implementing iscsi Storage in a Virtual Server Environment

Best Practices for Implementing iscsi Storage in a Virtual Server Environment white paper Best Practices for Implementing iscsi Storage in a Virtual Server Environment Server virtualization is becoming a no-brainer for any that runs more than one application on servers. Nowadays,

More information

Fibre Channel Over and Under

Fibre Channel Over and Under Fibre Channel over : A necessary infrastructure convergence By Deni Connor, principal analyst April 2008 Introduction Consolidation of IT datacenter infrastructure is happening in all forms. IT administrators

More information

Simplify Virtual Machine Management and Migration with Ethernet Fabrics in the Datacenter

Simplify Virtual Machine Management and Migration with Ethernet Fabrics in the Datacenter Simplify Virtual Machine Management and Migration with Ethernet Fabrics in the Datacenter Enabling automatic migration of port profiles under Microsoft Hyper-V with Brocade Virtual Cluster Switching technology

More information

Private cloud computing advances

Private cloud computing advances Building robust private cloud services infrastructures By Brian Gautreau and Gong Wang Private clouds optimize utilization and management of IT resources to heighten availability. Microsoft Private Cloud

More information

Preparation Guide. How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment.

Preparation Guide. How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment. Preparation Guide v3.0 BETA How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment. Document version 1.0 Document release date 25 th September 2012 document revisions 1 Contents 1. Overview...

More information

Running Philips IntelliSpace Portal with VMware vmotion, DRS and HA on vsphere 5.1 and 5.5. September 2014

Running Philips IntelliSpace Portal with VMware vmotion, DRS and HA on vsphere 5.1 and 5.5. September 2014 Running Philips IntelliSpace Portal with VMware vmotion, DRS and HA on vsphere 5.1 and 5.5 September 2014 D E P L O Y M E N T A N D T E C H N I C A L C O N S I D E R A T I O N S G U I D E Running Philips

More information

Server and Storage Virtualization with IP Storage. David Dale, NetApp

Server and Storage Virtualization with IP Storage. David Dale, NetApp Server and Storage Virtualization with IP Storage David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this

More information

VCS Monitoring and Troubleshooting Using Brocade Network Advisor

VCS Monitoring and Troubleshooting Using Brocade Network Advisor VCS Monitoring and Troubleshooting Using Brocade Network Advisor Brocade Network Advisor is a unified network management platform to manage the entire Brocade network, including both SAN and IP products.

More information

RFP-MM-1213-11067 Enterprise Storage Addendum 1

RFP-MM-1213-11067 Enterprise Storage Addendum 1 Purchasing Department August 16, 2012 RFP-MM-1213-11067 Enterprise Storage Addendum 1 A. SPECIFICATION CLARIFICATIONS / REVISIONS NONE B. REQUESTS FOR INFORMATION Oracle: 1) What version of Oracle is in

More information

Converged Networking Solution for Dell M-Series Blades. Spencer Wheelwright

Converged Networking Solution for Dell M-Series Blades. Spencer Wheelwright Converged Networking Solution for Dell M-Series Blades Authors: Reza Koohrangpour Spencer Wheelwright. THIS SOLUTION BRIEF IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL

More information