VMware Virtual SAN Network Design Guide TECHNICAL WHITE PAPER


Table of Contents

Intended Audience
Overview
Virtual SAN Network
Physical Network Infrastructure
    Data Center Network
    Host Network Adapter
Virtual Network Infrastructure
    VMkernel Network
    Network Adapter Teaming
    Multicast
    vSphere Network I/O Control
    Jumbo Frames
    Network Availability
Highly Available Virtual SAN Network Architectures
    Architecture 1: Uplinked Switches
        Overview
        Network Characteristics Under Normal Conditions
        Network Characteristics Under Failover Conditions
        vSphere Network I/O Control
    Architecture 2: Stacked Switches
        Overview
        Network Characteristics Under Normal Conditions
        Network Characteristics Under Failover Conditions
        vSphere Network I/O Control
Conclusion
References

Intended Audience

This document is targeted toward virtualization, network, and storage architects interested in deploying VMware Virtual SAN solutions.

Overview

Virtual SAN is a hypervisor-converged, software-defined storage solution for the software-defined data center (SDDC). It is the first policy-driven storage product designed for VMware vSphere environments that simplifies and streamlines storage provisioning and management. Virtual SAN is a distributed shared storage solution that enables the rapid provisioning of storage within VMware vCenter as part of virtual machine creation and deployment operations. It uses the concept of disk groups to pool together locally attached flash devices and magnetic disks as management constructs. Disk groups are composed of at least one flash device and several magnetic disks. The flash devices are used as read cache and write buffer in front of the magnetic disks to optimize virtual machine and application performance. The Virtual SAN datastore aggregates the disk groups across all hosts in the Virtual SAN cluster to form a single shared datastore for all hosts in the cluster.

Virtual SAN requires a correctly configured network for virtual machine I/O as well as communication among cluster nodes. Because the majority of virtual machine I/O travels the network due to the distributed storage architecture, a high-performing and highly available network configuration is critical to a successful Virtual SAN deployment. This paper provides a technology overview of Virtual SAN network requirements and summarizes Virtual SAN network design and configuration best practices for deploying a highly available and scalable Virtual SAN solution.

Virtual SAN Network

The hosts in a Virtual SAN cluster must be part of the Virtual SAN network and on the same subnet, regardless of whether or not the hosts contribute storage. Virtual SAN requires a dedicated VMkernel port type and uses a proprietary transport protocol for traffic between the hosts.
The Virtual SAN network is an integral part of the overall vSphere network configuration and therefore cannot work in isolation from other vSphere network services. Virtual SAN utilizes either a VMware standard virtual switch (VSS) or a VMware vSphere Distributed Switch (VDS) to construct a dedicated storage network. However, Virtual SAN and other vSphere workloads commonly share the underlying virtual and physical network infrastructure, so the Virtual SAN network must be carefully designed following general vSphere networking best practices in addition to its own. The following sections review general guidelines that should be followed when designing the Virtual SAN network. These recommendations do not conflict with general vSphere network design best practices.

Physical Network Infrastructure

Data Center Network

The traditional access-aggregation-core three-tier network model was built to serve north-south traffic in and out of a data center. Although the model offers great redundancy and resiliency, it limits overall bandwidth by as much as 50 percent because the Spanning Tree Protocol (STP) blocks critical network links to prevent network loops. As virtualization and cloud computing have evolved, more data centers have adopted the leaf-spine topology for data center fabric simplicity, scalability, bandwidth, fault tolerance, and quality of service (QoS). The Virtual SAN network operates in both topologies regardless of how the core switch layer is constructed.

Figure 1. Data Center Network Topologies

The leaf switches are fully meshed with the spine switches, with links that can be either switched or routed; these are referred to respectively as layer 2 and layer 3 leaf-spine architectures. Virtual SAN over layer 3 networks is currently not supported. Until it is, a layer 2 leaf-spine architecture must be used if Virtual SAN cluster nodes are connected to different top-of-rack (ToR) switches where internode communication must travel through the spine.

Host Network Adapter

The following practices should be applied on each Virtual SAN cluster node:
- At least one physical network adapter must be used for the Virtual SAN network. To provide failover capability, one or more additional physical network adapters are recommended.
- The physical network adapter(s) can be shared with other vSphere networks such as the virtual machine network and the VMware vSphere vMotion network. Logical layer 2 separation of Virtual SAN VMkernel traffic via VLANs is recommended when the physical network adapter(s) carry multiple traffic types. QoS can be provided for traffic types via VMware vSphere Network I/O Control.
- A 10-Gigabit Ethernet (10GbE) network adapter is strongly recommended for Virtual SAN. If a 1GbE network adapter is used, VMware recommends that it be dedicated to Virtual SAN.
- A 40GbE network adapter is supported if vSphere supports it. However, Virtual SAN does not currently guarantee utilization of the full bandwidth.

In the leaf-spine architecture, due to the full-mesh topology and port density constraints, leaf switches are normally oversubscribed for bandwidth. For example, a fully utilized 10GbE uplink used by the Virtual SAN network might in reality achieve only 2.5Gbps throughput on each node when the leaf switches are oversubscribed at a 4:1 ratio and Virtual SAN traffic must cross the spine, as illustrated in Figure 2. The impact of network topology on available bandwidth should be considered when designing a Virtual SAN cluster.

Figure 2. Bandwidth Oversubscription for a Virtual SAN Network in a Leaf-Spine Architecture
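The oversubscription arithmetic above can be sketched as follows; the helper function is hypothetical, not part of any VMware tooling, and models only the worst case where every node transmits across the spine at once:

```python
def effective_bandwidth_gbps(nic_gbps: float, oversubscription_ratio: float) -> float:
    """Worst-case per-node throughput when all nodes transmit simultaneously
    and all Virtual SAN traffic must cross an oversubscribed leaf uplink."""
    return nic_gbps / oversubscription_ratio

# The example from the text: 10GbE node uplinks behind a leaf switch
# oversubscribed 4:1 toward the spine.
print(effective_bandwidth_gbps(10, 4))  # 2.5
```

The same calculation applies when sizing interswitch links in the two-switch architectures discussed later in this paper.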

Virtual Network Infrastructure

VMkernel Network

A new VMkernel traffic type called Virtual SAN traffic is introduced in vSphere for Virtual SAN. To participate in Virtual SAN, each cluster node must have this VMkernel port configured. A VMkernel port group for Virtual SAN should be created in the VSS or VDS for each cluster, and the same network label should be used to ensure consistency across all hosts. Unlike multi-NIC vSphere vMotion, Virtual SAN does not support multiple VMkernel adapters on the same subnet. Multiple Virtual SAN VMkernel adapters on different networks, such as separate VLANs or separate physical fabrics, are supported but not recommended, because the operational complexity of setting them up greatly outweighs the high-availability benefit. High availability can also easily be accomplished with network adapter teaming, as described in the following subsection.

Network Adapter Teaming

The Virtual SAN network can use a teaming and failover policy to determine how traffic is distributed between physical adapters and how to reroute traffic in the event of adapter failure. Network adapter teaming is used mainly for high availability, not for load balancing, when the team is dedicated to Virtual SAN. However, additional vSphere traffic types sharing the same team can still leverage the aggregated bandwidth by distributing the various types of traffic to different adapters within the team. In general, VMware recommends setting up a separate port group for each traffic type and configuring the teaming and failover policy of each port group to use a different active adapter within the team where possible. One exception is the IP hash-based policy. Under this policy, Virtual SAN, either alone or together with other vSphere workloads, is capable of balancing load between adapters within a team, although there is no guarantee of performance improvement for all configurations.
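The availability-first behavior of a non-IP-hash team can be pictured as picking the first healthy adapter from an ordered list. The sketch below is a simplified model of the "Use explicit failover order" policy; the function and the vmnic names are illustrative only:

```python
def select_uplink(failover_order, failed):
    """Simplified model of the 'Use explicit failover order' policy:
    all traffic uses the first adapter in the order that has not failed."""
    for vmnic in failover_order:
        if vmnic not in failed:
            return vmnic
    return None  # no healthy uplink remains

order = ["vmnic0", "vmnic1"]              # active first, then standby
print(select_uplink(order, set()))        # vmnic0 carries all traffic
print(select_uplink(order, {"vmnic0"}))   # vmnic1 takes over on failure
```

Note that only one adapter carries traffic at a time, which is why such a team adds availability but no extra bandwidth for Virtual SAN itself.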
This policy requires the physical switches to be configured with either EtherChannel or Link Aggregation Control Protocol (LACP). Only static mode EtherChannel is supported with VSS; LACP is supported only with VDS. More information about network adapter teaming support is presented in later sections of this paper.

Multicast

IP multicast sends source packets to multiple receivers as a group transmission. Packets are replicated in the network only at points of path divergence, normally switches or routers. This results in the most efficient delivery of data to a number of destinations, with minimum network bandwidth consumption. Virtual SAN uses multicast to deliver metadata traffic among cluster nodes for efficiency and bandwidth conservation. Layer 2 multicast is required for the VMkernel ports utilized by Virtual SAN. All VMkernel ports on the Virtual SAN network subscribe to a multicast group using the Internet Group Management Protocol (IGMP). IGMP snooping configured with an IGMP snooping querier can be used to limit the physical switch ports participating in the multicast group to only the Virtual SAN VMkernel port uplinks. The need to configure an IGMP snooping querier to support IGMP snooping varies by switch vendor; consult the specific switch vendor and model best practices for IGMP snooping configuration. As mentioned previously, multicast over a layer 3 network is currently not supported. A default multicast address is assigned to each Virtual SAN cluster at the time of creation. When multiple Virtual SAN clusters reside on the same layer 2 network, the default multicast address should be changed within the additional Virtual SAN clusters. Although it is supported to have the same multicast address for more than one Virtual SAN cluster, using different addresses prevents the clusters from receiving one another's multicast streams and hence can reduce network processing overhead.
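When planning cluster multicast addresses, any address chosen must fall within the IPv4 multicast range, 224.0.0.0/4. A quick sanity check using Python's standard library; the example addresses are illustrative, not Virtual SAN defaults:

```python
import ipaddress

def is_ipv4_multicast(addr: str) -> bool:
    """IPv4 multicast group addresses occupy 224.0.0.0/4 (class D)."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network("224.0.0.0/4")

print(is_ipv4_multicast("224.1.2.3"))     # True  (a valid group address)
print(is_ipv4_multicast("192.168.10.5"))  # False (ordinary unicast)
```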
Similarly, multicast address ranges must be carefully planned in environments where other network services such as VXLAN also utilize multicast. The VMware Knowledge Base article on this topic can be consulted for the detailed procedure for changing the default Virtual SAN multicast address.

vSphere Network I/O Control

vSphere Network I/O Control can be used to set QoS for Virtual SAN traffic when it shares a network adapter uplink in a VDS with other vSphere traffic types, including iSCSI traffic, vSphere vMotion traffic, management traffic, VMware vSphere Replication traffic, NFS traffic, VMware vSphere Fault Tolerance (vSphere FT) traffic, and virtual machine traffic. General vSphere Network I/O Control best practices apply:
- For bandwidth allocation, use shares instead of limits, because shares offer greater flexibility for redistributing unused capacity.
- Consider imposing limits on a given resource pool to prevent it from saturating the physical network.
- Always assign a reasonably high relative share to the vSphere FT resource pool, because vSphere FT is a very latency-sensitive traffic type.
- Use vSphere Network I/O Control together with network adapter teaming to maximize network capacity utilization.
- Leverage the VDS port group and traffic shaping policy features for additional bandwidth control on various resource pools.

NOTE: Virtual SAN does not currently support hosting vSphere FT protected virtual machines. vSphere FT traffic considerations might be applicable in a vSphere environment where Virtual SAN is used in conjunction with external SAN attached storage.

Specifically for Virtual SAN, we make the following additional recommendations:
- Do not set a limit on Virtual SAN traffic; by default, it is unlimited.
- Set a relative share for the Virtual SAN resource pool based on application storage performance requirements, while holistically taking into account other workloads, such as the bursty vSphere vMotion traffic required for virtual machine mobility and availability.

Jumbo Frames

Virtual SAN supports jumbo frames.
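Shares divide an uplink's bandwidth proportionally among the traffic types actively contending for it. A minimal sketch of that arithmetic, assuming a 10GbE uplink and the share values used later in this paper (100 for Virtual SAN, 50 for vSphere vMotion); the function is a simplified model that ignores limits and reservations:

```python
def allocate_bandwidth(link_gbps, shares):
    """Split link bandwidth among contending traffic types in proportion
    to their Network I/O Control shares (simplified: all listed types
    are actively sending; no limits or reservations configured)."""
    total = sum(shares.values())
    return {traffic: link_gbps * s / total for traffic, s in shares.items()}

alloc = allocate_bandwidth(10, {"virtual_san": 100, "vmotion": 50})
print(alloc)  # Virtual SAN receives twice the bandwidth of vMotion
```

Because shares only matter under contention, a traffic type that is idle frees its portion for the others, which is why shares are preferred over hard limits.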
VMware testing finds that using jumbo frames can reduce CPU utilization and improve throughput; however, both gains are minimal because vSphere already uses TCP segmentation offload (TSO) and large receive offload (LRO) to deliver similar benefits. In data centers where jumbo frames are already enabled in the network infrastructure, they are recommended for Virtual SAN deployment. Otherwise, jumbo frames are not recommended, because the operational cost of configuring them throughout the network infrastructure might outweigh the limited CPU and performance benefits.

Network Availability

For high availability, the Virtual SAN network should have redundancy in both physical and virtual network paths and components to prevent single points of failure. The architecture should configure all port groups or distributed virtual port groups with at least two uplink paths using different network adapters configured with network adapter teaming. It should also set a failover policy specifying the appropriate active/active or active/standby mode and connect each network adapter to a different physical switch for an additional level of redundancy. The remainder of this paper discusses these configurations in detail for two different architecture designs.

Highly Available Virtual SAN Network Architectures

The architectures described in this section include switches at the access layer only; whether the physical data center fabric uses the access-aggregation-core or leaf-spine topology makes no difference. Using a team of two physical network adapters connected to separate physical switches can improve the availability and reliability of the Virtual SAN network. Because servers are connected to each other through separate switches with redundant network adapters, the cluster is more resilient. One VSS or VDS should be created accordingly to support Virtual SAN and other network traffic. VDS was chosen in our testing and is referred to primarily hereafter. A separate port group for each traffic type is recommended for Virtual SAN and other vSphere traffic that shares the same team. The teaming and failover policy on each port group should use a different active adapter where possible. To minimize interswitch traffic, this configuration should be created uniformly on all participating cluster nodes.

There are various ways to interconnect switches, including stacking and uplinking. Because not all switches support stacking, uplinking is the more common method of interswitch communication. To prevent the switch interconnect from becoming the overall network bottleneck, the total bandwidth of the interswitch links must be carefully planned in both stacking and uplink architectures. The switch oversubscription issue depicted in Figure 2 must be taken into account to ensure that sufficient bandwidth is available for interswitch traffic.

Architecture 1: Uplinked Switches

Overview

Figure 3 shows the Virtual SAN network architecture using uplinked switches. Each host uses a team of network adapters to connect to each switch for redundancy. One VDS or VSS must be created with separate port groups for Virtual SAN and other vSphere workloads respectively.
If bandwidth control for different network traffic types is required, it is better to configure network resource pools through vSphere Network I/O Control.

Figure 3. Virtual SAN Network Architecture Using Uplinked Switches

Network Characteristics Under Normal Conditions

There are five different network adapter teaming policies for a VDS port group. Not all of the policies are supported in the switch uplink mode, as shown in Table 1.

Table 1. Network Adapter Teaming Policies When Switches Are Uplinked

TEAMING POLICY | NETWORK HIGH AVAILABILITY | BANDWIDTH AGGREGATION | vSPHERE NETWORK I/O CONTROL SHARES
Route based on IP hash | N/A | N/A | N/A
Route based on the originating virtual port | Yes | No (team dedicated to Virtual SAN); Yes (team shared by other vSphere traffic types) | Yes
Route based on source MAC hash | Yes | No (team dedicated to Virtual SAN); Yes (team shared by other vSphere traffic types) | Yes
Route based on physical network adapter load | Yes | No (team dedicated to Virtual SAN); Yes (team shared by other vSphere traffic types) | Yes
Use explicit failover order | Yes | No (team dedicated to Virtual SAN); Yes (team shared by other vSphere traffic types) | Yes

The IP hash policy requires static EtherChannel or LACP link aggregation support on the physical switches, and the switches must support stacking or similar functionality such as virtual PortChannel (vPC). Because the switches are not stacked in this architecture, the IP hash policy is not supported. When any of the supported policies is applied, Virtual SAN traffic uses only one network adapter at a time, regardless of whether the network adapters are teamed in active/active or active/standby mode. To demonstrate this, a test is designed to have a virtual machine running on host2, with mirrored copies of its virtual disk stored on host1 and host3 respectively.

All hosts use 1GbE network adapters for the Virtual SAN network. When a simulated write workload against the Virtual SAN cluster is run in the virtual machine, only one vmnic is used for data transfer, regardless of which teaming policy is in effect. In active/standby mode, accordingly, the other hosts use the vmnic that is physically connected to the same switch to receive data; there is no interswitch traffic in this case. In active/active mode, because destination hosts can use a vmnic that is physically connected to a different switch to receive data, interswitch traffic can occur over the uplinks. Figure 4 shows network traffic on host2 during the test, and Figure 5 lists esxtop command output; both show a single vmnic as the only adapter used by Virtual SAN.

Figure 4. Network Traffic, Uplinked Switch Architecture

Figure 5. esxtop Command Output, Uplinked Switch Architecture

Network Characteristics Under Failover Conditions

In this architecture, if one physical adapter fails, the other physical adapter in the same team takes over its network traffic. The other hosts are not affected and continue to use the same physical adapters. For example, when the active vmnic fails on one host, the VMkernel switches to the other vmnic in the team, as shown in Figures 6 and 7. The VMware vSphere Client issues three Network uplink redundancy lost alarms, one for each host.

Figure 6. Network Traffic During Failover, Uplinked Switch Architecture

Figure 7. esxtop Command Output During Failover, Uplinked Switch Architecture

If one physical switch fails, all hosts use the network adapters connected to the other physical switch instead. In this scenario, the Virtual SAN network remains in the Normal state, and virtual machines on those hosts continue running without errors.

vSphere Network I/O Control

vSphere Network I/O Control is supported in this switch uplink architecture using VDS. Virtual SAN traffic and other vSphere traffic types can load-share by using different physical network adapters in the team. When one adapter fails, vSphere combines all traffic on the remaining adapter, where vSphere Network I/O Control shares can be used to deliver QoS to Virtual SAN in case of bandwidth contention.

In Figure 8, vSphere Network I/O Control is enabled and different shares are assigned to each traffic type. vSphere vMotion traffic is used as an illustrative example. All other vSphere traffic types, including vSphere FT traffic, iSCSI traffic, management traffic, vSphere Replication traffic, NFS traffic, and virtual machine traffic, work the same way with vSphere Network I/O Control.

Figure 8. Virtual SAN Network Architecture with vSphere Network I/O Control Using Uplinked Switches

In active/standby mode, when the Virtual SAN and vSphere vMotion port group teaming policies are set to use different active adapters in the team, the vSphere cluster uses different vmnics for concurrent Virtual SAN and vSphere vMotion traffic, as depicted in Figure 9. In active/active teaming mode, Virtual SAN and vSphere vMotion traffic can use the same vmnic uplink.

Figure 9. Network Traffic with vSphere Network I/O Control, Uplinked Switch Architecture

Figure 10 shows esxtop command output on all three hosts. On each host, one vmnic is used for Virtual SAN traffic and the other is used for vSphere vMotion traffic. The red arrows represent Virtual SAN data flow, and the blue arrows indicate vSphere vMotion data flow.

Figure 10. esxtop Command Output with vSphere Network I/O Control, Uplinked Switch Architecture

In Table 2, a plus sign refers to the network receiving rate, and a minus sign refers to the network transmission rate. When a virtual machine is migrated from host2 to host1, one vmnic on those hosts is used for vSphere vMotion traffic. When a Virtual SAN write workload is generated from a virtual machine on host2, data is transferred using the other vmnic on all three hosts.

Table 2. Network Traffic with vSphere Network I/O Control, Uplinked Switch Architecture

When one physical network adapter fails, the other adapter in the same team takes over all traffic. Virtual SAN and vSphere vMotion consume the bandwidth of the remaining network adapter in accordance with the vSphere Network I/O Control shares assigned to each traffic type. Table 3 illustrates how, when one vmnic fails on host2, the other vmnic takes over the Virtual SAN workload in addition to the vSphere vMotion workload it is already running, sharing the bandwidth in accordance with the vSphere Network I/O Control settings.

Table 3. Network Traffic During Network Adapter Failure with vSphere Network I/O Control, Uplinked Switch Architecture

Similarly, when one physical switch fails, an alarm is triggered on each host. Traffic carried by the vmnics connected to the failed switch is failed over to the vmnics connected to the remaining switch. Virtual SAN and vSphere vMotion traffic again share the vmnic bandwidth in accordance with the vSphere Network I/O Control settings, as shown in Table 4.

Table 4. Network Traffic During Switch Failure with vSphere Network I/O Control, Uplinked Switch Architecture

Architecture 2: Stacked Switches

Overview

A stackable network switch can operate both fully functionally standalone and together with one or more other network switches. A group of switches set up to operate together is normally termed a stack. The stack has the characteristics of a single switch but the combined port capacity of its member switches, and the stack members perform together as a unified system. At the hardware layer, the switches must be configured in stacking mode to support multiple physical links from the same host. For redundancy, each host uses a team of network adapters to connect to members of the stacked switch group. At the hypervisor layer, one VDS or VSS must be created with separate port groups for Virtual SAN and other vSphere workloads respectively. If bandwidth control for different network traffic types is required, it is better to configure network resource pools through vSphere Network I/O Control. Figure 11 shows the Virtual SAN network architecture using stacked switches.

Figure 11. Virtual SAN Network Architecture Using Stacked Switches

Network Characteristics Under Normal Conditions

One of the main differences between the two network architectures is the support of the IP hash-based network adapter teaming policy, which requires the physical switches to be configured in stack mode. IP hash-based load balancing does not support standby uplinks, so the IP hash-based policy does not allow the active/standby network adapter teaming mode.

Table 5 lists the network adapter teaming policies supported when switches are stacked. To achieve bandwidth aggregation, the Route based on IP hash policy with all adapters active should be set in the distributed port group for Virtual SAN and in the other port groups that share the same uplinks. As with architecture 1, all other policies can result in load sharing, but not load balancing, regardless of whether the teaming is in active/standby or active/active mode.

Table 5. Network Adapter Teaming Policies When Switches Are Stacked

TEAMING POLICY | NETWORK HIGH AVAILABILITY | BANDWIDTH AGGREGATION | vSPHERE NETWORK I/O CONTROL SHARES
Route based on IP hash | Yes (active/active mode only) | Yes | Yes
Route based on the originating virtual port | Yes | No (team dedicated to Virtual SAN); Yes (team shared by other vSphere traffic types) | Yes
Route based on source MAC hash | Yes | No (team dedicated to Virtual SAN); Yes (team shared by other vSphere traffic types) | Yes
Route based on physical network adapter load | Yes | No (team dedicated to Virtual SAN); Yes (team shared by other vSphere traffic types) | Yes
Use explicit failover order | Yes | No (team dedicated to Virtual SAN); Yes (team shared by other vSphere traffic types) | Yes

With the IP hash-based load-balancing policy, all physical switch ports connected to the active vmnic uplinks must be configured with static EtherChannel or LACP. This ensures that the same hash algorithm is used for traffic returning in the opposite direction. IP hash-based load balancing should be set for all port groups using the same set of uplinks. All vmnics in the team can then be used for Virtual SAN traffic.
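Conceptually, the IP hash policy picks an uplink from the source and destination IP addresses, so different source/destination pairs can land on different adapters. The sketch below is a simplified model of this selection; the exact hash ESXi uses differs in its details, and the addresses are illustrative:

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Simplified route-based-on-IP-hash model: XOR the 32-bit source and
    destination addresses, then take the result modulo the uplink count."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

# One host writing two mirror copies to two different destination hosts
# can hash onto different uplinks, which is how the policy aggregates bandwidth.
print(ip_hash_uplink("192.168.1.2", "192.168.1.3", 2))  # 1
print(ip_hash_uplink("192.168.1.2", "192.168.1.4", 2))  # 0
```

This also shows the policy's limitation: a single source/destination pair always hashes to the same uplink, so one flow never exceeds the bandwidth of one adapter.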

To demonstrate load balancing under the IP hash policy, the same test that was run in architecture 1 is executed. As shown in Figure 12, compared to Figure 4, host2 makes use of both vmnics to transmit the two copies of data to host1 and host3 respectively. Consequently, the test achieves twice the overall throughput.

Figure 12. Network Traffic, Stacked Switch Architecture

Network Characteristics Under Failover Conditions

Active/Standby Mode

When an active physical adapter fails, an alarm is triggered for the host in vCenter, and its standby takes over the responsibility. There is no impact on other hosts. When a switch in the stack fails, an alarm is triggered for each host, and all connected physical adapters fail over to their standbys. Overall, Virtual SAN network performance is identical to that of architecture 1.

Active/Active Mode

The IP hash policy is the only policy that is truly effective for active/active utilization of teamed network adapters. Under this policy, an alarm is triggered for the host in vCenter when an active adapter fails; the other active adapters in the team collectively take over its bandwidth. There is no impact on other hosts. When a switch fails, an alarm is triggered for each host, and all connected physical adapters transfer their responsibilities to the other active adapters. Figures 13 and 14 show esxtop command outputs that reflect the number of active vmnics before and after an adapter failure.

Figure 13. esxtop Command Output Before Failure, Stacked Switch Architecture

Figure 14. esxtop Command Output After Failure, Stacked Switch Architecture

vSphere Network I/O Control

vSphere Network I/O Control is also supported in this stacked switch architecture using VDS. Virtual SAN traffic and other vSphere traffic types can use vSphere Network I/O Control shares to set different QoS targets. Again, vSphere vMotion traffic is used as an illustrative example. All other vSphere traffic types, including vSphere FT traffic, iSCSI traffic, management traffic, vSphere Replication traffic, NFS traffic, and virtual machine traffic, work the same way with vSphere Network I/O Control. Figure 15 illustrates the architecture with vSphere Network I/O Control enabled.

Figure 15. Virtual SAN Network Architecture with vSphere Network I/O Control Using Stacked Switches

Active/Standby Mode

In active/standby mode, the Virtual SAN network cannot leverage the bandwidth aggregation benefit of the IP hash policy; therefore, the network characteristics with vSphere Network I/O Control under both normal and failover conditions are the same as in architecture 1.

Active/Active Mode

In active/active mode with the IP hash policy, if the Virtual SAN port group and other vSphere workload port groups share the same uplinks, the workloads leverage the aggregated bandwidth together in accordance with their vSphere Network I/O Control share settings.

For example, in Table 6, when Virtual SAN and vSphere vMotion traffic are generated simultaneously, Virtual SAN uses both vmnics to transmit the two copies of data. Meanwhile, vSphere vMotion leverages only one vmnic to transmit data, because the test initiates migration of only one virtual machine. On the vmnic that is used by both workloads, Virtual SAN consumes nearly twice the bandwidth of vSphere vMotion, based on their respective vSphere Network I/O Control shares.

Table 6. Network Traffic with vSphere Network I/O Control, Stacked Switch Architecture

Figure 16 shows the network performance view of host2 during the test, providing more insight into how the network handles the workloads with vSphere Network I/O Control.

Figure 16. Network Traffic with vSphere Network I/O Control, Stacked Switch Architecture
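Because Virtual SAN sends one mirror copy per uplink and paces both copies together, the effective mirrored write rate tracks the slower uplink. A toy model of this behavior, assuming 1GbE uplinks and the 2:1 share split between Virtual SAN and vSphere vMotion (the figures are illustrative, in Gbps):

```python
def mirrored_write_rate(per_uplink_vsan_gbps):
    """Both mirror copies must advance in lockstep, so the write rate is
    bounded by the uplink with the least bandwidth left for Virtual SAN."""
    return min(per_uplink_vsan_gbps)

# Both 1GbE uplinks fully available to Virtual SAN:
print(mirrored_write_rate([1.0, 1.0]))
# vMotion claims 1/3 of one uplink (2:1 shares), so writes pace down to 2/3:
print(mirrored_write_rate([1.0, 2 / 3]))
```

In other words, contention on one shared uplink lowers Virtual SAN throughput on the unshared uplink as well, which matches the behavior observed in the test below.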

When the Virtual SAN workload starts, both vmnics are equally utilized to transmit one copy of data each, using their entire bandwidth to the destination hosts where the mirrors of the virtual disk object reside. As soon as a vSphere vMotion migration of another virtual machine is initiated on the host, one vmnic is selected to carry the vSphere vMotion workload, and vSphere Network I/O Control distributes that network adapter's bandwidth in a 2:1 ratio between the Virtual SAN and vSphere vMotion workloads in accordance with their respective shares. Because Virtual SAN data transmission is reduced by one-third on that vmnic, Virtual SAN automatically slows transmission of the other copy on the second vmnic to synchronize the pace. Therefore, until the vSphere vMotion operation finishes, while the shared vmnic is still fully utilized carrying both Virtual SAN and vSphere vMotion traffic, the other vmnic operates at only two-thirds of its bandwidth. If a vmnic fails, the remaining vmnic takes over all traffic and assigns bandwidth to the different workloads in accordance with the vSphere Network I/O Control shares, as shown in Table 7.

Table 7. Network Traffic During Network Adapter Failure with vSphere Network I/O Control, Stacked Switch Architecture

Conclusion

VMware Virtual SAN network design should be approached in a holistic fashion, taking into account the other traffic types utilized in the VMware vSphere cluster in addition to the Virtual SAN network. The physical network topology and the oversubscription posture of the physical switch infrastructure are other factors that should be considered. Virtual SAN requires a 1GbE network at minimum. As a best practice, VMware strongly recommends a 10GbE network for Virtual SAN to prevent the network from becoming a bottleneck.
As demonstrated in this paper, a 1GbE network can easily be saturated by Virtual SAN traffic, and teaming multiple network adapters provides, in most cases, only availability benefits. If a 1GbE network is used, VMware recommends restricting it to smaller clusters and dedicating it to Virtual SAN traffic. To implement a highly available network infrastructure for Virtual SAN, redundant hardware components and network paths are recommended. Switches can be configured in either uplinked or stacked mode, depending on switch capability and the physical switch configuration. Under the IP hash policy, the Virtual SAN network can leverage the aggregated bandwidth of multiple teamed network adapters only in stacked mode. Virtual SAN supports both the vSphere Standard Switch (VSS) and the vSphere Distributed Switch (VDS); however, VMware recommends the VDS in order to realize the network QoS benefits offered by vSphere Network I/O Control. When other vSphere traffic types must share the same network adapters as Virtual SAN, they should be separated onto different VLANs, and shares should be used as a QoS mechanism to guarantee the level of performance expected of Virtual SAN in possible contention scenarios.
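The reason the IP hash policy aggregates bandwidth only across multiple peers can be seen from the hash itself. The sketch below follows the computation described in the VMware Knowledge Base article on IP hash load balancing cited in the References (XOR of the source and destination IP addresses, modulo the number of active uplinks); the helper functions are illustrative, not an ESXi API.

```python
# Sketch of IP hash uplink selection: a given source/destination IP pair
# always hashes to the same vmnic, so a single flow never exceeds one
# uplink's bandwidth; different peers can land on different uplinks.

def ip_to_int(ip: str) -> int:
    """Convert a dotted-quad IPv4 address to a 32-bit integer."""
    a, b, c, d = (int(o) for o in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Select the uplink index: XOR the addresses, modulo uplink count."""
    return (ip_to_int(src_ip) ^ ip_to_int(dst_ip)) % num_uplinks

# One Virtual SAN host talking to two different peers can use both
# uplinks, which is how stacked switches aggregate teamed bandwidth.
uplink_a = ip_hash_uplink("10.0.0.1", "10.0.0.2", 2)  # -> 1
uplink_b = ip_hash_uplink("10.0.0.1", "10.0.0.3", 2)  # -> 0
```

This also illustrates why IP hash requires a link aggregate spanning the team: because any flow may be placed on any uplink, the physical switches must treat the teamed ports as one logical channel, which stacked switches can do and independently uplinked switches cannot.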

References

1. Virtual SAN: Software-Defined Shared Storage
2. VMware Virtual SAN Hardware Guidance
3. VMware NSX Network Virtualization Design Guide
4. VMware Network Virtualization Design Guide
5. "Understanding IP Hash Load Balancing," VMware Knowledge Base article
6. "Sample Configuration of EtherChannel/Link Aggregation Control Protocol (LACP) with ESXi/ESX and Cisco/HP Switches," VMware Knowledge Base article
7. "Changing the Multicast Address Used for a VMware Virtual SAN Cluster," VMware Knowledge Base article
8. "Understanding TCP Segmentation Offload (TSO) and Large Receive Offload (LRO) in a VMware Environment," VMware Knowledge Base article
9. IP Multicast Technology Overview
10. Essential Virtual SAN: Administrator's Guide to VMware Virtual SAN, Cormac Hogan and Duncan Epping
11. VMware Network I/O Control: Architecture, Performance and Best Practices

VMware, Inc., Hillview Avenue, Palo Alto, CA, USA

Copyright 2014 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW-TWP-vSAN-Netwrk-Dsn-Guide-USLET-101 Docsource: OIC-FP-1204


More information

Configuration Maximums

Configuration Maximums Configuration s vsphere 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions

More information

Setup for Failover Clustering and Microsoft Cluster Service

Setup for Failover Clustering and Microsoft Cluster Service Setup for Failover Clustering and Microsoft Cluster Service ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until the

More information

BEST PRACTICES GUIDE: Nimble Storage Best Practices for Scale-Out

BEST PRACTICES GUIDE: Nimble Storage Best Practices for Scale-Out BEST PRACTICES GUIDE: Nimble Storage Best Practices for Scale-Out Contents Introduction... 3 Terminology... 3 Planning Scale-Out Clusters and Pools... 3 Cluster Arrays Based on Management Boundaries...

More information

Simplify Your Data Center Network to Improve Performance and Decrease Costs

Simplify Your Data Center Network to Improve Performance and Decrease Costs Simplify Your Data Center Network to Improve Performance and Decrease Costs Summary Traditional data center networks are struggling to keep up with new computing requirements. Network architects should

More information

Enhancing Cisco Networks with Gigamon // White Paper

Enhancing Cisco Networks with Gigamon // White Paper Across the globe, many companies choose a Cisco switching architecture to service their physical and virtual networks for enterprise and data center operations. When implementing a large-scale Cisco network,

More information

NSX TM for vsphere with Arista CloudVision

NSX TM for vsphere with Arista CloudVision ARISTA DESIGN GUIDE NSX TM for vsphere with Arista CloudVision Version 1.0 August 2015 ARISTA DESIGN GUIDE NSX FOR VSPHERE WITH ARISTA CLOUDVISION Table of Contents 1 Executive Summary... 4 2 Extending

More information

IP SAN Best Practices

IP SAN Best Practices IP SAN Best Practices A Dell Technical White Paper PowerVault MD3200i Storage Arrays THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.

More information

VMware Virtual SAN 6.0 Design and Sizing Guide

VMware Virtual SAN 6.0 Design and Sizing Guide Virtual SAN Design and Sizing Guide VMware Virtual SAN 6.0 Design and Sizing Guide Cormac Hogan Storage and Availability Business Unit VMware v 1.0.5/April 2015 V M w a r e S t o r a g e a n d A v a i

More information

VMware vstorage Virtual Machine File System. Technical Overview and Best Practices

VMware vstorage Virtual Machine File System. Technical Overview and Best Practices VMware vstorage Virtual Machine File System Technical Overview and Best Practices A V M wa r e T e c h n i c a l W h i t e P a p e r U p d at e d f o r V M wa r e v S p h e r e 4 V e r s i o n 2. 0 Contents

More information

The next step in Software-Defined Storage with Virtual SAN

The next step in Software-Defined Storage with Virtual SAN The next step in Software-Defined Storage with Virtual SAN VMware vforum, 2014 Lee Dilworth, principal SE @leedilworth 2014 VMware Inc. All rights reserved. The Software-Defined Data Center Expand virtual

More information