Nutanix Tech Note

VMware vSphere Networking on Nutanix

The Nutanix Virtual Computing Platform is engineered from the ground up for virtualization and cloud environments. This Tech Note describes vSphere networking concepts and how they can be used in a Nutanix environment to ensure optimal performance and availability. It also covers the recommended configuration settings for different vSphere networking options.
Table of Contents

Executive Summary
VMware vSphere Networking Overview
Network Discovery Protocols
Cisco Nexus 1000V / VMware NSX
Network I/O Control (NIOC)
  Management Traffic
  vMotion Traffic
  Fault Tolerance Traffic
  Virtual Machine Traffic
  NFS Traffic
  Nutanix CVM Traffic
  iSCSI Traffic
  vSphere Replication Traffic
  Other Defaults
Load Balancing, Failover, and NIC Teaming
  NIC Team Load Balancing
  Recommendation for VSS - Route Based on Originating Virtual Port
  Recommendation for VDS - Route Based on Physical NIC Load (LBT)
  Network Failover Detection
  Notify Switches
  Failover Order
  Failback
  Summary
Security
Virtual Networking Configuration Examples
  Option 1 - Virtual Distributed Switch (VDS) with VSS for Nutanix Internal Deployment
  Option 2 - Virtual Standard Switch (VSS) with VSS for Nutanix Internal Deployment
Network Performance Optimization with Jumbo Frames
  Sample Jumbo Frame Configuration
  Recommended MTU Sizes for Traffic Types using Jumbo Frames
  Jumbo Frames Recommendations
Conclusion
Further Information
Executive Summary

The Nutanix Virtual Computing Platform is a highly resilient converged compute and storage platform, designed for supporting virtual environments such as VMware vSphere. The Nutanix architecture runs a storage controller in a VM, called the Nutanix Controller VM (CVM). This VM runs on every Nutanix server node in a Nutanix cluster to form a highly distributed, shared-nothing converged infrastructure. All CVMs actively work together to aggregate storage resources into a single global pool that can be leveraged by user virtual machines running on the Nutanix server nodes. The storage resources are managed by the Nutanix Distributed File System (NDFS) to ensure that data and system integrity are preserved in the event of node, disk, application, or hypervisor software failure. NDFS also delivers data protection and high availability functionality that keeps critical data and VMs protected.

Networking and network design are critical parts of any distributed system. A resilient network design is important to ensure connectivity between Nutanix CVMs, for virtual machine traffic, and for vSphere management functions such as ESXi management and vMotion. The current generation of Nutanix Virtual Computing Systems comes standard with redundant 10GbE and 1GbE NICs, which can be used by vSphere for resilient virtual networking.

This Tech Note is intended to help the reader understand core networking concepts and configuration best practices for a Nutanix cluster running VMware vSphere. Implementing the following best practices will enable Nutanix customers to get the most out of their storage, networking, and virtualization investments.

VMware vSphere Networking Overview

VMware vSphere supports two main types of virtual switches: the Virtual Standard Switch (VSS) and the Virtual Distributed Switch (VDS). The main differences between these virtual switch types are their functionality and how they are created and managed.
VSS is available in all versions of VMware vSphere and is the default method to connect virtual machines on the same host to each other and to the external (physical) network. VDS is available only with vSphere Enterprise Plus and offers more advanced features and functionality. In general, VSS is simple to configure and maintain, but it lacks support for Network I/O Control (NIOC), has no automated network load balancing functionality, and cannot be centrally managed.
Bundled in vSphere Enterprise Plus, VDS supports NIOC with Network Resource Pools and can be centrally managed, allowing the VDS configuration to be applied to remote ESXi hosts easily. It also supports network discovery protocols, including Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP). The following sections discuss the various features of VSS and VDS, along with configuration recommendations for the Nutanix platform.

Network Discovery Protocols

Network discovery protocols give vSphere administrators visibility into connectivity between the virtual and physical network. This visibility makes troubleshooting easier in the event of issues such as cabling problems or MTU / packet fragmentation. VMware vSphere offers three configuration settings for switch discovery: Listen, Advertise, or Both. These settings determine the information sent and/or received by the discovery protocol. Nutanix recommends using the "Both" setting, which ensures information is collected and displayed from the physical switch, and also ensures that ESXi sends information about the virtual distributed switch (VDS) to the physical switch. The following information is visible in the vSphere client for advertising switches:

1. The physical switch interface the dvUplink is connected to
2. MTU size (i.e., whether Jumbo Frames are enabled or not); some switches will report a maximum MTU of 9,216
3. The switch management IP address(es)
4. The switch name, description, software version, and capabilities

The following are general recommendations for discovery protocol configuration, but Nutanix suggests all customers carefully consider the advantages and disadvantages of discovery protocols for their specific security and discovery requirements.
Recommendations for Discovery Protocol Configuration

Type: Depending on your switching infrastructure, use either:
1) CDP - for Cisco-based environments
2) LLDP - for non-Cisco environments

Operation: Both - generally acceptable in most environments and provides maximum operational benefits. This allows both vSphere and the physical network to openly advertise information.
Cisco Nexus 1000V / VMware NSX

The Cisco Nexus 1000V and VMware NSX solutions can be used with Nutanix solutions; however, they are out of scope for this document. You can engage on the topic of SDN with Nutanix on our community site.

Network I/O Control

Network I/O Control (NIOC) is a feature available since vSphere 4.1 with the Virtual Distributed Switch (VDS). NIOC uses network resource pools to determine the bandwidth allocated to different network traffic types. When NIOC is enabled, distributed switch traffic is divided into custom and predefined network resource pools, including fault tolerance traffic, NFS traffic, iSCSI traffic, vMotion traffic, ESXi management traffic, vSphere Replication (VR) traffic, and VM traffic.

The physical adapter shares assigned to a network resource pool determine what share of the total available bandwidth is guaranteed to the traffic associated with that network resource pool in the event of contention. It is important to understand that NIOC has no impact on network traffic unless there is contention; during times when the network is less than 100% utilized, NIOC provides no advantage or disadvantage. Bandwidth made available to a network resource pool is determined by the shares assigned to that pool, compared to other network resource pools. For further information about shares, review the best practices guidance from VMware.

Limits can also be applied to selected traffic types if required. Nutanix recommends not setting limits, as they may unnecessarily impact performance for given workloads when there is available bandwidth. Using shares ensures burst workloads, such as vMotion, can complete their work (in this example, the migration of a VM to a different ESXi host) as fast as possible when bandwidth is available, while NIOC shares prevent other workloads from being significantly impacted when available bandwidth is limited.
The following table shows recommended values for the network resource pool shares:

Network Resource Pool                    | Share Value | Physical Adapter Shares | Host Limit
Management Traffic                       | 25          | Low                     | Unlimited
vMotion Traffic                          | 50          | Normal                  | Unlimited
Fault Tolerance (FT) Traffic             | 50          | Normal                  | Unlimited
Virtual Machine Traffic                  | 100         | High                    | Unlimited
iSCSI Traffic (1)                        | 100         | High                    | Unlimited
NFS Traffic                              | 100         | High                    | Unlimited
Nutanix Traffic (1)                      | 100         | High                    | Unlimited
vSphere Replication (VR) Traffic (2)     | 50          | Normal                  | Unlimited
Other pools, including vSphere/Virtual SAN Traffic (2) | 50 | Normal         | Unlimited

Notes:
1. This is a custom network resource pool, which needs to be created manually.
2. These pools are generally not applicable or relevant in Nutanix deployments.

Management Traffic

Management traffic requires minimal bandwidth. With a share value of 25 over two 10Gb interfaces, this configuration ensures a minimum of approximately 1.5Gbps of bandwidth. This is more than sufficient for ESXi management traffic and above the minimum requirement of 1Gbps.

vMotion Traffic

vMotion is a burst-type workload which uses no bandwidth until DRS or a vSphere administrator starts a vMotion (or puts a host into maintenance mode). As such, it is unlikely to have any significant ongoing impact on network traffic. Nutanix recommends a share value of 50 over two 10Gb interfaces. This guarantees a minimum of approximately 3Gbps, which is sufficient for vMotion activity to complete in a timely manner. This also ensures vMotion has well above the minimum bandwidth requirement of 1Gbps.

Fault Tolerance Traffic

Fault tolerance (FT) traffic depends on how many FT VMs run per host (current maximum of 4 per host) and will generally be a sustained workload, as opposed to a burst workload such as vMotion. This is because FT needs to keep the primary and secondary VMs in "lockstep". Generally, virtual machines using FT are critical, and you need to ensure FT traffic (which is also sensitive to latency) is not impacted during periods of contention. Nutanix recommends a share value of 50 over two 10Gb interfaces. Based on this configuration, FT will be guaranteed a minimum of 3Gbps, which is well above VMware's recommended minimum of 1Gbps.

Virtual Machine Traffic

VM traffic is why datacenters exist in the first place, so this traffic is always important, if not critical. If a VM's network connectivity slows, it can quickly impact end users and reduce productivity. As such, it is important to ensure this traffic has a significant share of the available bandwidth during periods of contention. Therefore, Nutanix recommends a share value of 100 over two 10Gb interfaces. Based on this configuration, virtual machine traffic will be guaranteed a minimum of approximately 6Gbps. For most environments, this bandwidth is more than what is required and ensures a good amount of headroom in case of unexpected burst activity.

NFS Traffic

NFS traffic is essential to the Nutanix Distributed File System and to virtual machine performance, so this traffic is always critical. If NFS performance is degraded, it will have an immediate impact on Nutanix CVM and VM performance. As such, it is important to ensure this traffic has a significant share of the available bandwidth during periods of contention.

In normal operation, NFS traffic is serviced locally, so NFS traffic will not hit the physical network card unless the Nutanix Controller VM is offline due to maintenance or unavailability. Under normal circumstances, there will be no NFS traffic going across the physical NICs, and the NIOC share value will have no impact on other traffic. For this reason, it is excluded from calculations.
As a safety measure, to protect NFS traffic in the event of network contention during CVM maintenance or unavailability, a share value of 100 is assigned to it. This guarantees a minimum of 6Gbps of bandwidth.

Nutanix CVM Traffic

For the Nutanix Distributed File System to function, each CVM requires connectivity to the other CVMs in the cluster. This connectivity is used for tasks such as write I/O synchronous replication and Nutanix cluster management. Under normal circumstances, there will be minimal to no read I/O traffic going across the physical NICs, because the Nutanix NDFS architecture was designed with the key concept of VM-data locality. However,
write I/O will always utilize the physical network due to synchronous replication for data fault tolerance and availability. For optimal NDFS performance, each CVM is guaranteed a minimum of 6Gbps of bandwidth. In most environments, this is more than what is required and ensures a good amount of headroom in case of unexpected burst activity.

iSCSI Traffic

iSCSI is not a commonly used protocol within a Nutanix environment. However, if iSCSI is used within the environment, this traffic is also given a share value of 100. Note: NIOC does not regulate in-guest iSCSI traffic by default. If in-guest iSCSI is used, it is recommended to create a dvPortGroup for in-guest iSCSI traffic, assign it to a custom network resource pool called "In Guest iSCSI", and give it a share value of 100 (High).

vSphere Replication Traffic

vSphere Replication (VR) traffic may be critical to your environment if you choose to use VR with or without VMware Site Recovery Manager (SRM). When using SRM, it is highly recommended to leverage the Nutanix Storage Replication Adapter (SRA), as this is more efficient than using vSphere-based replication alone. If using VR without SRM, the default share value of 50 (Normal) should be suitable for most environments. This guarantees approximately 3Gbps of network bandwidth.

Other Defaults

Other default pools are generally not relevant in Nutanix deployments and can be left with their default share value of 50.
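The guaranteed minimums quoted in the sections above follow directly from the share arithmetic under full contention on both 10GbE uplinks. The following sketch is illustrative only: the pool names and the set of contending pools are assumptions based on the recommendations above, with pools that normally generate no physical-NIC traffic (such as NFS) excluded as described.

```python
# Sketch of the NIOC share arithmetic behind the guaranteed minimums above.
TOTAL_BANDWIDTH_GBPS = 2 * 10  # two 10GbE uplinks

# Shares for pools expected to contend on the physical NICs (illustrative;
# NFS, iSCSI, and VR are excluded per the discussion above).
shares = {
    "management": 25,
    "vmotion": 50,
    "fault_tolerance": 50,
    "virtual_machine": 100,
    "nutanix_cvm": 100,
}

def guaranteed_gbps(pool: str) -> float:
    """Minimum bandwidth guaranteed to `pool` under full contention."""
    return TOTAL_BANDWIDTH_GBPS * shares[pool] / sum(shares.values())

for pool in shares:
    print(f"{pool}: {guaranteed_gbps(pool):.1f} Gbps")
```

With 325 total contending shares, management receives roughly 1.5Gbps, vMotion and FT roughly 3Gbps, and VM and CVM traffic roughly 6Gbps each, matching the figures quoted above.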
Load Balancing, Failover, and NIC Teaming

vSphere provides a number of load balancing, failover, and NIC teaming options. Each option should be carefully understood and considered for a vSphere networking deployment.

NIC Team Load Balancing

The available options for NIC team load balancing are:

1. Route based on originating virtual port
2. Route based on IP hash
3. Route based on source MAC hash
4. Route based on physical NIC load, also called load-based teaming or LBT (VDS only)
5. Explicit failover order

Recommendation for VSS - Route Based on Originating Virtual Port

Route based on originating virtual port is the default load balancing policy and has no requirement for advanced switching configuration such as LACP, Cisco EtherChannel, or HP teaming. These attributes make it simple to implement, maintain, and troubleshoot. Route based on originating virtual port requires 802.1q VLAN tagging for secure separation of traffic types. The main disadvantage is the lack of load balancing based on network load, which means traffic from a single VM is always sent to the same physical NIC unless there is a failover event caused by a NIC or upstream link failure. This is less of an issue with the high-throughput 10GbE network interfaces of the Nutanix Virtual Computing Platform.

Recommendation for VDS - Route Based on Physical NIC Load (LBT)

For environments using VDS, Nutanix recommends using route based on physical NIC load. Like the option above, LBT has no requirement for advanced switching configuration such as LACP, Cisco EtherChannel, or HP teaming. This option provides fully automated load balancing, which takes effect when the utilization of one or more NICs reaches and sustains 75% for a period of 30 seconds or more, based on egress and ingress traffic. LBT requires 802.1q VLAN tagging for secure separation of traffic types. As a result, LBT is a very simple and effective solution to implement and maintain, and it works very well in Nutanix deployments.
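The LBT trigger condition described above (utilization at or above 75%, sustained for 30 seconds or more) can be sketched as follows. This is an illustrative model of the decision only, not VMware's implementation; the function name, sampling interval, and sample data are assumptions.

```python
# Illustrative model of the LBT trigger: a NIC is rebalanced only when its
# utilization stays at or above 75% for 30 seconds or more.
THRESHOLD = 0.75      # fraction of link capacity
SUSTAIN_SECONDS = 30  # how long the threshold must hold

def should_rebalance(samples, interval_s=1):
    """samples: per-interval utilization readings (0.0-1.0) for one NIC.
    Returns True once utilization has held >= THRESHOLD for SUSTAIN_SECONDS."""
    needed = SUSTAIN_SECONDS // interval_s
    streak = 0
    for u in samples:
        streak = streak + 1 if u >= THRESHOLD else 0
        if streak >= needed:
            return True
    return False

# A brief 10-second spike does not trigger a move; a 35-second one does.
spike = [0.9] * 10 + [0.2] * 50
sustained = [0.9] * 35 + [0.2] * 25
print(should_rebalance(spike), should_rebalance(sustained))  # False True
```

The point of the sustain window is to avoid moving port bindings in response to short bursts, which keeps LBT stable on busy links.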
Network Failover Detection

VMware ESXi uses one of two methods of network failover detection: beacon probing or link status. Beacon probing works by sending out and listening for beacon probes, which are made up of broadcast frames. Beacon probing requires at least three network connections; as a result of this requirement, it is not recommended for the current generation of Nutanix Virtual Computing Platforms, which currently have two network ports. Link status depends on the link state reported by the physical NIC. Link status can detect failures such as a cable disconnection or a physical switch power failure, but it cannot detect configuration errors or upstream failures. To avoid the limitations of link status relating to upstream failures, enable "link state tracking" on physical switches that support this option. This enables the switch to pass upstream link state information back to ESXi, allowing link status to trigger a link down on ESXi where appropriate.

Notify Switches

The purpose of the notify switches policy is to enable or disable communication from ESXi to the physical switch in the event of a failover. If configured as "Yes", ESXi sends a notification to the physical switch to update its lookup tables on a failover event. Nutanix recommends enabling this option to ensure that failover occurs in a timely manner with minimal interruption to network connectivity.

Failover Order

Failover order allows the vSphere administrator to specify the order in which NICs fail over in the event of an issue. This is configured by assigning each physical NIC to one of three groups: active adapters, standby adapters, or unused adapters. In the event all active adapters lose connectivity, the highest-priority standby adapter is used. Failover order is only required in a Nutanix environment when using Multi-NIC vMotion.
When configuring Multi-NIC vMotion, the first dvPortGroup used for vMotion must be configured with one dvUplink active and the other standby, with the reverse configuration for the second dvPortGroup used for vMotion. For more information, see Multiple-NIC vMotion in vSphere 5 (KB ) in the VMware Knowledge Base.
Failback

For customers not using the VDS and LBT, the failback feature can help rebalance network traffic across the original NIC, which can result in improved network performance. The only significant disadvantage of setting failback to Yes arises in the unlikely event of network instability or network route flapping, since having network traffic fail back to the original NIC may then result in intermittent or degraded network connectivity. Nutanix recommends setting failback to Yes when using VSS and No when using VDS.

Summary

The following table summarizes the Nutanix recommendations for load balancing, failover, and NIC teaming:

Virtual Distributed Switch (VDS)
  Load Balancing: Route based on physical NIC load (LBT)
  Network Failover Detection: Link status only (ensure "link state tracking" or equivalent is enabled on physical switches)
  Notify Switches: Yes
  Failback: No

Virtual Standard Switch (VSS)
  Load Balancing: Route based on originating virtual port
  Network Failover Detection: Link status only (ensure "link state tracking" or equivalent is enabled on physical switches)
  Notify Switches: Yes
  Failback: Yes

Security

When configuring a VSS or VDS, there are three configurable options under security: promiscuous mode, MAC address changes, and forged transmits. Each of these can be set to "Accept" or "Reject". In general, the most secure and appropriate setting for each of the three options is "Reject". There are, however, several use cases which may require you to set a specific option to "Accept". Examples of use cases where you may consider configuring "Accept" for forged transmits and MAC address changes are:

1. Microsoft load balancing in unicast mode
2. iSCSI deployments on select storage types
For more information, see Network Load Balancing Unicast Mode Configuration (KB ) in the VMware Knowledge Base.

The following are general recommendations for virtual network security settings, but Nutanix suggests all customers carefully consider the requirements of their specific applications.

Recommendations for Virtual Networking Security
  Promiscuous Mode: Reject
  MAC Address Changes: Reject
  Forged Transmits: Reject

Virtual Networking Configuration Examples

The following two virtual networking configuration examples cover the Nutanix recommended configurations for both VSS and VDS solutions. Each configuration discusses the advantages, disadvantages, and common use cases.

All Nutanix deployments use an internal-only VSS for the NFS communication between the ESXi host and the Nutanix CVM. This VSS remains unmodified regardless of the virtual network configuration for ESXi management, virtual machine traffic, vMotion, and so on. Nutanix recommends that no changes be made to this internal-only VSS. In both of the following options, Nutanix recommends all vmnics be set as "Active" on the port group and/or dvPortGroup unless otherwise specified.
Option 1 - Virtual Distributed Switch (VDS) with VSS for Nutanix Internal Deployment

Option 1 is recommended for customers using VMware vSphere Enterprise Plus who would like to use Virtual Distributed Switches. Option 1 has a number of benefits, including:

1. The ability to leverage advanced networking features, such as NIOC and LBT (route based on physical NIC load)
2. Reduced cabling/switching requirements
3. The ability for all traffic types to "burst" where required, up to 10Gbps
4. A simple solution which only requires 802.1q configured on the physical network
5. Central configuration and management

The following diagram shows a sample configuration for a VDS in a Nutanix environment. Note how the Nutanix internal VSS is unmodified.

[Figure 1: Virtual Networking Option 1 with Virtual Distributed Switch. The VDS (dvSwitchNutanix) carries VMkernel ports for ESXi management (vmk0, VLAN 10), vMotion (vmk2, VLAN 11), and fault tolerance (vmk3, VLAN 12), plus virtual machine port groups (VLANs 15 and 16) and a Nutanix port group (VLAN 10), uplinked over two 10GbE 802.1q trunks. The onboard dual-port 1GbE NIC is unused, IPMI provides out-of-band management on the 1GbE LAN (VLAN 10), and the internal-only vSwitch-Nutanix (svm-iscsi-pg port group, vmk-svm-iscsi-pg VMkernel port, Nutanix CVM) has no physical adapters.]
Option 2 - Virtual Standard Switch (VSS) with VSS for Nutanix Internal Deployment

Option 2 is for customers not using VMware vSphere Enterprise Plus, or those who do not wish to use the VDS. Option 2 has a number of benefits, including:

1. Reduced cabling/switching requirements (no requirement for 1Gb ports)
2. A simple solution which only requires 802.1q configured on the physical network

The following diagram shows a sample configuration for a VSS in a Nutanix environment:

[Figure 2: Virtual Networking Option 2 with Virtual Standard Switch. The VSS (vSwitchNTNX) carries VMkernel ports for ESXi management (vmk0, VLAN 10), vMotion (vmk2, VLAN 11), and fault tolerance (vmk3, VLAN 12), plus virtual machine port groups (VLANs 15 and 16) and a Nutanix port group (VLAN 10), uplinked over two 10GbE 802.1q trunks. The onboard dual-port 1GbE NIC is unused, IPMI provides out-of-band management on the 1GbE LAN (VLAN 10), and the internal-only vSwitch-Nutanix (svm-iscsi-pg port group, vmk-svm-iscsi-pg VMkernel port, Nutanix CVM) has no physical adapters.]
Network Performance Optimization with Jumbo Frames

"Jumbo Frames" is the term given to an Ethernet frame that has a payload and MTU size greater than 1,500 bytes. When configuring Jumbo Frames, the MTU is typically set to 9,000 bytes, which is near the maximum size for an Ethernet frame. The idea behind Jumbo Frames is that, as the speed of networks increases, a 1,500-byte frame is unnecessarily small. With solutions such as IP-based storage (iSCSI/NFS) leveraging converged networks, larger frames improve throughput and reduce overhead on the network. By default, the network moves large numbers of 1,500-byte frames. By configuring the network to use larger 9,000-byte frames, it processes six times fewer packets, improving throughput by reducing overhead on the network devices.

Solutions such as VXLAN require a payload greater than 1,500 bytes. As a result, VMware recommends using Jumbo Frames for VXLAN to avoid packet fragmentation.

One of the most important factors when considering the use of Jumbo Frames is confirming that all network devices provide end-to-end support for Jumbo Frames, and that Jumbo Frames can be enabled globally on all switches. If this is the case, Jumbo Frames are recommended. It is also important to point out that Jumbo Frames do not have to be enabled for all traffic types.
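The "six times fewer packets" arithmetic above can be sketched as follows; the transfer size is illustrative, and per-packet header overhead is ignored for simplicity:

```python
# Illustrative arithmetic behind the "six times fewer packets" claim:
# moving the same data in 9,000-byte frames instead of 1,500-byte frames.
import math

def frames_needed(transfer_bytes: int, mtu: int) -> int:
    """Number of frames to carry `transfer_bytes` at a given MTU
    (header overhead ignored for simplicity)."""
    return math.ceil(transfer_bytes / mtu)

transfer = 90_000_000  # a 90 MB transfer (illustrative)
standard = frames_needed(transfer, 1500)
jumbo = frames_needed(transfer, 9000)
print(standard, jumbo, standard // jumbo)  # 60000 10000 6
```

Each frame the network does not have to carry is one fewer interrupt and header to process on every device along the path, which is where the throughput and CPU savings come from.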
Sample Jumbo Frame Configuration

The following diagram shows an MTU size of 9,216 bytes configured on the physical network, with two ESXi hosts configured with Jumbo Frames where appropriate. This illustrates the recommended configuration in a Nutanix environment. A combination of Jumbo and non-Jumbo Frames can be supported in the same deployment.

[Figure 3: Using Jumbo Frames in ESXi environments. The physical network is configured with an MTU of 9,216. On each ESXi host, the VMkernel ports for NFS/iSCSI, vMotion, and fault tolerance and the Nutanix (dv)portgroup use an MTU of 9,000, while the ESXi management VMkernel port and the virtual machine traffic (dv)portgroups use an MTU of 1,500.]

Recommended MTU Sizes for Traffic Types using Jumbo Frames

Note that performance is still excellent with the standard MTU of 1,500 bytes. The following table shows the various traffic types in a VMware vSphere / Nutanix environment and the Nutanix recommended Maximum Transmission Unit (MTU):

Jumbo Frames (MTU of 9,000 bytes) (2):
1) Local and remote Nutanix CVM traffic
2) vMotion / Multi-NIC vMotion
3) Fault tolerance
4) iSCSI
5) NFS
6) VXLAN (1)

Standard frames (MTU of 1,500 bytes):
1) ESXi management
2) Virtual machine traffic (3)

Notes:
1. The minimum MTU supported for VXLAN is 1,524 bytes, but >=1,600 is recommended.
2. Assumes traffic types are not routed.
3. Jumbo Frames can be beneficial in selected use cases, but are not always required.

In summary, the benefits of Jumbo Frames include:

1. A reduced number of interrupts for the switches to process
2. Lower CPU utilization on the switches and Nutanix CVMs
3. Increased performance for some workloads, such as Nutanix CVM and vMotion/FT traffic
4. Performance that is either the same as or better than without Jumbo Frames, as long as they are properly configured on the end-to-end network path

Jumbo Frames Recommendations

The following are general recommendations for the configuration of Jumbo Frames, but Nutanix suggests all customers carefully consider the requirements of their specific applications and the capabilities of their network switching equipment.

Recommendations for Jumbo Frames (when supported by the entire network stack):
1) Configure an MTU of 9,216 bytes on:
   a) All switches and interfaces
2) Configure an MTU of 9,000 bytes on:
   a) VMkernel ports for NFS / iSCSI
   b) Nutanix CVM internal and external interfaces
   c) VMkernel port(s) for vMotion / FT

Note: VMware also recommends using Jumbo Frames for IP-based storage, as discussed in Performance Best Practices for vSphere 5.5 on the VMware Knowledge Base site.

The Nutanix CVMs must be configured for Jumbo Frames on both the internal and external interfaces, and the converged network also needs to be configured for Jumbo Frames. Most importantly, the configuration needs to be validated to ensure Jumbo Frames are properly implemented end-to-end. If Jumbo Frames are not properly implemented on the end-to-end network path, packet fragmentation can occur, resulting in degraded performance and higher overhead on the network. The following ping commands can help you test that end-to-end communication can be achieved at the Jumbo Frame MTU size without fragmentation occurring:
Windows: ping -l 8972 -f <ip_address>
Linux: ping -s 8972 -M do <ip_address>
ESXi: vmkping -d -s 8972 <ip_address>

The 8,972-byte payload leaves room for the 28 bytes of IP and ICMP headers within the 9,000-byte MTU. Note: In ESXi 5.1 and later, you can specify which VMkernel port to use for outgoing ICMP traffic with the -I option.

Conclusion

The Nutanix Virtual Computing Platform is a highly resilient converged compute and storage platform designed for supporting virtual environments such as VMware vSphere. Understanding fundamental Nutanix and VMware networking configuration features and recommendations is key to designing a scalable and high-performing solution which meets customer requirements. Leveraging the best practices outlined in this document will enable Nutanix and VMware customers to get the most out of their storage, compute, virtualization, and networking investments.

Further Information

You can continue the conversation on the Nutanix Next online community (next.nutanix.com). For more information relating to VMware vSphere, or to review other Nutanix Tech Notes, please visit the Nutanix website.
VMware vsphere Reference Architecture for Small Medium Business Dell Virtualization Business Ready Configuration b Dell Virtualization Solutions Engineering www.dell.com/virtualization/businessready Feedback:
Preparation Guide v3.0 BETA How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment. Document version 1.0 Document release date 25 th September 2012 document revisions 1 Contents 1. Overview...
White Paper Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study 2012 Cisco and/or its affiliates. All rights reserved. This
TECHNOLOGY BRIEF Intel Xeon Processor E5-2600 Product Family Intel Ethernet Converged Network Adapter Family ware vsphere* 5.1 Simplified, High-Performance 10GbE Networks Based on a Single Virtual Distributed
SN0054584-00 A Reference Guide Efficient Data Center Virtualization with QLogic 10GbE Solutions from HP Reference Guide Efficient Data Center Virtualization with QLogic 10GbE Solutions from HP Information
RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS Server virtualization offers tremendous benefits for enterprise IT organizations server
Windows Server 2012 R2 Hyper-V: Designing for the Real World Steve Evans @scevans www.loudsteve.com Nick Hawkins @nhawkins www.nickahawkins.com Is Hyper-V for real? Microsoft Fan Boys Reality VMware Hyper-V
Course ID VMW200 VMware vsphere 5.1 Advanced Administration Course Description This powerful 5-day 10hr/day class is an intensive introduction to VMware vsphere 5.0 including VMware ESX 5.0 and vcenter.
This chapter describes the different networking topologies supported for this product, including the advantages and disadvantages of each. Select the one that best meets your needs and your network deployment.
VMware Virtual Networking Concepts I N F O R M A T I O N G U I D E Table of Contents Introduction... 3 ESX Server Networking Components... 3 How Virtual Ethernet Adapters Work... 4 How Virtual Switches
VMware vsphere 5.0 Boot Camp This powerful 5-day 10hr/day class is an intensive introduction to VMware vsphere 5.0 including VMware ESX 5.0 and vcenter. Assuming no prior virtualization experience, this
ARISTA DESIGN GUIDE NSX TM for vsphere with Arista CloudVision Version 1.0 August 2015 ARISTA DESIGN GUIDE NSX FOR VSPHERE WITH ARISTA CLOUDVISION Table of Contents 1 Executive Summary... 4 2 Extending
Where IT perceptions are reality Industry Brief Renaissance in VM Network Connectivity Featuring An approach to network design that starts with the server Document # INDUSTRY2015005 v4, July 2015 Copyright
ESX 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions
White Paper Broadcom Ethernet Network Controller Enhanced Virtualization Functionality Advancements in VMware virtualization technology coupled with the increasing processing capability of hardware platforms
This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/download/patents.html. VMware
WHITE PAPER Intel Ethernet 10 Gigabit Server Adapters vsphere* 4 Simplify vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters Today s Intel Ethernet 10 Gigabit Server Adapters can greatly
HP Virtual Connect Ethernet Cookbook: Single and Multi Enclosure Domain (Stacked) Scenarios Part number 603028-003 Third edition August 2010 Copyright 2009,2010 Hewlett-Packard Development Company, L.P.
Securely Architecting the Internal Cloud Rob Randell, CISSP Senior Security and Compliance Specialist VMware, Inc. Securely Building the Internal Cloud Virtualization is the Key How Virtualization Affects
To register or for more information call our office (208) 898-9036 or email email@example.com Vmware VSphere 6.0 Private Cloud Administration Class Duration 5 Days Introduction This fast paced,
Cisco Nexus 1000V Virtual Ethernet Module Software Installation Guide, Release 4.0(4)SV1(1) September 17, 2010 Part Number: This document describes how to install software for the Cisco Nexus 1000V Virtual
QoS Queuing on Cisco Nexus 1000V Class-Based Weighted Fair Queuing for Virtualized Data Centers and Cloud Environments Intended Audience Virtualization architects, network engineers or any administrator
ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions
VMware vsphere: Install, Configure, Manage [V5.0] Gain hands-on experience using VMware ESXi 5.0 and vcenter Server 5.0. In this hands-on, VMware -authorized course based on ESXi 5.0 and vcenter Server
Set Up a VM-Series Firewall on an ESXi Server Palo Alto Networks VM-Series Deployment Guide PAN-OS 6.1 Contact Information Corporate Headquarters: Palo Alto Networks 4401 Great America Parkway Santa Clara,
Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore
The Drobo family of iscsi storage arrays allows organizations to effectively leverage the capabilities of a VMware infrastructure, including vmotion, Storage vmotion, Distributed Resource Scheduling (DRS),
QNAP in vsphere Environment HOW TO USE QNAP NAS AS A VMWARE DATASTORE VIA NFS Copyright 2009. QNAP Systems, Inc. All Rights Reserved. V1.8 How to use QNAP NAS as a VMware Datastore via NFS QNAP provides
Best Practices for High Performance NFS Storage with VMware Executive Summary The proliferation of large-scale, centralized pools of processing, networking, and storage resources is driving a virtualization
Building robust private cloud services infrastructures By Brian Gautreau and Gong Wang Private clouds optimize utilization and management of IT resources to heighten availability. Microsoft Private Cloud
Across the globe, many companies choose a Cisco switching architecture to service their physical and virtual networks for enterprise and data center operations. When implementing a large-scale Cisco network,
Nutanix Tech Note Data Protection and Disaster Recovery Nutanix Virtual Computing Platform is engineered from the ground-up to provide enterprise-grade availability for critical virtual machines and data.
Cisco Nexus 1000V Virtual Switch Product Overview The Cisco Nexus 1000V virtual machine access switch is an intelligent software switch implementation for VMware ESX environments. Running inside of the
Design and Sizing Examples: Microsoft Exchange Solutions on VMware Page 1 of 19 Contents 1. Introduction... 3 1.1. Overview... 3 1.2. Benefits of Running Exchange Server 2007 on VMware Infrastructure 3...
VMware vsphere 4.1 with ESXi and vcenter This powerful 5-day class is an intense introduction to virtualization using VMware s vsphere 4.1 including VMware ESX 4.1 and vcenter. Assuming no prior virtualization
HP P4000 LeftHand SAN Solutions with VMware vsphere Best Practices Technical whitepaper Table of contents Executive summary...2 New Feature Challenge...3 Initial iscsi setup of vsphere 5...4 Networking
IP SAN Best Practices A Dell Technical White Paper PowerVault MD3200i Storage Arrays THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.
The Official VCP5 Certification Guide Bill Ferguson vmware* press Upper Saddle River, NJ Boston Indianapolis San Francisco New York Toronto Montreal London Munich Paris Madrid Cape Town Sydney Tokyo Singapore
VMware Host Profiles: T E C H N I C A L W H I T E P A P E R Table of Contents Introduction................................................................................ 3 Host Configuration Management..............................................................
Configuration s vsphere 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions
CHAPTER 5 This chapter describes how a VSM and VEM can run on the same host. This chapter includes the following topics: Information About a VSM and VEM on the Same Host, page 5-1 Guidelines and Limitations,
Hosted Service Strategy Guide Prepared by Jason Gaudreau, Senior Technical Account Manager VMware Professional Services firstname.lastname@example.org Revision History Date Author Comments Reviewers 11/20/2014 Jason
Technical Note The vfabric Data Director worksheets contained in this technical note are intended to help you plan your Data Director deployment. The worksheets include the following: vsphere Deployment
ESX Server 3 Configuration Guide Update 2 and later for ESX Server 3.5 and VirtualCenter 2.5 This document supports the version of each product listed and supports all subsequent versions until the document
EMC VPLEX FAMILY Continuous Availability and data Mobility Within and Across Data Centers DELIVERING CONTINUOUS AVAILABILITY AND DATA MOBILITY FOR MISSION CRITICAL APPLICATIONS Storage infrastructure is
Best Practice of Server Virtualization Using Qsan SAN Storage System F300Q / F400Q / F600Q Series P300Q / P400Q / P500Q / P600Q Series Version 1.0 July 2011 Copyright Copyright@2011, Qsan Technology, Inc.
Expert Reference Series of White Papers vcloud Director 5.1 Networking Concepts 1-800-COURSES www.globalknowledge.com vcloud Director 5.1 Networking Concepts Rebecca Fitzhugh, VMware Certified Instructor
VMware vsphere: Fast Track [V5.0] Experience the ultimate in vsphere 5 skills-building and VCP exam-preparation training. In this intensive, extended-hours course, you will focus on installing, configuring,
VX INSTALLATION 2 1. I need to adjust the disk allocated to the Silver Peak virtual appliance from its default. How should I do it? 2. After installation, how do I know if my hard disks meet Silver Peak