Nutanix Tech Note

VMware vSphere Networking on Nutanix

The Nutanix Virtual Computing Platform is engineered from the ground up for virtualization and cloud environments. This Tech Note describes vSphere networking concepts and how they can be used in a Nutanix environment to ensure optimal performance and availability. It also covers the recommended configuration settings for the different vSphere networking options.
Table of Contents

Executive Summary
VMware vSphere Networking Overview
  Network Discovery Protocols
  Cisco Nexus 1000V / VMware NSX
Network I/O Control (NIOC)
  Management Traffic
  vMotion Traffic
  Fault Tolerance Traffic
  Virtual Machine Traffic
  NFS Traffic
  Nutanix CVM Traffic
  iSCSI Traffic
  vSphere Replication Traffic
  Other Defaults
Load Balancing, Failover, and NIC Teaming
  NIC Team Load Balancing
  Recommendation for VSS - Route Based on Originating Virtual Port
  Recommendation for VDS - Route Based on Physical NIC Load (LBT)
  Network Failover Detection
  Notify Switches
  Failover Order
  Failback
  Summary
Security
Virtual Networking Configuration Examples
  Option 1 - Virtual Distributed Switch (VDS) with VSS for Nutanix Internal Deployment
  Option 2 - Virtual Standard Switch (VSS) with VSS for Nutanix Internal Deployment
Network Performance Optimization with Jumbo Frames
  Sample Jumbo Frame Configuration
  Recommended MTU Sizes for Traffic Types using Jumbo Frames
  Jumbo Frames Recommendations
Conclusion
Further Information
Executive Summary

The Nutanix Virtual Computing Platform is a highly resilient converged compute and storage platform, designed for supporting virtual environments such as VMware vSphere. The Nutanix architecture runs a storage controller in a VM, called the Nutanix Controller VM (CVM). This VM runs on every Nutanix server node in a Nutanix cluster to form a highly distributed, shared-nothing converged infrastructure. All CVMs actively work together to aggregate storage resources into a single global pool that can be leveraged by user virtual machines running on the Nutanix server nodes. The storage resources are managed by the Nutanix Distributed File System (NDFS) to ensure that data and system integrity are preserved in the event of node, disk, application, or hypervisor software failure. NDFS also delivers data protection and high availability functionality that keeps critical data and VMs protected.

Networking and network design are critical parts of any distributed system. A resilient network design is important to ensure connectivity between Nutanix CVMs, for virtual machine traffic, and for vSphere management functions such as ESXi management and vMotion. The current generation of Nutanix Virtual Computing Systems comes standard with redundant 10GbE and 1GbE NICs, which can be used by vSphere for resilient virtual networking.

This Tech Note is intended to help the reader understand core networking concepts and configuration best practices for a Nutanix cluster running with VMware vSphere. Implementing the following best practices will enable Nutanix customers to get the most out of their storage, networking, and virtualization investments.

VMware vSphere Networking Overview

VMware vSphere supports two main types of virtual switches, the Virtual Standard Switch (VSS) and the Virtual Distributed Switch (VDS). The main differences between these virtual switch types are their functionality and how they are created and managed.
The VSS is available in all versions of VMware vSphere and is the default method to connect virtual machines on the same host to each other and to the external (or physical) network. The VDS is available only with vSphere Enterprise Plus and has more advanced features and functionality. In general, the VSS is simple to configure and maintain, but it lacks support for Network I/O Control (NIOC), has no automated network load balancing functionality, and it can't be centrally managed.
Bundled in vSphere Enterprise Plus, the VDS supports NIOC with Network Resource Pools and can be centrally managed, allowing the VDS configuration to be applied to remote ESXi hosts easily. It also supports network discovery protocols, including Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP). The following sections include a discussion of the various features of the VSS and VDS, along with configuration recommendations on the Nutanix platform.

Network Discovery Protocols

Network discovery protocols give vSphere administrators visibility into connectivity between the virtual and physical network. This visibility makes troubleshooting easier in the event of issues such as cabling problems or MTU / packet fragmentation. VMware vSphere offers three configuration settings for switch discovery: Listen, Advertise, or Both. These settings determine the information sent and/or received by the discovery protocol. Nutanix recommends using the "Both" setting, which ensures information is collected and displayed from the physical switch, and also ensures that ESXi sends information about the virtual distributed switch (VDS) to the physical switch.

The following information is visible in the vSphere client for advertising switches:
1. The physical switch interface the dvUplink is connected to
2. The MTU size (i.e., whether Jumbo Frames are enabled or not); some switches will report a maximum MTU of 9,216
3. The switch management IP address(es)
4. The switch name, description, software version, and capabilities

The following are general recommendations for discovery protocol configuration, but Nutanix suggests all customers carefully consider the advantages and disadvantages of discovery protocols for their specific security and discovery requirements.
Recommendations for Discovery Protocol Configuration

Type: Depending on your switching infrastructure, use either:
1) CDP - for Cisco-based environments
2) LLDP - for non-Cisco environments

Operation: Both - generally acceptable in most environments and provides maximum operational benefit. This allows both vSphere and the physical network to openly advertise information.
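As a sketch of how the "Both" operation might be applied to a standard switch from the ESXi shell (the VSS supports CDP only; LLDP and VDS discovery settings are configured centrally in the vSphere client; the switch name vSwitch0 is an example):

```shell
# Enable CDP in "both" mode (listen and advertise) on a standard switch
esxcli network vswitch standard set --vswitch-name=vSwitch0 --cdp-status=both

# Verify the current CDP status for the switch
esxcli network vswitch standard list --vswitch-name=vSwitch0
```

Verify syntax against your ESXi release before use; on the VDS, set the discovery protocol and operation under the distributed switch's advanced properties instead.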
Cisco Nexus 1000V / VMware NSX

The Cisco Nexus 1000V and VMware NSX solutions can be used with Nutanix solutions; however, they are out of scope for this document. You can engage on the topic of SDN with Nutanix on our community site.

Network I/O Control

Network I/O Control (NIOC) is a feature available since vSphere 4.1 with the Virtual Distributed Switch (VDS). NIOC uses network resource pools to determine the bandwidth that different network traffic types receive. When NIOC is enabled, distributed switch traffic is divided into custom and predefined network resource pools, including fault tolerance traffic, NFS traffic, iSCSI traffic, vMotion traffic, ESXi management traffic, vSphere Replication (VR) traffic, and VM traffic.

The physical adapter shares assigned to a network resource pool determine what share of the total available bandwidth will be guaranteed to the traffic associated with that network resource pool in the event of contention. It is important to understand that NIOC has no impact on network traffic unless there is contention; during times when the network is less than 100% utilized, NIOC provides no advantage or disadvantage. The bandwidth made available to a network resource pool is determined by the shares assigned to that pool, compared to the other network resource pools. For further information about shares, review VMware's best practice guidance on Network I/O Control.

Limits can also be applied to selected traffic types if required. Nutanix recommends not setting limits, as they may unnecessarily impact performance for given workloads even when there is available bandwidth. Using shares ensures burst workloads, such as vMotion, can complete their work (in this example, the migration of a VM to a different ESXi host) as fast as possible where bandwidth is available, while NIOC shares prevent other workloads from being significantly impacted in the event available bandwidth is limited.
The following table shows recommended values for the Network Resource Pool shares:

Network Resource Pool                                    Share Value   Physical Adapter Shares   Host Limit
Management Traffic                                       25            Low                       Unlimited
vMotion Traffic                                          50            Normal                    Unlimited
Fault Tolerance (FT) Traffic                             50            Normal                    Unlimited
Virtual Machine Traffic                                  100           High                      Unlimited
iSCSI Traffic                                            100           High                      Unlimited
NFS Traffic                                              100           High                      Unlimited
Nutanix Traffic (1)                                      100           High                      Unlimited
vSphere Replication (VR) Traffic (2)                     50            Normal                    Unlimited
Other pools, including vSphere/Virtual SAN Traffic (2)   50            Normal                    Unlimited

Notes:
1. This is a custom Network Resource Pool, which needs to be created manually.
2. These pools are generally not applicable or relevant in Nutanix deployments.

Management Traffic

Management traffic requires minimal bandwidth. With a share value of 25 over two 10Gb interfaces, this configuration will ensure a minimum of approximately 1.5Gbps of bandwidth. This is more than sufficient for ESXi management traffic and above the minimum requirement of 1Gbps.

vMotion Traffic

vMotion is a burst-type workload, which uses no bandwidth until DRS or a vSphere administrator starts a vMotion (or puts a host into maintenance mode). As such, it is unlikely to have any significant ongoing impact on network traffic. Nutanix recommends a share value of 50 over two 10Gb interfaces. This will guarantee a minimum of approximately 3Gbps, which is sufficient for vMotion activity to complete in a timely manner. This also ensures vMotion has well above the minimum bandwidth requirement of 1Gbps.

Fault Tolerance Traffic

Fault tolerance (FT) traffic depends on how many FT VMs run per host (current maximum of 4 per host) and will generally be a sustained workload, as opposed to a burst workload such as vMotion. This is because FT needs to keep the primary and
secondary VMs in "lockstep". Generally, virtual machines using FT are critical, and you need to ensure FT traffic (which is also sensitive to latency) is not impacted during periods of contention. Nutanix recommends using a share value of 50 over two shared 10Gb interfaces. Based on this configuration, FT will be guaranteed a minimum of approximately 3Gbps, which is well above VMware's recommended minimum of 1Gbps.

Virtual Machine Traffic

VM traffic is why datacenters exist in the first place, so this traffic is always important, if not critical. If a VM's network connectivity slows, it can quickly impact end users and reduce productivity. As such, it is important to ensure this traffic has a significant share of the available bandwidth during periods of contention. Therefore, Nutanix recommends a share value of 100 over two 10Gb interfaces. Based on this configuration, virtual machine traffic will be guaranteed a minimum of approximately 6Gbps. For most environments, this bandwidth will be more than what is required and ensures a good amount of headroom in case of unexpected burst activity.

NFS Traffic

NFS traffic is essential to the Nutanix Distributed File System and to virtual machine performance, so this traffic is always critical. If NFS performance is degraded, it will have an immediate impact on Nutanix CVM and VM performance. As such, it is important to ensure this traffic has a significant share of the available bandwidth during periods of contention.

In normal operation, NFS traffic is serviced locally, so NFS traffic will not touch the physical network card unless the Nutanix Controller VM is offline due to maintenance or unavailability. Under normal circumstances, there will be no NFS traffic going across the physical NICs, and the NIOC share value will have no impact on other traffic. For this reason, it is excluded from calculations.
As a safety measure, a share value of 100 is assigned to NFS traffic to protect it in the event of network contention during CVM maintenance or unavailability. This guarantees a minimum of approximately 6Gbps of bandwidth.

Nutanix CVM Traffic

For the Nutanix Distributed File System to function, each CVM requires connectivity to the other CVMs in the cluster. This connectivity is used for tasks such as write I/O synchronous replication and Nutanix cluster management. Under normal circumstances, there will be minimal to no read I/O traffic going across the physical NICs. This is because the Nutanix NDFS architecture was designed with the key concept of VM-data locality. However,
write I/O will always utilize the physical network due to synchronous replication for data fault tolerance and availability. For optimal NDFS performance, each CVM will be guaranteed a minimum of approximately 6Gbps of bandwidth. In most environments, this bandwidth will be more than what is required and ensures a good amount of headroom in case of unexpected burst activity.

iSCSI Traffic

iSCSI is not a common protocol within a Nutanix environment. However, if iSCSI is used within the environment, this traffic is also given a share value of 100.

Note: NIOC does not cover in-guest iSCSI traffic regulation by default. In the event in-guest iSCSI is used, it is recommended to create a dvPortgroup for in-guest iSCSI traffic, assign it to a custom network resource pool called "In Guest iSCSI", and give it a share value of 100 (High).

vSphere Replication Traffic

vSphere Replication (VR) traffic may be critical to your environment if you choose to use VR with or without VMware Site Recovery Manager (SRM). When using SRM, it is highly recommended to leverage the Nutanix Storage Replication Adapter (SRA), as this is more efficient than using vSphere-based replication alone. If using VR without SRM, the default share value of 50 (Normal) should be suitable for most environments. This guarantees approximately 3Gbps of network bandwidth.

Other Defaults

Other default pools are generally not relevant and can be left at their default share value of 50.
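The guaranteed minimums quoted in the sections above can be sanity-checked with a quick calculation. The sketch below assumes two active 10GbE uplinks (20Gbps aggregate) and counts only the pools expected to traverse the physical NICs under contention (Management 25, vMotion 50, FT 50, VM 100, Nutanix CVM 100; NFS is excluded because it is serviced locally in normal operation):

```shell
# Approximate guaranteed bandwidth per NIOC pool under full contention:
#   share / total_active_shares * aggregate link bandwidth
link_gbps=20        # two active 10GbE uplinks
total_shares=325    # 25 + 50 + 50 + 100 + 100

for pool in management:25 vmotion:50 fault_tolerance:50 vm_traffic:100 nutanix_cvm:100; do
  name=${pool%:*}
  share=${pool#*:}
  awk -v n="$name" -v s="$share" -v t="$total_shares" -v b="$link_gbps" \
    'BEGIN { printf "%-16s %4.1f Gbps\n", n, s / t * b }'
done
```

This reproduces the approximate figures used throughout this section: roughly 1.5Gbps for management, 3Gbps for vMotion and FT, and 6Gbps for VM and Nutanix CVM traffic.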
Load Balancing, Failover, and NIC Teaming

vSphere provides a number of load balancing, failover, and NIC teaming options. Each option should be carefully understood and considered for a vSphere networking deployment.

NIC Team Load Balancing

The available options for NIC team load balancing include:
1. Route based on originating virtual port
2. Route based on IP hash
3. Route based on source MAC hash
4. Route based on physical NIC load, called load-based teaming or LBT (VDS only)
5. Explicit failover order

Recommendation for VSS - Route Based on Originating Virtual Port

The route based on originating virtual port option is the default load balancing policy and has no requirement for advanced switching configuration such as LACP, Cisco EtherChannel, or HP teaming. These attributes make it simple to implement, maintain, and troubleshoot. Route based on originating virtual port requires 802.1q VLAN tagging for secure separation of traffic types. The main disadvantage is the lack of load balancing based on network load, which results in traffic from a single VM always being sent to the same physical NIC unless there is a failover event caused by a NIC or upstream link failure. This is less of an issue with the high-throughput 10GbE network interfaces of the Nutanix Virtual Computing Platform.

Recommendation for VDS - Route Based on Physical NIC Load (LBT)

For environments using the VDS, Nutanix recommends using route based on physical NIC load. Like the VSS option above, LBT has no requirement for advanced switching configuration such as LACP, Cisco EtherChannel, or HP teaming. This option provides fully automated load balancing, which takes effect when the utilization of one or more NICs reaches and sustains 75% for a period of 30 seconds or more, based on egress and ingress traffic. LBT requires 802.1q VLAN tagging for secure separation of traffic types. As a result, LBT is a very simple and effective solution to implement and maintain, and works very well in Nutanix deployments.
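On a standard switch, the route based on originating virtual port recommendation corresponds to the "portid" load-balancing policy in esxcli. A sketch, assuming a VSS named vSwitch0 (LBT is available only on the VDS and is configured through the vSphere client):

```shell
# Set the teaming policy on a standard switch to route based on
# originating virtual port ("portid", the default)
esxcli network vswitch standard policy failover set \
  --vswitch-name=vSwitch0 --load-balancing=portid

# Review the resulting load-balancing and failover policy
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```

Verify the option names against your ESXi release; the other accepted values for this flag are iphash, mac, and explicit.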
Network Failover Detection

VMware ESXi uses one of two methods of network failover detection: beacon probing and link status. Beacon probing works by sending out and listening for beacon probes, which are made up of broadcast frames. Beacon probing depends on having at least three network connections. As a result of this requirement, it is not recommended for the current generation of Nutanix Virtual Computing Platforms, which currently have two network ports.

Link status depends on the link state reported by the physical NIC. Link status can detect failures such as a cable disconnection and/or physical switch power failures, but it cannot detect configuration errors or upstream failures. To avoid the limitations of link status relating to upstream failures, enable "link state tracking" on physical switches that support this option. This enables the switch to pass upstream link state information back to ESXi, which allows link status to trigger a link down on ESXi where appropriate.

Notify Switches

The purpose of the notify switches policy setting is to enable or disable communication by ESXi with the physical switch in the event of a failover. If configured as "Yes", ESXi will send a notification to the physical switch to update its lookup tables on a failover event. Nutanix recommends enabling this option to ensure that failover occurs in a timely manner with minimal interruption to network connectivity.

Failover Order

Using failover order allows the vSphere administrator to specify the order in which NICs fail over in the event of an issue. This is configured by assigning each physical NIC to one of three groups: active adapters, standby adapters, or unused adapters. In the event all active adapters lose connectivity, the highest-priority standby adapter will be used. Failover order is only required in a Nutanix environment when using Multi-NIC vMotion.
When configuring Multi-NIC vMotion, the first dvPortgroup used for vMotion must be configured with one dvUplink active and the other standby, with the reverse configured for the second dvPortgroup used for vMotion. For more information, see the VMware KB article Multiple-NIC vMotion in vSphere 5.
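On a VDS, the active/standby order is set per dvPortgroup in the vSphere client. The equivalent pattern expressed against standard-switch portgroups with esxcli looks like the following sketch (the portgroup names vMotion-01/vMotion-02 and uplinks vmnic2/vmnic3 are hypothetical):

```shell
# First vMotion portgroup: vmnic2 active, vmnic3 standby
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=vMotion-01 --active-uplinks=vmnic2 --standby-uplinks=vmnic3

# Second vMotion portgroup: the reverse order
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=vMotion-02 --active-uplinks=vmnic3 --standby-uplinks=vmnic2
```

The key point is the mirrored ordering: each vMotion VMkernel port has a different active uplink, so both physical NICs carry vMotion traffic simultaneously while retaining failover.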
Failback

For customers not using the VDS and LBT, the failback feature can help rebalance network traffic across the original NIC after it recovers, which can result in improved network performance. The only significant disadvantage of setting failback to Yes is in the unlikely event of network instability or network route flapping, since having network traffic fail back to the original NIC may result in intermittent or degraded network connectivity. Nutanix recommends setting failback to Yes when using the VSS and No when using the VDS.

Summary

The following table summarizes Nutanix recommendations for load balancing, failover, and NIC teaming:

Virtual Distributed Switch (VDS)
  Load Balancing: Route based on physical NIC load (LBT)
  Network Failover Detection: Link Status Only (ensure "Link State Tracking" or equivalent is enabled on physical switches)
  Notify Switches: Yes
  Failback: No

Virtual Standard Switch (VSS)
  Load Balancing: Route based on originating virtual port
  Network Failover Detection: Link Status Only (ensure "Link State Tracking" or equivalent is enabled on physical switches)
  Notify Switches: Yes
  Failback: Yes

Security

When configuring a VSS or VDS, there are three configurable options under security: promiscuous mode, MAC address changes, and forged transmits. Each of these can be set to "Accept" or "Reject". In general, the most secure and appropriate setting for each of the three options is "Reject". There are, however, several use cases which may require you to set a specific option to Accept. Examples of use cases where you may need to configure "Accept" on forged transmits and MAC address changes are:
1. Microsoft load balancing in unicast mode
2. iSCSI deployments on select storage types
For more information, see Network Load Balancing Unicast Mode Configuration in the VMware Knowledge Base.

The following are general recommendations for virtual network security settings, but Nutanix suggests all customers carefully consider the requirements of their specific applications.

Recommendation for Virtual Networking Security
  Promiscuous Mode: Reject
  MAC Address Changes: Reject
  Forged Transmits: Reject

Virtual Networking Configuration Examples

The following two virtual networking configuration examples cover the Nutanix recommended configurations for both VSS and VDS solutions. Each configuration discusses the advantages, disadvantages, and common use cases.

All Nutanix deployments use an internal-only VSS for the NFS communication between the ESXi host and the Nutanix CVM. This VSS remains unmodified regardless of the virtual network configuration for ESXi management, virtual machine traffic, vMotion, and so on. Nutanix recommends that no changes be made to this internal-only VSS.

In both of the following options, Nutanix recommends all vmnics be set as "Active" on the portgroup and/or dvPortgroup unless otherwise specified.
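As a sketch, the three "Reject" security settings recommended above can be applied to a standard switch from the ESXi shell (the switch name vSwitch0 is an example; on a VDS the same settings are configured per dvPortgroup in the vSphere client):

```shell
# Reject promiscuous mode, MAC address changes, and forged transmits
esxcli network vswitch standard policy security set \
  --vswitch-name=vSwitch0 \
  --allow-promiscuous=false \
  --allow-mac-change=false \
  --allow-forged-transmits=false

# Verify the security policy now in effect
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```

Individual portgroups can override the switch-level policy, so check portgroup settings as well when auditing security posture.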
Option 1 - Virtual Distributed Switch (VDS) with VSS for Nutanix Internal Deployment

Option 1 is recommended for customers using VMware vSphere Enterprise Plus who would like to use Virtual Distributed Switches. Option 1 has a number of benefits, including:
1. The ability to leverage advanced networking features, such as NIOC and LBT (route based on physical NIC load)
2. Reduced cabling/switching requirements
3. The ability for all traffic types to "burst" where required, up to 10Gbps
4. A simple solution which only requires 802.1q to be configured on the physical network
5. Central configuration and management

The following diagram shows a sample configuration for a VDS in a Nutanix environment. Note how the Nutanix internal VSS is unmodified.

[Figure 1: Virtual Networking Option 1 with Virtual Distributed Switch - a VDS carrying the ESXi management (VLAN 10), vMotion (VLAN 11), Fault Tolerance (VLAN 12), and virtual machine (VLANs 15 and 16) portgroups over two 10Gb 802.1q trunk uplinks; the onboard dual-port 1Gb NIC is unused, IPMI uses the 1Gb out-of-band management network, and the unmodified internal-only vSwitch-Nutanix (no physical adapters) hosts the CVM portgroup and the vmk-svm-iscsi-pg VMkernel port]
Option 2 - Virtual Standard Switch (VSS) with VSS for Nutanix Internal Deployment

Option 2 is for customers not using VMware vSphere Enterprise Plus, or those who do not wish to use the VDS. Option 2 has a number of benefits, including:
1. Reduced cabling/switching requirements (no requirement for 1Gb ports)
2. A simple solution which only requires 802.1q to be configured on the physical network

The following diagram shows a sample configuration for a VSS in a Nutanix environment:

[Figure 2: Virtual Networking Option 2 with Virtual Standard Switch - a VSS carrying the ESXi management (VLAN 10), vMotion (VLAN 11), Fault Tolerance (VLAN 12), and virtual machine (VLANs 15 and 16) portgroups over two 10Gb 802.1q trunk uplinks; the onboard dual-port 1Gb NIC is unused, IPMI uses the 1Gb out-of-band management network, and the unmodified internal-only vSwitch-Nutanix (no physical adapters) hosts the CVM portgroup and the vmk-svm-iscsi-pg VMkernel port]
Network Performance Optimization with Jumbo Frames

Jumbo Frames is the term given to an Ethernet frame that has a data payload and MTU size greater than 1,500 bytes. When configuring Jumbo Frames, the MTU size is typically set to 9,000 bytes, which is near the maximum size for an Ethernet frame. The idea behind Jumbo Frames is that as the speed of networks increases, a 1,500-byte frame becomes unnecessarily small. With solutions such as IP-based storage (iSCSI/NFS) leveraging converged networks, larger frames assist with both improving throughput and reducing overhead on the network. By default, the network will be moving large numbers of 1,500-byte frames. By configuring the network to use larger 9,000-byte frames, it will process six times fewer packets, improving throughput by reducing the overhead on the network devices.

Solutions such as VXLAN require a payload greater than 1,500 bytes. As a result, VMware recommends using Jumbo Frames for VXLAN to avoid packet fragmentation.

One of the most important factors when considering the use of Jumbo Frames is confirming that all network devices can provide end-to-end support for Jumbo Frames, and that Jumbo Frames can be enabled globally on all switches. If this is the case, Jumbo Frames are recommended. It is also important to point out that Jumbo Frames do not have to be enabled for all traffic types.
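Once end-to-end support is confirmed, Jumbo Frames are enabled on ESXi per virtual switch and per VMkernel interface. A sketch using esxcli, assuming a standard switch named vSwitch0 and a VMkernel port vmk2 used for vMotion (on a VDS, the MTU is set on the distributed switch object in the vSphere client):

```shell
# Raise the MTU on the standard switch itself
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

# Raise the MTU on a VMkernel interface that should carry jumbo frames
esxcli network ip interface set --interface-name=vmk2 --mtu=9000

# Validate with an unfragmented jumbo ping (8972 + 28 bytes of headers = 9000)
vmkping -d -s 8972 <destination_ip>
```

Because MTU is set per VMkernel interface, traffic types such as ESXi management can remain at 1,500 bytes while storage and vMotion interfaces use 9,000.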
Sample Jumbo Frame Configuration

The following diagram shows an MTU size of 9,216 bytes configured on the physical network, with two ESXi hosts configured with Jumbo Frames where appropriate. This illustrates the recommended configuration in a Nutanix environment. A combination of Jumbo and non-Jumbo Frames can be supported in the same deployment.

[Figure 3: Using Jumbo Frames in ESXi environments - the physical network configured with an MTU of 9,216, and on each ESXi host an MTU of 9,000 on the Nutanix and NFS/iSCSI (dv)portgroups, VMkernel ports for vMotion and Fault Tolerance, and an MTU of 1,500 on the ESXi management VMkernel port and virtual machine traffic (dv)portgroups]

Recommended MTU Sizes for Traffic Types using Jumbo Frames

Note that performance is still excellent with the standard MTU of 1,500 bytes. The following table shows the various traffic types in a VMware vSphere / Nutanix environment and the Nutanix recommended Maximum Transmission Unit (MTU) for each.

Jumbo Frames (MTU of 9,000 bytes) (2):
1) Local and remote Nutanix CVM traffic
2) vMotion / Multi-NIC vMotion
3) Fault Tolerance
4) iSCSI
5) NFS
6) VXLAN (1)

Standard Frames (MTU of 1,500 bytes):
1) ESXi management
2) Virtual machine traffic (3)

Notes:
1. The minimum MTU supported is 1,524 bytes, but >=1,600 is recommended.
2. Assumes traffic types are not routed.
3. Jumbo Frames can be beneficial in selected use cases, but are not always required.

In summary, the benefits of Jumbo Frames include:
1. A reduced number of interrupts for the switches to process
2. Lower CPU utilization on the switches and Nutanix CVMs
3. Increased performance for some workloads, such as Nutanix CVM and vMotion/FT traffic
4. Performance that is either the same or better than without Jumbo Frames, as long as they are properly configured on the end-to-end network path

Jumbo Frames Recommendations

The following are general recommendations for the configuration of Jumbo Frames, but Nutanix suggests all customers carefully consider the requirements of their specific applications and the capabilities of their network switching equipment.

Recommendations for Jumbo Frames (when supported by the entire network stack):
1) Configure an MTU of 9,216 bytes on:
   a) All switches and interfaces
2) Configure an MTU of 9,000 bytes on:
   a) VMkernel ports for NFS / iSCSI
   b) Nutanix CVM internal and external interfaces
   c) VMkernel port(s) for vMotion / FT

Note: VMware also recommends using Jumbo Frames for IP-based storage, as discussed in Performance Best Practices for vSphere 5.5 on the VMware website.

The Nutanix CVMs must be configured for Jumbo Frames on both the internal and external interfaces. The converged network also needs to be configured for Jumbo Frames. Most importantly, the configuration needs to be validated to ensure Jumbo Frames are properly implemented end-to-end. If Jumbo Frames are not properly implemented on the end-to-end network, packet fragmentation can occur, resulting in degraded performance and higher overhead on the network.

The following ping commands can help you to test that end-to-end communication can be achieved at Jumbo Frame MTU size without fragmentation occurring:
Windows: ping -l 8972 -f <ip_address>
Linux: ping -s 8972 -M do <ip_address>
ESXi: vmkping -d -s 8972 <ip_address>

(A payload of 8,972 bytes plus 28 bytes of IP and ICMP headers equals the 9,000-byte MTU.)

Note: In ESXi 5.1 and later, you can specify which VMkernel port to use for outgoing ICMP traffic with the -I option.

Conclusion

The Nutanix Virtual Computing Platform is a highly resilient converged compute and storage platform designed for supporting virtual environments such as VMware vSphere. Understanding fundamental Nutanix and VMware networking configuration features and recommendations is key to designing a scalable and high-performing solution which meets customer requirements. Leveraging the best practices outlined in this document will enable Nutanix and VMware customers to get the most out of their storage, compute, virtualization, and networking investments.

Further Information

You can continue the conversation on the Nutanix Next online community (next.nutanix.com). For more information relating to VMware vSphere, or to review other Nutanix Tech Notes, please visit the Nutanix website.