Working with ESX(i) Log Files


Working with ESX(i) log files is important when troubleshooting issues within the virtual environment. You can view and search log files in ESX(i) and in vCenter Server using a few different methods:

- The vSphere Client
- The direct console user interface (DCUI)
- A web browser
- A syslog server or the vMA appliance
- An SSH connection to the host
- PowerCLI, using the Get-Log cmdlet

When using SSH, use the following commands to view and search the log files:

- Use more to page through a log file one page at a time
- Use tail to view the end of a log file
- Use grep to search
- Use a pipe (|) to link commands together, for example piping into grep to search through files
- Use cat to concatenate a file and pipe it into grep to search
- Use find . -print | grep <filename> to search for a file

Example: cat hostd.log | grep <search term> | more

vCenter log files

vCenter log files are named in the vpxd-xx.log format, where xx is a numerical value that increments as each log file reaches 5MB in size. The log file numbers rotate when the vpxd service is started or when the log reaches 5MB. The log files are located in C:\ProgramData\VMware\VMware VirtualCenter\Logs.
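If you prefer the PowerCLI route mentioned above, here is a minimal sketch of pulling and searching the current vpxd log over a vCenter connection (the server name is a made-up example):

Connect-VIServer -Server vcenter.example.local
Get-LogType    # list the log keys available on this connection
$vpxd = (Get-LogType | Where-Object { $_.Key -like "vpxd*" } | Select-Object -First 1).Key
(Get-Log -Key $vpxd).Entries | Select-String "error"    # grep-style search of the log entries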

Other log files include vpxd-alert-x.log and vpxd-profile-x.log.

ESX logs

/var/log/vmkernel - VMkernel messages
/var/log/vmkwarning
/var/log/vmksummary
/var/log/vmware/hostd.log
/var/log/messages - service console
/var/log/vmware/vpx/vpxa.log - vSphere Client agent
/var/log/aam/vmware_xxx.log - HA
/var/log/vmkiscsid.log - iSCSI
/var/log/boot-logs/sysboot.log - boot log

ESXi logs

/var/log/messages - combination of the VMkernel and vmkwarning logs
/var/log/vmware/hostd.log - host management service
/var/log/vmware/vpx/vpxa.log - vSphere Client agent
/var/log/sysboot.log - boot log
/var/log/vmware/aam/vmware_xxx.log - HA

Log rotation

Within ESX(i), rotation for most log files is controlled by /etc/logrotate.conf. To view the available options, run man logrotate. On both ESX and ESXi, hostd.log rotation is controlled with /etc/vmware/hostd/config.xml, and vpxa.log rotation is controlled with /etc/opt/vmware/vpxa/vpxa.cfg. Should you wish to edit the rotation control files you can use nano on ESX, or vi on both ESX and ESXi. I will focus on vi as I am more familiar with it, and with the arrival of vSphere 5 there will no longer be ESX, and as such no native support for nano.
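For reference, entries in /etc/logrotate.conf take this general shape (an illustrative stanza only; the file name and values here are examples, not the defaults of any particular ESX(i) build):

/var/log/vmksummary {
    rotate 36      # keep 36 rotated copies
    size 1024k     # rotate once the file exceeds 1MB
    compress       # gzip the rotated copies
}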

vi commands

a - append
i - insert
O/o - open a new line (O opens a line above, o opens a line below)
r - replace
: - search or save options
/ - search
wq - write and quit
x - delete individual characters
dd - delete a line
$ - go to the end of the line
ESC - break out of the current mode

Log bundles

Log bundles can be accessed through the VMware folder on the Start menu by clicking Generate vCenter Server log bundle. This runs, via cscript, the vc-support Windows script file located at C:\Program Files\VMware\Virtual Infrastructure\VirtualCenter\scripts\vc-support.wsf. You can also download a bundle through the vSphere Client, or by connecting to the ESX(i) server over SCP with Veeam FastSCP or WinSCP; to do this you have to enable Tech Support Mode first. An alternative way of generating log bundles is through the vm-support command, run through an SSH connection to the COS or through the vMA. Running vm-support will generate a tar-compressed file.

Procedure

[root@esxhost]# /usr/bin/vm-support

With ESXi it is possible to place log files on shared storage. To set this up, open a vSphere Client connection to the host, click Configuration > Advanced Settings > Syslog, select Local and enter the path to the shared storage. Enter the log file location as [datastorename]/logfiles/hostname.log.
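PowerCLI can also generate and download the kind of log bundle described above. A minimal sketch (the host name and destination folder are made-up examples):

Connect-VIServer -Server esx01.example.local
Get-Log -Bundle -DestinationPath C:\Temp\LogBundles    # generates a diagnostic bundle and saves it locally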

vilogd

vilogd is a service that performs log collections; you manage it with the vilogger commands, which enable, disable and configure log collection. To use vilogger, first ensure that vi-fastpass is enabled: use vifp listservers to list the current vi-fastpass enabled servers, and if no servers are listed use vifp addserver <servername> followed by vifptarget -s <servername> to add one.

Commands

vilogger enable
vilogger list
vilogger updatepolicy

Control the vilogd service with /etc/init.d/vmware-vilogd start|stop|restart.

vilogger has several parameters available, for example:

--numrotation - the number of files to collect
--maxfilesize - specified in MB
--collectionperiod - how often to poll, specified in seconds

Example

vilogger enable --server <servername> --numrotation 20 --maxfilesize 10 --collectionperiod 10

This command will collect the following logs from the ESXi host: hostd.log, messages.log and vpxa.log. To scroll through the log files one page at a time use the more command, for example more hostd.log.

Configure vMA as a Syslog Server

You can configure the vMA as a syslog receiver to collect log files from ESX and ESXi servers. Run the commands listed below to configure it.

vMA

# sudo service rsyslog stop
# sudo nano /etc/sysconfig/rsyslog

This opens nano so you can edit the following information: change SYSLOGD_OPTIONS="-m 0" to SYSLOGD_OPTIONS="-r -m 0", then save and exit the file.

# sudo service rsyslog start
# sudo iptables -I INPUT -i eth0 -p udp --dport 514 -j ACCEPT
# sudo nano /etc/rc.local

Edit the file to add the iptables line below to the end of the rc.local file:

iptables -I INPUT -i eth0 -p udp --dport 514 -j ACCEPT

ESX

To configure ESX to use the vMA as a syslog server, add the IP address of the vMA to the /etc/syslog.conf file.

# vi /etc/syslog.conf

Add the following lines to the bottom of the file:

# Send all syslog traffic to vMA
*.* @<IP_Address_Of_vMA>

Open the firewall with:

# /usr/sbin/esxcfg-firewall -o 514,udp,out,syslog

Finally, restart the syslog service with:

# /sbin/service syslog restart

ESXi

Use the vSphere Client by going to Configuration > Advanced Settings > Syslog and enter the name of the vMA into the Syslog.Remote.Hostname field. Alternatively, assuming vi-fastpass is enabled, run:

# vifptarget -s [ESXihost]
# vicfg-syslog -s [vma]
# vifptarget -c

Connecting to an iSCSI SAN with Jumbo Frames enabled

The best way to add iSCSI storage is by dedicating NICs to iSCSI traffic, on dedicated VMkernel switches, with separate IP subnet address ranges and separate physical switches or VLANs.

Enable Jumbo Frames on a vSwitch

To enable Jumbo Frames on a vSwitch, change the MTU configuration for that vSwitch. It is best to start with a new switch when setting this up, as you will need to delete the existing port groups in order to allow jumbo frames to pass through the port group. To run the necessary commands, connect to the host using the vSphere CLI, which can be downloaded from the VMware website.

To run a vSphere CLI command on Windows:

1. Open a command prompt.
2. Navigate to the directory in which the vSphere CLI is installed: cd C:\Program Files\VMware\VMware vSphere CLI\bin
3. Run the command, passing in the connection options and any other options.

<command>.pl <conn_options> <params>

The .pl extension is required for most commands, but not for esxcli.

Example

vicfg-nas.pl --server my_vcserver --username username --password mypwd --vihost my_esxhost --list

Procedure

1. Create a new vSwitch and assign the appropriate uplink.
2. Open the vSphere CLI and run:

vicfg-vswitch --server my_vcserver --username username --password mypwd --vihost my_esxhost -m <MTU> <vswitch>

This command sets the MTU for all physical NICs on that vSwitch. The MTU size should be set to the largest MTU size among all NICs connected to the vSwitch.

3. Run the vicfg-vswitch -l command to display a list of vSwitches on the host, and check that the configuration of the vSwitch is correct.

Create a Jumbo Frames-Enabled VMkernel Interface

Use the vSphere CLI to create a VMkernel network interface that is enabled with Jumbo Frames, by running the vicfg-vmknic command to create a VMkernel connection with Jumbo Frame support.

Procedure

1. Run:

vicfg-vmknic -a -i <ip address> -n <netmask> -m <MTU> <port group name>

2. Check that the VMkernel interface is connected to a vSwitch with Jumbo Frames enabled: run the vicfg-vmknic -l command to display a list of VMkernel interfaces, and check that the configuration of the Jumbo Frames-enabled interface is correct.
3. Configure all physical switches, and any physical or virtual machines to which this VMkernel interface connects, to support Jumbo Frames.
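Putting the two procedures together, a worked sketch with concrete values (the host name, IP address, netmask and port group label below are made-up examples):

vicfg-vswitch.pl --server esx01.example.local -m 9000 vSwitch2
vicfg-vmknic.pl --server esx01.example.local -a -i 10.10.10.11 -n 255.255.255.0 -m 9000 iSCSI1
vicfg-vmknic.pl --server esx01.example.local -l

When connecting directly to a host rather than through vCenter, the --vihost option is not needed; credentials can be supplied with --username/--password or entered at the prompt.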

Create Additional iSCSI Ports for Multiple NICs

1. Log in to the vSphere Client and select the host from the inventory panel.
2. Click the Configuration tab and click Networking.
3. Select the vSwitch that you use for iSCSI and click Properties.
4. Connect additional network adapters to the vSwitch: in the vSwitch Properties dialog box, click the Network Adapters tab and click Add.
5. Select one or more NICs from the list and click Next. With dependent hardware iSCSI adapters, make sure to select only those NICs that have a corresponding iSCSI component.
6. Review the information on the Adapter Summary page, and click Finish. The list of network adapters reappears, showing the network adapters that the vSwitch now claims.

Create iSCSI ports for all the NICs that you connected; the number of iSCSI ports must correspond to the number of NICs on the vSwitch.

Procedure

1. In the vSwitch Properties dialog box, click the Ports tab and click Add.
2. Select VMkernel and click Next.
3. Under Port Group Properties, enter a network label, for example iSCSI, and click Next.
4. Specify the IP settings and click Next. When you enter the subnet mask, make sure that the NIC is set to the subnet of the storage system it connects to.
5. Review the information and click Finish.

CAUTION: If the NIC you use with your iSCSI adapter, either software or dependent hardware, is not in the same subnet as your iSCSI target, your host is not able to establish sessions from this network adapter to the target.

Map each iSCSI port to just one active NIC.

By default, for each iSCSI port on the vSwitch, all network adapters appear as active. You must override this setup so that each port maps to only one corresponding active NIC; for example, iSCSI port vmk1 maps to vmnic1, port vmk2 maps to vmnic2, and so on.

Procedure

1. On the Ports tab, select an iSCSI port and click Edit.
2. Click the NIC Teaming tab and select Override vswitch failover order.
3. Designate only one adapter as active and move all remaining adapters to the Unused Adapters category.
4. Repeat the last step for each iSCSI port on the vSwitch.

Configure iSCSI binding to the iSCSI adapters

1. Identify the name of the iSCSI port assigned to the physical NIC. The vSphere Client displays the port's name below the network label; in the graphic that accompanied the original post, the port names are vmk1 and vmk2.
2. Use the vSphere CLI command to bind the iSCSI port to the iSCSI adapter:

esxcli swiscsi nic add -n <port_name> -d <vmhba>

IMPORTANT: For software iSCSI, repeat this command for each iSCSI port, connecting all ports with the software iSCSI adapter. With dependent hardware iSCSI, make sure to bind each port to the appropriate corresponding adapter.

3. Verify that the port was added to the iSCSI adapter:

esxcli swiscsi nic list -d <vmhba>

4. Use the vSphere Client to rescan the iSCSI adapter.

This example shows how to connect the iSCSI ports vmk1 and vmk2 to the software iSCSI adapter vmhba33.

1. Connect vmk1 to vmhba33: esxcli swiscsi nic add -n vmk1 -d vmhba33
2. Connect vmk2 to vmhba33: esxcli swiscsi nic add -n vmk2 -d vmhba33
3. Verify the vmhba33 configuration: esxcli swiscsi nic list -d vmhba33. Both vmk1 and vmk2 should be listed.

If you display the Paths view for the vmhba33 adapter through the vSphere Client, you will see that the adapter uses two paths to access the same target. The runtime names of the paths are vmhba33:C1:T1:L0 and vmhba33:C2:T1:L0. C1 and C2 in this example indicate the two network adapters that are used for multipathing.

The next thing is to configure the switches with the relevant settings. For this I have used two Dell PowerConnect 5448 switches and a Dell EqualLogic PS4000XV SAN; however, the information is relevant for most Dell switch and SAN combinations, and for most other brands too. The commands may differ slightly but the principles are the same.

Configuring the iSCSI SAN switches

Turn on flow control on the switches:

console> enable
console# configure
console(config)# interface range ethernet all
console(config-if)# flowcontrol on

Enable spanning tree and portfast globally:

console(config)# spanning-tree mode rstp
console(config)# interface range ethernet all
console(config-if)# spanning-tree portfast

Confirm that unicast storm control is disabled with:

console# show ports storm-control

This should return State: Disabled, as shown in the image in the original post.

Check that iSCSI awareness is enabled using:

console# configure
console(config)# iscsi enable

Disable STP on ports that connect SAN end nodes:

console(config)# interface range ethernet g1,g3
console(config-if)# spanning-tree disable
console(config-if)# exit

Enable a LAG between the switches. Disconnect the switches from each other before doing the following configuration on both of them, then connect ports 5, 6, 7 and 8:

console(config)# interface range ethernet g5,g6,g7,g8
console(config-if)# channel-group 1 mode on
console(config-if)# exit
console(config)# interface port-channel 1
console(config-if)# flowcontrol on
console(config-if)# exit

Enable jumbo frames on the iSCSI ports (this command will enable them on all ports):

console(config)# port jumbo-frame

This setting will take effect only after copying the running configuration to the startup configuration and resetting the device.

Configure VLANs for vMotion:

console(config)# vlan database
console(config-vlan)# vlan 2
console(config-vlan)# exit
console(config)# interface vlan 2
console(config-if)# name vmotion
console(config-if)# exit
console(config)# interface range ethernet g2,g4
console(config-if)# switchport mode general
console(config-if)# switchport general pvid 2
console(config-if)# switchport general allowed vlan add 2 tagged
console(config-if)# switchport general acceptable-frame-type tagged-only
console(config-if)# exit
console(config)# interface vlan 2
console(config-if)# ip address 10.10.10.1 255.255.255.0
console(config-if)# exit
console(config)# exit
console# copy running-config startup-config
Overwrite file [startup-config]? [yes/press any key for no]
console# reload

Log into the switch and set the name and time synchronisation options.

ESXi update guide

This guide is written with ESXi 4.1 Update 1 in mind; however, it will work with any update version from 3.5 onwards.

First off you will require the vSphere CLI. This is a free download available to everyone with a valid VMware login; if you don't have one you can easily register for a new one.

1. Download the vSphere CLI from the VMware website.
2. Download the update package from the VMware website.
3. Power off all VMs, or vMotion them to another host, and place the host in maintenance mode (right-click on the host and select Enter Maintenance Mode).

The upgrade package contains two update bulletin parts: the esxupdate bulletin and the upgrade bulletin. Both need to be installed by running the commands below on the computer with the vSphere CLI installed. Ensure these commands are run from this directory: C:\Program Files\VMware\VMware vSphere CLI\bin>

vihostupdate.pl --server <hostname or IP address> -i -b <patch location and zip file name> -B ESXi410-GA-esxupdate

When prompted, enter the root username and password.

vihostupdate.pl --server <hostname or IP address> -i -b <patch location and zip file name>

This installs the ESXi410-GA upgrade bulletin. If following the vSphere upgrade guide you may notice that this last command fails with the error message "No matching bulletin or VIB was found in the metadata. No Bulletin or VIB found with ID ESXi410-GA." This is because the guide's version of the command has an extra -B in it; if you run the command listed above it will work.

Finally, type the following to confirm successful installation:

vihostupdate.pl --server <hostname or IP address> --query

Reboot the host to complete the installation. Don't forget to take it out of maintenance mode!
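As a concrete sketch of that sequence (the IP address and bundle path below are placeholders I have made up; substitute your own):

vihostupdate.pl --server 192.168.1.20 -i -b D:\patches\ESXi410-update-bundle.zip -B ESXi410-GA-esxupdate
vihostupdate.pl --server 192.168.1.20 -i -b D:\patches\ESXi410-update-bundle.zip
vihostupdate.pl --server 192.168.1.20 --query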

vMotion CPU Compatibility

vMotion has quite a few requirements that need to be in place before it will work correctly. Here is a list of the key requirements:

- Each host must be correctly licensed
- Each host must meet the shared storage requirements
- Each host must meet the networking requirements
- Each compatible CPU must be from the same family

When configuring vMotion between hosts I would recommend keeping to one brand of server per cluster, i.e. Dell, HP or IBM, and always ensuring that the servers are compatible with each other; you can confirm this by speaking to the server manufacturer. A very important point is to always ensure you are using the latest BIOS version on each of your hosts.

Ensuring that the CPUs are compatible with each other is essential for vMotion to work successfully. This is because the host that the virtual machine migrates to has to be capable of carrying on any instructions that the first host was running.

If a virtual machine is successfully running an application on one host and you migrate it to another host without these capabilities, the application would most likely crash, and possibly even the whole server. Hence vMotion compatibility is required between hosts before you can migrate a running virtual machine. It is user-level instructions that bypass the virtualisation layer, such as the Streaming SIMD Extensions (SSE, SSE2, SSSE3, SSE4.1) and Advanced Encryption Standard (AES) instruction sets, that can differ greatly between CPU models and families of processors, and so can cause application instability after the migration. Always ensure that all hardware is on the VMware compatibility guide. To confirm compatibility between same-family CPU models, check the charts below.

[Chart from Dell showing which Intel CPUs support vMotion together]

[A second chart, also from Dell, illustrating which AMD processors support vMotion together]

Further information on vMotion requirements between hosts can be found in the vSphere Datacenter Administration Guide.

VMFS Datastore Free Space Calculations

As technology progresses, storage requirements grow; it seems to be a never-ending pattern. I remember only a few years ago the maximum configurable LUN size of 2TB seemed huge. Now it is common to have many LUN carvings making up tens of terabytes of SAN storage. The downside to all this extra storage is demand for larger virtual machine disks, and then you find that the VMFS datastores get filled up in no time. This is something we are all aware of, and it is something we can avoid with enough planning done ahead of time. (Preventing it filling up, that is, not stopping the demand for more space!)

Before adding any additional virtual machine drives it is important to ensure that enough free space is available for the virtual machines already set up. To calculate the minimum free space required, use the following formula, courtesy of ProfessionalVMware:

(Total virtual machine VMDK disk sizes + total virtual machine RAM sizes * 1.1) * 1.1 + 12GB

This formula can be used to work out what size the VMFS datastore needs to be. Once you work that out, you can deduct it from the total available space on the VMFS datastore to see how much space can be used for additional drives, without resorting to just adding disks until the vSphere Server complains it is running out of free space. This allows enough for the local installation of ESX(i), an additional 10% for snapshots, plus an additional 10% for overhead. (12GB for an ESXi install is a little excessive, but I would still recommend leaving this much space as it will be required before you know it.) ProfessionalVMware have provided a handy Excel spreadsheet for working this out for you.

This formula can prove useful when planning how much storage space is required when performing a P2V migration. This way you can be sure to manage expectations, so that you are fully aware from the beginning how much free space you have available in the VMFS datastore. This is a recommended minimum; you may need to leave more free space depending on the requirements. ISO files, templates etc. will also need to be taken into account.
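A quick worked example of the formula (the VM sizes are made up for illustration): ten VMs, each with a 40GB VMDK and 4GB of RAM, give 400GB of disk and 40GB of RAM in total, so:

(400GB + 40GB * 1.1) * 1.1 + 12GB = 444GB * 1.1 + 12GB = 500.4GB

In other words, a datastore holding these ten VMs should be sized at roughly 500GB, leaving about 100GB of headroom beyond the 400GB of VMDKs.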

Following the calculations, you may find that the requirement for free space has been met but you are still getting alarms in the vSphere Client saying you are running out of free space. The alarms within the vSphere Client are set to send a warning alert when 75% of the datastore is in use, and an error when 85% is in use. This can be adjusted if required by clicking the top level and selecting the Alarms tab within the vSphere Client.

VMware NIC Trunking Design

Having read various books, articles, white papers and best practice guides, I have found it difficult to find consistently good advice on vNetwork and physical switch teaming design, so I thought I would write my own based on what I have tested and configured myself. To begin with I must say I am no networking expert and may not cover some of the advanced features of switches, but I will provide links for further reference where appropriate.

The basics

Each physical ESX(i) host has at least one physical NIC (pNIC), which is called an uplink. Each uplink is known to the ESX(i) host as a vmnic. Each vmnic is connected to a virtual switch (vSwitch), and each virtual machine on the ESX(i) host has at least one virtual NIC (vNIC), which is connected to the vSwitch. The virtual machine is only aware of the vNIC; only the vSwitch is aware of the uplink-to-vNIC relationship. This setup offers a one-to-one relationship between the virtual machine (VM) connected to the vNIC and the pNIC connected to the physical switch port. When another virtual machine is added, a second vNIC is added; this in turn is connected to the vSwitch, and the two share the same pNIC and the physical port that the pNIC is connected to on the physical switch (pSwitch). When we add more physical NICs we then have additional options with network teaming.
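To see these relationships on a host, Tech Support Mode (or the ESX service console) offers a couple of commands; a quick sketch (output varies by host):

# list vSwitches, their port groups and their vmnic uplinks
/usr/sbin/esxcfg-vswitch -l
# list the physical NICs (vmnics) with driver, link state and speed
/usr/sbin/esxcfg-nics -l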

NIC Teaming

NIC teaming offers us the option to use connection-based load balancing, which is balanced by the number of connections and not by the amount of traffic flowing over the network. This load balancing can provide resilience on our connections by monitoring the links: if a link goes down, whether it is the physical NIC or the physical port on the switch, the traffic is resent over the remaining uplinks so that no traffic is lost. It is also possible to use multiple physical switches, provided they are all on the same broadcast range. What it will not do is allow you to send traffic over multiple uplinks at once, unless you configure the physical switches correctly. There are four options with NIC teaming, although the fourth is not really a teaming option:

1. Port-based NIC teaming
2. MAC address-based NIC teaming
3. IP hash-based NIC teaming
4. Explicit failover

Port-based NIC teaming

Route based on the originating virtual port ID, or port-based NIC teaming as it is commonly known, does as it says and routes the network traffic based on the virtual port on the vSwitch that it came from. This type of teaming doesn't allow traffic to be spread across multiple uplinks: it keeps a one-to-one relationship between the virtual machine and the uplink port when sending and receiving to all network devices. This can lead to a problem where the number of physical ports exceeds the number of virtual ports, as you would then end up with uplinks that don't do anything. As such, the only time I would recommend using this type of teaming is when the number of virtual NICs exceeds the number of physical uplinks.

MAC address-based NIC teaming

Route based on MAC hash, or MAC address-based NIC teaming, chooses the uplink from the originating vNIC's MAC address. This works in a similar way to port-based NIC teaming, in that it sends its network traffic over only one uplink. Again, the only time I would recommend using this type of teaming is when the number of virtual NICs exceeds the number of physical uplinks.

IP hash-based NIC teaming

Route based on IP hash, or IP hash-based NIC teaming, works differently from the other types of teaming. It takes the source and destination IP addresses and creates a hash. It can work with multiple uplinks per VM, and can spread traffic across multiple uplinks when sending data to multiple network destinations.

Although IP hash-based teaming can utilise multiple uplinks, it will only use one uplink per session. This means that if you are sending a lot of data between one virtual machine and another server, that traffic will only transfer over one uplink. Using IP hash-based teaming we can then use the teaming or trunking options on the physical switches (depending on the switch type). IP hash requires EtherChannel (again, depending on switch type), which for all other purposes should be disabled.

Explicit failover

This allows you to override the default ordering of failover on the uplinks. The only time I can see this being useful is if the uplinks are connected to multiple physical switches and you want to use them in a particular order, or if you think a pNIC in the ESX(i) host is not working correctly. If you use this setting it is best to configure those vmnics or adapters as standby adapters, as the active adapters will be used from the highest in the order downwards.
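For reference, these policies map onto PowerCLI roughly as follows (a sketch only; the vSwitch name and vmnic numbers are made-up examples, and the cmdlets shown apply to standard vSwitches):

$vs = Get-VirtualSwitch -Name vSwitch1
# route based on IP hash
Get-NicTeamingPolicy -VirtualSwitch $vs | Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceIP
# or an explicit failover order: vmnic1 active, vmnic2 on standby
Get-NicTeamingPolicy -VirtualSwitch $vs | Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic2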

The other options

Network failover detection

There are two options for failover detection: link status only, and beacon probing.

Link status only monitors the status of the link, to ensure that a connection is available on both ends of the network cable. If the link becomes disconnected it is marked as unusable and the traffic is sent over the remaining NICs.

Beacon probing sends a beacon up the network on all uplinks in the team. This includes checking that the port on the pSwitch is available and is not being blocked by configuration or switch issues. Further information is available on page 44 of the ESXi Configuration Guide. Do not set this to beacon probing if using route based on IP hash.

Notify switches

This should be left set to Yes (the default) to minimise route table reconfiguration time on the pSwitches. Do not use this when configuring Microsoft NLB in unicast mode.

Failback

Failback will re-enable a failed uplink once it is working correctly again, and move back the traffic that was being sent over the standby uplink. Best practice is to leave this set to Yes, unless you are using IP-based storage: if the link were to go up and down quickly it could have a negative impact on iSCSI traffic performance.

Incoming traffic is controlled by the pSwitch routing the traffic to the ESX(i) host, so the ESX(i) host has no control over which physical NIC traffic arrives on. As multiple NICs will be accepting traffic, the pSwitch will use whichever one it wants. Load balancing on incoming traffic can be achieved by using and configuring a suitable pSwitch.

pSwitch configuration

The topics covered so far describe egress NIC teaming; with physical switches we have the added benefit of using ingress NIC teaming. Various vendors support teaming on their physical switches, although quite a few call trunking "teaming" and vice versa. From the switches I have configured, I would recommend the following.

All switches

A lot of people recommend disabling Spanning Tree Protocol (STP), as vSwitches don't require it: a vSwitch knows the MAC address of every vNIC connected to it. I have found that the best practice is to enable STP and set it to Portfast.

Without Portfast enabled there can be a delay whereby the pSwitch has to relearn the MAC addresses during convergence, which can take 30-50 seconds. Without STP enabled there is a chance of loops not being detected on the pSwitch.

802.3ad & LACP

Link Aggregation Control Protocol (LACP) is a dynamic link aggregation (LAG) protocol which can dynamically make other switches aware of multiple links and combine them into one single logical unit. It also monitors those links, and if a failure is detected it removes that link from the logical unit. VMware doesn't support LACP. However, VMware does support IEEE 802.3ad, which can be achieved by configuring a static LACP trunk group or a static trunk. The disadvantage of this is that if one of those links goes down, 802.3ad static will continue to send traffic down that link.

Dell switches

Set Portfast using:

spanning-tree portfast

To configure aggregation, follow my Dell switch aggregation guide. Further information on Dell switches is available through the product manuals.

Cisco switches

Set Portfast using:

spanning-tree portfast (for an access port)
spanning-tree portfast trunk (for a trunk port)

Set EtherChannel. Further information is available through the "Sample configuration of EtherChannel / Link aggregation with ESX and Cisco/HP switches" article.

HP switches

Set Portfast using:

spanning-tree portfast (for an access port)
spanning-tree portfast trunk (for a trunk port)

Set a static LACP trunk using:

trunk <port-list> <trk1-trk60> <trunk|lacp>

Further information is available through the "Sample configuration of EtherChannel / Link aggregation with ESX and Cisco/HP switches" article.

PowerChute Network Shutdown ESXi/vMA install

PowerChute Network Shutdown version 3.0.0 software for ESXi is now free when you purchase a network shutdown management card from the APC website.

Download the vMA (vSphere Management Assistant) and deploy it as follows.

1. Highlight the VM host > File > Deploy OVF Template > browse to the vMA folder and select the OVF > Next > accept the licence > Next > keep the default disk configuration > Next > Finish. This will create a new VM on the host.
2. Using the VIC, attach a CD drive to the vMA virtual machine.
3. Start the virtual machine and follow the wizard (the default option is shown in brackets):
Step 1) Configure the IP address, subnet and gateway (DHCP or static)
Step 2) Configure the DNS servers (DHCP or static)
Step 3) Configure the hostname for the vMA VM, e.g. VIMA.domainname.local
Step 4) Confirm the settings. The VM will now apply the settings and restart the VM network.
4. Enter a password for the vi-admin account.
5. Open a terminal emulation application such as PuTTY and connect to the vMA VM using its IP address on port 22 (SSH).
6. Log in as vi-admin, using the password you created in the previous step. When you are connected you will be presented with a terminal.
7. Enter the following into the terminal window, then enter the password for the host when prompted (the VMware host root user password):

vifp addserver <name of server or IP address> (name of server preferred)

8. Enter the following into the terminal window; this should return the IP address and name of the VMware host that you just added:

vifp listservers

9. Enter the following into the terminal window to enable FastPass to the host:

vifptarget -s <VM host server name>

10. To confirm the above step has worked, type the following; this should return a list of NICs:

vicfg-nics -l

Install the UPS, configure the network management card and configure your settings with the UPS management console (browser). Then:

1. Insert your media into the VM host and attach it to the vMA virtual machine (i.e. the CD).
2. Connect to the vMA management console via the terminal emulator and log in.
3. Create a mount point: sudo mkdir /mnt/cdrom
4. Change the permissions on the mount point: sudo chmod 666 /mnt/cdrom
5. Type: sudo mount -t iso9660 /dev/cdrom /mnt/cdrom
6. Type: cd /mnt/cdrom/esxi
7. Type: sudo cp /etc/vma-release /etc/vima-release
8. Type: sudo ./install.sh
9. Accept the licence agreement.
10. Press Enter to keep the default PowerChute instance.
11. Press Enter to keep the default installation directory.
12. Confirm the installation; this will install the Java Runtime.
13. Type: cd /opt/APC/PowerChute/group1 and press Enter.
14. Type: sudo ./pcnsconfig.sh
15. Enter your root password.
16. Select your UPS configuration option.
17. Enter the management card IP address.
18. Select Yes at "do you want to register these settings".
19. Select Yes to start the PowerChute Network Shutdown service.

You will then be shown the configuration.
