Hyper-V Networking Aidan Finn
About Aidan Finn
Technical Sales Lead at MicroWarehouse (Dublin)
Working in IT since 1996
MVP (Virtual Machine)
Experienced with Windows Server/Desktop, System Center, virtualisation, and IT infrastructure
@joe_elway
http://www.aidanfinn.com
http://www.petri.co.il/author/aidan-finn
Published author/contributor of several books
Books
System Center 2012 VMM
Windows Server 2012 Hyper-V
Networking Basics
Hyper-V Networking Basics
[Diagram: management OS and virtual machines with virtual NICs tagged VLAN ID 101 and VLAN ID 102, connected to a VLAN trunk]
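For reference, VLAN IDs like the 101/102 in the diagram are assigned per virtual NIC. A minimal PowerShell sketch, assuming a VM named VM01 and a management OS virtual NIC named Management (both hypothetical names):

    # Tag the VM's virtual NIC so its traffic uses VLAN 101 on the trunk
    Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 101

    # The management OS virtual NIC can carry its own VLAN tag
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 102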
Virtual NICs
Generation 1 VMs can have:
(Synthetic) network adapter: requires drivers (Hyper-V integration components/services), does not do PXE boot, best performance
Legacy network adapter: emulated, does not require Hyper-V drivers, does offer PXE, poor performance
Generation 2 VMs have synthetic network adapters with PXE
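A minimal sketch of adding both adapter types to a Generation 1 VM, assuming a VM named VM01 and a virtual switch named External1 (hypothetical names):

    # Synthetic network adapter (the default): best performance, needs the integration services drivers
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "External1"

    # Legacy (emulated) adapter: only needed when a Generation 1 VM must PXE boot
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "External1" -IsLegacy $true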
Hyper-V Extensible Switch
Replaces the Virtual Network of earlier Hyper-V versions
Handles network traffic between: virtual machines, the physical network, and the management OS
NIC = network adapter
Layer-2 virtual interface
Programmatically managed
Extensible
Virtual Switch Types
External: allows VMs to talk to each other, the physical network, and the host. Normally used.
Internal: allows VMs to talk to each other and the host. VMs cannot communicate with VMs on another host. Normally only ever seen in a lab.
Private: allows VMs to talk to each other only. VMs cannot communicate with VMs on another host. Sometimes seen, but replaced by Hyper-V Network Virtualization or VLANs.
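The three types map directly onto New-VMSwitch. A minimal sketch, assuming a physical NIC named pNIC1 and example switch names:

    # External: VMs, the physical network, and (optionally) the management OS
    New-VMSwitch -Name "External1" -NetAdapterName "pNIC1" -AllowManagementOS $true

    # Internal: VMs and the management OS only, no physical connectivity
    New-VMSwitch -Name "Internal1" -SwitchType Internal

    # Private: VMs on this host only
    New-VMSwitch -Name "Private1" -SwitchType Private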
Switch Extensibility
Extension types:
Capturing: monitoring. Example: InMon sFlow
Filtering: packet monitoring/security. Example: 5nine Security
Forwarding: does all of the above and more. Example: Cisco Nexus 1000V
NIC Teaming
NIC Teaming
Provides load balancing and failover (LBFO)
Load balancing: spreads traffic across multiple physical NICs. This provides link aggregation, not necessarily a single virtual pipe.
Failover: if one physical path (NIC or top-of-rack switch) fails, traffic is automatically moved to another NIC in the team.
Built-in and fully supported for Hyper-V and Failover Clustering since WS2012
NIC Teaming Features
Microsoft supported: no more calls to NIC vendors for teaming support, or getting told to turn off teaming
Vendor agnostic: can mix NIC manufacturers in a single team
Up to 32 NICs at the same speed in physical machines; up to 2 virtual NICs at the same speed in a VM
Configure teams to meet server needs
Team management is easy: Server Manager, LBFOADMIN.EXE, VMM, or PowerShell
Terminology
Team
Team interfaces, team NICs, or tNICs
Team members or network adapters
Teaming Connection Modes
Switch independent mode: doesn't require any configuration of the switch; protects against adjacent switch failures; allows a standby NIC
Switch dependent modes:
1. Static teaming: configured on the switch
2. LACP: also known as IEEE 802.1ax or 802.3ad; requires configuration of the adjacent switch
[Diagram: switch independent team vs switch dependent team]
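The connection mode is chosen when the team is created. A minimal sketch with New-NetLbfoTeam, assuming physical NICs named pNIC1 and pNIC2 (hypothetical names):

    # Switch independent: no switch configuration required
    New-NetLbfoTeam -Name "HostTeam" -TeamMembers "pNIC1","pNIC2" -TeamingMode SwitchIndependent

    # Or switch dependent with LACP: the adjacent switch ports must be configured to match
    # New-NetLbfoTeam -Name "HostTeam" -TeamMembers "pNIC1","pNIC2" -TeamingMode LACP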
Load Distribution Modes
1. Address Hash, which comes in 3 flavors:
4-tuple hash (the default distribution mode): uses the RSS hash if available, otherwise hashes the TCP/UDP ports and the IP addresses. If ports are not available, uses the 2-tuple hash instead.
2-tuple hash: hashes the IP addresses. If not IP traffic, uses the MAC address hash instead.
MAC address hash: hashes the MAC addresses.
2. Hyper-V Port: hashes the port number on the Hyper-V switch that the traffic is coming from. Normally this equates to per-VM traffic. Best if using DVMQ.
3. Dynamic (added in WS2012 R2): spreads a single stream of data across team members using flowlets. The default option in WS2012 R2.
NIC Teaming: Virtual Switch
Choose the team connection mode that is required by your switches
Choose either Hyper-V Port or Dynamic (WS2012 R2) load distribution:
Hyper-V Port provides predictable incoming paths and DVMQ acceleration.
Dynamic enables a single virtual NIC to spread traffic across multiple team members at once.
[Diagram: virtual switch bound to a NIC team]
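A minimal sketch of a team built for a virtual switch, using hypothetical NIC and switch names; Dynamic is shown, HyperVPort is the other valid choice here:

    # Team two physical NICs for virtual machine traffic
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "pNIC1","pNIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Bind the external virtual switch to the team interface
    New-VMSwitch -Name "VMSwitch1" -NetAdapterName "VMTeam" -AllowManagementOS $false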
NIC Teaming: Physical NICs
Choose the team connection mode that is required by your switches
Choose either Address Hash or Dynamic load distribution:
Address Hash will isolate a single stream of traffic on one physical NIC.
Dynamic enables a single stream of traffic to spread across multiple team members at once.
[Diagram: networking stack bound to a NIC team]
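For a team carrying the host's own traffic (no virtual switch), the distribution mode is set the same way; TransportPorts is the 4-tuple Address Hash. A sketch with hypothetical NIC names:

    # Address Hash (4-tuple) keeps each stream of traffic on one team member
    New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "pNIC3","pNIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts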
NIC Teaming: Virtual Machines
Can be configured in the guest OS of a WS2012 or later VM. Teams the VM's virtual NICs.
Configuration is locked.
You must allow NIC teaming in the advanced properties of the virtual NIC in the VM settings:
Set-VMNetworkAdapter -VMName VM01 -AllowTeaming On (or Off)
[Diagram: NIC team inside the virtual machine]
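A sketch of the two halves of guest teaming, assuming a VM named VM01 whose guest OS sees two virtual NICs named Ethernet and Ethernet 2 (hypothetical names):

    # On the host: permit teaming on the VM's virtual NICs
    Set-VMNetworkAdapter -VMName "VM01" -AllowTeaming On

    # Inside the guest OS: team the two virtual NICs (switch independent, Address Hash)
    New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts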
Demo: NIC Teaming
Hardware Offloads
RSS
[Diagram: host with 2 CPUs, 12 cores, and 24 logical processors; management OS traffic (management, backup, SMB 3.0, cluster, Live Migration) arriving via a NIC team (rnic1, rnic2), with a single logical processor shown 100% utilized]
DVMQ
[Diagram: the same 2-CPU, 24-logical-processor host; virtual machine traffic arriving via the NIC team (rnic1, rnic2), with a single logical processor shown 100% utilized]
RSS and DVMQ
Consult your network card/server manufacturer
Can use Get-NetAdapterRss and Set-NetAdapterRss to configure
Don't change anything unless you need to
RSS and DVMQ are incompatible on the same NIC, so design hosts accordingly
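A minimal sketch of keeping RSS and DVMQ on separate processor ranges, using hypothetical NIC names and processor numbers; check vendor guidance before changing anything:

    # Inspect the current RSS and VMQ processor assignments
    Get-NetAdapterRss
    Get-NetAdapterVmq

    # RSS NIC (host traffic): use logical processors above core 0
    Set-NetAdapterRss -Name "pNIC1" -BaseProcessorNumber 2 -MaxProcessors 8

    # DVMQ NIC (virtual switch traffic): use a non-overlapping processor range
    Set-NetAdapterVmq -Name "pNIC2" -BaseProcessorNumber 12 -MaxProcessors 8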
vRSS (added in WS2012 R2)
RSS provides extra processing capacity for inbound traffic to a physical server, using cores beyond core 0.
vRSS does the same thing in the guest OS of a VM, using additional virtual processors.
Allows inbound networking to a VM to scale out. Obviously requires VMs with additional virtual processors.
The physical NICs used by the virtual switch must support DVMQ.
Enable RSS in the advanced NIC properties in the VM's guest OS.
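A sketch of the two checks involved, assuming the guest's virtual NIC is named Ethernet (hypothetical name):

    # On the host: the physical NICs behind the virtual switch must have VMQ/DVMQ enabled
    Get-NetAdapterVmq

    # Inside the VM's guest OS: enable and confirm RSS on the virtual NIC
    Enable-NetAdapterRss -Name "Ethernet"
    Get-NetAdapterRss -Name "Ethernet"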
vRSS
[Diagram: a virtual machine with 8 virtual processors (CPU 0 to CPU 7) receiving SMB 3.0 traffic via the host NIC team (rnic1, rnic2), with a single virtual processor shown 100% utilized]
Demo: vRSS
Single Root I/O Virtualization (SR-IOV)
A virtual function on a capable NIC is presented directly to the VM
Bypasses the management OS network stack and virtual switch (a logical connection is still present)
Cannot team the NICs in the management OS; can team the NICs in the VM
Super low latency virtual networking, less hardware usage
Requires SR-IOV-ready: motherboard, BIOS, NIC, and a Windows Server 2012/Hyper-V Server 2012 (or later) host
Can Live Migrate to/from capable/incapable hosts
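The host can report whether those requirements are met before you commit. A sketch, assuming the Hyper-V and NetAdapter modules are present; verify the output on your own hardware:

    # Do the installed NICs expose SR-IOV capability?
    Get-NetAdapterSriov

    # Does the platform (BIOS, chipset, firmware) support it, and if not, why not?
    (Get-VMHost).IovSupport
    (Get-VMHost).IovSupportReasons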
SR-IOV Illustrated
[Diagram: without SR-IOV, VM traffic goes from the virtual NIC through the Hyper-V switch in the root partition (routing, VLAN filtering, data copy) down to the physical NIC; with SR-IOV, the VM uses a virtual function that talks to the physical NIC directly]
Implementing SR-IOV
All management OS networking features are bypassed
You must create SR-IOV virtual switches to begin with:
New-VMSwitch "IOVSwitch1" -NetAdapterName "pnic1" -EnableIov $true
Install the virtual function driver in the guest OS
To get teaming:
Create 2 virtual switches
Enable guest OS teaming in the vNIC advanced settings
Team in the guest OS
[Diagram: two SR-IOV enabled virtual switches, each on its own physical NIC, with two virtual NICs teamed inside the VM]
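A fuller sketch of that teamed SR-IOV layout, using hypothetical names (VM01, pNIC1/pNIC2, IOVSwitch1/IOVSwitch2, vNIC1/vNIC2):

    # Two SR-IOV enabled virtual switches, one per physical NIC
    New-VMSwitch -Name "IOVSwitch1" -NetAdapterName "pNIC1" -EnableIov $true
    New-VMSwitch -Name "IOVSwitch2" -NetAdapterName "pNIC2" -EnableIov $true

    # Two virtual NICs in the VM, one per switch, each requesting a virtual function and allowed to team
    Add-VMNetworkAdapter -VMName "VM01" -Name "vNIC1" -SwitchName "IOVSwitch1"
    Add-VMNetworkAdapter -VMName "VM01" -Name "vNIC2" -SwitchName "IOVSwitch2"
    Set-VMNetworkAdapter -VMName "VM01" -Name "vNIC1" -IovWeight 100 -AllowTeaming On
    Set-VMNetworkAdapter -VMName "VM01" -Name "vNIC2" -IovWeight 100 -AllowTeaming On

    # The team itself is then created inside the guest OS with New-NetLbfoTeam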
The Real World: SR-IOV
Not cloud or admin friendly: requires customization in the guest OS. How many hosting or end users can you trust with admin rights over in-guest NIC teams?
In reality, SR-IOV is intended for huge hosts or a few VMs with low-latency requirements.
You might never implement SR-IOV outside of a lab.
IPsec Task Offload (IPsecTO)
IPsec encrypts/decrypts traffic between a client and server, done automatically based on rules.
Can be implemented by a tenant independently of the cloud administrators.
It uses processor resources; in a cloud this could have a significant impact.
Using IPSecOffloadV2-enabled NICs, Hyper-V can offload IPsec processing from VMs to the host's NIC(s).
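A sketch of checking and enabling the offload, using hypothetical NIC and VM names; the per-VM security association cap on the last line is an assumption about the Set-VMNetworkAdapter parameter set and should be verified on your Hyper-V version:

    # On the host: is IPsec task offload present and enabled on the physical NIC?
    Get-NetAdapterIPsecOffload -Name "pNIC1"
    Enable-NetAdapterIPsecOffload -Name "pNIC1"

    # Per VM: cap how many offloaded security associations the virtual NIC may use
    Set-VMNetworkAdapter -VMName "VM01" -IPsecOffloadMaximumSecurityAssociation 512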
Consistent Device Naming (CDN)
Every Windows admin hates Local Area Connection, Local Area Connection 2, etc.
Network devices are randomly named based on the order of PnP discovery.
Modern servers (Dell 12th gen, HP Gen8) can store network port device names.
WS2012 and later can detect these names and uses the device name to name network connections, e.g. Port 1, Port 2, Slot 1 1.
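CDN only supplies the names; confirming or tidying them up is a quick check. A sketch with hypothetical adapter names:

    # List adapters with the names and device descriptions Windows picked up
    Get-NetAdapter | Format-Table Name, InterfaceDescription, Status

    # On hardware without CDN, names can still be cleaned up manually
    Rename-NetAdapter -Name "Ethernet 3" -NewName "Slot 1 Port 1"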
Converging Networks
Not a new concept from hardware vendors
Introduced as a software solution in WS2012
Will cover this topic in the High Availability session
SMB 3.0 No longer just a file & print protocol Learn more in the SMB 3.0 and Scale-Out File Server session
Thank You! Aidan Finn @joe_elway www.aidanfinn.com Petri IT Knowledgebase