Windows Server 2012 R2 Hyper-V: Designing for the Real World
Steve Evans (@scevans, www.loudsteve.com)
Nick Hawkins (@nhawkins, www.nickahawkins.com)
Is Hyper-V for real?
Microsoft Fan Boys vs. Reality
VMware vs. Hyper-V
Networking
Legacy Network Configuration (Server 2008 R2)
[Diagram: Hyper-V host with separate physical NICs dedicated to Management, Live Migration, Cluster, and VM traffic]
Problems with Legacy Network Configuration
[Diagram: the same Hyper-V host layout with dedicated NICs for Management, Live Migration, Cluster, and VM traffic]
Server 2012 Introduces Converged Fabric
[Diagram: Hyper-V host with six vNICs running over four 10 Gb physical NICs]
Benefits of Converged Fabric
[Diagram: the same converged layout; six vNICs sharing four 10 Gb physical NICs]
Wait! Where's my 40 Gbps of bandwidth?!?!?
NIC Teaming: LACP (Switch Dependent) & Address Hashing
[Diagram: data transfer to VM 7 across the Hyper-V host's four teamed 10 Gb NICs]
NIC Teaming: Switch Independent & Hyper-V Port Load Balancing
[Diagram: Hyper-V host vNICs distributed across the four teamed 10 Gb NICs]
NIC Teaming: Server 2012 R2 Switch Independent & Dynamic Load Balancing
Similar to Hyper-V Port, but outbound traffic flows are not limited to a single NIC. Inbound traffic is still determined by the teaming mode specified for the switch.
[Diagram: Hyper-V host vNICs over the four teamed 10 Gb NICs]
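These three teaming combinations map directly onto New-NetLbfoTeam parameters. A minimal sketch, assuming four physical adapters named NIC1 through NIC4 (adapter and team names are placeholders; run only one of these, since they all create the same team):

# LACP (switch dependent) with address hashing (TransportPorts is the 4-tuple hash)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts

# Switch independent with Hyper-V Port load balancing
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Server 2012 R2: switch independent with Dynamic load balancing
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic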
Demo: Building Converged Fabric with PowerShell
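A condensed sketch of the kind of converged fabric the demo builds, assuming the Dynamic team from the previous sketch already exists; the switch name, vNIC names, VLAN IDs, and IP address are placeholders, not values from the demo:

# Create the converged virtual switch on top of the team, with weight-based QoS
New-VMSwitch -Name "Converged-vSwitch" -NetAdapterName "ConvergedTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add Management OS vNICs for each traffic type
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "Converged-vSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "Converged-vSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "Converged-vSwitch"

# Tag each vNIC with its VLAN (VLAN IDs are examples)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 30

# Assign an IP to a vNIC (example address)
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 192.168.1.10 -PrefixLength 24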
Converged Fabric Diagram
[Diagram: six vNICs on the Hyper-V host over four teamed 10 Gb NICs]
QoS: Three Ways to Create Rules
[Diagram: Hyper-V host with vNICs]
What is the difference?
- QoS-enabled virtual switch: very simple, especially when using converged fabrics (all interfaces are vNICs). Each rule is applied to a virtual NIC (VM or management OS), and you can define a default bucket for unspecified traffic.
- QoS packet scheduler: used for creating per-protocol rules handled by the management OS.
- Data Center Bridging (DCB): handled by hardware, so it offers the best performance, but requires DCB-capable hardware (see the sketch below).
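For the DCB option, the rules live in the NetQos/DCB cmdlets and are enforced by the NIC. A minimal sketch, assuming DCB-capable adapters; the traffic class name, priority, bandwidth percentage, and adapter name are all illustrative:

# Tag SMB traffic with 802.1p priority 3
New-NetQosPolicy -Name "SMB" -SMB -PriorityValue8021Action 3

# Reserve 40% of bandwidth for that priority using ETS
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 40 -Algorithm ETS

# Apply DCB/QoS on the physical adapter (adapter name is a placeholder)
Enable-NetAdapterQos -Name "NIC1"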
QoS Packet Scheduler: Built-in Filters

Workload         Built-in Filter (PowerShell Parameter)   Filter Implementation
iSCSI            -iSCSI                                   Match TCP or UDP port 3260
NFS              -NFS                                     Match TCP or UDP port 2049
SMB              -SMB                                     Match TCP or UDP port 445
Live Migration   -LiveMigration                           Match TCP port 6600
Wildcard         -Default                                 Any traffic that is not otherwise classified
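These built-in filters are used with New-NetQosPolicy in the management OS. A short sketch; the policy names and weights are examples:

# Minimum-weight rules per protocol
New-NetQosPolicy -Name "LiveMigration" -LiveMigration -MinBandwidthWeightAction 30
New-NetQosPolicy -Name "iSCSI" -iSCSI -MinBandwidthWeightAction 40
New-NetQosPolicy -Name "Default" -Default -MinBandwidthWeightAction 10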
Hyper-V QoS (Virtual Switch)
(Speaker note: omit this slide! I will just talk about it.)
Guarantee a minimum level of service to a vNIC, protocol, or an IP port.
- Bits-per-second-based rules: give an exact speed, but what if VMs are moved to a host with different-speed NICs?
- Weight-based rules: more flexible, since they are based on a share of bandwidth with no consideration of actual speed.
- Minimum bandwidth: guarantee a minimum share of the host's bandwidth to a vNIC or protocol.
- Maximum bandwidth: limit the bandwidth consumption of a vNIC or protocol.
The most flexible and common option is to implement minimum-bandwidth rules based on weight. This approach makes no assumptions about hardware capacity, and it is elastic, allowing vNICs and protocols to exceed the minimum guarantee when there is no bandwidth contention.
QoS: Guaranteeing a % of Bandwidth
[Chart: Cluster 40%, Unspecified 35%, Live Migration 20%, Management 5%, each weight applied to its vNIC]
QoS Example Configuration

# Apply a default weight to the vSwitch (any vNIC which is unspecified will fall into this bucket)
Set-VMSwitch "Converged-vSwitch" -DefaultFlowMinimumBandwidthWeight 35

# Assign a weight to Management OS vNICs
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 5

# Calculate the percentage of bandwidth based on your specified weights
Get-VMNetworkAdapter -ManagementOS -Name * | ft Name, BandwidthPercentage, IsManagementOS -AutoSize
QoS: Setting a Maximum Bandwidth Limitation
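Maximum bandwidth is a hard cap set per vNIC in bits per second. A quick sketch; the VM name and limit are assumptions:

# Cap a VM's vNIC at roughly 500 Mbps (value is in bits per second)
Set-VMNetworkAdapter -VMName "VM07" -MaximumBandwidth 500000000

# The same cap can be applied to a Management OS vNIC
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MaximumBandwidth 500000000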
iSCSI within the Converged Fabric
[Diagram: Hyper-V host vNICs over four 10 Gb NICs, with paths to Storage Controller A and Storage Controller B]
Converged Fabric / iSCSI Fault Domains
[Diagram: Hyper-V host vNICs over four 10 Gb NICs, with Storage Controller A and Storage Controller B in separate fault domains]
Testing Fabric
Test throughput:
- Use consume.exe to consume the memory on a VM with 64 GB of RAM, then Live Migrate the VM!
- Use iperf to test throughput from vNIC to vNIC.
- Notice the throughput difference on a Live Migration when jumbo frames are not configured. Use jumbo frames!
- Test disk throughput with SQLIO.
- Verify you have jumbo frames configured end to end (see the sketch after this list): ping -l 8000 -f 192.168.1.50
Failing components:
- Network interfaces
- Switches
- Storage connectivity
- All of the above during throughput tests
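A sketch for checking and enabling jumbo frames on a host vNIC before re-running the throughput tests; the adapter name and the exact property value string vary by driver, so treat both as assumptions:

# Inspect the current jumbo packet setting on the Live Migration vNIC
Get-NetAdapterAdvancedProperty -Name "vEthernet (LiveMigration)" -DisplayName "Jumbo Packet"

# Enable it (value strings differ between drivers, e.g. "9014" vs "9014 Bytes")
Set-NetAdapterAdvancedProperty -Name "vEthernet (LiveMigration)" -DisplayName "Jumbo Packet" -DisplayValue "9014"

# Then confirm end to end with a do-not-fragment ping
ping -l 8000 -f 192.168.1.50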
Real World Testing Example with SQLIO
[Charts: unbalanced iSCSI interface utilization vs. balanced iSCSI interface utilization]
Failover in Action!
Storage
Cluster Shared Volumes v1 (WS 2008 R2)
[Diagram: cluster nodes A and B]
Cluster Shared Volumes v2 (WS 2012)
[Diagram: cluster nodes A and B]
CSV Cache
- Uses system memory: up to 20% in Windows Server 2012, up to 80% in Windows Server 2012 R2
- Improves read performance
- VDI environment: improved boot times from 211 seconds to 29 seconds (average)
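Sizing the CSV cache is a cluster property set in MB. A sketch assuming a 512 MB cache (the size and the CSV name are examples):

# Windows Server 2012 R2: the cache is enabled by default, just size it (MB)
(Get-Cluster).BlockCacheSize = 512

# Windows Server 2012: the property was SharedVolumeBlockCacheSizeInMB,
# and the cache also had to be enabled per CSV
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512
Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1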
Software Defined Storage
[Diagram: File Server / SMB3 Scale-Out, iSCSI Target, Failover Cluster, Storage Spaces (SAS enclosure)]
Storage Spaces
[Diagram: storage pools with hot spares and virtual disks; resiliency options: Simple, 2/3-way Mirror, Parity]
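A minimal sketch of carving a mirrored virtual disk out of a pool; the pool, virtual disk, and physical disk names are assumptions:

# Pool all disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
$subsystem = Get-StorageSubSystem | Where-Object FriendlyName -Like "*Storage Spaces*"
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName $subsystem.FriendlyName -PhysicalDisks $disks

# Optionally mark a disk as a hot spare (disk name is a placeholder)
Set-PhysicalDisk -FriendlyName "PhysicalDisk5" -Usage HotSpare

# Create a two-way mirrored virtual disk from the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VMStore" -ResiliencySettingName Mirror -NumberOfDataCopies 2 -UseMaximumSize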
Scale Out File Services
Block Level Disk for VMs: Guest iSCSI
[Diagram: guest vNICs inside the VM connecting through the Hyper-V host's converged 10 Gb fabric to storage]
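Inside the guest, the block-level disk is attached with the standard iSCSI initiator cmdlets. A sketch run within the VM; the portal address is a placeholder:

# Start the initiator, point it at the array, and connect with MPIO
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress 192.168.50.10
$target = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true -IsMultipathEnabled $true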
Clustering
Quorum Windows Server 2008 (R2)
Dynamic Quorum (Windows Server 2012)
Requirements:
1. The cluster has already achieved quorum
2. Sequential failures of nodes have occurred
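Dynamic quorum can be observed from PowerShell; a quick sketch:

# 1 = dynamic quorum enabled (the default on Windows Server 2012 and later)
(Get-Cluster).DynamicQuorum

# Watch each node's vote being adjusted as nodes fail or return
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight -AutoSize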
Cluster Size Rack #1
Cluster Size
[Diagram: Rack #1 and Rack #2]
- 1 node = 100 GB RAM & 10 cores
- 8-node cluster with 2 redundant nodes = 6 usable nodes
- 6 nodes x 96 GB RAM = 576 GB RAM
- 6 nodes x 10 cores = 60 cores
- 60 cores x 2:1 ratio = 120 vCPUs
- 60 cores x 3:1 ratio = 180 vCPUs
- 60 cores x 8:1 ratio = 480 vCPUs
Cluster Size Rack #1 Rack #2 Rack #3
Cluster Aware Updating
Hotfixes
Two very useful TechNet Wiki pages for checking Hyper-V related updates:
- Hyper-V: Update List for Windows Server 2012
- List of Failover Cluster Hotfixes for Windows Server 2012
Hypervisor
Scale Enhancements

System           Resource                                 Max (WS 2008 R2)  Max (WS 2012)  Improvement factor
Host             Logical processors on hardware           64                320            5
Host             Physical memory                          1 terabyte        4 terabytes    4
Host             Virtual processors per host              512               1,024          2
Virtual machine  Virtual processors per virtual machine   4                 64             16
Virtual machine  Memory per virtual machine               64 GB             1 terabyte     16
Virtual machine  Active virtual machines                  384               1,024          2.7
Virtual machine  Virtual disk size                        2 terabytes       64 terabytes   32
Cluster          Nodes                                    16                64             4
Cluster          Virtual machines                         1,000             4,000          4
VM Migrations
Shared Nothing Live Migration: Constrained Delegation
[Diagram: HV01, HV02, HVSTORAGE01 and the SOFS/SOFS01 file server in the techdays.com domain]
Delegation settings:
HVSTORAGE01.techdays.com
- MVSMS: HV01.techdays.com
- MVSMS: HV02.techdays.com
- CIFS: SOFS.techdays.com
- CIFS: HV01.techdays.com
- CIFS: HV02.techdays.com
HV01.techdays.com
- MVSMS: HVSTORAGE.techdays.com
- MVSMS: HV02.techdays.com
- CIFS: SOFS.techdays.com
- CIFS: HV02.techdays.com
- CIFS: HVSTORAGE.techdays.com
HV02.techdays.com
- MVSMS: HV01.techdays.com
- MVSMS: HVSTORAGE01.techdays.com
- CIFS: SOFS.techdays.com
- CIFS: HV01.techdays.com
- CIFS: HVSTORAGE01.techdays.com
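Setting these delegation lists up by hand in Active Directory Users and Computers gets tedious. A hedged sketch of scripting one host's list with the AD module; the service type string follows the slide's domain, but verify the exact entries in your environment:

Import-Module ActiveDirectory

# Allow HV01 to delegate to HV02 for live migration and SMB (Kerberos constrained delegation)
$services = @(
    "Microsoft Virtual System Migration Service/HV02",
    "Microsoft Virtual System Migration Service/HV02.techdays.com",
    "cifs/HV02",
    "cifs/HV02.techdays.com"
)
Set-ADComputer -Identity "HV01" -Add @{ "msDS-AllowedToDelegateTo" = $services }

# On each host, switch live migration to Kerberos authentication
Set-VMHost -ComputerName "HV01" -VirtualMachineMigrationAuthenticationType Kerberos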
Hyper-V Replica
Guest iSCSI
[Diagram: VM1 and VM2 each use in-guest iSCSI connections to the SAN/SOFS for their C: drives]
Shared VHDX
[Diagram: VM1 and VM2 each have their own C: drive and share an S: drive for SQL on a shared VHDX hosted on the SAN/SOFS over iSCSI]
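Attaching the shared data disk to both guests is one cmdlet per VM. A sketch; the VHDX path and VM names are placeholders, and on Windows Server 2012 R2 the sharing switch on Add-VMHardDiskDrive is -SupportPersistentReservations (treat the exact parameter name as something to verify on your build):

# Attach the same VHDX (on a CSV or SOFS share) to both guest-cluster nodes as a shared SCSI disk
Add-VMHardDiskDrive -VMName "VM1" -Path "C:\ClusterStorage\Volume1\SQLData.vhdx" -ControllerType SCSI -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "VM2" -Path "C:\ClusterStorage\Volume1\SQLData.vhdx" -ControllerType SCSI -SupportPersistentReservations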
Dynamic Memory Strategy
- SQL VM (8 GB SQL + 4 GB OS + 4 GB dynamic): Startup 12 GB, Minimum 12 GB, Maximum 16 GB
- 4 GB OS + 8 GB dynamic: Startup 6 GB, Minimum 4 GB, Maximum 12 GB
- 512 MB minimum + 7.5 GB dynamic: Startup 1 GB, Minimum 512 MB, Maximum 8 GB
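These strategies translate directly to Set-VMMemory. A sketch using the SQL VM figures from the slide; the VM name is a placeholder:

# SQL VM: start and stay at 12 GB, allow growth to 16 GB
Set-VMMemory -VMName "SQL01" -DynamicMemoryEnabled $true -StartupBytes 12GB -MinimumBytes 12GB -MaximumBytes 16GB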
Server Core vs Minimal Shell vs GUI Mode
WS 2012 R2 Hyper-V Enhancements
- Live Migration: compressed, cross-version, SMB capable
- Linux support improvements: Dynamic Memory, online backup, Kdump/kexec support
- Automatic guest activation
- Built-in NVGRE gateway
- Dynamic mode NIC teaming
- Enhanced Session Mode (your clipboard will work!!)
- Live resizing of VHDX!!!
- Live virtual machine cloning
- Shared VHDX
- Storage Spaces tiered storage
Summary
- Pay attention to network design
- Think through failure points
- Pay attention to network design
VMware vs. Hyper-V
Questions?
Steve Evans (@scevans, www.loudsteve.com)
Nick Hawkins (@nhawkins, www.nickahawkins.com)