INF-VSP1800 vSphere Performance Best Practices Peter Boone, VMware, Inc. #vmworldinf
Disclaimer This session may contain product features that are currently under development. This session/overview of the new technology represents no commitment from VMware to deliver these features in any generally available product. Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind. Technical feasibility and market demand will affect final delivery. Pricing and packaging for any new technologies or features discussed or presented have not been determined.
Global Support Services and Customer Advocacy Support offices: Burlington, Canada Cork, Ireland Palo Alto, CA Broomfield, CO Tokyo, Japan Bangalore, India Local language support: Spanish, Portuguese, French, German, Japanese, Chinese Global Coverage 24x7, 365 days/year 6 Support Centers 1000+ Support Engineers Follow-the-sun Support for Severity 1 Issues Support Relationships with 100% of the Fortune 100; 99% of the Fortune 500
Customer Support Day Events Coming to a location near you: sharing of VMware best practices! Support Days are a collaboration between VMware Support, Sales, and customers, where you learn directly from the experts. Topics are driven by customer input, and typically include: Best practices Tips/tricks Top issues Product roadmaps/demos Certification offerings http://www.vmware.com/go/supportdays
Overview What a performance problem sounds like: My VM is running slow and I don't know what to do! I tried adding more memory and CPUs but the problem got worse! My VM is slow on one host but fast on another! What to look for? Where to start? We will explore some of the most common performance-related issues that our support centers receive cases for
A word about performance. Troubleshooting methodology must define: How to find root cause How to fix the problem Must answer these questions: 1. How do we know when we are done? 2. Where do we start looking for problems? 3. How do we know what to look for to identify a problem? 4. How do we find the root cause of a problem we have identified? 5. What do we change to fix the root cause? 6. Where do we look next if no problem is found?
Agenda Benchmarking & Tools Best Practices and Troubleshooting The 4 food groups Memory CPU Storage Network
BENCHMARKING & TOOLS 2012 VMware Inc. All rights reserved
Benchmarking Consistent and reproducible results Important to have a base level of acceptable performance Expectation vs. Acceptable Determine a baseline of performance prior to deployment Benchmark on a physical system if applicable Avoid subjective metrics, stay quantitative The system seems slower This worked better last year
Benchmarking Benchmarking should be done at the application layer Use application-specific benchmarking tools and load generators Check with the application vendor Isolate variables, benchmark the optimum situation before introducing load Understand dependencies Human interaction Other food groups Compare apples-to-apples
Tools vCenter Operations Aggregates thousands of metrics into Workload, Capacity, Health scores Self-learns normal conditions using patented analytics Smart alerts of impending performance and capacity degradation Identifies potential performance problems before they start
Tools vCenter Operations
Tools esxtop Valuable tool built into vSphere hosts View or capture real-time data View or play back data later Import data into 3rd-party tools vSphere Client performance graphs get their data from esxtop data Presentation/unit may be different (e.g. %RDY) Little overhead impact on the host
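To make the capture/playback workflow above concrete: esxtop batch mode (e.g. `esxtop -b -d 5 -n 12 > stats.csv`) writes perfmon-style CSV that you can post-process with any tooling. The sketch below parses a miniature, hand-written CSV whose column names are simplified stand-ins for the real counter paths, so treat it as an illustration of the approach rather than a parser for real output.

```python
import csv
import io

# Miniature stand-in for esxtop batch output: one timestamp column plus
# counter columns named like perfmon paths. Values are invented for the demo.
sample_csv = (
    '"Time","\\\\host\\Group Cpu(1001:web01)\\% Ready","\\\\host\\Group Cpu(1001:web01)\\% Used"\n'
    '"10:00:00","4.5","61.2"\n'
    '"10:00:05","12.8","72.9"\n'
)

def average_counter(csv_text, counter_substring):
    """Average one counter column (matched by substring) across all samples."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, samples = rows[0], rows[1:]
    col = next(i for i, name in enumerate(header) if counter_substring in name)
    return sum(float(r[col]) for r in samples) / len(samples)

print(average_counter(sample_csv, "% Ready"))  # (4.5 + 12.8) / 2 = 8.65
```

The same pattern extends to any counter the batch file contains; averaging over many samples is what smooths out the per-interval spikes you see in live esxtop.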
MEMORY
Memory Allocation A VM's RAM is not necessarily physical RAM vRAM + overhead = maximum physical RAM Whether that memory is physical or virtual depends on Host configuration Shares Limits Reservations Host load Idle/Active VMs
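A minimal sketch of the "vRAM + overhead = maximum physical RAM" rule above. The overhead model here is illustrative only (real values come from the overhead tables in the Resource Management Guide and grow with configured memory and vCPU count); the point is that a VM can consume more physical RAM than its configured size.

```python
# Illustrative overhead model: a fixed base, a per-vCPU cost, and a small
# fraction of configured vRAM. These coefficients are assumptions for the
# demo, not official overhead-table values.
def max_host_memory_mb(vram_mb, vcpus, base_mb=100, per_vcpu_mb=30):
    overhead_mb = base_mb + vcpus * per_vcpu_mb + vram_mb * 0.01
    return vram_mb + overhead_mb

# A 4 GB, 2-vCPU VM can consume up to roughly this much physical RAM:
print(max_host_memory_mb(4096, 2))  # 4296.96
```

Whatever the exact coefficients, the takeaway matches the slide: size capacity planning on vRAM plus overhead, not on vRAM alone.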
Memory Overhead Source: vSphere 5.0 Resource Management Guide
Memory Host Memory Management Occurs when memory is under contention Transparent Page Sharing Ballooning Compression Swapping
Memory Transparent Page Sharing
Memory Ballooning
Memory Compression
Memory Swapping
Memory VM Resource Allocation
Memory Resource Pool Allocation
Memory Ballooning vs. Swapping Ballooning is better than swapping Guest can surrender unused/free pages Guest chooses what to swap, can avoid swapping hot pages Idle memory tax uses ballooning
Memory Rightsizing Generally, it is better to OVER-commit than UNDER-commit If the running VMs are consuming too much host/pool memory Some VMs may not get physical memory Ballooning or host swapping Higher disk I/O All VMs slow down
Memory Rightsizing If a VM has too little vRAM Applications suffer from lack of RAM The guest OS swaps Increased disk traffic, thrashing The SAN slows down as a result of increased disk traffic If a VM has too much vRAM Higher overhead memory Possible decreased failover capacity Longer vMotion time Larger VSWP file Wasted resources
Memory Troubleshooting Wrong resource allocation May not notice a limit, e.g. a VM or template with a limit gets cloned Custom share values Ballooning or swapping at the host level Ballooning is a warning sign, not a problem Swapping is a performance issue if seen over an extended period Swapping/paging at the guest level Under-provisioned guest memory Missing balloon driver (Tools)
Memory Best Practices Avoid high active host memory over-commitment No host swapping occurs when total memory demand is less than the physical memory (assuming no limits) Right-size guest memory Avoid guest OS swapping Ensure there is enough vRAM to cover demand peaks Use a fully automated DRS cluster Test that vMotion works Use Resource Pools with High/Normal/Low shares Avoid using custom shares
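The reclamation techniques in this section can be summarized as an escalation ladder driven by host free memory. The sketch below is a simplification of the classic high/soft/hard/low free-memory states; the threshold percentages are illustrative assumptions, not exact values for any particular ESXi release, and the real scheduler's behavior is more nuanced.

```python
# Sketch: which reclamation techniques the host brings to bear as free
# memory shrinks. Thresholds are illustrative stand-ins for the
# high/soft/hard/low state boundaries.
def reclamation_actions(free_pct):
    if free_pct >= 6.0:   # "high" state: no contention
        return ["page sharing"]
    if free_pct >= 4.0:   # "soft" state: start reclaiming gently
        return ["page sharing", "ballooning"]
    if free_pct >= 2.0:   # "hard" state: aggressive reclamation
        return ["page sharing", "ballooning", "compression", "swapping"]
    # "low" state: host may also block VM memory allocations
    return ["page sharing", "ballooning", "compression", "swapping", "blocking"]

print(reclamation_actions(4.5))  # ['page sharing', 'ballooning']
```

This is why ballooning on its own is a warning sign rather than a problem: it appears one rung before the genuinely expensive techniques (compression and host swapping) kick in.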
CPU
CPU Overview Raw processing power of a given host or VM Hosts provide CPU resources VMs and Resource Pools consume CPU resources CPU cores/threads need to be shared between VMs Fair scheduling vCPU time Hardware interrupts for a VM Parallel processing for SMP VMs I/O
CPU esxtop
CPU esxtop Interpret the esxtop columns correctly %USED Physical CPU usage %SYS Percentage of time in the VMkernel %RUN Percentage of total scheduled time to run %WAIT Percentage of time in blocked or busy wait states %IDLE %WAIT - %IDLE can be used to estimate I/O wait time
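The %WAIT - %IDLE rule of thumb on this slide can be written down directly: time a world spends blocked that is not plain idle time is a rough proxy for time spent waiting on I/O. A minimal sketch:

```python
# Sketch of the slide's rule of thumb: estimated I/O wait is the blocked
# time (%WAIT) minus the portion that is simply idle (%IDLE). Clamped at
# zero since rounding in the counters can make the difference slightly
# negative.
def estimated_io_wait(pct_wait, pct_idle):
    return max(pct_wait - pct_idle, 0.0)

# A vCPU world showing %WAIT=85 and %IDLE=70 is likely waiting ~15% on I/O:
print(estimated_io_wait(85.0, 70.0))  # 15.0
```

It is only an estimate: %WAIT also covers other blocked states, so confirm a suspected I/O bottleneck in the storage views before acting on it.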
CPU Performance Overhead & Utilization Different workloads have different overhead costs (%SYS) even for the same utilization (%USED) CPU virtualization adds varying amounts of system overhead Direct execution vs. privileged execution Non-paravirtual adapters vs. emulated adapters Virtual hardware (interrupts!) Network and storage I/O
CPU vSMP Relaxed Co-Scheduling: vCPUs can run out-of-sync Idle vCPUs incur a scheduling penalty, so configure only as many vCPUs as needed Extra vCPUs impose unnecessary scheduling constraints Use uniprocessor VMs for single-threaded applications
CPU Scheduling Over-committing physical CPUs VMkernel CPU Scheduler (diagram sequence: vCPUs contending for physical CPUs)
CPU Ready Time The percentage of time that a vCPU is ready to execute, but waiting for physical CPU time Does not necessarily indicate a problem Indicates possible CPU contention or limits
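Ready time is reported differently in the two tools mentioned earlier: esxtop shows it as a percentage (%RDY), while vCenter's real-time charts report milliseconds of ready time summed over a 20-second sampling interval. A small sketch of the conversion between the two:

```python
# Convert vCenter's ready-time counter (milliseconds accumulated over a
# sampling interval; 20 s for real-time charts) into the percentage that
# esxtop displays as %RDY for a single vCPU.
def ready_ms_to_pct(ready_ms, interval_s=20):
    return ready_ms / (interval_s * 1000.0) * 100.0

# 2000 ms of ready time in a 20 s sample is 10% ready:
print(ready_ms_to_pct(2000))  # 10.0
```

Note that esxtop sums %RDY across all of a VM's vCPUs, so an 8-vCPU VM can legitimately show far more than 100%; divide by the vCPU count before comparing against a per-vCPU threshold.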
CPU NUMA nodes Non-Uniform Memory Access system architecture Each node consists of CPU cores and memory A CPU core in one NUMA node can access memory in another node, but at a small performance cost (diagram: NUMA node 1, NUMA node 2)
CPU NUMA nodes The VMkernel will try to keep a VM's vCPUs local to its memory Internal NUMA migrations can occur to balance load Manual CPU affinity can affect performance vCPUs inadvertently spread across NUMA nodes Not possible with fully automated DRS VMs with more vCPUs than cores available in a single NUMA node may see decreased performance
CPU Troubleshooting vCPU-to-pCPU over-allocation Hyper-Threading does not double CPU capacity! Limits or too many reservations can create artificial limits Expecting the same consolidation ratios with different workloads Virtualizing easy systems first, then expanding to heavier systems Compare apples to apples Frequency, turbo, cache sizes, cache sharing, core count, instruction set
CPU Best Practices Right-size vSMP VMs Keep heavy-hitters separated Fully automated DRS should do this for you Use anti-affinity rules if necessary Use a fully automated DRS cluster Test that vMotion works Use Resource Pools with High/Normal/Low shares Avoid using custom shares
STORAGE
Storage esxtop Counters Different esxtop storage views Adapter (d) VM (v) Disk Device (u) Key Fields: DAVG + KAVG = GAVG QUED/USD Command Queue Depth CMDS/s Commands Per Second MBREADS/s MBWRTN/s
Storage Troubleshooting with esxtop High DAVG: issue beyond the adapter bad/overloaded zoning, over-utilized storage processors, too few platters in the RAID set, etc. High KAVG: issue in the kernel storage stack Driver issue Full queue Aborts: issued when GAVG exceeds 5000 ms The command will be retried, causing a storage delay for the VM
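The DAVG/KAVG/GAVG relationship from the counters slide lends itself to a simple triage rule: GAVG is what the guest sees, and whichever component dominates points at the layer to investigate. The warning thresholds in this sketch are illustrative rules of thumb, not official limits.

```python
# Sketch of esxtop latency triage: guest-observed latency (GAVG) is device
# latency (DAVG) plus kernel latency (KAVG). Thresholds are illustrative.
def classify_latency(davg_ms, kavg_ms, device_warn_ms=20.0, kernel_warn_ms=2.0):
    gavg_ms = davg_ms + kavg_ms
    problems = []
    if davg_ms > device_warn_ms:   # array/fabric side: zoning, SPs, spindles
        problems.append("device/array")
    if kavg_ms > kernel_warn_ms:   # VMkernel side: driver issue, full queue
        problems.append("kernel/queue")
    return gavg_ms, problems

print(classify_latency(35.0, 0.5))  # (35.5, ['device/array'])
```

A high GAVG with low DAVG and low KAVG is also possible and usually points at in-guest queuing rather than the host or the array.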
Storage Benchmarking with Iometer
Storage Storage I/O Control Allows the use of Shares per VMDK Throttling occurs when the datastore reaches the latency threshold Higher-share VMDKs perform I/O first vCenter monitors latency across all hosts Not effective if the datastore is shared with other vCenters
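To illustrate the share mechanism described above: once the datastore crosses its latency threshold, each VMDK's slice of the available device queue follows its share value. The sketch below is a simple proportional split in the spirit of Storage I/O Control, not the actual SIOC algorithm, and the numbers are invented for the demo.

```python
# Sketch: divide a fixed number of device queue slots among VMDKs in
# proportion to their configured shares (integer division, so small
# remainders are dropped). Share values and slot count are illustrative.
def queue_slots(shares_by_vmdk, total_slots):
    total_shares = sum(shares_by_vmdk.values())
    return {vmdk: total_slots * s // total_shares
            for vmdk, s in shares_by_vmdk.items()}

print(queue_slots({"db.vmdk": 2000, "web.vmdk": 1000, "test.vmdk": 500}, 64))
# {'db.vmdk': 36, 'web.vmdk': 18, 'test.vmdk': 9}
```

The key property is the one the slide states: under contention the high-share VMDK gets proportionally more outstanding I/Os, while with no contention shares have no effect at all.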
Storage Storage DRS Datastore clusters Maintenance mode Anti-affinity rules vCenter monitors for latency and disk space Migrates VMDKs for better performance or utilization Not effective with automated tiering SANs Check the HCL to confirm these features are compatible
Storage Troubleshooting Snapshots Excessive traffic down one HBA / switch / SP can cause latency Consider using Round Robin in conjunction with ALUA Always be paranoid when it comes to monitoring storage I/O Consider your I/O patterns Peak time for storage I/O? Virus scans, database maintenance, user logins Always consult with the array vendor They know the best practices for their array!
Storage Best Practices Use different tiers of storage for different VM workloads Slower storage for OS VMDKs Faster storage for databases or other high-I/O applications Use the Paravirtual SCSI adapter Reduced overhead, higher throughput Use path balancing where possible, either through plugins (PowerPath) or Round Robin with ALUA, if supported Use Storage DRS with SIOC Balance for both free space and latency Simplified datastore management
NETWORK
Network Load Balancing Load balancing defines which uplink is used Route based on Port ID Route based on IP hash Route based on MAC hash Route based on NIC load Probability of high-bandwidth VMs being on the same physical NIC Traffic will stay on the elected uplink until an event occurs NIC link state change, adding/removing a NIC from a team, beacon probe timeout
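To make "Route based on IP hash" concrete: the chosen uplink is a deterministic function of the source and destination addresses, so a single VM talking to many destinations can spread across uplinks, while any one flow always stays on the same uplink. The XOR-of-addresses hash below is a simplification for illustration, not the exact teaming-policy computation.

```python
import ipaddress

# Sketch of IP-hash uplink selection: hash the source/destination pair and
# take it modulo the number of active uplinks. The XOR hash is an assumed
# simplification of the real policy.
def uplink_for(src_ip, dst_ip, n_uplinks):
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return h % n_uplinks

# The same source/destination pair always lands on the same uplink:
a = uplink_for("10.0.0.5", "10.0.1.7", 2)
b = uplink_for("10.0.0.5", "10.0.1.7", 2)
print(a == b)  # True
```

This determinism is also why IP hash requires a matching EtherChannel/static link aggregation configuration on the physical switch: both sides must agree on which link carries a given flow.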
Network Troubleshooting Check counters for NICs and VMs Network load imbalance 10 Gbps NICs can incur a significant CPU load when running at 100% Ensure hardware supports TSO Use the latest drivers and firmware for your NIC on the host For multi-tier VM applications, use DRS affinity rules to keep VMs on the same host Same vSwitch / VLAN, rules out the physical network If using Jumbo Frames, ensure they are enabled end-to-end
Network Best Practices Use the vmxnet3 virtual adapter Less CPU overhead 10 Gbps connection to the vSwitch Use the latest driver/firmware for the NICs on the host Use network shares Requires Virtual Distributed Switch 4.1 Isolate vMotion and iSCSI traffic from regular VM traffic Separate vSwitches with dedicated NIC(s) Most applicable with Gigabit NICs
In conclusion
Key Takeaways Performance Best Practices Understand your environment Hardware, storage, networking VMs & applications In almost all situations, advanced configuration values do not need to be tweaked or modified Use fully automated DRS Use Paravirtual virtual hardware
Important Links