Performance Management in the Virtual Data Center, Part II: Memory Management



Performance Management in the Virtual Data Center, Part II: Memory Management
Mark B. Friedman, Demand Technology Software, 2013
markf@demandtech.com

The Vision
- Virtualization technology and delivery of IT Infrastructure as a Service (IaaS)
- Elastic, multi-tenant computing capacity encompassing virtualized servers, storage, and shared network bandwidth
VMware Virtual Memory Management

IaaS Advertised Benefits
- Cost savings
- Economies of scale
- Capacity on demand
- Rapid provisioning
- High availability
- Management automation
- Dynamic load balancing
- etc.

IaaS (diagram)
- Pooling, clustering, dynamic failover, automatic provisioning
- Blade servers, FC switches, SAN disk arrays, storage directors
- Network switches: VLANs, virtual IP switches, jumbo frames
- VM Hosts running guest VMs; cloud storage services, virtual disks

e.g., Intel hardware assistance (diagram)

e.g., VMware vSphere
- Virtualizes the entire IT infrastructure, including servers, storage, and networks
- Pools these resources and presents a uniform set of elements that are subject to policy-based management
Source: http://pubs.vmware.com/vsphere-50/

e.g., VMware vSphere
- Resources that can be pooled: CPU, RAM, power, storage, network bandwidth
- Policy-based settings for resource allocation:
  - priority-based Shares (High : Normal : Low = 4:2:1)
  - Reservation (a guaranteed allocation)
  - upper Limits on allocation

Vision vs. Reality
- VMware is both complex and costly
- Growing numbers of VMware specialists are required to support this environment

Vision vs. Reality
For VMware to manage a virtualized data center effectively, all the major functions of the server OS have to migrate into the hypervisor OS:
- Security and identity
- TCP/IP protocol stack
- Thread scheduling (SMP, NUMA, etc.)
- I/O device drivers
- Virtual memory management
- Clustering and HA failover

Vision vs. Reality: Performance
VMware's vision is ambitious, and the cost justification is mainly via aggressive server consolidation. Common performance concerns include:
- Can more efficient use of computer resources be achieved through pooling?
- Can disruptions due to automated operations be minimized or controlled?
- Can application scalability be achieved through JIT auto-provisioning?
- Are VMware's performance tools adequate to monitor these complex configurations?

Vision vs. Reality: Performance
- Automation: when a resource is severely over-committed, vMotion will initiate re-balancing across the hardware
- It is not clear precisely what sequence of events triggers vMotion
- VMware implements very basic policies for resource sharing, e.g., High / Normal / Low shares assigned to Critical Infrastructure / Production / Test workloads

Vision vs. Reality: Performance
VMware load balancing is anonymous! VMware has little or no knowledge of:
- what applications the guest is running
- what service levels that application is delivering (or requires)
- how resources (code, data, memory pages, files, etc.) are allocated across running processes

Vision vs. Reality: Performance
Massive over-provisioning:
- Typical blade servers contain far more power than typical application servers require
- Additional spare capacity/redundancy also supports rapid recovery from failures
- Large-scale aggregation (pooling) reduces the impact of spikes in demand; e.g., in a large-scale storage processor, disk space consumed grows at a glacial pace
- On massively over-provisioned Hosts, guest machine performance ≈ native performance

Multiple layers of performance data (diagram)
- Multiple layers add complexity: how many layers will you need to drill into to solve a performance problem?
- Layers: native hardware, virtualized hardware (LDEVs), hypervisor, virtual pools, app clusters, virtual machine (Windows), apps
- VMware recommends starting at the hypervisor layer, but be prepared to drill into any of the others

Multiple layers of performance data
- VMware adds two new layers: the hypervisor and the virtual pools
- Fortunately, the remaining layers (native hardware, virtualized hardware, app clusters, the Windows virtual machine, apps) are largely unchanged
- But see the impact on Windows clocks!

VMware measurements
Monitoring the virtualized data center requires pooling incomplete and frequently incompatible measurement data from across the data center: blade servers, FC switches, SAN disk arrays, storage directors, network switches, hardware resource pools, VM Hosts, guest VMs, and applications.

VMware measurements
Metrics for performance monitoring:
- VMware Host resource consumption: CPU, RAM, power, datastores and disks, network
- Guest machine resource consumption
- Resource pools (data limited to CPU and memory)
- vMotion actions (directly or indirectly)

VMware measurements
Substantial amounts of detailed information on VMware performance and operations are available.

VMware measurements
There is only scant documentation on many of these data fields.

VMware Performance Basics
Identify over-committed resources (CPU, RAM, disk, network). Available measurements:
- Resource utilization
- Device latency
- Device queuing (contention due to over-commitment or under-configuration)
Augment the VMware data with:
- external SAN and disk hardware measurements
- measurements from inside the guest OS (especially for over-commitment of RAM)
- application service level measurements taken from outside the virtual infrastructure

VMware Virtual Memory Management
- Extended Page Tables (hardware-supported)
- Page Sharing (when VMs contain identical pages): significantly reduces the memory footprint when homogeneous VMs are co-located on the same VM Host
- RAM over-commitment: physical RAM is over-committed when VM virtual memory granted > sizeof(RAM)
- The VM Host cannot see inside the guest OS: after RAM is granted to the guest VM, the VMware hypervisor has little understanding of how it is used

Extended Page Tables (diagram)
- Each process has its own linear (virtual) address space
- Guest OS page tables map virtual addresses to (virtualized) physical addresses
- VMware shadow page tables (hypervisor page tables) map virtualized physical addresses to (real) physical addresses
- The virtual address translation hardware walks both mappings, and the TLB caches the composed translation
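The two-level mapping on this slide can be sketched as a composition of two lookups. This is a toy model (Python dicts standing in for hardware page tables; page numbers are illustrative values from the slide's diagram), not the real EPT or shadow-table structures:

```python
# Two-level address translation under virtualization, as a toy lookup model.
# The guest OS maps guest-virtual pages to guest-"physical" pages; the
# hypervisor's shadow (or EPT) tables map guest-physical pages to real
# machine pages. Hardware walks both; the TLB caches the composed result.

guest_page_table = {0x0: 0xCD8A, 0x1: 0x6774, 0x2: 0x3298}            # guest virtual -> guest physical
shadow_page_table = {0xCD8A: 0x8362, 0x6774: 0x98AB, 0x3298: 0x5684}  # guest physical -> machine

def translate(virtual_page):
    """Compose the two mappings the way the translation hardware does."""
    guest_physical = guest_page_table[virtual_page]
    return shadow_page_table[guest_physical]

machine_page = translate(0x0)   # 0x0 -> 0xCD8A -> 0x8362
```

The point of hardware-assisted EPT is that this composition is performed (and cached in the TLB) by the processor, rather than by hypervisor software maintaining shadow tables.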

VMware Virtual Memory Management
- VMware allocates RAM on behalf of guest machines on demand, i.e., virtual memory management
- VMware traditionally maintained shadow page tables, but relied on the guest OS for page replacement based on the age of physical pages
- Hardware began to emerge to lessen the burden of shadow page table maintenance

VMware Virtual Memory Management
Meanwhile, VMware retained most of its original approach to virtual memory management:
- If the VMware Host is forced to trim guest OS working set pages, it does so blindly: candidates for page replacement (VMware swapping) are selected randomly
- Recommendation: watch for Memory State transitions, and keep swapping to a minimum

Virtual Memory Management
- VMware uses over-commitment of RAM to support aggressive server consolidation
- Granting memory on demand is much more efficient than statically partitioning virtual memory
- Many guest machines are not actively using all the physical RAM that is granted to them
- Physical RAM is over-committed when VM virtual memory granted > sizeof(RAM), which is typical in virtual memory management

RAM over-commitment
Physical RAM is over-committed when VM virtual memory granted > sizeof(RAM). This works because the guest OS seldom uses all the virtual memory that it is granted. But beware:
- guest VM spin-up
- improved LRU in recent versions of Windows
- Windows processes (e.g., SQL Server) that will allocate virtual memory dynamically up to the limit of physical RAM
- limited disk bandwidth available for paging operations; this applies both to the VM Host swap files and to its guest machines and their page files

RAM over-commitment
- VMs are presented with a full and consistent physical memory configuration
- All allocation requests for RAM from the virtual machine are handled dynamically by VMware
- Virtual address translation hardware provides a reliable mechanism for detecting what RAM the guest VM actually accessed
- Note: the VMware Host ultimately resolves all page faults, but it has a very limited understanding of actual page usage patterns, based on the initial page access only
- Memory Granted: RAM allocated to the guest VM on demand

RAM over-commitment: Case Study
- ESX Server: a standalone Dell Optiplex 990 with an Intel i7 quad-core processor and 16 GB of RAM (Hyper-Threading is disabled through the BIOS)
- 4 Windows Server 2012 guests, each defined at 8 GB
- Running identical test workloads: a .NET Framework app that allocates and exercises large amounts of virtual memory
- Goal: see how VMware responds to a workload where Memory Granted = 2 * sizeof(RAM)

RAM over-commitment
Page replacement is problematic because no comparable hardware mechanism exists for detecting what RAM the guest VM has freed!
- Memory Active: RAM the guest VM has referenced during the last measurement interval
- Estimated based on sampling: each interval, flag some percentage of guest VM pages as invalid, then use the (soft) page fault rate to estimate the VM working set
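The sampling scheme described above can be sketched as a small simulation. The function name and parameters are illustrative, not VMware's; `truly_active` stands in for the guest page references the hypervisor cannot observe directly:

```python
import random

def estimate_active_memory(total_pages, truly_active, sample_size, rng):
    """Estimate a guest's working set the way the slide describes: invalidate
    a random sample of its pages, count the (soft) page faults when sampled
    pages are touched again, and scale the fault fraction up to the whole VM."""
    sample = rng.sample(range(total_pages), sample_size)
    faults = sum(1 for page in sample if page in truly_active)
    return faults / sample_size * total_pages

rng = random.Random(42)
active_set = set(range(400))     # guest actually touches 400 of its 2048 pages
estimate = estimate_active_memory(2048, active_set, 256, rng)
```

Because Memory Active is a statistical estimate, it fluctuates around the true working set size from one measurement interval to the next, which is worth remembering when reading the vSphere counters.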

RAM over-commitment
While Memory Active < sizeof(RAM), VMware balances RAM usage across guest machines, subject to resource allocation policies set by the customer:
- Shares
- Reservation: sets a lower limit on the size of the guest VM working set
- Limit: sets an upper limit on the size of the guest VM working set
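A toy model of how those three knobs could interact, using the 4:2:1 High/Normal/Low share ratio mentioned earlier. This is a sketch of the policy, not VMware's actual allocator; the names and the clamping order are assumptions:

```python
def allocate(ram_mb, vms):
    """Divide RAM proportionally by Shares, then clamp each VM between its
    Reservation (floor) and Limit (cap).
    'vms' maps name -> (shares, reservation_mb, limit_mb)."""
    total_shares = sum(shares for shares, _, _ in vms.values())
    allocation = {}
    for name, (shares, reservation, limit) in vms.items():
        fair_share = ram_mb * shares / total_shares
        allocation[name] = max(reservation, min(limit, fair_share))
    return allocation

# High : Normal : Low = 4 : 2 : 1
alloc = allocate(7000, {"high": (4, 0, 10**9),
                        "normal": (2, 0, 10**9),
                        "low": (1, 0, 10**9)})
```

Note that clamping to reservations can push the total above the available RAM; a real allocator has to reconcile that by taking memory back from the unreserved VMs, which is exactly where reclamation (ballooning, swapping) comes in.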

RAM over-commitment: VMware page replacement policy
But when Memory Active ≥ sizeof(RAM):
- First, ballooning attempts to induce page replacement in the virtual guest
- If ballooning is not sufficient to relieve the memory contention, VMware then resorts to more drastic means: swapping, which selects swap candidates from the guest working set at random; dedicated swap files are allocated per guest machine

VMware Performance Basics: RAM
Memory overhead (MB) per guest VM, by guest memory size and number of vCPUs:

Guest RAM (MB)   1 vCPU   2 vCPUs   4 vCPUs   8 vCPUs
256                 20       24        32        48
1024                26       30        38        53
4096                49       53        61        77
16384              140      144       152       169

Scalability: 1:100 (>= 4 GB). See http://pubs.vmware.com/vsphere-50/index.jsp

RAM over-commitment
Transparent memory sharing supports aggressive over-commitment:
- Many opportunities to share pages exist whenever guest VMs are running the same OS or the same application code; shared data pages are also possible
- VMware scans pages continuously, looking for sharing candidates
- A copy-on-write mechanism takes over whenever a shared page is changed and can no longer be shared

Memory sharing
- Compute a hash from the contents of a page
- Check for a collision in a hash table that contains the hash codes for all active pages
- If found, compare the two pages byte by byte
- If equal, share the page!
Note: tuning parameters exist for controlling the rate at which the background thread that scans memory runs.
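The steps above can be sketched as follows. SHA-1 stands in for whatever hash ESX uses internally; the byte-for-byte comparison is what guards against hash collisions:

```python
import hashlib

def find_sharing(pages):
    """Content-based page sharing: hash each page, probe a hash table of
    previously seen pages, and on a hit verify byte for byte before sharing.
    Returns a map from page id to the canonical copy it shares with."""
    seen = {}       # content hash -> id of the first page with that hash
    canonical = {}
    for page_id, content in pages.items():
        digest = hashlib.sha1(content).digest()
        if digest in seen and pages[seen[digest]] == content:
            canonical[page_id] = seen[digest]   # identical: share one copy
        else:
            seen.setdefault(digest, page_id)
            canonical[page_id] = page_id        # unique: keep a private copy
    return canonical

pages = {1: b"\x00" * 4096,            # zero page
         2: b"\x00" * 4096,            # identical zero page -> shared with 1
         3: b"code" + b"\x00" * 4092}  # distinct content -> private
shared = find_sharing(pages)
```

Zero pages (page 1 and 2 here) are the most common sharing win in practice, which is why the Windows Zero list figures into the Memory Shared counter discussed later.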

Memory sharing
Costs associated with memory sharing:
- Cost of memory scanning, which runs as a background thread (tuning parameters exist for controlling the rate at which memory is scanned)
- Cost of additional page fault handling: in the shadow page tables, shared pages are flagged read-only, so the first write to a shared page generates a page fault; to handle it, the updated page must be copied somewhere else in physical RAM and unshared

RAM allocation metrics
- Memory Granted: guest VM virtual memory mapped to VM Host virtual memory
- Memory Usage %: for a guest VM, Active Memory / Memory Granted; for the Host, Active Memory / Host machine physical memory size
- Memory Shared: guest physical memory that is duplicated across other virtual machines and thus eligible to be shared; double-counted for each VM where the identical pages reside; includes Memory Zero pages (the Zero list inside Windows)
- Memory Overhead: VMware must build and maintain shadow page tables for each VM

Page replacement in VMware
Triggered by thresholds:
- Ballooning: induce the VMware guest OS to perform page replacement
- Swapping: random page replacement for immediate relief
- Compression: stolen pages are written to the compression cache
- Idle Machine Tax: an LRU-like mechanism directs increased page stealing toward inactive guest machines

Page replacement in VMware: Memory State

State   No.   Condition                              Reclamation action
High    0     Free memory >= 6% of machine memory    None
Soft    1     Free memory < 6%                       Ballooning
Hard    2     Free memory < 4%                       Pages compressed or swapped to disk*
Low     3     Free memory < 2%                       Blocks execution of active VMs > target allocations*

* Note: CPU Swap time can be accumulated due to swapping pages to disk.
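The table's thresholds amount to a simple state function. A minimal sketch, assuming the percentages shown above (the actual thresholds are tunable and version-dependent):

```python
def memory_state(free_pct):
    """Return (state name, reclamation action) for a given percentage of
    free machine memory, following the threshold table above."""
    if free_pct >= 6.0:
        return "High", "none"
    if free_pct >= 4.0:
        return "Soft", "ballooning"
    if free_pct >= 2.0:
        return "Hard", "compress or swap to disk"
    return "Low", "block VMs above target allocation"
```

Tracking transitions out of the High state is the practical monitoring advice repeated throughout this deck: by the time the Host reaches Hard or Low, guests are already paying for it.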

RAM over-commitment (chart)
Active Memory is computed using sampling!

Ballooning
- When RAM is over-committed, ballooning tries to force guest VMs to use LRU to trim their own pages
- Only the guest VM understands its memory usage patterns and can apply LRU effectively (Available Bytes + Standby Cache Bytes in Windows)
- Well-behaved apps listen for a Windows VMM low-memory event to release excess pages, e.g., SQL Server, the Exchange Store process, and .NET Framework apps

Ballooning (diagram)
When memory is under pressure (a Memory State transition: free memory falls below 6%, toward 4%), the VMware Host attempts to induce page stealing in the guest OS: it notifies the balloon driver inside the guest VM, via a back channel, to inflate.

Ballooning (diagram)
The Windows guest balloon driver inflates and pins memory via MmAllocatePagesForMdlEx, then hands the VMware Host a list of empty pages. Empty pages are immediate candidates for stealing!

Ballooning
vmmemctl.sys is the VMware balloon driver inside Windows. When it inflates, it calls MmAllocatePagesForMdlEx and MmProbeAndLockPages, allocating from the non-paged pool.
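Ballooning's effect can be modeled with a toy function. This is purely illustrative arithmetic (the real driver pins physical pages in the guest and reports them to the host), but it captures the two outcomes the following slides describe:

```python
def inflate_balloon(guest_free_pages, balloon_target):
    """The driver pins as many free guest pages as it can, which the host
    reclaims directly; any shortfall shows up as memory pressure that forces
    the guest OS's own page replacement (the paging spikes described below)."""
    pinned = min(balloon_target, guest_free_pages)
    host_reclaimed = pinned
    pressure_on_guest = balloon_target - pinned
    return host_reclaimed, pressure_on_guest
```

When the guest has plenty of free pages, the balloon is cheap; when it does not, the balloon target is met only by driving paging inside the guest, which is the telltale signature discussed next.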

Ballooning
- When the balloon driver inflates, it allocates pages from the non-paged pool; these empty pages can be stolen immediately by the VMware Host
- Note: the size of the non-paged pool is very restricted in earlier versions of 32-bit Windows
- Windows also currently supports pinning normally allocated pages in memory
- When the balloon driver inflates, it may drive paging within the guest OS due to increased memory contention

Ballooning
When the balloon driver inflates, it may drive paging due to memory contention. Telltale signs inside the guest OS:
- a spike in the size of the non-paged pool
- a spike in demand-zero paging
- an increase in overall paging
- applications that listen for low-memory notifications will attempt to release older pages, e.g., SQL Server buffers, .NET Framework garbage collection (GC)

Ballooning
Telltale signs of ballooning from inside the guest OS:
- the size of the non-paged pool spikes, but not nearly as large as the VMware counters suggest
- a spike in demand-zero paging?
- an increase in overall paging to disk? (limited by the speed of the paging file's virtual disk)
- applications that listen for low-memory notifications will attempt to release older pages, e.g., SQL Server buffers, .NET Framework garbage collection (GC)

Ballooning: the view from inside (chart)
Opportunities to share zeroed pages diminish!

Swapping
- Swapping removes pages from the VM working set at random
- Random page replacement algorithms perform surprisingly well in simulations; however, some worst-case scenarios are possible, e.g., VMware might select a guest OS page containing active page table entries
- In general, LRU-based page replacement is usually far superior; VMware could move toward LRU-based page replacement in the future
- VMware swapping is opaque to the underlying guest machine: it can have a major performance impact that is often not apparent from internal performance measurements!
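That claim is easy to check in a toy simulation: with a reference string that has strong locality, random eviction keeps evicting hot pages that LRU would protect. Seeds, sizes, and the 90/10 locality mix below are arbitrary choices for illustration:

```python
import random
from collections import OrderedDict

def faults_lru(refs, frames):
    resident, faults = OrderedDict(), 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)          # mark most recently used
        else:
            faults += 1
            if len(resident) >= frames:
                resident.popitem(last=False)    # evict least recently used
            resident[page] = True
    return faults

def faults_random(refs, frames, rng):
    resident, faults = set(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) >= frames:
                resident.remove(rng.choice(sorted(resident)))  # evict at random
            resident.add(page)
    return faults

rng = random.Random(1)
# 90% of references hit a hot set of 8 pages; the rest touch 56 cold pages
refs = [rng.randrange(8) if rng.random() < 0.9 else rng.randrange(8, 64)
        for _ in range(5000)]
lru_faults = faults_lru(refs, 12)
random_faults = faults_random(refs, 12, random.Random(2))
```

With 12 frames, LRU keeps the 8-page hot set resident and faults mostly on cold references, while random eviction repeatedly throws out hot pages and refaults on them.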

Swapping
If swapping occurs, there are some useful mitigation strategies:
- the VM Host disk configuration
- VM swapping to a memory-resident cache using compression
But, generally, VMware swapping should be avoided where possible. And while swapping is bad, thrashing (swap-in rate ≈ swap-out rate) is even worse.

RAM Best Practices
- Track Memory State transitions
- Limit the use of memory maximum Limits, but protect key VMs using minimum Reservation sizes: one VM that hogs virtual memory can impact the performance of all VMs residing on the same VM Host
- Try to avoid swapping
- For ballooning, monitor key Windows virtual memory contention indicators: Pool Nonpaged Bytes, demand-zero paging, Pages/sec (to disk)
- Configure homogeneous VMs to run on the same VM Host to maximize opportunities for content-based page sharing

Vision vs. Reality: Performance
How well does VMware today address common performance concerns?
- Can more efficient use of computer resources be achieved through pooling? Yes, definitely!
- Can disruptions due to automated operations be minimized or controlled? Probably not.
- Can application scalability be achieved through JIT auto-provisioning? Not without visibility into apps!
- Are VMware's performance tools adequate to monitor these complex configurations? Not yet!

Vision vs. Reality: Performance
Virtual data center performance challenges:
- Resolving performance problems can require marshaling data and expertise from across many IT data center management disciplines
- VMware knowledge tends to be siloed in the IT technical support organization
- VMware has been secretive about its proprietary technology, including licensing that restricts customers from publishing performance benchmarks, etc., primarily due to competitive pressure from Microsoft, which is aggressively bundling virtualization into its OS

Supplementary material
Impact of virtualized clocks on Windows performance counters: any time you need to drill down from VMware to Windows performance counters, you need to understand whether VMware has perturbed the counter data, including the Windows physical memory usage counters.

Hardware Protection Layers: Virtualization of Clocks (diagram)
Multiple Windows guests, each with its own OS kernel, run on a single VM Host.

Windows guest machine clocks
- The Windows OS Scheduler wakes up 64 times per second to update the system clock and perform CPU accounting
- Clock interrupts are virtualized; the rdtsc instruction is also virtualized
- How does this affect measurements made from inside the guest OS, while guest VMs queue in the physical CPU Ready Queue?

Windows guest machine clocks
All guest machine clocks and timers are impacted:
- OS Scheduler periodic timer interrupts are not received at regular 15.6 ms quantum intervals
- some intervals may be delayed while the guest is accumulating Ready time
- some interrupts may be dropped if the guest machine is delayed long enough
VMware responds to guest machine timer requests using apparent time: this ensures monotonically increasing clocks with values approaching real time, and provides consistent clock values across multiple logical CPUs.

Windows guest performance monitoring
- Virtualization is transparent to the OS (by design)
- Guest machine scheduling delays are only visible from outside the guest machine, in the VMware Host performance counters
- All guest machine clocks and timers are impacted
- When benchmarking, run a dedicated virtual machine configured with fewer logical processors than physical CPUs

Windows guest performance monitoring
% Processor Time measurements are legacy measurements based on sampling:
- potentially, fewer CPU time usage samples are gathered each performance monitoring interval
- the time between samples is no longer guaranteed to be uniform
These impact accuracy, but they are (probably) not showstoppers. In theory, VMware Host measurements of guest machine Ready time delays could be used to correct internal measurement data that does not reflect the amount of time the VM guest spends queued, waiting for processor(s).

All rate/sec counters are subject to errors, e.g., Physical Disk\Transfers/sec, TCP\Segments/sec, etc. These calculations are all based on underlying event counters that are reliable:

    rate = (counter[t1] - counter[t0]) / duration

However, the denominator, the interval duration, is suspect. Guest machine Ready time delays can be used to correct duration values to reflect any time the VM guest spent queued. Internal event counts for disk, network activity, etc. can be reconciled against VM Host measurements over the same interval.
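As a sketch, the correction amounts to subtracting Ready time from the denominator. The function and parameter names here are illustrative, not a VMware or Windows API:

```python
def corrected_rate(count_t0, count_t1, interval_secs, ready_secs):
    """Recompute a per-second rate using only the time the guest was actually
    running. The raw event counts are reliable; the elapsed-time denominator
    is what virtualization distorts."""
    running_secs = interval_secs - ready_secs
    return (count_t1 - count_t0) / running_secs

naive = (5000 - 2000) / 60                      # 50.0 transfers/sec over the wall-clock interval
corrected = corrected_rate(2000, 5000, 60, 15)  # guest ran only 45 of 60 seconds
```

In this example the guest sustained about 66.7 transfers/sec while it was actually running, a third higher than the 50/sec the naive wall-clock calculation reports.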

Page replacement in Windows is also triggered by thresholds (Available Bytes).