WHITE PAPER Optimizing Virtual Platform Disk Performance





The intensified demand for IT network efficiency and lower operating costs has been driving the phenomenal growth of virtualization over the past decade, with no signs of slowing. At present, many corporations run more virtualized servers than physical servers. While virtualization provides the opportunity for consolidation and better hardware utilization, it is critically important to recognize and never exceed hardware capacities. The importance of ensuring sufficient CPU and memory is well understood, and many processes and management tools are available to help plan and properly provision VMs for these critical resources. I/O traffic, both network and disk, is more complicated to account for in virtual environments because it tends to be more unpredictable. To better accommodate disk I/O, most virtualization platforms implement a Storage Area Network (SAN), which can offer greater data throughput and a dynamic environment to address fluctuations in I/O demand. While a storage infrastructure can be built out to meet expected demands, there are uncontrollable behaviors that will still impede performance.

File Fragmentation

As files are written to a general-purpose local disk file system, such as Windows NTFS, a natural byproduct is file fragmentation. File fragmentation is a state in which the data stream of a file is stored as non-contiguous clusters in the file system.

[Figure: fragmented files stored in non-contiguous clusters in the file system]

Fragmentation occurs on logical volumes; device drivers translate the logical clusters to logical blocks and, eventually, to physical sectors residing on a storage device. It can be pictured as pieces of a file located in a non-contiguous manner. The effect of this file fragmentation is increased I/O overhead, leading to slower system performance for the operating system.
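To make that overhead concrete, here is a minimal sketch (an illustration added for this discussion, not from the original paper) that models a file as a list of extents, i.e., (starting cluster, length in clusters) pairs, and counts how many separate disk requests a full read of the file requires when each contiguous extent can be satisfied by one request.

# Minimal sketch: assumes one disk request can cover one contiguous extent.
# A file is modeled as a list of (start_cluster, length_in_clusters) extents.

def read_requests(extents):
    """Number of separate read requests needed to read the whole file."""
    return len(extents)  # one request per contiguous extent

# A contiguous file: 1,024 clusters stored as a single extent.
contiguous = [(10_000, 1_024)]

# The same data split into four scattered fragments.
fragmented = [(10_000, 256), (42_000, 256), (88_000, 256), (130_500, 256)]

print(read_requests(contiguous))  # 1 request
print(read_requests(fragmented))  # 4 requests: four times the I/O overhead

Real file systems coalesce adjacent requests and read ahead, but the basic relationship holds: every additional fragment is an additional opportunity for an extra disk transfer and an extra seek.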

In the case of virtual platforms, a guest operating system is stored as a file (i.e., a set of files) on the virtual platform's file system as a virtual disk. A virtual disk is essentially a container file, housing all the files that constitute the OS and user data of a VM. A virtual disk file can fragment just as any other file can, resulting in what amounts to a logically fragmented virtual hard disk, which still has typical file fragmentation contained within it. The picture represented above would appear as VirtualServer1.vmdk, in four pieces.

[Figure: fragmented virtual disk file stored in non-contiguous clusters in the host file system]

This situation equates to hierarchical fragmentation or, more simply, fragmentation-within-fragmentation. Given the relatively static nature and large size of virtual disks, and the large allocation unit size of VMFS (typically 1MB), fragmentation of these files is unlikely to cause performance issues in most cases. The focus and solution for fragmentation should be directed at the guest operating system.

Fragmentation within a Windows VM will cause Windows to generate additional, unnecessary I/O. This added I/O traffic can be observed with Windows Performance Monitor, where it is one of the principal causes of split I/Os. Fragmentation prevention and defragmentation technologies exist to eliminate this unnecessary I/O overhead and improve system performance. Fragmentation prevention solves fragmentation at the source by actively causing files to be written contiguously via advanced file system drivers. Defragmentation is the action in which file fragments are re-consolidated within the file system into a single extent, so that only the minimum number of disk I/Os is required to access the file, thereby increasing access speed.
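A quick way to observe this added traffic is the "Split IO/Sec" counter that Performance Monitor exposes on the LogicalDisk and PhysicalDisk objects. The short sketch below simply shells out to the built-in typeperf utility; it is an illustration only, and the drive letter, sample interval, and sample count are arbitrary choices, not values from the paper.

import subprocess

# Sample the Split IO/Sec counter for the C: volume, once per second, ten times.
# "\LogicalDisk(C:)\Split IO/Sec" is a standard Performance Monitor counter.
subprocess.run(
    [
        "typeperf",
        r"\LogicalDisk(C:)\Split IO/Sec",
        "-si", "1",   # sample interval in seconds
        "-sc", "10",  # number of samples to collect
    ],
    check=True,
)

Sustained non-zero values while a file is being read end-to-end are a strong hint that the file is fragmented.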

Partition Alignment

Depending on your storage protocol and virtual disk type, misaligned partitions can cause additional unnecessary I/O.[1]

[Figure: a file's clusters mapped across the NTFS, VMFS, and SAN LUN layers]

In the example above, in which the ESX and SAN volumes are not properly aligned, a Word file spanning four NTFS clusters can cause additional unnecessary I/O in both VMFS and the SAN LUN.

Similarities Between Partition Alignment and Fragmentation

Much as misaligned partitions can cause additional I/O at multiple layers, so does fragmentation. While partitions can be properly aligned once and never require further corrective action, fragmentation will continue to occur and needs to be addressed regularly. In the example below, which assumes proper partition alignment, a file in eight fragments in the guest OS causes additional I/Os to be generated at the virtualization platform layer[2] and at the LUN.

[Figure: NTFS (64KB cluster), VMFS (1MB block), SAN LUN (128KB stripe), fragmented file]

Defragmenting this file in the guest operating system eliminates the excess I/O when accessing it, as Windows must generate only one I/O. This reduction in I/O traffic carries through to the host file system and the SAN LUN, ensuring efficiencies at each layer.

[Figure: NTFS (64KB cluster), VMFS (1MB block), SAN LUN (128KB stripe), contiguous file]

[1] VMware guide to proper partition alignment: http://www.vmware.com/pdf/esx3_partition_align.pdf
[2] Note that VMFS, in the example above, only needs to read the actual amount of data requested, in multiples of 512-byte sectors; it does not need to read an entire 1MB block.
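The arithmetic behind both effects can be sketched in a few lines. The cluster, block, and stripe sizes below come from the example above; the offsets and the helper functions are illustrative assumptions, not part of the paper.

KB = 1024
CLUSTER = 64 * KB    # NTFS cluster size from the example
STRIPE = 128 * KB    # SAN LUN stripe size from the example

def is_aligned(partition_offset_bytes, unit_bytes):
    """A partition is aligned when its starting offset is a multiple of the unit."""
    return partition_offset_bytes % unit_bytes == 0

# Legacy partitions often started at sector 63 (a 31.5KB offset), which is misaligned;
# an offset that is a multiple of the stripe size is aligned.
print(is_aligned(63 * 512, STRIPE))   # False
print(is_aligned(1024 * KB, STRIPE))  # True

def stripes_touched(offset_bytes, size_bytes, stripe=STRIPE):
    """How many stripe units one read of size_bytes starting at offset_bytes crosses."""
    first = offset_bytes // stripe
    last = (offset_bytes + size_bytes - 1) // stripe
    return last - first + 1

# An aligned 64KB cluster fits inside one 128KB stripe ...
print(stripes_touched(0, CLUSTER))        # 1
# ... but the same cluster starting 96KB into a stripe spills into the next one.
print(stripes_touched(96 * KB, CLUSTER))  # 2

# Eight clusters read contiguously touch four stripes in one sequential pass,
# while the same data in eight scattered fragments needs eight separate accesses.
print(stripes_touched(0, 8 * CLUSTER))                          # 4
scattered = [i * 10 * STRIPE for i in range(8)]
print(sum(stripes_touched(off, CLUSTER) for off in scattered))  # 8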

Best Practices

Defragmentation of the Windows file system is a VMware-recommended performance solution. VMware Knowledge Base article 1004004[3] states: "Defragmenting a disk is required to address problems encountered with an operating system as a result of file system fragmentation. Fragmentation problems result in slow operating system performance." In order to validate the VMware statement, tests were performed.

Test Environment Configuration

Host OS: ESX Server 4.1 with VMFS (1MB blocks)
Guest OS: Windows Server 2008 R2 x64 (3GB RAM, 1 vCPU)
Benchmarking software: Iometer (http://www.iometer.org/)
Fragmentation program: FragmentFile.exe (used to fragment a specified file)
Defragmentation software: V-locity (http://www.condusiv.com/business/v-locity/)
Storage: 10GB test volume in a 40GB virtual disk, on a VMFS datastore of 410GB; HP Smart Array P400 controller, RAID 5 (4x 136GB SCSI at 10K RPM), stripe size of 64KB with a 64KB offset (properly aligned)

[3] http://kb.vmware.com/selfservice/microsites/search.do?language=en_us&cmd=displaykc&externalid=1004004

Load Generation

The industry-standard benchmarking tool Iometer was used to generate I/O load for these experiments.

Iometer configuration options used as variables in these experiments:

Transfer request sizes: 1KB, 4KB, 8KB, 16KB, 32KB, 64KB, 72KB, and 128KB
Percent random or sequential distribution: for each transfer request size, 0 percent and 100 percent random accesses were selected
Percent read or write distribution: for each transfer request size, 0 percent and 100 percent read accesses were selected

Iometer parameters that were held constant for all tests:

Size of volume: 10GB
Size of Iometer test file (iobw.tst): 8,131,204 KB (~7.75GB)
Number of outstanding I/O operations: 16
Runtime: 4 minutes
Ramp-up time: 60 seconds
Number of workers to spawn automatically: 1

The following is excerpted from a VMware white paper[4] and helps to explain why these Iometer parameters were used:

"Servers typically run a mix of workloads consisting of different access patterns and I/O data sizes. Within a workload there may be several data transfer sizes and more than one access pattern. There are a few applications in which access is either purely sequential or purely random. For example, database logs are written sequentially. Reading this data back during database recovery is done by means of a sequential read operation. Typically, online transaction processing (OLTP) database access is predominantly random in nature. The size of the data transfer depends on the application and is often a range rather than a single value. For Microsoft Exchange, the I/O size is generally small (from 4KB to 16KB), Microsoft SQL Server database random read and write accesses are 8KB, Oracle accesses are typically 8KB, and Lotus Domino uses 4KB. On the Windows platform, the I/O transfer size of an application can be determined using Perfmon. In summary, I/O characteristics of a workload are defined in terms of the ratio of read operations to write operations, the ratio of sequential accesses to random accesses, and the data transfer size. Often, a range of data transfer sizes may be specified instead of a single value."

[4] http://www.vmware.com/pdf/esx3_partition_align.pdf
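Taken together, these variables define the test matrix mechanically. The snippet below (an added illustration; it is not Condusiv's tooling or an Iometer configuration file) simply enumerates the combinations:

from itertools import product

# Variables from the test description above.
sizes_kb = [1, 4, 8, 16, 32, 64, 72, 128]
access_patterns = ["sequential", "random"]   # 0% vs. 100% random
operations = ["read", "write"]               # 100% read vs. 100% write

matrix = list(product(operations, access_patterns, sizes_kb))
print(len(matrix))   # 32 data points: 4 workloads x 8 transfer request sizes
for point in matrix[:3]:
    print(point)     # e.g. ('read', 'sequential', 1)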

Create Fragmentation

Condusiv's FragmentFile.exe tool was used to fragment the Iometer test file (iobw.tst) into 568,572 fragments, a mid-range amount of fragmentation for a production server. The statistics below, collected from an analysis of the volume, were generated with V-locity.

Volume
  Volume size: 10,240 MB
  Cluster size: 4 KB
  Used space: 8,023 MB
  Free space: 2,216 MB
  Percent free space: 21%

Free Space Fragmentation
  Percent low-performing free space: 0%
  Total free space extents: 3
  Largest free space extent: 911 MB
  Average free space extent size: 739 MB

Low-Performing Files Percentage
  % of entire volume: 77%
  % of used space: 98%

File Fragmentation
  Total files: 11
  Average file size: 724 MB
  Total fragmented files: 1
  Total excess fragments: 568,572
  Average fragments per file: 51,689.36
  Files with performance loss: 1

Most Fragmented Files
  \iobw.tst: 568,572 fragments, 7,941 MB

Test Procedure

The primary objective was to characterize the performance of fragmented versus defragmented virtual machines for a range of data sizes across a variety of access patterns. The data sizes selected were 1KB, 4KB, 8KB, 16KB, 32KB, 64KB, 72KB, and 128KB. The access patterns were restricted to a combination of 100 percent read or write, and 100 percent random or sequential. Each of these four workloads was tested at eight data sizes, for a total of 32 data points. In order to isolate the impact of fragmentation, only the test VM was powered on and active for the duration of the tests. For the initial run, Iometer created a non-fragmented file and performance data was collected. FragmentFile.exe was then used to fragment the Iometer test file, the VM was rebooted, and the test procedure was re-run. This resulted in data sets for both the non-fragmented and fragmented scenarios. The results are graphed below.
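As a quick sanity check on these figures (an added back-of-the-envelope calculation, not part of the original report), the reported average is consistent with counting each file's first extent as a fragment:

# Figures from the V-locity analysis above.
total_files = 11
excess_fragments = 568_572

# If every file contributes one baseline fragment plus its excess fragments,
# the average works out to the value shown in the report.
total_fragments = excess_fragments + total_files
print(round(total_fragments / total_files, 2))  # 51689.36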

Performance Results

As the graphs show, all workloads show an increase in throughput when the volume (file) is defragmented (i.e., not fragmented). It also becomes clear that as the I/O read/write size increases, the fragmentation-induced I/O latency increases dramatically. The greatest improvements from a contiguous file are found with file reads, both random and sequential.

Conclusion

Fragmentation demonstrably impedes the performance of Windows guest operating systems. While the tests depicted were executed on a single VM, the issue becomes exponentially worse in a multi-VM environment in which each VM suffers from file fragmentation. Because virtualized servers share a common infrastructure, it is important to remember that generating disk I/O in one virtual machine affects I/O requests from other virtual systems. Therefore, latencies in one VM will artificially inflate latency in co-located virtual machines (VMs that share a common platform).

Fragmentation artificially inflates the number of disk I/O requests which, on a virtual machine platform, compounds the disk bottleneck even more than on conventional systems. Eliminating fragmentation in VMs, and the corresponding unnecessary disk I/O traffic, is vital to platform-wide performance and enhances the ability to host more VMs on a shared infrastructure.

Condusiv Technologies, 7590 N. Glenoaks Blvd., Burbank, CA 91504, 800-829-6468, www.condusiv.com

© 2012 Condusiv Technologies Corporation. All Rights Reserved. Condusiv, the Condusiv Technologies Corporation logo, V-locity, and Diskeeper are registered trademarks of Condusiv Technologies Corporation. All other trademarks are the property of their respective owners.