Performance Study

Performance Characteristics of VMFS and RDM
VMware ESX Server

VMware ESX Server offers three choices for managing disk access in a virtual machine: VMware Virtual Machine File System (VMFS), virtual raw device mapping (RDM), and physical raw device mapping. It is important to understand the I/O characteristics of these disk access management systems in order to choose the right access type for a particular application. Choosing the right disk access management method can be a key factor in achieving high system performance for enterprise-class applications.

This study provides a performance characterization of the various disk access management methods supported by VMware ESX Server. The goal is to provide data on performance and system resource utilization at various load levels for different types of workloads. This information gives you an idea of the relative throughput, I/O rate, and CPU efficiency of each option so you can select the appropriate disk access method for your application.

This study covers the following topics:

Technology Overview
Executive Summary
Test Environment
Performance Results
Conclusion
Configuration
Resources

Copyright 2007 VMware, Inc. All rights reserved.
Technology Overview

VMFS is a special high-performance file system offered by VMware to store ESX Server virtual machines. Available as part of ESX Server, it is a distributed file system that allows concurrent access by multiple hosts to files on a shared VMFS volume. VMFS offers high I/O capabilities for virtual machines. It is optimized for storing and accessing large files such as virtual disks and the memory images of suspended virtual machines.

RDM is a mapping file in a VMFS volume that acts as a proxy for a raw physical device. The RDM file contains metadata used to manage and redirect disk accesses to the physical device. This technique provides the advantages of direct access to a physical device in addition to some of the advantages of a virtual disk on VMFS storage. In brief, it offers VMFS manageability combined with the raw device access required by certain applications.

You can configure RDM in two ways:

Virtual compatibility mode: This mode fully virtualizes the mapped device, which appears to the guest operating system as a virtual disk file on a VMFS volume. Virtual mode provides such benefits of VMFS as advanced file locking for data protection and use of snapshots.

Physical compatibility mode: This mode provides minimal SCSI virtualization of the mapped device. VMkernel passes all SCSI commands to the device, with one exception, thereby exposing all the physical characteristics of the underlying hardware.

Both VMFS and RDM provide such distributed file system features as file locking, permissions, persistent naming, and VMotion capabilities. VMFS is the preferred option for most enterprise applications, including databases, ERP, CRM, VMware Consolidated Backup, Web servers, and file servers. Although VMFS is recommended for most virtual disk storage, raw disk access is needed in a few cases. RDM is recommended for those cases.
Common uses of RDM include cluster data and quorum disks for configurations that use clustering between virtual machines or between physical and virtual machines, and SAN snapshot or other layered applications running in a virtual machine. For more information on VMFS and RDM, refer to the VMware Infrastructure 3 Server Configuration Guide.

Executive Summary

The main conclusions that can be drawn from the tests described in this study are:

For random reads and writes, VMFS and RDM yield similar I/O performance.

For sequential reads and writes, RDM provides slightly better I/O performance at smaller I/O block sizes, but VMFS and RDM provide similar I/O performance for larger I/O block sizes (greater than 32KB).

CPU cycles required per I/O operation for both VMFS and RDM are similar for random reads and writes at smaller I/O block sizes. As the I/O block size increases, the CPU cycles per I/O operation increase for VMFS compared to RDM.

For sequential reads and writes, RDM requires relatively fewer CPU cycles per I/O operation.

Test Environment

The tests described in this study characterize the performance of the various options for disk access management that VMware offers with ESX Server. We ran the tests with a uniprocessor virtual machine using Windows Server 2003 Enterprise Edition with SP2 as the guest operating system. The virtual machine ran on an ESX Server system installed on a local SCSI disk. We attached two disks to the virtual machine: one virtual disk for the operating system and a separate test disk, which was the target for the I/O operations. We used a logical volume created on five local SCSI disks configured as RAID 0 as the test disk. We generated I/O load using Iometer, a widely used tool for evaluating I/O performance (see Resources for a link to more information). See Configuration for a detailed list of the hardware and software configuration we used for the tests.
Disk Layout

In this study, disk layout refers to the configuration, location, and type of disk used for the tests. We used a single server with six physical SAS disks and a RAID controller for the tests. We created a logical drive on a single physical disk and installed ESX Server on this drive. On the same drive, we also created a VMFS partition and used it to store the virtual disk on which we installed the Windows Server 2003 guest operating system. We configured the remaining five physical disks as RAID 0 and created a 10GB logical volume, referred to here as the test disk. We used this test disk only for the I/O stress test.

We created a virtual disk on the test disk and attached that virtual disk to the Windows virtual machine. To the guest operating system, the virtual disk appears as a physical drive. In the VMFS tests, we implemented the virtual disk (seen as a physical disk by the guest operating system) as a .vmdk file stored on a VMFS partition created on the test disk. In the RDM tests, we created an RDM file on the VMFS volume (the volume that held the virtual machine configuration files and the virtual disk where we installed the guest operating system) and mapped the RDM file to the test disk. We configured the test virtual disk so it was connected to an LSI SCSI host bus adapter.

Figure 1. Disk layout for VMFS and RDM tests (in the VMFS test, the test disk is a .vmdk file on a VMFS volume created on the test disk; in the RDM test, an RDM mapping file on the operating system disk's VMFS volume points to the raw test disk)

From the perspective of the guest operating system, the test disks were raw disks with no partition or file system (such as NTFS) created on them. Iometer can read and write raw, unformatted disks directly. We used this capability so we could compare the performance of the underlying storage implementation without involving any operating system file system.
Software Configuration

We configured the guest operating system to use the LSI Logic SCSI driver. On VMFS volumes, we created virtual disks with the thick option. This option provides the best-performing disk allocation scheme for a virtual disk: all the space allocated during disk creation is available to the guest operating system immediately after creation, and any old data that might be present on the allocated space is not zeroed out during virtual machine write operations. Unless stated otherwise, we left all ESX Server and guest operating system parameters at their default settings.

NOTE: The thick option is not the default when you create a virtual disk using the VI Client. To create virtual disks with the thick option, we used a command-line program (vmkfstools). For details on using vmkfstools, see the VMware Infrastructure 3 Server Configuration Guide.

I/O Workload Characteristics

Enterprise applications typically generate I/O with mixed access patterns. The size of the data blocks transferred between the server hosting the application and the storage also varies. Designing an appropriate disk and file system layout is very important for achieving optimum performance for a given workload.

A few applications have a single access pattern. One example is backup, with its pattern of sequential reads. Online transaction processing (OLTP) database access, on the other hand, is highly random. The nature of the application also affects the size of the data blocks transferred. Often, the data block size is not a single value but a range. For Microsoft Exchange, the I/O operations are generally small, from 4KB to 16KB. OLTP database accesses using Microsoft SQL Server and Oracle databases use a mix of random 8KB read and write operations.

The I/O characteristics of a workload can be defined in terms of the ratio of read to write operations, the ratio of sequential to random I/O access, and the data transfer size.
A range of data transfer sizes can also be specified instead of a single value.

Test Cases

In this study, we characterize the performance of VMFS and RDM for a range of data transfer sizes across various access patterns. The data transfer sizes we selected were 4KB, 8KB, 16KB, 32KB, and 64KB. The access patterns we chose were random reads, random writes, sequential reads, sequential writes, and a mix of random reads and writes. The test cases are summarized in Table 1.

Table 1. Test cases

                      100% Sequential             100% Random
100% Read             4KB, 8KB, 16KB, 32KB, 64KB  4KB, 8KB, 16KB, 32KB, 64KB
100% Write            4KB, 8KB, 16KB, 32KB, 64KB  4KB, 8KB, 16KB, 32KB, 64KB
50% Read + 50% Write                              4KB, 8KB, 16KB, 32KB, 64KB
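The test matrix in Table 1 (five transfer sizes crossed with the access patterns, where the 50 percent read plus 50 percent write mix is run only for random access) can be enumerated with a short sketch like the following. The tuple layout is our own illustrative choice, not part of the study:

```python
from itertools import product

# Access patterns used in the study; the read/write mix is random-only.
access_patterns = [
    ("sequential", "read"),
    ("sequential", "write"),
    ("random", "read"),
    ("random", "write"),
    ("random", "mix"),  # 50% read + 50% write
]
transfer_sizes_kb = [4, 8, 16, 32, 64]

# One test case per (distribution, operation, block size) combination.
test_cases = [
    (dist, op, size)
    for (dist, op), size in product(access_patterns, transfer_sizes_kb)
]

print(len(test_cases))  # 25 test cases, matching Table 1
```

Enumerating the combinations this way makes it easy to confirm that no sequential mixed-workload cases exist, which matches the empty cell in Table 1.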
Load Generation

We used the Iometer benchmarking tool, originally developed at Intel and widely used in I/O subsystem performance testing, to generate I/O load and measure the I/O performance. For a link to more information, see Resources.

Iometer provides options to create and execute a well-designed set of I/O workloads. Because we designed our tests to characterize the relative performance of virtual disks on raw devices and VMFS, we used only basic load emulation features in the tests.

Iometer configuration options used as variables in the tests:

Transfer request sizes: 4KB, 8KB, 16KB, 32KB, and 64KB.

Percent random or sequential distribution: for each transfer request size, we selected 0 percent random access (equivalent to 100 percent sequential access) and 100 percent random access.

Percent read or write distribution: for each transfer request size, we selected 0 percent read access (equivalent to 100 percent write access), 50 percent read access (only for random access), and 100 percent read access.

Iometer parameters constant for all test cases:

Number of outstanding I/O operations: 8
Run time: 5 minutes
Ramp-up time: 60 seconds
Number of workers to spawn automatically: 1

Performance Results

This section presents data and analysis of storage subsystem performance in a uniprocessor virtual machine.

Metrics

The metrics we used to compare the performance of VMFS and RDM are I/O rate (measured as the number of I/O operations per second), throughput rate (measured as MB per second), and CPU efficiency. In this study, we report the I/O rate and throughput rate as measured by Iometer. We use an efficiency metric called MHz per I/Ops to compare the efficiencies of VMFS and RDM.
This metric is defined as the CPU cost (in processor cycles) per unit of I/O and is calculated as follows:

MHz per I/Ops = (Average CPU utilization x CPU rating in MHz x Number of cores) / Number of I/O operations per second

We collected I/O and CPU utilization statistics from Iometer and esxtop as follows:

Iometer collected I/O operations per second and throughput in MBps.

esxtop collected the average CPU utilization of physical CPUs.

For links to additional information on how to collect I/O statistics using Iometer and how to collect CPU statistics using esxtop, see Resources.
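As a minimal sketch of how the two derived metrics combine the measurements above (all input numbers below are illustrative examples, not results from this study):

```python
def mhz_per_iops(avg_cpu_util, cpu_mhz, num_cores, iops):
    """CPU cost per unit of I/O rate: utilization x clock rating x cores / IOPS."""
    return (avg_cpu_util * cpu_mhz * num_cores) / iops

def throughput_mbps(iops, block_kb):
    """Throughput in MB per second: I/O rate x data size (1024 KB per MB)."""
    return iops * block_kb / 1024.0

# Illustrative example: 25% average utilization across four 3000MHz cores
# while sustaining 1500 I/O operations per second with 8KB transfers.
print(mhz_per_iops(0.25, 3000, 4, 1500))   # 2.0 MHz per I/Ops
print(round(throughput_mbps(1500, 8), 2))  # 11.72 MB per second
```

A lower MHz per I/Ops value means the disk access method delivers the same I/O rate at a lower CPU cost, which is why the CPU efficiency figures later in this study are labeled "lower is better."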
Performance

This section compares the performance characteristics of each type of disk access management. The metrics used are I/O rate, throughput, and CPU efficiency.

Random Workload

Most of the enterprise applications that generate random workloads typically transfer small amounts of data in each I/O operation. For a small I/O data transfer, the time it takes for the disk head to physically locate the data on the disk and move toward it is much greater than the time to actually read or write the data. Any delay caused by overhead in the application and file system layers becomes negligible compared to the seek delay. In our tests for random workloads, VMFS and RDM produced similar I/O performance, as is evident from Figure 2, Figure 3, and Figure 4.

Figure 2. Random mixed I/O operations per second (higher is better)

Figure 3. Random read I/O operations per second (higher is better)
Figure 4. Random write I/O operations per second (higher is better)

Sequential Workload

For sequential workloads, applications typically access consecutive data blocks. The disk seek time is reduced because the disk head has to move very little to access the next block. Any I/O queuing delay that occurs in the application and file system layers can affect the overall throughput. Accessing file metadata and data structures to resolve file names to physical data blocks on disks adds a slight delay to the overall time required to complete an I/O operation. To saturate the I/O bandwidth, which typically is a few hundred megabytes per second, applications have to issue more I/O requests when the I/O block size is small. As the number of I/O requests increases, the delay caused by file system overhead increases. Thus raw disks yield slightly higher throughput than file systems at smaller data transfer sizes.

In the sequential workload tests with small I/O block sizes, RDM provided 3 to 10 percent higher I/O throughput, as shown in Figure 5 and Figure 6. However, as the I/O block size increases, the number of I/O requests decreases because the I/O bandwidth is saturated, and the delay caused by file system overhead becomes less significant. In this case, both RDM and VMFS yield similar performance.

Figure 5. Sequential read I/O operations per second (higher is better)
Figure 6. Sequential write I/O operations per second (higher is better)

Table 2 and Table 3 show the throughput rates in megabytes per second corresponding to the above I/O operations per second for VMFS and RDM. The throughput rates (I/O operations per second x data size) are consistent with the I/O rates shown above and display behavior similar to that explained for the I/O rates.

Table 2. Throughput rates for random workloads in megabytes per second (for each data size in KB, columns give VMFS, RDM virtual, and RDM physical throughput for random mix, random read, and random write)

Table 3. Throughput rates for sequential workloads in megabytes per second (for each data size in KB, columns give VMFS, RDM virtual, and RDM physical throughput for sequential read and sequential write)

CPU Efficiency

CPU efficiency can be computed in terms of CPU cycles required per unit of I/O or per unit of throughput (byte). We obtained a figure for the CPU cycles used by the virtual machine to manage the workload, including the virtualization overhead, by multiplying the average CPU utilization of all the processors seen by ESX Server, the CPU rating in MHz, and the total number of cores in the system (four in this case). In this study, we measured CPU efficiency as CPU cycles per unit of I/O operations per second. CPU efficiency for the various workloads is shown in Figures 7 through 11.

For random workloads with smaller I/O block sizes (4KB and 8KB), the CPU efficiency observed with both VMFS and RDM is very similar. For I/O block sizes greater than 8KB, RDM offers some improvement (7 to 12 percent). For sequential workloads, RDM offers an 8 to 12 percent improvement. As with any file system, VMFS maintains data structures that map file names
to physical blocks on the disk. Each file I/O requires accessing this metadata to resolve file names to actual data blocks before reading data from or writing data to a file. The name resolution requires a few extra CPU cycles every time there is an I/O access. In addition, maintaining the metadata also requires additional CPU cycles. RDM does not require any underlying file system to manage its data. Data is accessed directly from the disk, without any file system overhead, resulting in lower CPU cycle consumption and higher CPU efficiency.

Figure 7. CPU efficiency for random mix (lower is better)

Figure 8. CPU efficiency for random read (lower is better)

Figure 9. CPU efficiency for random write (lower is better)

Figure 10. CPU efficiency for sequential read (lower is better)
Figure 11. CPU efficiency for sequential write (lower is better)

Conclusion

VMware ESX Server offers three options for disk access management: VMFS, RDM (Virtual), and RDM (Physical). All of the options provide distributed file system features such as user-friendly persistent names, distributed file locking, and file permissions. Both VMFS and RDM allow you to migrate a virtual machine using VMotion. This study compares the three options and finds similar performance for all three.

For random workloads, VMFS and RDM produce similar I/O throughput. For sequential workloads with small I/O block sizes, RDM provides a small increase in throughput compared to VMFS. However, the performance gap decreases as the I/O block size increases. VMFS and RDM operate at similar CPU efficiency levels for random workloads at smaller I/O block sizes. For sequential workloads, RDM shows improved CPU efficiency.

The performance tests described in this study show that VMFS and RDM provide similar I/O throughput for most of the workloads we tested. Most enterprise applications, such as Web servers, file servers, databases, ERP, and CRM, exhibit I/O characteristics similar to the workloads we used in our tests and hence can use either VMFS or RDM for configuring virtual disks when run in a virtual machine. However, a few cases require the use of raw disks. Backup applications that use such inherent SAN features as snapshots, and clustering applications (for both data and quorum disks), require raw disks. RDM is recommended for these cases. We recommend the use of RDM in these cases not for performance reasons but because these applications require lower-level disk control.

Configuration

This section describes the hardware and software configurations we used in the tests described in this study.
Server Hardware

Processors: two dual-core Intel Xeon 5160 processors, 3.00GHz, 4MB L2 cache (4 cores total)
Memory: 8GB
Local disks: one Seagate 146GB 10K RPM SCSI disk (for ESX Server and the guest operating system); five Seagate 146GB 10K RPM SCSI disks in a RAID 0 configuration (for the test disk)

Software

Virtualization software: ESX Server (build 32039)

Guest Operating System Configuration

Operating system: Windows Server 2003 R2 Enterprise Edition, 32-bit, Service Pack 2, 512MB of RAM, 1 CPU
Test disk: 10GB unformatted disk
Iometer Configuration

Number of outstanding I/Os: 8
Ramp-up time: 60 seconds
Run time: 5 minutes
Number of workers (threads): 1
Access patterns: random/mix, random/read, random/write, sequential/read, sequential/write
Transfer request sizes: 4KB, 8KB, 16KB, 32KB, 64KB

Resources

To obtain Iometer, go to the Iometer project site.

For more information on how to gather I/O statistics using Iometer, see the Iometer user's guide.

To learn more about how to collect CPU statistics using esxtop, see the chapter "Using the esxtop Utility" in the VMware Infrastructure 3 Resource Management Guide.

For a detailed description of VMFS and RDM and how to configure them, see chapters 5 and 8 of the VMware Infrastructure 3 Server Configuration Guide.

VMware, Inc., Hillview Ave., Palo Alto, CA
Copyright 2007 VMware, Inc. All rights reserved. Protected by one or more of U.S. Patent Nos. 6,397,242, 6,496,847, 6,704,925, 6,711,672, 6,725,289, 6,735,601, 6,785,886, 6,789,156, 6,795,966, 6,880,022, 6,944,699, 6,961,806, 6,961,941, 7,069,413, 7,082,598, 7,089,377, 7,111,086, 7,111,145, 7,117,481, 7,149,843, 7,155,558, 7,222,221, 7,260,815, 7,260,820, 7,269,683, 7,275,136, 7,277,998, 7,277,999, 7,278,030, 7,281,102, and 7,290,253; patents pending. VMware, the VMware boxes logo and design, Virtual SMP, and VMotion are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. Microsoft, Windows, and Windows NT are registered trademarks of Microsoft Corporation. Linux is a registered trademark of Linus Torvalds. All other marks and names mentioned herein may be trademarks of their respective companies.

Revision Item: PS-035-PRD
Best Practices for Optimizing SQL Server Database Performance with the LSI WarpDrive Acceleration Card Version 1.0 April 2011 DB15-000761-00 Revision History Version and Date Version 1.0, April 2011 Initial
Installing and Administering VMware vsphere Update Manager Update 1 vsphere Update Manager 5.1 This document supports the version of each product listed and supports all subsequent versions until the document
ESX Server 3.0 and VirtualCenter 2.0 Revision: 20060615 Item: VI-ENG-Q206-221 You can find the most up-to-date technical documentation at: http://www.vmware.com/support/pubs The VMware Web site also provides
PARALLELS SERVER BARE METAL 5.0 README 1999-2011 Parallels Holdings, Ltd. and its affiliates. All rights reserved. This document provides the first-priority information on the Parallels Server Bare Metal
System Requirements Document Table Of Contents Overview... 2 ADVANTAGE 2009... 3 Server Hardware... 3 Proprietary Navision Database... 4 Microsoft SQL Server 2005 /2008 Database... 5 SQL Server Hardware...
What s New in VMware vsphere 4.1 Storage VMware vsphere 4.1 W H I T E P A P E R Introduction VMware vsphere 4.1 brings many new capabilities to further extend the benefits of vsphere 4.0. These new features
1 VMWARE WHITE PAPER Introduction This paper outlines the considerations that affect network throughput. The paper examines the applications deployed on top of a virtual infrastructure and discusses the
Using Iometer to Show Acceleration Benefits for VMware vsphere 5.5 with FlashSoft Software 3.7 WHITE PAPER Western Digital Technologies, Inc. 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com Table
Enabling Technologies for Distributed Computing Dr. Sanjay P. Ahuja, Ph.D. Fidelity National Financial Distinguished Professor of CIS School of Computing, UNF Multi-core CPUs and Multithreading Technologies
April 23 11 Aviation Parkway, Suite 4 Morrisville, NC 2756 919-38-28 Fax 919-38-2899 32 B Lakeside Drive Foster City, CA 9444 65-513-8 Fax 65-513-899 www.veritest.com email@example.com Microsoft Windows
IOmark- VDI Nimbus Data Gemini Test Report: VDI- 130906- a Test Copyright 2010-2013 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VDI, VDI- IOmark, and IOmark are trademarks of Evaluator
Technical Note VirtualCenter Database Maintenance VirtualCenter 2.0.x and Microsoft SQL Server This document discusses ways to maintain the VirtualCenter database for increased performance and manageability.
WHITE PAPER BASICS OF DISK I/O PERFORMANCE WHITE PAPER FUJITSU PRIMERGY SERVER BASICS OF DISK I/O PERFORMANCE This technical documentation is aimed at the persons responsible for the disk I/O performance
IT Business Management System Requirements Guide IT Business Management 8.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced
DELL Virtual Desktop Infrastructure Study END-TO-END COMPUTING Dell Enterprise Solutions Engineering 1 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL
VMware Best Practice and Integration Guide Dot Hill Systems Introduction 1 INTRODUCTION Today s Data Centers are embracing Server Virtualization as a means to optimize hardware resources, energy resources,