SERVER VIRTUALIZATION WITH EMC XTREMIO ALL-FLASH ARRAY AND VMWARE VSPHERE 5.5

EMC Solutions

Abstract

This white paper highlights the performance and operational advantages of server virtualization based on EMC XtremIO all-flash array technology. This document describes the reference architecture of the EMC Proven Infrastructure for a validated server virtualization solution enabled by the XtremIO all-flash array and VMware vSphere version 5.5.

Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published October 2013.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

Part Number H12389

Contents

Chapter 1 Introduction 7
    Executive summary 8
    Target audience 8
    Document purpose 8
    Business requirements 9

Chapter 2 Solution Functionality and Features 11
    Introduction 12
    Server virtualization 12
    Data center demands 12
        Performance 13
        Workload portability 13
        Scalability 14
        Virtual machine provisioning 15
        Deduplication 15
        Thin provisioning 16
        Data protection 16
        VAAI integration 16
        Summary 17

Chapter 3 Solution Technology 18
    Overview 19
    Key components 20
    Virtualization 20
        Overview 20
        VMware vSphere 5.5 20
        VMware vSphere HA 21
        VMware vCenter 21
        EMC XtremIO VAAI support 21
    EMC XtremIO storage 21
        Cluster design 21
        Deduplication capacity savings 22
        Thin provisioning 22
        Fault protection 22
        Scalability 23
        In-memory metadata operations 23

Chapter 4 Solution Architecture 24
    Overview 25
    Reference architecture 25
        Overview 25
        Logical architecture 25
        Key components 26
        Hardware resources 28
        Software resources 29
    Storage configuration guidelines 30
        Overview 30
        XtremIO X-Brick scalability 30
        XtremIO server virtualization validated maximums 31
        External data considerations 34
    Sizing guidelines 36
        Overview 36
        Reference workload 36
        Defining the reference workload 36
    Applying the reference workload 37
        Overview 37
        Example 1: Custom-built application 37
        Example 2: Point-of-sale system 38
        Example 3: Web server 38
        Example 4: Decision-support database 38
        Summary 39
    Use cases for server virtualization with XtremIO 39
        Overview 39
        Use Case 1: Software build environments 39
        Use Case 2: Virtual classrooms 40
        Use Case 3: Service provider for cloud deployment 40
    XtremIO test results 41
        Overview 41
        Capacity savings from deduplication 41
        Thin provisioning 42
        Steady-state response and array utilization at high scale points 43
        VAAI 45
        Simple provisioning and monitoring 47

Chapter 5 Conclusion 51
    Summary 51
    Findings 51

Appendix A References 53
    References 54
        EMC documentation 54
        Other documentation 54

Figures

Figure 1. I/O randomization brought by server virtualization 13
Figure 2. Storage vMotion is highly dependent on array I/O and cloning performance 14
Figure 3. Array-based virtual machine cloning affects storage I/O performance 15
Figure 4. Server virtualization components 19
Figure 5. Logical architecture 26
Figure 6. XtremIO scalability 31
Figure 7. EMC XtremIO volume configuration and mapping 32
Figure 8. Logical architecture with optional VNX array for external data 36
Figure 9. Resource pool flexibility 39
Figure 10. Capacity savings from deduplication 42
Figure 11. Capacity savings from thin provisioning 43
Figure 12. IOPS and array latency scale results 44
Figure 13. CPU utilization scale test results 45
Figure 14. Average I/O throughput of Storage vMotion migrations with VAAI-enabled virtual machines 46
Figure 15. XtremIO I/O throughput during virtual machine migration with Storage vMotion 46
Figure 16. Total and average virtual machine deployment duration 47
Figure 17. XtremIO implementation of the VAAI XCOPY 48
Figure 18. Average IOPS per SSD during virtual machine deployment 49
Figure 19. Performance dashboard of the XtremIO array 49

Tables

Table 1. Solution hardware 28
Table 2. Solution software 29
Table 3. Validated profile characteristics 31
Table 4. Validated test metrics for 2,800 virtual machines 33
Table 5. Example of a virtualized Exchange 2013 server 34
Table 6. Virtual machine test configuration 37

Chapter 1 Introduction

This chapter presents the following topics:

Executive summary 8
Target audience 8
Document purpose 8
Business requirements 9

Executive summary

Server virtualization has been a driving force in data center efficiency gains for the past decade. However, mixing multiple virtual machine workloads on a single physical server creates a randomization of input/output (I/O) for the storage array, stalling virtualization of I/O-intensive workloads. EMC XtremIO all-flash arrays not only cost-effectively address this performance challenge, but also provide new levels of speed and provisioning agility to virtualized environments.

This document is a comprehensive guide to the technical aspects of this server virtualization solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces. You can select any server and networking hardware that meets or exceeds the tested minimums.

Target audience

This white paper is intended for EMC employees, partners, and customers, including storage and VMware administrators who want to understand how EMC XtremIO storage and VMware vSphere can provide an easy-to-use, high-performance storage solution for server virtualization. We assume that readers are familiar with the following products:

- EMC XtremIO and EMC storage systems
- VMware vSphere and VMware vCenter

Note: In this paper, "we" represents the EMC Solutions team that tested and validated this solution.

Document purpose

This document describes the reference architecture of the EMC Proven Infrastructure for server virtualization with the EMC XtremIO all-flash array and VMware vSphere 5.5. The EMC Solutions Group tested and validated this solution. The architecture provides a modern system capable of hosting many virtual machines at a consistent performance level.

This solution runs on the VMware vSphere virtualization layer, backed by the EMC XtremIO all-flash array. The redundant compute and network components, which are chosen by customers, are sufficiently powerful to handle the processing and data needs of the virtual machine environment.

Because not every virtual machine has the same requirements, this white paper contains methods and sizing guidance for adjusting the system for cost-effective deployment.

Business requirements

Business applications are moving into consolidated compute, network, and storage environments. EMC and VMware server virtualization reduces the complexity of configuring every component of a traditional deployment model. Virtualization reduces the complexity of integration management while maintaining application design and implementation options. Administration is unified, while process separation is adequately controlled and monitored.

This solution addresses the complexity of deploying and managing consolidated environments by providing the following:

- End-to-end virtualization using the capabilities of the unified infrastructure components
- A server virtualization solution for VMware that efficiently virtualizes thousands of virtual machines for varied customer use cases, with a reliable, flexible, and scalable reference design
- A simplified management interface for a data center environment
- Better support of service-level agreements and compliance initiatives
- Lower operational and maintenance costs


Chapter 2 Solution Functionality and Features

This chapter presents the following topics:

Introduction 12
Server virtualization 12
Data center demands 12

Introduction

This chapter describes the storage needs and challenges of data centers, and how the unique benefits of the XtremIO all-flash array and vSphere address these challenges.

Server virtualization

Server virtualization solves the problems of underutilized assets, overburdened power and cooling resources, and high administrator-to-asset ratios. Virtualized server resources are the foundation for delivering cloud infrastructure services. Consolidating underutilized physical server resources into highly available virtual machines can lead to considerable cost savings and improved data center efficiency.

While virtualization has enhanced the efficiency of resources in the data center, it has also magnified the challenges of expanding virtualization beyond specific IT zones (or noncritical applications) across the entire data center. The complexity of virtualized infrastructures, including concerns about storage cost, performance, availability, and agility, has caused enterprises to delay the rollout of an end-to-end shared, virtualized IT infrastructure. Large-scale deployment of virtualized environments requires increased virtual machine density on each physical server, virtualization of tier-one, I/O-intensive applications, fast-growing datastores, and rapid virtual machine provisioning.

In a dynamic large-scale environment, manual storage administration takes time, slows deployments, and diverts resources from proactive business management. Factor in the effort required to scale the environment when more capacity or performance is necessary, and the unique challenges of a virtualized data center become clear. These environments require new solutions to maintain operational efficiency, performance, and cost targets.
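The consolidation savings described above can be made concrete with a rough, back-of-the-envelope sizing sketch. All figures below are invented for illustration and are not from the validated solution; real sizing must also consider memory, I/O, failover headroom, and peak demand.

```python
import math

def hosts_needed(vm_count, avg_vm_cpu_ghz, host_cores, core_ghz, target_util):
    """Estimate how many virtualization hosts a VM fleet needs.

    Illustrative only: CPU demand alone, averaged, with a target
    utilization ceiling; no memory, I/O, or N+1 failover headroom.
    """
    host_capacity_ghz = host_cores * core_ghz * target_util
    total_demand_ghz = vm_count * avg_vm_cpu_ghz
    return math.ceil(total_demand_ghz / host_capacity_ghz)

# 200 legacy servers, each averaging ~0.9 GHz of CPU demand, consolidated
# onto dual-socket hosts (24 cores x 2.6 GHz) run at 70% target utilization:
print(hosts_needed(200, 0.9, 24, 2.6, 0.70))  # -> 5
```

Even this crude model shows a roughly 40:1 consolidation ratio, which is why underutilized physical assets are the first target of virtualization.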
Data center demands

Performance, application provisioning, and data management requirements were easy to meet when discrete applications used physical servers and dedicated storage systems. Moving those applications into large-scale, agile VMware virtual environments, however, places new demands on the infrastructure. These environments require high performance, support for a high density of virtualized applications with unpredictable workloads, and rapid virtual machine provisioning and cloning.

Flash storage arrays promise to meet large-scale virtualization requirements. In reality, however, an all-flash array must have an architecture optimized for both storage input/output (I/O) performance and storage efficiency to address these challenges effectively. Because acquisition and operational costs of storage infrastructure are among the top challenges of cloud-based virtual server environments, storage efficiency has an important role to play. Storage efficiency means maximizing both available storage capacity and processing resources, which are often competing efforts. Storage efficiency is key to enabling the promise of elastic scalability, pay-as-you-grow efficiency, and a predictable cost structure, all while increasing productivity and innovation.

Performance

CPUs have historically gained power through increases in transistor count and clock speed. More recently, the shift has been to multicore CPUs and multithreading. Combined with server virtualization technology, these advances allow massive consolidation of applications onto a single physical server. The result is intensive randomization of the workload presented to the storage array. Imagine a dual-socket server with six cores per socket and two threads per core. With virtualization technology, this server can easily present shared storage with a workload of 24 unique, intermixed data streams. Now imagine numerous servers on a SAN sharing the same storage array. The array workload very quickly becomes an "I/O blender" of completely random I/O from hundreds or thousands of intermixed sources, as shown in Figure 1. Flash arrays are ideal for handling the high volumes of random I/O that have traditionally been too expensive to support in large-scale virtualization deployments.

Figure 1. I/O randomization brought by server virtualization

Workload portability

Being able to move active virtual machines quickly and seamlessly from one physical server to another, with no service interruption, is a key element of a large-scale virtualized infrastructure. VMware vSphere vMotion enables the live migration of virtual machines from one VMware vSphere host to another, with no perceivable impact for users. vMotion is an important enabler for a number of key VMware technologies, including vSphere Distributed Resource Scheduler (DRS) and vSphere Distributed Power Management (DPM). vMotion requires virtual machine physical memory (as large as 1 TB) to be transferred during a virtual machine migration while leveraging vSphere suspend and resume functionality.
This functionality momentarily quiesces the virtual machine on the source vsphere host, then copies the last set of memory changes to the target vsphere host, and then resumes the virtual machine on the target. The suspend and resume phase is the most likely phase to affect guest performance, during which an Server Virtualization with EMC XtremIO All-Flash Array and VMware vsphere 5.5 13

Chapter 2: Solution Functionality and Features abrupt, temporary increase of latency can occur. The impact depends on a variety of factors including the performance of the storage I/O. Large-scale virtual environments commonly use VMware Storage vmotion for live, nondisruptive migration of virtual machine files within and across storage arrays for performing proactive storage migrations, improving virtual machine performance, and optimizing storage utilization. You can use the vstorage APIs for Array Integration (VAAI) Extended Copy (XCOPY) to accelerate Storage vmotion with compliant storage arrays, which enable the host to offload specific virtual machine and storage management operations to the storage array. The host issues the XCOPY command to the array from the source LUN to the destination LUN, or to the same source LUN, if required. The choice depends on the configuration of the virtual machine file system (VMFS) datastores on the relevant LUNs. The array uses internal mechanisms to complete the cloning operation and, depending on the efficiency of the array to implement the VAAI XCOPY support, can accelerate the performance of Storage vmotion. Figure 2 shows how array-enabled vmotion and Storage vmotion operations are managed. vmotion vsphere vsphere Array VAAI LUN01 LUN02 Figure 2. Storage vmotion is highly dependent on array I/O and cloning performance Scalability An agile, virtualized infrastructure must also scale in the multiple dimensions of performance, capacity, and operations. It must have the ability to scale efficiently, without sacrificing performance and resiliency, and without scaling the number of people that manage the environment. However, deploying traditional discrete dualcontroller flash appliances to address scalability challenges can lead to system 14 Server Virtualization with EMC XtremIO All-Flash Array and VMware vsphere 5.5

sprawl, performance bottlenecks, and suboptimal availability, which increases storage administration time.

Virtual machine provisioning

The promise of increased agility is a major reason why organizations choose to virtualize their infrastructures. However, IT responsiveness often slows exponentially as virtual environments grow. Resources typically cannot be provisioned quickly enough to meet rapidly changing business requirements. Bottlenecks occur because organizations do not have the proper tools to quickly determine the capacity and health of the physical and virtual resources.

Standard virtual machine provisioning or cloning methods, commonly implemented in flash arrays, can be expensive, because full copies of virtual machines can require 50 GB or more of storage for each copy. In a large-scale cloud data center, when shared storage is cloning up to hundreds of virtual machines each hour while concurrently delivering I/O to active virtual machines, cloning can become a major bottleneck for data center performance and operational efficiency, as shown in Figure 3.

Figure 3. Array-based virtual machine cloning affects storage I/O performance

Deduplication

Storage arrays can accumulate duplicate data over time, which increases costs and management overhead. In particular, large-scale virtual server environments create large amounts of duplicate data when virtual machines are deployed by cloning existing virtual machines or have the same operating system and applications installed.

Traditionally, deduplication eliminates duplicate data by replacing it with a pointer to a unique data block. This post-processing operation writes incoming data to disk and then deduplicates it afterward, both of which affect array performance. The XtremIO all-flash array uses inline deduplication, which eliminates the performance hits incurred by traditional deduplication mechanisms.

Thin provisioning

Thin provisioning is a popular technique for improving array utilization: storage capacity is consumed only when data is written, not when storage volumes are provisioned. For administrators of large-scale virtualized environments, thin provisioning removes the need to overprovision storage up front to meet anticipated future capacity demands and instead allows virtual machine storage to be allocated on demand from an available storage pool.

Most storage arrays are designed to be statically installed and run, yet virtualized application environments are naturally dynamic and variable. Change and growth in virtualized workloads cause organizations to actively redistribute workloads across storage array resources (or use other features such as VMware DRS) for load balancing, to avoid running out of space or degrading performance. Unfortunately, this ongoing load balancing is a manual, iterative task that is often costly and time-consuming. As a result, storage arrays that support large-scale virtualization environments require optimal, inherent data placement to ensure maximum utilization of both capacity and performance without any planning demands. XtremIO's thin provisioning support can yield significant savings by reducing unused storage.

Data protection

While storage arrays have traditionally supported several RAID data-protection levels, they have required storage administrators to choose between data protection and performance for specific workloads.
The challenge for large-scale virtual environments is a shared storage system that stores data for hundreds or thousands of virtual machines with different workloads. Some storage systems allow live migrations between RAID levels, but this requires repeated, proactive administration as workloads evolve. Optimal data protection for virtualized environments requires arrays that support data-protection schemes combining the best attributes of existing RAID levels while avoiding their drawbacks. Because flash endurance is a special consideration in an all-flash array, the scheme should maximize the service life of the array's solid-state drives (SSDs) while complementing the high I/O performance of flash media. XtremIO provides a robust toolset of data-protection and management methods, and tight integration with VMware extends these benefits into the virtualized environment.

VAAI integration

In contrast to custom integrations of virtualized environments and storage arrays, VAAI is a set of APIs that enables VMware hosts to offload common storage operations to the array. VAAI reduces resource overhead on VMware hosts and can significantly improve performance for storage-intensive operations such as storage cloning for virtual machine provisioning.

While VAAI removes the involvement of vSphere hosts in storage-intensive operations, the actual performance benefits of VAAI-enabled flash arrays are highly

dependent on the array architecture. For example, the performance of VAAI-enabled XCOPY for copying virtual disk files (up to hundreds of GB) for cloning or Storage vMotion is highly dependent on the efficiency of the deduplication and metadata models supported by the array. If the XCOPY operation requires reading and writing data blocks to and from the SSDs, as opposed to only creating metadata pointers to deduplicated data blocks on the SSDs, performance can vary widely for both the copy operation and the I/O of the live virtual machines.

Summary

To meet the multiple demands of a large-scale virtualization data center, you need a storage array that provides the following:

- Superb performance and capacity scale-out for infrastructure growth
- Built-in data deduplication
- Thin provisioning for capacity efficiency and cost mitigation
- Flash-optimized data protection techniques
- Near-instantaneous virtual machine provisioning and cloning
- Inherent load balancing
- Automated virtual machine disk (VMDK) provisioning

The EMC XtremIO all-flash array is built to unlock the full performance potential of flash storage and to deliver array-based data management capabilities that make it an optimal storage solution for large-scale virtualization. The next chapter includes more details about how to apply XtremIO features for optimal performance.
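The two XCOPY behaviors contrasted in the VAAI discussion above, physically copying blocks versus only creating metadata pointers on a deduplicating array, can be sketched with a toy content-addressed block store. This is a simplified illustration of the principle, not any specific array's implementation.

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: each volume maps LBA -> fingerprint,
    and identical payloads share one physical block (simplified model)."""
    def __init__(self):
        self.blocks = {}    # fingerprint -> physical block payload
        self.volumes = {}   # volume name -> {lba: fingerprint}

    def write(self, vol, lba, data):
        fp = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(fp, data)            # store payload only if new
        self.volumes.setdefault(vol, {})[lba] = fp

    def xcopy(self, src, dst):
        # Metadata-only copy: duplicate the LBA -> fingerprint map.
        # No block payloads are read from or written to the backing media.
        self.volumes[dst] = dict(self.volumes[src])

store = DedupStore()
for lba in range(4):
    store.write("vm-template", lba, b"OS block %d" % lba)

store.xcopy("vm-template", "vm-clone-01")   # near-instant "full clone"
print(len(store.blocks))                    # -> 4: the clone added no physical data
```

A data-moving XCOPY would instead re-read and re-write every block; on a store like this one, the pointer copy is O(metadata) regardless of virtual disk size, which is why clone and Storage vMotion performance differs so widely between architectures.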

Chapter 3 Solution Technology

This chapter presents the following topics:

Overview 19
Key components 20
Virtualization 20
EMC XtremIO storage 21

Overview

This solution uses the EMC XtremIO all-flash array and VMware vSphere 5.5 to provide storage and server hardware consolidation for a private cloud or server virtualization. The new virtualized infrastructure is centrally managed to provide efficient deployment and management of a scalable number of virtual machines and associated shared storage. Figure 4 depicts the solution components.

Figure 4. Server virtualization components

Key components

Key components of this solution are as follows:

- Virtualization: The virtualization layer decouples the physical implementation of resources from the applications that use them, so the application view of the available resources is not directly tied to the hardware.
- Compute: The compute layer provides memory and processing resources for the virtualization layer software and for the applications running in the virtual servers. This solution defines the minimum amount of required compute layer resources and enables you to implement the solution by using any server hardware that meets these requirements.
- Network: The network layer connects the users of the virtual servers to the resources, and the storage layer to the compute layer. This solution defines the minimum number of required network ports, provides general guidance on network architecture, and enables you to implement the solution by using any network hardware that meets these requirements.
- Storage: The storage layer is critical for the implementation of server virtualization. The EMC XtremIO all-flash array used in this solution provides extremely high performance and supports a number of capacity-efficient and data-service capabilities.

Reference architecture on page 25 provides details about the components in the reference architecture.

Virtualization

Overview

The virtualization layer is a key component of any server virtualization or private cloud solution. It decouples the application resource requirements from the underlying physical resources. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows changes to the physical system without affecting the hosted applications. In a server virtualization or private cloud use case, this layer enables multiple independent virtual machines to share the same physical hardware, instead of being directly implemented on dedicated hardware.

VMware vSphere 5.5

VMware vSphere transforms the physical resources of a computer by virtualizing the CPU, RAM, hard disk, and network controller. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications, just like physical computers. The High Availability (HA) features of VMware vSphere, such as vMotion and Storage vMotion, enable seamless migration of virtual machines and stored files from one vSphere server to another, or from one data storage area to another, with minimal or no performance impact. Coupled with vSphere DRS and Storage DRS, virtual machines have access to the appropriate resources at any point in time through load balancing of compute and storage resources.
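The compute-side load balancing that DRS performs can be illustrated with a toy greedy placement model. This is my simplification for illustration only; VMware's actual algorithm is far more sophisticated and considers resource reservations, affinity rules, and migration cost.

```python
import heapq

def place_vms(vm_loads, host_count):
    """Greedy load balancing: assign each VM (largest first) to the
    currently least-loaded host. A toy stand-in for DRS-style balancing."""
    hosts = [(0.0, i, []) for i in range(host_count)]  # (load, host id, vms)
    heapq.heapify(hosts)
    for load in sorted(vm_loads, reverse=True):
        total, hid, vms = heapq.heappop(hosts)         # least-loaded host
        heapq.heappush(hosts, (total + load, hid, vms + [load]))
    return sorted(hosts)

# Eight VMs with mixed CPU demands spread across three hosts:
for total, hid, vms in place_vms([8, 6, 5, 4, 3, 2, 2, 1], host_count=3):
    print(f"host {hid}: load {total}, vms {vms}")
```

Sorting the demands largest-first before placing them is the classic longest-processing-time heuristic; it keeps any single large VM from unbalancing a host that already carries many small ones.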
VMware vSphere transforms the physical resources of a computer by virtualizing the CPU, RAM, hard disk, and network controller. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications just like physical computers. The high-availability features of VMware vSphere, such as vMotion and Storage vMotion, enable seamless migration of virtual machines and stored files from one vSphere server to another, or from one data storage area to another, with minimal or no performance impact. Coupled with vSphere DRS and Storage DRS, virtual machines have access to the appropriate resources at any point in time through load balancing of compute and storage resources.

VMware vSphere HA

VMware vSphere HA enables the virtualization layer to automatically restart virtual machines in various failure conditions. If an operating system error occurs on a virtual machine, the virtual machine automatically restarts on the same hardware. If an error occurs on the physical hardware, the impacted virtual machines can automatically restart on other servers in the cluster.

Note: To restart virtual machines on other servers in the cluster, the servers must have available resources.

With VMware vSphere HA, you can configure policies to determine which machines automatically restart and under what conditions to attempt these operations.

VMware vCenter

VMware vCenter is a centralized management system for a VMware virtual infrastructure. It provides a single interface, accessible from multiple devices, for all aspects of monitoring, managing, and maintaining the virtual infrastructure. VMware vCenter also manages advanced features of a VMware virtual infrastructure, such as VMware vSphere HA and DRS, vMotion, and Update Manager.

EMC XtremIO VAAI support

XtremIO is fully integrated with VMware vSphere through VAAI for virtual machine provisioning and cloning, VMDK provisioning, and overall seamless deployment of large-scale virtualization. It delivers high performance, low-latency response times, and short provisioning times for all storage provisioning choices at the VMDK level. XtremIO supports the VAAI block zero primitive and has a unique way of writing zero blocks that removes the performance drawbacks of provisioning eager zero thick (EZT) volumes for virtual disks.

EMC XtremIO storage

Cluster design

XtremIO uses a scale-out cluster design that adds capacity and performance in balance to meet any storage requirement. Each cluster building block contains highly available, fully active/active storage controllers with no single point of failure.
As you expand a cluster, XtremIO automatically balances workloads to maintain performance. The XtremIO Operating System (XIOS) manages the storage cluster and provides the following functionality:

- Ensures that all SSDs in the cluster are evenly loaded, providing the highest possible performance as well as the endurance required for high-demand workloads throughout the life of the array.

- Eliminates the complex configuration steps required for traditional arrays. There is no need to set RAID levels, determine drive-group sizes, set stripe widths, set caching policies, build aggregates, or do any other manual configuration.
- Automatically and optimally configures every volume.
- Manages the process of expanding clusters and ensures that data remains balanced across new X-Bricks (the fundamental building block of an XtremIO cluster). It also ensures that I/O performance of existing volumes and data sets automatically increases when the cluster scales out, and eliminates the need to restripe data if application requirements change. Every volume receives the full performance potential of the entire XtremIO cluster.

Deduplication capacity savings

The XtremIO all-flash array performs inline data deduplication based on an algorithm that ensures duplicate data blocks are never stored on the SSDs. Every storage I/O is deduplicated in real time on ingest, so only unique blocks are ever written to flash. Moreover, deduplication on XtremIO aids performance because all metadata is held in memory: the better the data deduplicates, the better XtremIO performs, ensuring maximum host I/O performance. The array also provides deduplication-aware caching, where blocks held in cache can be served for any logical reference to those blocks. Deduplication-aware caching combined with inline deduplication dramatically decreases latency in challenging situations, such as multiple concurrent virtual machine boots, providing consistent sub-millisecond data access.
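The inline-deduplication flow described above (fingerprint each incoming block, write only blocks not already stored, and record duplicates purely in metadata) can be pictured with a toy sketch. This is an illustrative model only; the SHA-256 fingerprinting and Python dictionaries stand in for XtremIO's proprietary algorithm and data structures:

```python
import hashlib

class DedupStore:
    """Toy inline-deduplicating block store: only unique blocks reach 'flash'."""
    def __init__(self):
        self.flash = {}       # fingerprint -> block payload (physical storage)
        self.volume = []      # logical address space -> fingerprint (metadata)

    def write(self, block: bytes):
        fp = hashlib.sha256(block).hexdigest()   # fingerprint on ingest
        if fp not in self.flash:                 # unique block: write it once
            self.flash[fp] = block
        self.volume.append(fp)                   # duplicates update metadata only

store = DedupStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]:
    store.write(block)
print(len(store.volume), len(store.flash))  # 4 logical writes, 2 unique blocks
```

Because every logical address maps to a fingerprint, any read of a duplicated block can be served from the single cached copy, which is the intuition behind deduplication-aware caching.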
Thin provisioning

Along with delivering high performance, the XtremIO array natively offers thin provisioning, allocating capacity on demand as applications need it without any impact on array I/O performance. XtremIO thin provisioning is also granular: capacity is allocated in 4 KB blocks to ensure thrifty use of flash capacity, consistent with the I/O block sizes that VMware vSphere uses.

Fault protection

The EMC XtremIO array delivers the utmost in reliability and availability, with completely redundant components and the ability to tolerate any component failure without loss of service. XtremIO includes the following fault protection:

- Dual power supplies in controllers and disk array enclosures (DAEs), so the controller or DAE stays in service if a power supply fails
- Redundant active/active controllers to support controller failures
- Redundant Serial Attached SCSI (SAS) interconnect modules in the DAEs
- Redundant inter-controller communication links
- Multiple host connections with multipath capabilities to survive path failures
- XtremIO Data Protection (XDP) to tolerate SSD failures
- Multiple techniques to ensure initial and ongoing data integrity
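The 4 KB allocate-on-write behavior of thin provisioning can be modeled in a few lines; this is a simplified illustration, not XtremIO's implementation:

```python
BLOCK = 4096  # 4 KB allocation granularity

class ThinVolume:
    """Simplified thin-provisioned volume: capacity is allocated on first write."""
    def __init__(self, logical_size):
        self.logical_size = logical_size
        self.allocated = {}  # 4 KB-aligned offset -> data

    def write(self, offset, data):
        base = (offset // BLOCK) * BLOCK
        self.allocated[base] = data   # allocate (or overwrite) one 4 KB block

    def physical_bytes(self):
        return len(self.allocated) * BLOCK

vol = ThinVolume(logical_size=100 * 2**30)   # a "100 GB" volume, per the profile
vol.write(0, b"boot sector")
vol.write(10 * BLOCK, b"app data")
print(vol.physical_bytes())  # 8192: two 4 KB blocks allocated out of 100 GB
```

The host sees the full 100 GB address space, while physical consumption grows only with the blocks actually written.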

Scalability

XtremIO storage clusters use a fully distributed, scale-out design that allows linear increases in both capacity and performance for infrastructure agility. XtremIO uses a building-block approach in which the array is scaled with additional X-Bricks. It provides host access through N-way active/active controllers for linear scaling of performance and capacity, simplifying support of growing virtualized environments. As a result, as capacity in the array grows, performance grows in lockstep with the addition of storage controllers.

In-memory metadata operations

The XtremIO cluster distributes metadata evenly across all storage controllers and maintains it in memory during runtime. Metadata is hardened to SSD so that the array can tolerate failures and power loss, but during normal operations all metadata lookups are memory-based. This is possible only by segmenting the metadata tables and spreading them evenly across all storage controllers. In contrast, a dual-controller design could not contain enough RAM to store all metadata in memory and would have to de-stage large amounts of metadata to flash, with several associated performance drawbacks. XtremIO's in-memory metadata and unique inline data reduction combine to deliver unprecedented capabilities in virtualized data centers.

Chapter 4: Solution Architecture

This chapter presents the following topics:

Overview ... 25
Reference architecture ... 25
Storage configuration guidelines ... 30
Sizing guidelines ... 36
Applying the reference workload ... 37
Use cases for server virtualization with XtremIO ... 39
XtremIO test results ... 41

Overview

This chapter is a comprehensive guide to the major aspects of this solution. It presents the required minimum CPU, memory, and network resources; you can select any server and networking hardware that meets or exceeds these minimums. This EMC Proven Infrastructure validated the specified storage architecture for providing high levels of performance in a highly available architecture for your private cloud deployment.

Each Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fits a predefined idea of a virtual machine. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and building a reference that takes into account every possible combination of workload characteristics is impractical.

Reference architecture

Overview

This section summarizes and characterizes the tests performed to validate the EMC Proven Infrastructure for Server Virtualization enabled by the EMC XtremIO all-flash array and VMware vSphere 5.5. The validation involved building a 2,800-virtual-machine environment on XtremIO and integrating the new features of this platform to provide a high-performance, compelling, and cost-effective server virtualization platform. The defined configuration forms the basis for creating a custom solution.

Note: This solution uses the concept of a reference workload to describe and define a virtual machine. One physical or virtual server in an existing environment might not be equal to one virtual machine in this solution. Evaluate your workload to determine an appropriate point of scale. Applying the reference workload on page 37 describes the process.
Logical architecture

Figure 5 shows the logical architecture of the solution and characterizes the validated infrastructure, in which an 8 Gb Fibre Channel (FC) or 10 Gb iSCSI SAN carries storage traffic and 10 GbE carries management and application traffic.

Figure 5. Logical architecture

Key components

This architecture includes the following key components:

VMware vSphere
Provides a common virtualization layer to host a server environment. vSphere provides a highly available infrastructure through features such as the following:
- vMotion: Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption
- Storage vMotion: Provides live migration of VMDK files within and across storage arrays, with no virtual machine downtime or service disruption
- vSphere HA: Detects and provides rapid recovery for a failed virtual machine in a cluster
- Distributed Resource Scheduler (DRS): Provides load balancing of computing capacity in a cluster
- Storage Distributed Resource Scheduler (SDRS): Provides load balancing across multiple datastores based on space usage and I/O latency

VMware vCenter Server
Provides a scalable and extensible system that forms the foundation of virtualization management for the VMware vSphere cluster. vCenter manages all vSphere hosts and their virtual machines.

SQL Server
VMware vCenter Server requires a database service to store configuration and monitoring details. This solution uses Microsoft SQL Server 2008 R2.

DNS server
Provides DNS services for name resolution of various solution components. This solution uses the Microsoft DNS Service running on a Windows Server 2012 server.

Active Directory (AD) server
Provides AD services, which various solution components require to function correctly. Microsoft AD runs on Windows Server 2012.

IP network
A standard Ethernet network with redundant cabling and switching. It is a shared IP network that carries all data and management traffic.

Storage network
The storage network is isolated to provide host access to the array, with the following options:
- 8 Gb FC: Performs high-speed serial data transfer with a set of standard protocols. FC provides a standard data transport frame for servers and shared storage devices.
- 10 Gb Ethernet (iSCSI): Enables the transport of SCSI blocks over a TCP/IP network. iSCSI works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network.

XtremIO all-flash array
The XtremIO all-flash array includes the following components:
- X-Brick: The fundamental scaling unit of the array, a building block that contains two active storage controllers and a shelf with 25 enterprise multi-level cell (eMLC) SSDs. When the XtremIO cluster scales, the array clusters together multiple X-Bricks with redundant Infiniband back-end switches.
- Storage controller: A physical computer (1U in size) that acts as a storage controller in the cluster, providing block data access over the FC and iSCSI protocols. Storage controllers can access all SSDs in the same X-Brick over the SAS connection to the X-Brick DAE, or they can access SSDs in remote X-Bricks by leveraging remote direct memory access (RDMA) transfer over the Infiniband inter-controller network.
- Battery backup unit (BBU): Provides enough power to each storage controller to ensure that any data in flight de-stages to SSD in the event of a power failure. The first X-Brick has two BBUs for redundancy.
Each additional X-Brick requires only a single BBU.
- DAE: Houses the flash drives that the array uses; 2U in size.
- Infiniband switch: Connects multiple X-Bricks together; 1U in size. The array uses two separate switches so that even the fabric that ties the controllers together is highly available.

Hardware resources

Table 1 lists the hardware used in this solution.

Table 1. Solution hardware

VMware vSphere servers
- CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core. For 3,600 virtual machines: 3,600 vCPUs and a minimum of 900 physical cores.
- Memory: 2 GB RAM per virtual machine, plus a 2 GB RAM reservation per VMware vSphere host. For 3,600 virtual machines: a minimum of 7,200 GB RAM, plus 2 GB for each physical server.
- Network: 2 x 10 GbE NICs per server; 2 HBAs per server.

Note: You must add to the infrastructure at least one server beyond the minimum requirements to implement VMware vSphere HA functionality while still meeting the listed minimums.

Network infrastructure
- Minimum switching capacity: 2 physical switches; 2 x 10 GbE ports per VMware vSphere server for management; 2 ports per VMware vSphere server for the storage network (FC or iSCSI); 2 ports per storage controller for storage data (FC or iSCSI).

EMC XtremIO all-flash array
- One X-Brick with 25 x 400 GB eMLC SSD drives.

Shared infrastructure
In most cases, a customer environment already has infrastructure services such as AD and DNS configured. The setup of these services is beyond the scope of this document. If you implement the solution without existing infrastructure services, the minimum requirements are as follows:
- 2 physical servers
- 16 GB RAM per server
- 4 processor cores per server
- 2 x 1 GbE ports per server

Note: You can migrate these services into the solution post-deployment. However, the services must exist before the solution is deployed.
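As a sanity check on how Table 1's compute minimums combine, the following sketch derives the core, server, and memory counts from the stated per-VM ratios. The 20-core server model is a hypothetical example, not part of the solution:

```python
import math

# Compute-layer minimums from Table 1, for 3,600 virtual machines:
# 1 vCPU per VM, 4 vCPUs per physical core, 2 GB RAM per VM,
# plus a 2 GB vSphere reservation per physical host and one spare host for HA.
# cores_per_server is a hypothetical server model, not specified by the document.
def compute_layer(vms, cores_per_server=20):
    vcpus = vms * 1                                     # 1 vCPU per virtual machine
    cores = math.ceil(vcpus / 4)                        # 4 vCPUs per physical core
    servers = math.ceil(cores / cores_per_server) + 1   # +1 host for vSphere HA
    ram_gb = vms * 2 + servers * 2                      # VM RAM plus per-host reservation
    return cores, servers, ram_gb

print(compute_layer(3600))  # (900, 46, 7292)
```

The 900-core and 7,200 GB (plus per-host) figures match Table 1; the server count varies with the hardware model you choose.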

Software resources

Table 2 lists the software used in this solution.

Table 2. Solution software

VMware vSphere
- vSphere Server Enterprise Edition 5.5
- vCenter Server Enterprise Edition 5.5
- Operating system for vCenter Server: Microsoft Windows Server 2008 R2 SP1 Standard Edition (you can use any operating system that is supported for vCenter)
- Microsoft SQL Server: Version 2008 R2 Standard Edition (you can use any database that is supported for vCenter)

EMC PowerPath
- EMC PowerPath/VE Release 5.9

EMC XtremIO (for vSphere datastores)

Virtual machines (used for validation, but not required for deployment)
- Base operating system: Microsoft Windows Server 2012 Datacenter
- VDBench (workload generator): Release 5.0.4

Storage configuration guidelines

Overview

This section provides guidelines for setting up the storage layer to provide high availability and the expected level of performance. VMware vSphere supports more than one storage method for hosting virtual machines. This tested solution uses the FC and iSCSI block protocols, and the storage layout adheres to all current best practices. You can modify this solution, if necessary, based on your system usage and load requirements.

XtremIO X-Brick scalability

XtremIO storage clusters support a fully distributed, scale-out design that allows linear increases in both capacity and performance to provide infrastructure agility. XtremIO uses a building-block approach in which the array can be scaled by adding X-Bricks. With clusters of two or more X-Bricks, XtremIO uses a redundant 40 Gb/s quad data rate (QDR) Infiniband network for back-end connectivity among the storage controllers, ensuring a highly available, ultra-low-latency network. N-way active controllers provide host access for linear scaling of performance and capacity, simplifying support of growing virtual environments. As a result, as capacity in the array grows, performance also grows with the addition of more storage controllers.

Figure 6 shows the different cluster configurations as you scale out. You can start from a single X-Brick (a 6U system). As you scale, you can add a second X-Brick, and then a third and fourth. Performance scales linearly as additional X-Bricks are added.2

Note: In Figure 6, IOPS (mix) is measured with 4 KB fully random I/O, 50 percent writes and 50 percent reads, while IOPS (read) is measured with 4 KB, 100 percent reads.

2 Online cluster expansion will be supported in a post-GA release.

Figure 6. XtremIO scalability

XtremIO server virtualization validated maximums

This solution uses a single X-Brick that is validated with the environment profile described in Table 3.

Table 3. Validated profile characteristics

- Maximum number of virtual machines: 2,800 per X-Brick
- Virtual machine OS: Windows Server 2012 Datacenter
- Number of processors per virtual machine: 1
- Number of virtual processors per physical CPU core: 4
- RAM per virtual machine: 2 GB
- Average storage available per virtual machine: 100 GB
- I/O per second (IOPS) per virtual machine: 25
- I/O pattern: Random
- I/O read/write ratio: 2:1
- Number of datastores to store virtual machine disks: 28
- Number of virtual machines per datastore: 100

- Disk and RAID type for datastores: 400 GB eMLC SSD drives with XDP (XtremIO proprietary data protection that delivers the equivalent of RAID 6 data protection with better performance than RAID 10)

Note: We tested and validated this solution with Windows Server 2012 for the vSphere virtual machines. However, the solution also works with Windows Server 2008 on vSphere with the same configuration and sizing.

The EMC XtremIO array configuration included the following:
- Twenty-eight 6 TB volumes for virtual machines, each storing 100 virtual machines. XtremIO supports the VAAI ATS primitive, thereby enhancing virtual server performance.
- One initiator group using the FC World Wide Names (WWNs) of the hosts in the indicated vSphere cluster.

Figure 7 shows the simple volume configuration in the EMC XtremIO console and the volume mapping for one of the two initiator groups.

Figure 7. EMC XtremIO volume configuration and mapping

Table 4 provides the validated test metrics for this configuration.

Table 4. Validated test metrics for 2,800 virtual machines

- Number of virtual machines: 2,800 per X-Brick
- Number of volumes: 28
- Total volume size: 168.78 TB
- Address space: 30.46 TB
- SSD space in use: 3.34 TB
- SSD space: 7.48 TB
- Overall efficiency: 50.61%
- Deduplication ratio: 9.13
- Thin provisioning savings: 81.95%
- Front-end IOPS: 70,045
- Front-end bandwidth: 547.22 MB/s
- Front-end latency: 1.76 ms
- Bandwidth on XtremIO array: 798.85 MB/s
- IOPS on XtremIO array: 82,002
- Average IOPS per SSD: 3,137
- Average bandwidth per SSD: 12.26 MB/s
- CPU utilization for storage controllers: 61 to 75%

The test metrics in Table 4 show an I/O latency of less than 2 milliseconds (ms) for 2,800 concurrently running virtual machines, with approximately 70 percent CPU utilization on the array, which confirms room to scale. We measured front-end latency inside the virtual machines; this is the round-trip latency from the guest OS through the hypervisor, over the SAN to the XtremIO array, and back again. On the array itself, we observed latency of less than 1 ms.

This test confirms that a single X-Brick supports thousands of virtual machines, with the ability to scale to multiples of thousands in a four-X-Brick cluster. Steady-state response and array utilization at high scale points on page 43 provides more details about the scale-out results and projections.

Note: In practice, the number of virtual machines is determined by both I/O performance and capacity utilization. You must monitor capacity utilization for capacity-intensive applications such as Exchange and SharePoint.
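Several Table 4 metrics can be cross-checked against one another and against the Table 3 profile. The sketch below recomputes the thin provisioning savings and deduplication ratio from the reported capacity figures, and the expected front-end IOPS from the per-VM profile; the small gap from the published 9.13 ratio comes from rounding in the reported TB values:

```python
# Reported values from Table 4 (as printed, in TB).
volume_size_tb = 168.78    # total provisioned (thin) volume capacity
address_space_tb = 30.46   # logical address space actually written
ssd_in_use_tb = 3.34       # physical flash consumed after data reduction

thin_savings_pct = (1 - address_space_tb / volume_size_tb) * 100
dedup_ratio = address_space_tb / ssd_in_use_tb

# Expected front-end load from the Table 3 profile: 2,800 VMs x 25 IOPS each.
expected_iops = 2800 * 25

print(round(thin_savings_pct, 2), round(dedup_ratio, 2), expected_iops)
# 81.95 9.12 70000 (Table 4 reports 81.95%, 9.13, and 70,045 observed IOPS)
```

The observed 70,045 front-end IOPS sits right at the 70,000 IOPS the 2,800-VM profile demands, which is why the sub-2 ms latency at roughly 70 percent array CPU indicates headroom.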

External data considerations

XtremIO provides high-performance, high-efficiency storage for deploying virtual servers. Applications such as Microsoft Exchange and SharePoint, however, require high capacity rather than high I/O performance. Table 5 provides an example configuration for a medium-sized Exchange 2013 server.

Table 5. Example of a virtualized Exchange 2013 server

- Number of mailboxes: 2,500
- Maximum mailbox size: 1.5 GB
- Mailbox IOPS profile (messages sent and received per mailbox per day): 0.101 IOPS (150 messages)
- DAG copies (including active copies): 2
- Deleted items retention (DIR): 14 days
- Backup and truncation failure tolerance: 3 days
- Years of growth: 1 year
- Annual growth rate of mailbox number: 11%

To size the Exchange storage IOPS, use the following calculation to get the total database IOPS required to support the specified number of mailbox users:

Total transactional IOPS = IOPS per mailbox x Mailboxes per server x (1 + I/O overhead factor)

In the example, the calculation is: 0.101 x 2,500 x (1 + 20% + 20%) = 354 IOPS

To calculate the Exchange storage capacity requirement, use the following steps:

1. To determine the mailbox size on disk, use the following formula:

Mailbox size on disk = Maximum mailbox size + White space + Dumpster

White space: Email messages sent and received per user per day, multiplied by the average message size, divided by 1,024. Example: a mailbox in the 0.101 IOPS profile sends and receives a total of 150 email messages per day on average, so the white space is 150 x 75 / 1,024 = 11 MB.

Dumpster: Email messages sent and received per user per day, multiplied by the average message size, multiplied by the deleted items retention window, plus the maximum mailbox size x 0.012, plus the maximum mailbox size x 0.03. Example: (150 x 75 x 14 / 1,024) + (1,536 x 0.012) + (1,536 x 0.03) = 218 MB.

In the example, the mailbox size on disk equals 1,765 MB (1,536 + 11 + 218).

2. To determine the total database LUN size, use the following formula:

Total database LUN size = Number of mailboxes x Mailbox size on disk x (1 + Index space + Additional index space for maintenance) / (1 + LUN free space)

In the example, with 20 percent for the index, 20 percent for the additional index space for maintenance, and 20 percent for LUN free space, the database size is 2,500 x 1,765 MB x (1 + 0.2 + 0.2) / (1 + 0.2) / 1,024 = 5,001 GB.

3. To determine the total log LUN size, use the following formula:

Total log LUN size = Log size x Number of mailboxes x Backup/truncation failure tolerance days / (1 + LUN free space)

A mailbox with a 0.101 IOPS profile generates 30 transaction logs per day on average, so for the example, the total log LUN size = (30 logs x 1 MB) x 2,500 x 3 / 1,024 / (1 + 0.2) = 184 GB.

In this example, 2,500 user mailboxes require 354 IOPS and more than 5 TB of capacity from the back-end storage array. As the number of users grows (to 5,000, 10,000, and so on), the mailbox infrastructure will not tax the I/O capabilities of a single XtremIO X-Brick, but it could exceed the capacity limit.

Note: The EMC VSPEX for Virtualized Microsoft Exchange 2013 Design Guide on EMC.com provides additional information about sizing Exchange.

This Exchange environment example is not a good virtualization candidate for XtremIO. However, you might want to deploy an I/O-intensive application, such as SQL Server for OLTP, in the same virtualized infrastructure along with the capacity-intensive external application (such as the Exchange server). In other cases, you might want to deploy the virtual servers on XtremIO while maintaining user data with lower I/O requirements on a separate NAS storage system.
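The sizing steps above can be reproduced programmatically. This sketch applies the document's formulas to the Table 5 profile; minor differences from the published 5,001 GB and 184 GB figures come from intermediate rounding in the paper's calculations:

```python
import math

# Exchange 2013 sizing example from Table 5; the formulas are the document's,
# only the variable names and script form are added here.
mailboxes = 2500
iops_per_mailbox = 0.101
max_mailbox_mb = 1536        # 1.5 GB maximum mailbox size
msgs_per_day = 150
avg_msg_kb = 75
dir_days = 14                # deleted items retention
log_tolerance_days = 3
logs_per_day = 30            # transaction logs per mailbox per day (1 MB each)

# Total transactional IOPS (two 20% overhead factors; 353.5 rounds up to 354)
total_iops = iops_per_mailbox * mailboxes * (1 + 0.2 + 0.2)

# Mailbox size on disk = maximum size + white space + dumpster (all in MB)
white_space = msgs_per_day * avg_msg_kb / 1024
dumpster = (msgs_per_day * avg_msg_kb * dir_days / 1024
            + max_mailbox_mb * 0.012 + max_mailbox_mb * 0.03)
mailbox_on_disk = max_mailbox_mb + white_space + dumpster

# Database LUN (20% index, 20% maintenance, 20% LUN free space) and log LUN, in GB
db_lun_gb = mailboxes * mailbox_on_disk * (1 + 0.2 + 0.2) / (1 + 0.2) / 1024
log_lun_gb = logs_per_day * 1 * mailboxes * log_tolerance_days / 1024 / (1 + 0.2)

print(math.ceil(total_iops), round(mailbox_on_disk), round(db_lun_gb), round(log_lun_gb))
# 354 1765 5028 183
```

Varying `mailboxes` in this sketch shows the point the section makes: IOPS grow slowly (roughly 0.14 IOPS per mailbox) while capacity grows by roughly 2 GB per mailbox, so capacity, not I/O, is the binding constraint on an X-Brick.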
In both scenarios, while we advise you to keep the application servers on the XtremIO array, you can use an optional VNX array, as shown in Figure 8, to provide storage services for external applications or user data.

Figure 8. Logical architecture with optional VNX array for external data

Sizing guidelines

Overview

The following sections define the reference workload used to size and implement the architectures of this solution, provide guidance on correlating the reference workload to your own workloads, and describe how that correlation might change the end delivery from the server and network perspective.

Reference workload

When you move an existing server to a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system. Each EMC Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. In practice, each virtual machine has its own requirements that rarely fit a predefined idea of a virtual machine. In any discussion about virtual infrastructures, first define a reference workload. Not all servers perform the same tasks, and building a reference workload that considers every possible combination of workload characteristics is impractical.

Defining the reference workload

To simplify the discussion, this section presents a representative reference workload. By comparing your actual usage to this reference workload, you can extrapolate which reference architecture to choose.

In this solution, the reference workload is a single virtual machine. Table 6 lists the test configuration for this virtual machine. The configuration does not represent any specific application; rather, it represents a single common point of reference against which to measure other virtual machines.

Table 6. Virtual machine test configuration

- Virtual machine operating system: Microsoft Windows Server 2012 Datacenter
- Number of virtual processors per virtual machine: 1
- RAM per virtual machine: 2 GB
- Available storage capacity per virtual machine: 100 GB
- IOPS per virtual machine: 25
- I/O size: 8 KB
- I/O pattern: Random
- I/O read/write ratio: 2:1

Applying the reference workload

Overview

The solution creates a pool of resources sufficient to host a target number of reference virtual machines with the characteristics shown in Table 6. Your virtual machines may not exactly match these specifications. In that case, define a single specific virtual machine as the equivalent of some number of reference virtual machines, and assume that these virtual machines are in use in the pool. Continue to provision virtual machines from the resource pool until no resources remain.

Example 1: Custom-built application

A small custom-built application server must move into a virtual infrastructure. The physical hardware that supports the application is not fully used. A careful analysis of the existing application reveals that it can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges from 4 IOPS at idle to a peak of 15 IOPS when busy. The entire application consumes about 30 GB of local hard drive storage.
Based on these numbers, the resource pool needs the following resources:
- CPU of one reference virtual machine
- Memory of two reference virtual machines
- Storage of one reference virtual machine
- I/O of one reference virtual machine

In this example, one virtual machine uses the resources of two reference virtual machines.

Example 2: Point-of-sale system

The database server for a point-of-sale system must move into a virtual infrastructure. The server currently runs on a physical system with 4 CPUs, 16 GB of memory, and 200 GB of storage. The workload is 200 IOPS during an average busy cycle. The requirements to virtualize this application are as follows:
- CPUs of four reference virtual machines
- Memory of eight reference virtual machines
- Storage of two reference virtual machines
- I/O of eight reference virtual machines

In this case, one virtual machine uses the resources of eight reference virtual machines.

Example 3: Web server

A web server must move into this virtual infrastructure. The server currently runs on a physical system with 2 CPUs, 8 GB of memory, and 25 GB of storage. The workload is 50 IOPS during an average busy cycle. The requirements to virtualize this application are as follows:
- CPUs of two reference virtual machines
- Memory of four reference virtual machines
- Storage of one reference virtual machine
- I/O of two reference virtual machines

In this case, one virtual machine uses the resources of four reference virtual machines.

Example 4: Decision-support database

The database server for a decision-support system must move into a virtual infrastructure. The server currently runs on a physical system with 10 CPUs, 64 GB of memory, and 5 TB of storage. The workload is 700 IOPS during an average busy cycle. The requirements to virtualize this application are as follows:
- CPUs of 10 reference virtual machines
- Memory of 32 reference virtual machines
- Storage of 52 reference virtual machines
- I/O of 28 reference virtual machines

In this case, one virtual machine uses the resources of 52 reference virtual machines.
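The four examples follow a single rule: convert each resource dimension into reference-VM units, round up, and let the most demanding dimension govern pool consumption. A short sketch mechanizes that rule and reproduces the example results:

```python
import math

# Reference virtual machine from Table 6: 1 vCPU, 2 GB RAM, 100 GB storage, 25 IOPS.
REF = {"cpu": 1, "memory": 2, "storage": 100, "io": 25}

def reference_vm_equivalents(vcpus, ram_gb, storage_gb, iops):
    """Per-dimension reference-VM counts and the overall pool consumption.

    Each dimension is rounded up; the workload consumes as many reference
    VMs as its most demanding dimension requires.
    """
    dims = {
        "cpu": math.ceil(vcpus / REF["cpu"]),
        "memory": math.ceil(ram_gb / REF["memory"]),
        "storage": math.ceil(storage_gb / REF["storage"]),
        "io": math.ceil(iops / REF["io"]),
    }
    return dims, max(dims.values())

# Example 4: decision-support database (10 CPUs, 64 GB RAM, 5 TB, 700 IOPS)
dims, total = reference_vm_equivalents(10, 64, 5 * 1024, 700)
print(dims, total)  # {'cpu': 10, 'memory': 32, 'storage': 52, 'io': 28} 52
```

Applied to a mix of one custom application (2), one point-of-sale system (8), two web servers (4 each), and ten decision-support databases (52 each), the rule yields 2 + 8 + 8 + 520 = 538 reference virtual machines.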

Summary

The preceding examples illustrate the flexibility of the resource pool model. In each example, the workload reduces the amount of available resources in the pool. Suppose that, with business growth, the customer must implement a much larger virtual environment to support one custom-built application, one point-of-sale system, two web servers, and ten decision-support databases. Using the same strategy, calculating the number of equivalent reference virtual machines gives a total of 538. All of these reference virtual machines can be implemented on the same virtual infrastructure supported by a single X-Brick, and the remaining capacity stays available in the resource pool for deploying other services, as shown in Figure 9.

Figure 9. Resource pool flexibility

In more advanced cases, tradeoffs might be necessary between memory and I/O or other resources, where increasing the amount of one resource decreases the need for another. In such cases, the interactions between resource allocations become highly complex and are beyond the scope of this document. Examine the change in resource balance, determine the new level of requirements, and then add virtual machines to the infrastructure using the method described in the examples.

Use cases for server virtualization with XtremIO

Overview

While every organization has unique goals for deploying large-scale server virtualization, all organizations share commonalities that affect their IT teams. Organizations that implement server virtualization expect a range of benefits in efficiency, agility, and reliability. XtremIO storage clusters provide high performance and storage efficiency while delivering the operational ease required for virtualization. The following use cases demonstrate how you can use XtremIO for server virtualization.
Use Case 1: Software build environments

A popular use case for VMware virtualization is development and test servers. For example, software development teams commonly create a nightly software build in a virtual machine, clone it many times, and then execute regression testing against the clones in parallel. Each night the clones are destroyed and the process repeats with the next build.

The XtremIO all-flash array complements such automated development and test environments. XtremIO provides extraordinarily rapid provisioning of hundreds to thousands of virtual machines while minimizing flash storage capacity requirements. With XtremIO, you can increase the number of test machines without a corresponding increase in storage capacity costs. Provisioning of test machines can run in parallel with ongoing test cycles without performance impact, reducing test-cycle times.

Use Case 2: Virtual classrooms

On-demand training and virtual labs are important tools in many organizations. These organizations often package training courses and labs as virtual machines and offer them in a course catalog. When a student wants to take a course, a clone of the master virtual machine (or virtual application containing multiple virtual machines) for that course is created as an environment for the student. When the course is completed, the virtual machines are destroyed. Such environments require rapid, on-demand virtual machine provisioning while I/O to existing virtual machines continues.

VMworld 2013 provided such an educational environment for the show attendees. The Hands-on Lab (HoL) had more than 400 students taking labs. Each lab had an average of 10 virtual machines, and the HoL provisioned 20 labs in a pool for students to log in to. When a student selected a lab, new virtual machines were cloned to replace the virtual machines that were assigned from the pool. With an average virtual machine size of 30 GB, the complex, highly random virtual workload required 222 TB of high-performance storage. In addition to servicing all the IOPS for the active virtual machines, the storage infrastructure also had to handle the massive amount of vApp cloning activity throughout the day as pre-populated labs were replenished. This has historically been difficult to achieve but is simple to accomplish with XtremIO.
The entire infrastructure at VMworld 2013 ran on four XtremIO X-Bricks consuming one half of a rack with a power budget of 3,000 watts. According to the VMware lab team results, the arrays ran far under peak capabilities, and the ESX datastore latency was about 0.2 ms. The following website has more facts about XtremIO powering the HoL at VMworld: http://www.xtremio.com/vmworld-2013-cool-facts-about-xtremio-powering-the-hands-on-labs/.

Use Case 3: Service provider for cloud deployment

Enterprise IT has embraced cloud computing as a model for achieving on-demand IT and meeting business requirements through increased efficiency, agility, and reliability. The result is that IT is now able to deliver rapid innovation while balancing costs, quality of service, and risks. This shifts the cloud from being an opportunistic technology purchase to a strategic environment with broader business impact.

Staying ahead of the competition and growing the business require the ability to respond quickly and flexibly to changes in the marketplace. To achieve these objectives, business users are demanding ever-faster service deployment, and delivery models have ratcheted expectations skyward. Users want this kind of responsiveness for their business applications and are more than happy to do their own provisioning if IT will just give them the right tools.

Automated provisioning and deployment of requested infrastructure is a solution. A fully automated system can in principle reduce the infrastructure provisioning time, providing significant benefits to the IT organization as well as business users. However, realizing the full benefits of automated provisioning requires addressing the storage-related challenges of virtual machine provisioning. Standard datastore, virtual machine provisioning, and cloning methods commonly implemented on flash arrays can be expensive, requiring pre-allocation of capacity and full copies of virtual machines, each of which can be 50 GB or more in size. In a large-scale cloud data center, this mandates a huge amount of flash capacity and an equally large cost that most organizations are not willing to absorb. Furthermore, virtual machine provisioning tasks alone can consume the bulk of the flash array's performance potential, limiting its value.

The XtremIO all-flash array enables the rapid, large-scale provisioning of virtual machines while removing the capacity and performance impacts associated with high-volume virtual machine cloning operations. XtremIO uses its in-memory metadata model and in-line, real-time deduplication to complete cloning operations nearly instantaneously, because no data copying actually takes place. In addition, unlike traditional flash arrays, large-scale virtual machine provisioning does not affect the performance of storage I/O requests to existing virtual machines. Finally, the XtremIO array eliminates the need for VMware administrators to make a trade-off between performance and storage capacity for cloning, and supports full cloning as the preferred method.

XtremIO test results

Overview

The following section is a summary of the test results, with multiple metrics for XtremIO and the virtual machines.
These test results do not represent the only I/O rates and read/write ratios that XtremIO supports; they are merely the results of the tests that were performed. All tests were run on a correctly pre-conditioned XtremIO array. All-Flash Array Performance Testing Framework at http://idcdocserv.com/241856 provides more details.

Capacity savings from deduplication

The XtremIO array performs in-line data deduplication based on an algorithm that checks whether each 4 KB data block being stored on the array matches existing content. The result is that every storage I/O is deduplicated in real time on ingest, with only unique blocks being written to the SSDs.

In this test, we cloned n virtual machines (n represents 200, 400, 600, and so on, up to 2,800, all from a single X-Brick) from the same virtual machine template, which is defined in Table 6 on page 37. We then monitored the address space (volume space allocated) and the SSD space in use (physical space used) as the number of virtual machines scaled. Figure 10 shows both metrics.
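As an illustration of the content-addressed deduplication idea described above (a general sketch, not XtremIO's actual fingerprinting implementation), the following toy store fingerprints each 4 KB block on ingest and keeps only blocks whose content has not been seen before:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store that deduplicates fixed 4 KB blocks."""
    BLOCK = 4096

    def __init__(self):
        self.blocks = {}          # fingerprint -> unique block content
        self.logical_writes = 0   # number of 4 KB blocks written by hosts

    def write(self, data: bytes):
        # Split incoming data into 4 KB blocks and fingerprint each one;
        # only previously unseen content is stored ("written to SSD").
        for i in range(0, len(data), self.BLOCK):
            block = data[i:i + self.BLOCK].ljust(self.BLOCK, b"\0")
            self.logical_writes += 1
            self.blocks.setdefault(hashlib.sha256(block).hexdigest(), block)

    def dedup_ratio(self):
        return self.logical_writes / len(self.blocks)

store = DedupStore()
# One 32 KB "template" made of eight distinct blocks, cloned ten times:
template = b"".join(bytes([i]) * DedupStore.BLOCK for i in range(8))
for _ in range(10):
    store.write(template)
print(store.dedup_ratio())  # 10.0
```

Ten identical clones cost only one template's worth of unique blocks, which is the same effect the test observes when thousands of virtual machines are cloned from one template.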

Figure 10. Capacity savings from deduplication

During the test, both the address space and the SSD space in use grew linearly as the number of virtual machines grew, because the cloning always used the same virtual machine template. At the high scale (2,800 virtual machines), the test metrics show that XtremIO used a total of 3.335 TB of disk space, an average of only 1.22 GB of disk space for each virtual machine. This represents a space savings of more than 90 percent (a deduplication ratio of about 9:1) as compared to an array without deduplication.

Note: For each virtual machine, we populated 2 GB of user data, with 1 GB unique data among the virtual machines. You might see different physical disk usage with different sizes of data for each virtual machine.

Thin provisioning

The XtremIO array offers thin provisioning capabilities, which means it provides allocated capacity on demand to applications, without any subsequent space reclamation operations and without affecting I/O performance. XtremIO thin provisioning is also granular: capacity is allocated in 4 KB blocks to ensure optimal use of flash capacity, consistent with the I/O block sizes used by VMware vsphere.

In the same deduplication test (with the test metrics in Table 4 on page 33), we also monitored the volume size (provisioned space) and address space (volume space allocated) during the virtual machine scale-up. Figure 11 shows both metrics.

Figure 11. Capacity savings from thin provisioning

During the test, both volume size and address space grew linearly as the number of virtual machines grew, because we were cloning from the very same virtual machine template and every volume was the same size (6 TB each). At the high scale of 2,800 virtual machines, XtremIO presented more than 160 TB to the vsphere hosts with only 30 TB of volume space allocated on the array. This represents a space savings of more than 80 percent as compared to an array without thin provisioning capabilities.

Considering both thin provisioning and deduplication together, XtremIO maintains an overall efficiency of about 50 times throughout the virtual machine scale-up:

Overall efficiency = Provisioned Space / Physical Space Used

The overall result is an approximately 50x efficiency gain as compared with storage systems that have no thin provisioning or deduplication functionality. In summary, XtremIO delivers far greater storage efficiency than traditional storage systems.

Steady-state response and array utilization at high scale points

XtremIO delivers highly sustainable levels of the small, random I/O typically required in large-scale virtualized environments. A single XtremIO X-Brick delivers hundreds of thousands of IOPS using random 4 KB blocks. In this test, we ran I/O scaling tests from n virtual machines concurrently (n represents 200, 400, 600, and so on, up to 2,800, all from a single X-Brick), with the I/O profile defined in Table 6 on page 37. During the test, we focused our observation on the IOPS/latency and the CPU utilization of the array.
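The capacity-efficiency figures reported above can be reproduced from the overall-efficiency formula, using the values at the 2,800-VM scale point:

```python
# Values reported at the 2,800-VM scale point in the tests above.
provisioned_tb = 160.0   # volume size presented to the vSphere hosts
allocated_tb = 30.0      # address space allocated by thin provisioning
physical_tb = 3.335      # SSD space used after deduplication

thin_savings_pct = (1 - allocated_tb / provisioned_tb) * 100
overall_efficiency = provisioned_tb / physical_tb   # Provisioned / Physical
print(round(thin_savings_pct), round(overall_efficiency))  # 81 48
```

The roughly 81 percent thin-provisioning savings matches the "more than 80 percent" claim, and the 48:1 overall ratio is what the text rounds to "about 50 times".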

Figure 12 shows the IOPS and latency of the array while the number of virtual machines scales up. These performance results show that the overall IOPS on the XtremIO cluster increases linearly as the number of virtual machines increases; each virtual machine contributes the same IOPS profile. Meanwhile, the latency of the whole system remains less than 2 ms even at the largest scale of 2,800 virtual machines, which is an excellent result for most large-scale virtualized data centers.

Note: The 2 ms shown in Figure 12 is end-to-end at the guest OS. The latency on the array is less than 1 ms.

Figure 12. IOPS and array latency scale results

Figure 13 shows the CPU utilization of the array while the number of virtual machines scales up.

Figure 13. CPU utilization scale test results

As the number of virtual machines increases, the CPU usage increases linearly; this becomes more obvious as you scale up. Most importantly, 2,800 virtual machines consume only a fraction of the CPU, leaving plenty of room for growth. Based on projections, you can run many more virtual machines on a single X-Brick, assuming the same trend continues.

In summary, XtremIO can support a large-scale virtual data center, including virtualized I/O-intensive applications, while providing nondisruptive performance and the extra capacity needed for infrastructure growth.

VAAI

Because XtremIO is fully VAAI compliant, the array can communicate directly with vsphere and provide accelerated Storage vMotion, virtual machine provisioning, and thin provisioning functionality. In this test, virtual machine storage migrated from one datastore to another, with each datastore corresponding to a volume on XtremIO. During the test, 100 virtual machines (each with 100 GB VMDKs) were migrated in one batch. The test ran two batches, the first with VAAI enabled and the second with VAAI disabled on the ESXi host.

The first batch, with VAAI enabled, completed in 29 minutes, while the non-VAAI-enabled batch took about 140 minutes. This test validates an efficiency gain of about 400 percent for a VAAI-enabled virtual machine migration with Storage vMotion. Figure 14 shows the overall I/O throughput test statistics.
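The quoted gain follows directly from the two batch times; the exact ratio works out to roughly a 4.8x speedup, which the text rounds to "about 400 percent":

```python
vaai_min, no_vaai_min = 29, 140       # batch durations reported above

speedup = no_vaai_min / vaai_min                      # ~4.8x faster with VAAI
gain_pct = (no_vaai_min - vaai_min) / vaai_min * 100  # ~383 percent gain
print(round(speedup, 1), round(gain_pct))  # 4.8 383
```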

Figure 14. Average I/O throughput of Storage vMotion migrations with VAAI-enabled virtual machines

On the host, VAAI minimizes the I/O throughput (especially the large I/O bandwidth required by Storage vMotion). The test validates a 200x decrease in bandwidth and a 15x decrease in IOPS when VAAI is enabled: with VAAI enabled, XtremIO, instead of the host, performs the VMDK copy.

Figure 15 shows the time-based performance results (IOPS and bandwidth) of the virtual machine migration with Storage vMotion. The I/O throughput follows a cyclic pattern in the Storage vMotion test because not all virtual machines are migrated concurrently: vSphere groups virtual machines into batches and migrates the batches sequentially. Therefore, the I/O throughput peaks during the migration of the virtual machines within a batch and drops before the next batch begins.

Figure 15. XtremIO I/O throughput during virtual machine migration with Storage vMotion

The time-based performance statistics follow the same trend: VAAI-enabled storage requires less IOPS and lower CPU utilization on the array. In summary, the tests validate that large-scale VMware environments operate more efficiently with the XtremIO VAAI integration.

Simple provisioning and monitoring

XtremIO clusters enable rapid, large-scale provisioning of virtual machines through support of the VAAI X-COPY SCSI primitive, offloading the cloning of virtual machines from vSphere servers. This test demonstrates that application performance improves when virtual machine cloning tasks are entirely metadata driven, which leaves more SSD I/O available for virtual machine requests. Furthermore, this benefit is not exclusive to linked clones; the array delivers it for full clones or any combination of full and linked clones.

This test deployed one batch of virtual machines (100 in total) on XtremIO with a full clone from the gold image of one virtual machine. Each virtual machine contained one 50 GB OS drive and one 50 GB data drive. We performed the deployment twice, once with VAAI enabled on the vSphere servers (the default), and once with VAAI disabled on the vSphere servers. Figure 16 shows the total duration and the average deployment time for each virtual machine.

Figure 16. Total and average virtual machine deployment duration

Deployment completed in about 20 minutes, or about 12 seconds per virtual machine, with VAAI enabled on the vSphere server, which is only one-fifth of the duration with VAAI disabled.
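The per-VM deployment time, and the VAAI-disabled duration implied by "one-fifth", follow from the batch figures above:

```python
vms = 100
vaai_total_min = 20                      # VAAI-enabled batch duration

per_vm_sec = vaai_total_min * 60 / vms   # seconds per virtual machine
no_vaai_total_min = vaai_total_min * 5   # "one-fifth of the duration"
print(per_vm_sec, no_vaai_total_min)     # 12.0 100
```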

Because we saw similar percentage decreases during the Storage vMotion test, this demonstrates that the XtremIO array is about 400 percent more efficient than traditional arrays at VMDK copying and cloning.

The deduplication statistics after deployment of the new data are as follows:

Address space (allocated volume space): 5.67 TB
SSD space (physical space used): 0.022 TB
Deduplication ratio: 257:1

The newly deployed virtual machines have a high deduplication ratio because the virtual machines have almost the same layout. The XtremIO cluster uses its in-line, real-time deduplication implementation and in-memory metadata structures to complete cloning operations nearly instantaneously, so no data copying is necessary. As a result, a much smaller amount of disk space is physically consumed.

Figure 17 illustrates that the XtremIO implementation of the VAAI XCOPY command consists of memory-based operations without any read or write operations to SSDs. No data blocks are copied; new pointers are created to the existing data. In the figure, the X-Copy command copies VM1's RAM-based metadata pointers to a new address range for VM2, while the data blocks on SSD remain untouched.

Figure 17. XtremIO implementation of the VAAI XCOPY

The test validates, in addition to the time and capacity efficiency, a lower average IOPS for each SSD (typically less than 1,000), which benefits from the in-memory metadata structures of XtremIO, as shown in Figure 18.
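A minimal sketch of the pointer-copy behavior in Figure 17, with illustrative names only: cloning duplicates the RAM-based metadata table, and both address spaces then reference the same blocks on SSD, so nothing is read from or written to flash.

```python
# Data on SSD, keyed by block fingerprint (contents are placeholders).
ssd_blocks = {"A": b"block-a", "B": b"block-b", "C": b"block-c", "D": b"block-d"}

# VM1's RAM-based metadata: logical address -> block fingerprint.
vm1_pointers = {0: "A", 4096: "B", 8192: "C", 12288: "D"}

def xcopy(source_pointers):
    """Clone an address range by copying metadata pointers, not data."""
    return dict(source_pointers)   # pure in-memory copy; SSD untouched

vm2_pointers = xcopy(vm1_pointers)
print(vm2_pointers == vm1_pointers, vm2_pointers is vm1_pointers)  # True False
```

The clone is an independent pointer table, so later overwrites to either VM simply redirect that VM's pointers to newly written unique blocks.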

Figure 18. Average IOPS per SSD during virtual machine deployment

In summary, with in-memory metadata and in-line deduplication, virtual machine deployment can be completed nearly instantaneously, with the array supporting virtual machine cloning and Storage vMotion at the rate of several GB/s per X-Brick. Virtual machine cloning performance increases linearly as the size of the XtremIO cluster grows. This gives administrators tremendous flexibility to roll out virtual machines without affecting ongoing support of storage I/O to active virtual machines.

Finally, XtremIO provides a performance dashboard for monitoring the I/O throughput live on the array during the virtual machine deployment process, as shown in Figure 19.

Figure 19. Performance dashboard of the XtremIO array

The unit of measurement for the activity depends on the selected tab:

Bandwidth is measured in MB per second.
IOPS is the number of I/O operations per second.
Latency is the response time in milliseconds.

All information appears in two colors, which indicate read and write activity.

Chapter 5 Conclusion

Summary

The XtremIO all-flash array is built on an innovative architecture that completely redefines server virtualization. XtremIO enables unparalleled agility and greatly reduces deployment and operational complexity for virtual data centers. The tests confirm that XtremIO provides multiple advantages to support data center virtualization.

Findings

While testing, we identified that this solution:

Excels with the small random I/O patterns common in VMware environments. It also acts as the storage foundation for virtualized data centers, while providing industry-leading performance and extra capacity for growth. The combination enables the virtualization of I/O-intensive applications.

Guarantees that only unique blocks are written to the array. This in-line deduplication model has no I/O performance impact, providing more bandwidth for data management performance through the storage controllers. It also dramatically reduces the flash capacity requirements of virtual machines with high amounts of redundant data.

Provides deduplication-aware VAAI integration so that I/O-intensive operations, such as virtual machine clones, are entirely cache-memory-based operations that require no access to flash media. As a result, latencies for handling bursts and read-intensive operations, such as boot storms, are significantly decreased, which allows for consistent sub-millisecond data access.

Natively provides automated thin provisioning capabilities so that capacity is allocated on demand, without affecting array storage I/O performance.

Provides data protection techniques that combine the benefits of traditional RAID algorithms while avoiding their pitfalls, using new capabilities of the XtremIO storage cluster that leverage the specific properties of flash media.


Appendix A References

This appendix presents the following topics:

References... 54

References

EMC documentation

The following documentation on EMC.com provides additional and relevant information. If you do not have access to a document, contact your EMC representative:

EMC XtremIO Overview (http://www.xtremio.com/resources/xtremio-overview/)
EMC VSPEX Private Cloud VMware vSphere 5.5 for up to 1,000 Virtual Machines Proven Infrastructure
EMC VSPEX for Virtualized Microsoft Exchange 2013 Design Guide
EMC XtremIO: VMware View Solution Guide

Other documentation

The following VMware vSphere documentation on VMware.com provides additional and relevant information:

vSphere Networking
vSphere Storage Guide
vSphere Virtual Machine Administration
vSphere Installation and Setup
vCenter Server and Host Management
vSphere Resource Management Guide
vSphere Storage APIs Array Integration (VAAI)