EMC VSPEX PRIVATE CLOUD


VMware vSphere 5.5 for up to 125 Virtual Machines
Enabled by Microsoft Windows Server 2012 R2, EMC VNXe3200, and EMC Powered Backup

EMC VSPEX

Abstract

This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with VMware vSphere 5.5, EMC VNXe3200, and EMC Powered Backup for up to 125 virtual machines.

May 2014

Copyright 2014 EMC Corporation. All rights reserved. Published in the USA. Published May 2014.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC VSPEX Private Cloud: VMware vSphere 5.5 for up to 125 Virtual Machines Enabled by Microsoft Windows Server 2012 R2, EMC VNXe3200, and EMC Powered Backup

Part Number H

Contents

Chapter 1 Executive Summary 13
Introduction
Target audience
Document purpose
Business needs

Chapter 2 Solution Overview 17
Introduction
Virtualization
Compute
Network
Storage
EMC next-generation VNXe
EMC Powered Backup

Chapter 3 Solution Technology Overview 25
Overview
Key components
Virtualization
Overview
VMware vSphere
New VMware vSphere 5.5 features
VMware vCenter
VMware vSphere High Availability
EMC Virtual Storage Integrator for VMware vSphere
VNXe support for VMware vSphere Storage API for Array Integration
Compute
Network
Storage
Overview
EMC VNXe series
VNXe Virtual Provisioning
VNXe FAST Cache
VNXe FAST VP
vCloud Networking and Security

VNXe file shares
ROBO
EMC Powered Backup
Overview
EMC Avamar deduplication
EMC Data Domain deduplication storage systems
VMware vSphere Data Protection
vSphere Replication
EMC RecoverPoint
Other technologies
Overview
VMware vCloud Automation Center
VMware vCenter Operations Management Suite
VMware vCenter Single Sign-On
Public-key infrastructure
PowerPath/VE (for block)
EMC XtremCache

Chapter 4 Solution Architecture Overview 46
Overview
Solution architecture
Overview
Logical architecture
Key components
Storage network
Hardware resources
Software resources
Server configuration guidelines
Overview
Ivy Bridge updates
VMware vSphere memory virtualization for VSPEX
Memory configuration guidelines
Network configuration guidelines
Overview
VLANs
Enable jumbo frames (for iSCSI and NFS)
Link aggregation (for NFS)
Storage configuration guidelines
Overview

VMware vSphere storage virtualization for VSPEX
VSPEX storage building blocks
VSPEX private cloud validated maximums
High availability and failover
Overview
Virtualization layer
Compute layer
Network layer
Storage layer
Validation test profile
Profile characteristics
Backup and recovery configuration guidelines
Sizing guidelines
Reference workload
Overview
Defining the reference workload
Applying the reference workload
Overview
Example 1: Custom-built application
Example 2: Point-of-sale system
Example 3: Web server
Example 4: Decision-support database
Summary of examples
Implementing the solution
Overview
Resource types
CPU resources
Memory resources
Network resources
Storage resources
Implementation summary
Quick assessment
Overview
CPU requirements
Memory requirements
Storage performance requirements
I/O operations per second
I/O size
I/O latency

Storage capacity requirements
Determining equivalent reference virtual machines
Fine-tuning hardware resources
EMC VSPEX Sizing Tool

Chapter 5 VSPEX Configuration Guidelines 85
Overview
Pre-deployment tasks
Overview
Deployment prerequisites
Customer configuration data
Prepare switches, connect network, and configure switches
Overview
Prepare network switches
Configure infrastructure network
Configure VLANs
Configure jumbo frames (iSCSI and NFS only)
Complete network cabling
Prepare and configure the storage array
VNXe configuration for block protocols
VNXe configuration for file protocols
FAST VP configuration (optional)
FAST Cache configuration (optional)
Install and configure the VMware vSphere hosts
Overview
Install ESXi
Configure ESXi networking
Install and configure PowerPath/VE (block only)
Connect VMware datastores
Plan virtual machine memory allocations
Install and configure Microsoft SQL Server databases
Overview
Create a virtual machine for SQL Server
Install Microsoft Windows on the virtual machine
Install SQL Server
Configure database for VMware vCenter
Configure database for VMware Update Manager
Install and configure VMware vCenter Server
Overview

Create the vCenter host virtual machine
Install vCenter guest OS
Create vCenter ODBC connections
Install vCenter Server
Apply vSphere license keys
Install the EMC VSI plug-in
Create a virtual machine in vCenter
Perform partition alignment, and assign file allocation unit size
Create a template virtual machine
Deploy virtual machines from the template virtual machine
Summary

Chapter 6 Verifying the Solution 115
Overview
Post-install checklist
Deploy and test a single virtual server
Verify the redundancy of the solution components
Block and file environments

Chapter 7 System Monitoring 118
Overview
Key areas to monitor
Performance baseline
Servers
Networking
Storage
VNX resource monitoring guidelines
Monitoring block storage resources
Monitoring file storage resources

Appendix A Bill of Materials 135
Bill of materials

Appendix B Customer Configuration Data Sheet 139
Customer configuration data sheet

Appendix C Server Resource Component Worksheet 143
Server resource component worksheet

Appendix D References 145
References
EMC documentation

Other documentation

Appendix E About VSPEX 147
About VSPEX

Figures

Figure 1. Next-Generation VNXe with multicore optimization
Figure 2. New Unisphere Management Suite
Figure 3. EMC Powered Backup solutions
Figure 4. Private cloud components
Figure 5. Compute layer flexibility
Figure 6. Example of highly available network design for block
Figure 7. Storage pool rebalance progress
Figure 8. Thin LUN space utilization
Figure 9. Examining storage pool space utilization
Figure 10. Logical architecture for block storage
Figure 11. Logical architecture for file storage
Figure 12. Ivy Bridge processor guidance
Figure 13. Hypervisor memory consumption
Figure 14. Required networks for block storage
Figure 15. Required networks for file storage
Figure 16. VMware virtual disk types
Figure 17. Storage layout building block for 15 virtual machines
Figure 18. Storage layout building block for 125 virtual machines
Figure 19. Storage layout for 125 virtual machines using VNXe
Figure 20. Maximum scale levels and entry points of different arrays
Figure 21. High availability at the virtualization layer
Figure 22. Redundant power supplies
Figure 23. Network layer high availability (VNXe)
Figure 24. VNXe series high availability
Figure 25. Resource pool flexibility
Figure 26. Required resource from the reference virtual machine pool
Figure 27. Aggregate resource requirements stage
Figure 28. Pool configuration stage
Figure 29. Aggregate resource requirements stage
Figure 30. Pool configuration stage
Figure 31. Customizing server resources
Figure 32. Sample network architecture - block storage
Figure 33. Sample Ethernet network architecture - file storage
Figure 34. Configure NAS Server Address
Figure 35. FAST VP relocation tab
Figure 36. Scheduled FAST VP relocation
Figure 37. FAST VP Relocation Schedule
Figure 38. Create FAST Cache

Figure 39. Advanced tab in the Create Storage Pool dialog box
Figure 40. Advanced tab in the Storage Pool Properties dialog box
Figure 41. Virtual machine memory settings
Figure 42. Storage Pool alert settings
Figure 43. Storage Pool Snapshot settings
Figure 44. Storage Pools panel
Figure 45. LUN Properties dialog box
Figure 46. System panel
Figure 47. System Health panel
Figure 48. IOPS on the LUNs
Figure 49. IOPS on the drives
Figure 50. Latency on the LUNs
Figure 51. SP CPU Utilization
Figure 52. VNXe file statistics
Figure 53. System Capacity panel
Figure 54. File Systems panel
Figure 55. File System Capacity panel
Figure 56. System Performance panel displaying file metrics

Tables

Table 1. VNXe customer benefits
Table 2. Solution hardware
Table 3. Solution software
Table 4. Hardware resources for the compute layer
Table 5. Hardware resources for the network layer
Table 6. Hardware resources for the storage layer
Table 7. Number of disks required for different numbers of virtual machines
Table 8. Profile characteristics
Table 9. Virtual machine characteristics
Table 10. Blank worksheet row
Table 11. Reference virtual machine resources
Table 12. Example worksheet row
Table 13. Example applications stage
Table 14. Example applications stage
Table 15. Server resource component totals
Table 16. Deployment process overview
Table 17. Tasks for pre-deployment
Table 18. Deployment prerequisites checklist

Table 19. Tasks for switch and network configuration
Table 20. Tasks for VNXe configuration
Table 21. Storage allocation table for block data
Table 22. Tasks for storage configuration
Table 23. Storage allocation table for file data
Table 24. Tasks for server installation
Table 25. Tasks for SQL Server database setup
Table 26. Tasks for vCenter configuration
Table 27. Tasks for testing the installation
Table 28. Rules of thumb for drive performance
Table 29. Best practice for performance monitoring
Table 30. List of components used in the VSPEX solution for 125 virtual machines
Table 31. Common server information
Table 32. ESXi server information
Table 33. Array information
Table 34. Network infrastructure information
Table 35. VLAN information
Table 36. Service accounts
Table 38. Blank worksheet for server resource totals


Chapter 1 Executive Summary

This chapter presents the following topics:
Introduction
Target audience
Document purpose
Business needs

Introduction

EMC VSPEX validated and modular architectures are built with proven technologies, comprising complete virtualization solutions that enable you to make an informed decision in the hypervisor, compute, and networking layers. VSPEX helps to reduce virtualization planning and configuration burdens. When embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, expanded choices, greater efficiency, and lower risk.

This document is a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select server and networking hardware that meets or exceeds the stated minimums.

Target audience

The readers of this document must have the necessary training and background to install and configure VMware vSphere 5.5, EMC next-generation VNXe series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and readers should be familiar with these documents. Readers should also be familiar with the infrastructure and database security policies of the customer installation.

Individuals selling and sizing a VMware private cloud infrastructure must pay particular attention to the first four chapters of this document. After purchase, implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose

This document includes an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the system.
The VSPEX Private Cloud architecture provides customers with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on the VMware vSphere virtualization layer, backed by highly available VNXe series storage. The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

The 125 virtual machine VMware private cloud solution described in this document is based on the VNXe3200 storage array and on a defined reference workload. Since not every virtual machine has the same requirements, this document contains methods and guidance to adjust your system to be cost-effective as deployed. For larger environments, solutions for up to 1,000 virtual machines based on the EMC VNX series are described in EMC VSPEX Private Cloud: VMware vSphere 5.5 for up to 1000 Virtual Machines.

A private cloud architecture is a complex system offering. This document facilitates setup by providing prerequisite software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component has been installed, validation tests and monitoring instructions ensure that your system is running properly. Following the instructions in this document ensures an efficient and painless journey to the cloud.

Business needs

VSPEX solutions are built with proven technologies to create complete virtualization solutions that allow you to make an informed decision in the hypervisor, server, and networking layers.

Business applications are moving into consolidated compute, network, and storage environments. EMC VSPEX Private Cloud using VMware reduces the complexity of configuring every component of a traditional deployment model. The solution simplifies integration management while maintaining the application design and implementation options. It also provides unified administration while enabling adequate control and monitoring of process separation.

The business benefits of the VSPEX Private Cloud for VMware architectures include:
An end-to-end virtualization solution that effectively uses the capabilities of the unified infrastructure components
A VSPEX Private Cloud solution for VMware that efficiently virtualizes up to 125 virtual machines for varied customer use cases
A reliable, flexible, and scalable reference design


Chapter 2 Solution Overview

This chapter presents the following topics:
Introduction
Virtualization
Compute
Network
Storage

Introduction

The VSPEX Private Cloud for VMware vSphere 5.5 solution provides a complete system architecture capable of supporting up to 125 virtual machines, with a redundant server and network topology and highly available storage. The core components that make up this solution are virtualization, compute, storage, and networking.

Virtualization

VMware vSphere is the leading virtualization platform in the industry. For years, it has provided flexibility and cost savings to end users by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures.

The core VMware vSphere components are the VMware vSphere hypervisor and the VMware vCenter Server for system management. The VMware hypervisor runs on a dedicated server and allows multiple operating systems to run simultaneously on the system as virtual machines. These hypervisor systems can be connected to operate in a clustered configuration. The clustered configurations are then managed as a larger resource pool through VMware vCenter, and allow for dynamic allocation of CPU, memory, and storage across the cluster.

Features such as VMware vMotion, which allows a virtual machine to move between different servers with no disruption to the operating system, and Distributed Resource Scheduler (DRS), which performs vMotion migrations automatically to balance load, make vSphere a solid business choice. With vSphere 5.5, a VMware-virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual random access memory (RAM).

Compute

VSPEX provides the flexibility to design and implement the customer's choice of server components.
The infrastructure must conform to the following attributes:
Sufficient cores and memory to support the required number and types of virtual machines
Sufficient network connections to enable redundant connectivity to the system switches
Sufficient capacity to enable the environment to withstand a server failure and failover in the environment
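The compute-layer attributes above reduce to simple capacity arithmetic. The sketch below illustrates one way to check a proposed server farm against them; the per-VM figures and the 4:1 vCPU-to-core consolidation ratio are assumptions invented for the example, not values defined by this document.

```python
# Hedged sketch: check that a server farm meets the compute-layer minimums,
# including surviving the failure of one host (N+1 capacity).
# The workload numbers below are illustrative assumptions, not VSPEX values.

def compute_layer_ok(hosts, cores_per_host, ram_gb_per_host,
                     vms, vcpus_per_vm, ram_gb_per_vm,
                     vcpu_per_core=4):
    """Return True if the farm can run the workload with one host failed."""
    usable_hosts = hosts - 1                      # withstand a single server failure
    core_capacity = usable_hosts * cores_per_host * vcpu_per_core
    ram_capacity = usable_hosts * ram_gb_per_host
    return (vms * vcpus_per_vm <= core_capacity and
            vms * ram_gb_per_vm <= ram_capacity)

# Example: 4 hosts with 16 cores / 256 GB RAM each, hosting 125 VMs
# at an assumed 1 vCPU and 2 GB RAM per VM
print(compute_layer_ok(4, 16, 256, 125, 1, 2))   # → True
```

A farm that passes this check still needs the redundant network connectivity listed above; this sketch covers cores and memory only.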

Network

VSPEX provides the flexibility to design and implement the customer's choice of network components. The infrastructure must conform to the following attributes:
Redundant network links for the hosts, switches, and storage
Traffic isolation based on industry-accepted best practices
Support for link aggregation

Storage

The EMC VNXe series is the leading shared storage platform in the industry. VNXe provides both file and block access with a broad feature set, which makes it an ideal choice for any private cloud implementation. VNXe storage includes the following components, sized for the specified reference architecture workload:

I/O ports (for block and file): Provide host connectivity to the array, which supports CIFS/SMB, NFS, FC, and iSCSI.
Storage processors (SPs): The compute components of the storage array, used for all aspects of data moving into, out of, and between arrays. Unlike the VNX series, which requires external processing units known as Data Movers to provide file services, the VNXe contains integrated code that provides file services to hosts.
Disk drives: Disk spindles and solid state drives (SSDs) that contain the host or application data, and their enclosures.

The VMware private cloud solution for 125 virtual machines described in this document is based on the VNXe3200 storage array. The VNXe3200 currently supports a maximum of 50 drives.

The EMC VNXe series supports a wide range of business-class features ideal for the private cloud environment, including:
Fully Automated Storage Tiering for Virtual Pools (FAST VP)
FAST Cache
Thin provisioning
Snapshots/checkpoints
File-Level Retention (FLR)
Quota management
Deduplication
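A common first-pass approach to sizing the disk-drive component is to divide the aggregate workload IOPS by a per-drive service rate (the document's Table 28 covers rules of thumb for drive performance). The sketch below shows the calculation only; the per-drive IOPS figures and the per-VM load are illustrative assumptions, not values taken from this document's validated reference workload, and the calculation ignores RAID write penalty.

```python
import math

# Hedged sketch of rule-of-thumb drive-count sizing. The per-drive IOPS
# service rates and per-VM load below are illustrative assumptions only.
DRIVE_IOPS = {"15k_sas": 180, "10k_sas": 150, "nl_sas": 90, "flash": 3500}

def drives_needed(vms, iops_per_vm, drive_type):
    """Minimum drive count to service the aggregate IOPS (no RAID penalty)."""
    total_iops = vms * iops_per_vm
    return math.ceil(total_iops / DRIVE_IOPS[drive_type])

# 125 VMs at an assumed 25 IOPS each = 3,125 IOPS
print(drives_needed(125, 25, "10k_sas"))  # → 21
```

The result fits comfortably within the VNXe3200's 50-drive maximum noted above; a real sizing exercise should use the reference workload and worksheets in Chapter 4.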

EMC next-generation VNXe

Features and enhancements

EMC now offers customers even greater performance and choice than before with the inclusion of the next generation of VNXe unified storage in the VSPEX family of Proven Infrastructures. The next-generation VNXe, led by the VNXe3200, offers a hybrid, unified storage system for VSPEX customers who need to centralize and simplify storage when transforming their IT infrastructure and delivery model.

Customers who need to virtualize up to 125 virtual machines with VSPEX Private Cloud solutions will now see the benefits that the new multicore (MCx) VNXe3200 brings. The new architecture distributes all data services across all the system's cores. This means that cache management and back-end RAID management processes scale linearly and benefit greatly from the latest Intel multicore CPUs. Simply put, I/O operations in VSPEX run faster and more efficiently than ever before with the new VNXe3200.

The VNXe3200 ushers in a profoundly new experience for small and medium-sized VSPEX owners, delivering performance and scale at a lower price. The VNXe3200 is a significantly more powerful system than the previous VNXe series and ships with many enterprise-like features and capabilities, such as auto-tiering, file deduplication, and compression, which add to the simplicity, efficiency, and flexibility of the VSPEX Private Cloud.

EMC FAST Cache and FAST VP, features that have in the past been exclusive to the VNX, are now available to VSPEX customers with VNXe3200 storage. FAST Cache dynamically extends the storage system's existing read/write caching capacity to increase system-wide performance and lower the cost per virtual machine. FAST Cache uses high-performing flash drives that are positioned between the primary cache (DRAM-based) and the hard disk drives.
This feature boosts the performance of highly transactional applications and virtual desktops by keeping hot data in the cache, so it is available when you need it.

VNXe3200 FAST Cache and FAST VP auto-tiering lower the total cost of ownership through policy-based movement of your data to the right storage type. This intelligently maximizes the cost investment and speed benefit of SSDs across the system while leveraging the capacity of less costly spinning drives, avoiding over-purchasing and exhaustive manual configuration.

The EMC VNXe flash-optimized unified storage platform delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, the VNXe combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's virtualized application environments.

The VNXe includes many features and enhancements designed and built upon the success of the next-generation VNX series, including:
More capacity, with multicore optimization through multicore cache, multicore RAID, and multicore FAST Cache (MCx)
Greater efficiency, with a flash-optimized hybrid array

Easier administration and deployment, with increased productivity through the new Unisphere Management Suite

VSPEX built with the next-generation VNXe delivers even greater efficiency, performance, and scalability than ever before.

Flash-optimized hybrid array

The VNXe is a flash-optimized hybrid array that provides automated tiering to deliver the best performance for your critical data, while intelligently moving less frequently accessed data to lower-cost disks. In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. A flash-optimized VNXe takes full advantage of the low latency of flash to deliver cost-saving optimization and high-performance scalability.

The EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and migrates the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance. Data is used most frequently at the time it is created; therefore, new data should first be stored on flash drives for the best performance. As that data ages and becomes less active over time, FAST VP automatically moves the data from high-performance to high-capacity drives, based on customer-defined policies.

FAST Cache dynamically absorbs unpredicted spikes in system workloads. Inactive data on high-capacity drives that suddenly becomes active can benefit from FAST Cache, which provides immediate performance benefits by promoting the data to flash drives. All VSPEX use cases benefit from this increased efficiency.

Note: This reference architecture does not make use of FAST Cache or FAST VP.

VSPEX Proven Infrastructures deliver private cloud, end-user computing, and virtualized application solutions. With the VNXe, customers can realize an even greater return on their investment.
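The tiering behavior described above — promote hot data to flash, demote aging data to capacity drives — can be sketched as a simple policy loop. This is an illustrative model only: the per-slice heat tracking, tier names, and thresholds are invented for the example and do not reflect FAST VP internals or its actual relocation algorithm.

```python
# Illustrative model of policy-based tiering in the spirit of FAST VP.
# Slice granularity, tier names, and thresholds are assumptions for this
# sketch only; they do not represent the VNXe implementation.

def retier(slices, flash_capacity, hot_threshold=100):
    """Assign each data slice to 'flash' or 'capacity' by access count.

    slices: dict mapping slice id -> accesses in the last relocation window.
    The hottest slices go to flash, up to flash_capacity slices; everything
    else lands on high-capacity drives.
    """
    ranked = sorted(slices, key=slices.get, reverse=True)
    placement = {}
    for i, slice_id in enumerate(ranked):
        hot = slices[slice_id] >= hot_threshold and i < flash_capacity
        placement[slice_id] = "flash" if hot else "capacity"
    return placement

heat = {"s1": 500, "s2": 20, "s3": 150, "s4": 90}
print(retier(heat, flash_capacity=2))
# → {'s1': 'flash', 's3': 'flash', 's4': 'capacity', 's2': 'capacity'}
```

Rerunning the loop each relocation window captures the aging effect described above: a slice whose access count drops below the threshold is demoted on the next pass.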
The VNXe also provides out-of-band, file-based deduplication that can dramatically lower the costs of the flash tier.

VNXe Intel MCx code path optimization

The advent of flash technology has been a catalyst in changing the requirements of entry-level and midrange storage systems. EMC redesigned the VNXe storage platform to efficiently use multicore CPUs, providing the highest-performing storage system at the lowest cost in the market.

MCx distributes all VNXe data services across all processor cores, as shown in Figure 1. The VNXe series with MCx has dramatically improved file performance for transactional applications, such as databases or virtual machines, over network-attached storage (NAS).

Figure 1. Next-Generation VNXe with multicore optimization

Multicore cache

The cache is the most valuable asset in the storage subsystem; its efficient use is key to the overall efficiency of the platform in handling variable and changing workloads. The VNXe cache engine has been modularized to take advantage of all the cores available in the system.

Multicore RAID

Another important part of the MCx redesign is the handling of I/O to the permanent back-end storage drives, both hard disk drives (HDDs) and SSDs. The modularization of the back-end data management processing, which enables MCx to seamlessly scale across all processors, greatly improves VNXe performance.

VNXe performance enhancements

VNXe storage, enabled with the MCx architecture, is optimized for FLASH 1st and provides unprecedented overall performance; it optimizes transaction performance (cost per IOPS), bandwidth performance (cost per GB/s) with low latency, and capacity efficiency (cost per GB). VNXe with MCx technology improves overall storage performance by up to four times compared with previous-generation VNXe models.

Unisphere Management Suite

The new Unisphere Management Suite extends Unisphere's easy-to-use interface to include VNXe Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in Figure 2, the suite also includes Unisphere Remote for centrally managing thousands of VNX and VNXe systems, with new support for EMC XtremCache.

Figure 2. New Unisphere Management Suite

Virtualization management

EMC Virtual Storage Integrator for VMware vSphere

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a no-charge VMware vCenter plug-in available to all VMware users with EMC storage. VSPEX customers can use VSI to simplify management of virtualized storage. VMware administrators can gain visibility into their VNXe storage using the same familiar vCenter interface to which they are accustomed.

With VSI, IT administrators can do more work in less time. VSI offers unmatched access control that enables you to efficiently manage and delegate storage tasks with confidence: you can perform daily management tasks with up to 90 percent fewer clicks and up to 10 times higher productivity.

Note: VSI will be supported with the VNXe3200 after general availability. For more information, contact the appropriate EMC support channel.

VMware vSphere Storage APIs for Array Integration

VMware vSphere Storage APIs for Array Integration (VAAI) offloads VMware storage-related functions from the server to the storage system, enabling more efficient use of server and network resources for increased performance and consolidation.

VMware vSphere Storage APIs for Storage Awareness

VMware vSphere Storage APIs for Storage Awareness (VASA) is a VMware-defined API that displays storage information through vCenter. Integration between VASA technology and VNXe makes storage management in a virtualized environment a seamless experience.

EMC Powered Backup

EMC Powered Backup solutions, EMC Avamar and EMC Data Domain, deliver the protection confidence needed to accelerate the deployment of VSPEX private clouds.
Optimized for virtual environments, EMC Powered Backup reduces backup times by 90 percent and increases recovery speeds by 30 times, even offering virtual machine instant access for worry-free protection. EMC backup appliances add another layer of assurance with end-to-end verification and self-healing to ensure successful recoveries. With industry-leading deduplication, you can reduce backup storage by 10 to 30 times, backup management time by 81 percent, and WAN bandwidth by 99 percent for efficient disaster recovery; this delivers a seven-month payback period on average. You will be able to scale storage easily and efficiently as your environment grows.

EMC Powered Backup solutions that can be used in this VSPEX solution include the EMC Avamar deduplication software and system, the EMC Data Domain deduplication storage system, and VDP Advanced.

Figure 3. EMC Powered Backup solutions

For a 125 virtual machine VMware-based VSPEX private cloud deployment, we recommend VMware vSphere Data Protection Advanced (VDP Advanced) as your backup solution. Powered by Avamar technology, VDP Advanced offers the benefits of Avamar's fast, efficient image-level backup and recovery for complete protection confidence.
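The deduplication claims quoted above translate directly into capacity arithmetic. The short sketch below shows the calculation; the 100 TB protected-data figure is an invented example input, while the 10x to 30x range is the figure cited in the text.

```python
# Worked example of the backup-storage reduction quoted above.
# The 100 TB of logical backup data is an invented input; the 10x-30x
# deduplication range is the figure cited in the text.

def dedup_backup_size(logical_tb, dedup_ratio):
    """Physical backup capacity needed after deduplication."""
    return logical_tb / dedup_ratio

protected = 100.0  # TB of logical backup data (assumption)
for ratio in (10, 30):
    print(f"{ratio}x dedup: {dedup_backup_size(protected, ratio):.1f} TB on disk")
# 10x dedup: 10.0 TB on disk
# 30x dedup: 3.3 TB on disk
```

Actual ratios depend heavily on data type and retention policy, so a sizing exercise should treat the quoted range as an upper and lower bound rather than a guarantee.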

Chapter 3 Solution Technology Overview

This chapter presents the following topics:
Overview
Key components
Virtualization
Compute
Network
Storage
EMC Powered Backup
Other technologies

Overview

This solution uses the EMC VNXe series and VMware vSphere 5.5 to provide storage and server hardware consolidation in a private cloud. The new virtualized infrastructure is centrally managed, providing efficient deployment and management of a scalable number of virtual machines and associated shared storage. Figure 4 depicts the solution components.

Figure 4. Private cloud components

The following sections describe the components in more detail.

Key components

This section describes the key components of this solution.

Virtualization: The virtualization layer decouples the physical implementation of resources from the applications that use them. In other words, the application view of the available resources is no longer directly tied to the hardware. This enables many key features of the private cloud concept.

Compute: The compute layer provides memory and processing resources for the virtualization layer software, and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and enables the partner to implement the solution by using any server hardware that meets these requirements.

Network: The network layer connects the users of the private cloud to the resources in the cloud, and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables the customer to implement the solution by using any network hardware that meets these requirements.

Storage: The storage layer is critical for the implementation of the private cloud. With multiple hosts accessing shared data, many of the use cases defined in the private cloud can be implemented. The EMC VNXe storage family used in this solution provides high-performance data storage while maintaining high availability.

EMC Powered Backup: The backup and recovery components of the solution provide data protection when the data in the primary system is deleted, damaged, or unusable.

The Solution architecture section provides details on all the components that make up the reference architecture.

Virtualization

Overview: The virtualization layer is a key component of any server virtualization or private cloud solution. It decouples the application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the system to physically change without affecting the hosted applications. In a server virtualization or private cloud use case, it enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.

VMware vSphere 5.5: VMware vSphere 5.5 transforms the physical resources of a computer by virtualizing the CPU, RAM, hard disk, and network controller. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications, just like physical computers. The high-availability features of VMware vSphere 5.5, such as vMotion and Storage vMotion, enable seamless migration of virtual machines and stored files from one vSphere server to another, or from one data storage area to another, with minimal or no performance impact. Coupled with vSphere DRS and Storage DRS, virtual machines have access to the appropriate resources at any point in time through load balancing of compute and storage resources.

New VMware vSphere 5.5 features: VMware vSphere 5.5 includes an expansive list of new and improved features that enhance the performance, reliability, availability, and recovery of virtualized environments. Several of these features have a significant impact on VSPEX Private Cloud deployments, including:

Expanded maximum memory and CPU limits for ESXi hosts. Logical and virtual CPU counts have doubled in this version, as have non-uniform memory access (NUMA) node counts and maximum memory. This means host servers can support larger workloads.
62 TB Virtual Machine Disk (VMDK) file support, including Raw Device Mappings (RDMs). Datastores can hold more data from more virtual machines, which simplifies storage management and leverages larger-capacity NL-SAS drives.

Enhanced VAAI UNMAP support, including a new esxcli storage vmfs unmap command with multiple reclamation methods.

Enhanced Single Root I/O Virtualization (SR-IOV) support that simplifies configuration via workflows and surfaces more properties into the virtual functions.

16 Gb end-to-end support for FC environments.

Enhanced Link Aggregation Control Protocol (LACP) functions, offering additional hash algorithms and up to 64 link aggregation groups (LAGs).

vSphere Data Protection (VDP), which can now replicate backup data directly to EMC Avamar.

40 Gb Mellanox NIC support.

Virtual Machine File System (VMFS) heap improvements, which reduce memory requirements while allowing access to the full 64 TB VMFS address space.

VMware vCenter: VMware vCenter is a centralized management platform for the VMware virtual infrastructure. This platform provides administrators with a single interface, accessible from multiple devices, for all aspects of monitoring, managing, and maintaining the virtual infrastructure. VMware vCenter also manages some advanced features of the VMware virtual infrastructure, such as VMware vSphere High Availability (HA) and DRS, along with vMotion and Update Manager.

VMware vSphere High Availability: The VMware vSphere High Availability feature enables the virtualization layer to automatically restart virtual machines in various failure conditions. If the virtual machine operating system has an error, the virtual machine can automatically restart on the same hardware. If the physical hardware has an error, the impacted virtual machines can automatically restart on other servers in the cluster.

Note: To restart virtual machines on different hardware, the servers must have available resources. The Compute section provides detailed information about enabling this function.

With vSphere High Availability, you can configure policies that determine which machines restart automatically, and under what conditions these operations are attempted.

EMC Virtual Storage Integrator for VMware vSphere: EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in for the vSphere client. It provides a single management interface for EMC storage within the vSphere environment, and allows you to add and remove VSI features independently, which provides flexibility for customizing VSI user environments. Features are managed with the VSI Feature Manager. VSI provides a unified user experience that enables new features to be introduced rapidly in response to customer requirements.
Validation testing uses the following features:

Storage Viewer: Extends the vSphere client to help discover and identify EMC VNXe storage devices allocated to VMware vSphere hosts and virtual machines. Storage Viewer presents the underlying storage details to the virtual data center administrator, merging the data of several different storage mapping tools into a few seamless vSphere client views.

Unified Storage Management: Simplifies storage administration of the EMC VNX unified storage platform. It enables VMware administrators to provision VMFS datastores, RDM volumes, or network file system (NFS) datastores seamlessly within the vSphere client.

Refer to the EMC VSI for VMware vSphere product guides on EMC Online Support for more information.

VNXe support for VMware vSphere Storage API for Array Integration

Hardware acceleration with the VMware vSphere Storage API for Array Integration (VAAI) is a storage enhancement in vSphere 5.5 that enables vSphere to offload specific storage operations to compatible storage hardware such as the VNXe series platforms. With the assistance of the storage hardware, vSphere performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.

Compute

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX documents minimum requirements for the number of processor cores and the amount of RAM. The solution can be implemented with two servers or twenty, and still be considered the same VSPEX solution.

In the example shown in Figure 5, the compute layer requirements for a specific implementation are 25 processor cores and 200 GB of RAM. One customer might implement this with white-box servers containing 16 processor cores and 64 GB of RAM, while another customer might select a higher-end server with 20 processor cores and 144 GB of RAM.

Figure 5. Compute layer flexibility

The first customer needs four of the chosen servers, while the second needs two.

Note: To enable high availability at the compute layer, each customer needs one additional server to ensure that the system has enough capacity to maintain business operations when a server fails.

Use the following best practices in the compute layer:

Use several identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies that may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.
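The server counts above follow from simple ceiling arithmetic: each customer needs enough servers to cover both the total core count and the total RAM, whichever requires more. A minimal sketch of this sizing (the 25-core/200 GB figures come from the example above; the helper name is illustrative):

```python
import math

def servers_needed(req_cores, req_ram_gb, cores_per_server, ram_per_server_gb):
    """Return how many identical servers cover both the core and RAM requirements."""
    by_cores = math.ceil(req_cores / cores_per_server)
    by_ram = math.ceil(req_ram_gb / ram_per_server_gb)
    return max(by_cores, by_ram)

# Example from Figure 5: 25 processor cores and 200 GB of RAM
print(servers_needed(25, 200, 16, 64))   # white-box servers -> 4
print(servers_needed(25, 200, 20, 144))  # higher-end servers -> 2
```

Per the note above, each customer then adds one more server for N+1 high availability at the compute layer.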

If you implement high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This enables minimal-downtime upgrades and tolerance for single-unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be flexible enough to meet your specific needs. Ensure that there are sufficient processor cores and RAM per core to meet the needs of the target environment.

Network

The infrastructure network requires redundant network links for each vSphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth, and is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution. Figure 6 depicts an example of this highly available network topology.

Figure 6. Example of a highly available network design for block

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types, to improve throughput, manageability, application separation, high availability, and security.

For block, EMC unified storage platforms provide network high availability and redundancy by using two ports per storage processor. If a link is lost on a storage processor I/O port, the link fails over to another port. All network traffic is distributed across the active links.

For file, EMC unified storage platforms provide network high availability and redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single media access control (MAC) address, and potentially multiple IP addresses. In this solution, LACP is configured on the VNXe to combine multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.

Storage

Overview: The storage layer is also a key component of any cloud infrastructure solution, serving the data generated by applications and operating systems in the data center. In this VSPEX solution, EMC VNXe series arrays provide the features and performance needed to enable and enhance any virtualization environment, increasing storage efficiency and management flexibility while reducing total cost of ownership.

EMC VNXe series: The EMC VNXe series is optimized for virtual applications, and delivers industry-leading innovation and enterprise capabilities for file and block storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises.

Intel Xeon processors power the VNXe series for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security. The VNXe series is designed to meet the high-performance, high-scalability requirements of small and midsize enterprises. Table 1 shows the customer benefits provided by the VNXe series.

Table 1.
VNXe customer benefits

Feature: Next-generation unified storage, optimized for virtualized applications
Benefit: Tight integration with VMware allows for advanced array features and centralized management

Feature: Capacity optimization features, including compression, deduplication, thin provisioning, and application-consistent copies
Benefit: Reduced storage costs, more efficient use of resources, and easier recovery of applications

Feature: High availability, designed to deliver five 9s availability
Benefit: Higher levels of uptime and reduced outage risk

Feature: Automated tiering with FAST VP and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously
Benefit: More efficient use of storage resources without complicated planning and configuration

Feature: Simplified management with EMC Unisphere, a single management interface for all NAS and SAN needs
Benefit: Reduced management overhead and fewer toolsets required to manage the environment

Various software suites and packs are available for the VNXe series, providing multiple features for enhanced protection and performance. They include the following:

FAST Suite: Automatically optimizes for the highest system performance and the lowest storage cost simultaneously.

Security and Compliance Suite: Keeps data safe from changes, deletions, and malicious activity.

VNXe Virtual Provisioning

EMC VNXe Virtual Provisioning enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies reduce power and cooling requirements and capital expenditures.

Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes utilization by allocating storage only as needed. Thick LUNs provide high, predictable performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning.

Pools and pool LUNs are also the building blocks for advanced data services such as FAST VP and VNXe Snapshots. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and user-capacity threshold settings.

EMC VNXe Virtual Provisioning allows you to expand the capacity of a storage pool from the Unisphere GUI after disks are physically attached to the system. VNXe systems can rebalance allocated data elements across all member drives to use new drives after the pool is expanded. The rebalance function starts automatically and runs in the background after an expand action. Monitor the progress of a rebalance operation from the Jobs panel in Unisphere, as shown in Figure 7.

Figure 7. Storage pool rebalance progress

LUN expansion

Use pool LUN expansion to increase the capacity of existing LUNs, allowing larger capacity to be provisioned as business needs grow. The VNXe series can expand a pool LUN without disrupting user access. You can expand pool LUNs with a few simple clicks, and the expanded capacity is immediately available. However, you cannot expand a pool LUN if it is part of a data-protection or LUN-migration operation. For example, snapshot LUNs or migrating LUNs cannot be expanded. For more detailed information about pool LUN expansion, refer to Virtual Provisioning for the New VNX.

User alerting through capacity threshold settings

Configure proactive alerts when using file systems or storage pools based on thin pools. Monitor these resources so that storage is available to be provisioned when needed and capacity shortages are avoided. Figure 8 illustrates why provisioning with thin pools requires monitoring.

Figure 8. Thin LUN space utilization

Monitor the following values for thin pool utilization:

Total capacity is the total physical capacity available to all LUNs in the pool.

Total allocation is the total physical capacity currently assigned to all pool LUNs.

Subscribed capacity is the total host-reported capacity supported by the pool.

Over-subscribed capacity is the amount of user capacity configured for LUNs that exceeds the physical capacity in the pool.

Total allocation can never exceed the total capacity, but if it nears that point, add storage to the pool proactively before reaching the hard limit. Figure 9 shows the Storage Pool Utilization panel in Unisphere, which displays parameters such as Available Space, Used Space, Subscription, Alert Threshold, and Total Space.
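The relationships among these values can be expressed in a small monitoring sketch. This is an illustrative calculation only, not a VNXe API; the field names and the 70 percent alert threshold are assumptions:

```python
def pool_report(total_gb, allocated_gb, subscribed_gb, threshold_pct=70):
    """Summarize thin-pool utilization and flag when the alert threshold is crossed."""
    # Over-subscribed capacity: host-visible capacity promised beyond physical capacity
    oversubscribed_gb = max(0, subscribed_gb - total_gb)
    used_pct = 100 * allocated_gb / total_gb
    return {
        "used_pct": round(used_pct, 1),
        "oversubscribed_gb": oversubscribed_gb,
        "alert": used_pct >= threshold_pct,  # add drives before the pool fills
    }

# A 10 TB pool with 7.5 TB allocated and 14 TB subscribed to hosts
print(pool_report(10240, 7680, 14336))
# {'used_pct': 75.0, 'oversubscribed_gb': 4096, 'alert': True}
```

Here the alert fires at 75 percent utilization, leaving the administrator a buffer in which to expand the pool before thin LUN writes start failing.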

Figure 9. Examining storage pool space utilization

When storage pool capacity is exhausted, any requests for additional space allocation on thin-provisioned LUNs fail. Applications attempting to write data to these LUNs usually fail as well, and an outage is the likely result. To avoid this situation, monitor pool utilization and send an alert when thresholds are reached. Set the Percentage Full Threshold with enough buffer to allow remediation before an outage occurs. This alert is only active if there are one or more thin LUNs in the pool, because thin LUNs are the only way to oversubscribe a pool. If the pool contains only thick LUNs, the alert is not active, as there is no risk of running out of space due to oversubscription.

VNXe FAST Cache

VNXe FAST Cache enables flash drives to function as an expanded cache layer for the array. FAST Cache is an array-wide, non-disruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache, and subsequent reads and writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of highly active data to flash drives, dramatically improving response times for the active data and reducing data hot spots that can occur within a LUN. FAST Cache is an optional component of this solution.

VNXe FAST VP

VNXe FAST VP can automatically tier data across multiple types of drives to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing is part of a regularly scheduled maintenance operation.
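Conceptually, the FAST VP policy described above is a scheduled rebalance that ranks data slices by recent access frequency and maps the hottest slices to the fastest tier. The following is an illustration of that idea only, not VNXe's actual relocation algorithm; the tier names and slice capacities are assumptions:

```python
def rebalance(slices, tiers):
    """Place the hottest slices on the fastest tiers, within each tier's capacity.

    slices: {slice_id: access_count}
    tiers:  [(tier_name, slice_capacity), ...] ordered fastest first
    """
    ranked = sorted(slices, key=slices.get, reverse=True)  # hottest slices first
    placement, cursor = {}, 0
    for name, capacity in tiers:
        for slice_id in ranked[cursor:cursor + capacity]:
            placement[slice_id] = name
        cursor += capacity
    return placement

heat = {"s1": 900, "s2": 15, "s3": 430, "s4": 2}
print(rebalance(heat, [("flash", 1), ("sas", 2), ("nl-sas", 1)]))
# {'s1': 'flash', 's3': 'sas', 's2': 'sas', 's4': 'nl-sas'}
```

Run on a schedule, the hottest slice lands on flash while cold slices drift down to NL-SAS, which is the cost/performance trade the text describes.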
vCloud Networking and Security

VMware's vShield features have been integrated and enhanced in vCloud Networking and Security, which is part of the VMware vCloud Suite. VSPEX Private Cloud solutions with VMware vCloud Networking and Security enable customers to adopt virtualized networks that eliminate the rigidity and complexity of physical equipment, which creates artificial barriers to operating an optimized network architecture. Physical networking has not kept pace with the virtualization of the data center, and it limits the ability of businesses to rapidly deploy, move, scale, and protect applications and data according to business needs.

VSPEX with VMware vCloud Networking and Security solves these data center challenges by virtualizing networks and security to create efficient, agile, extensible logical constructs that meet the performance and scale requirements of virtualized

data centers. vCloud Networking and Security delivers software-defined networking and security with a broad range of services in a single solution, including a virtual firewall, virtual private network (VPN), load balancing, and VXLAN-extended networks. Management integration with VMware vCenter Server and VMware vCloud Director reduces the cost and complexity of data center operations and unlocks the operational efficiency and agility of private cloud computing.

Several VSPEX solutions for specific virtualized applications can also take advantage of vCloud Networking and Security features. For example, VSPEX offers specific solutions for Microsoft applications such as Exchange, SharePoint, and SQL Server. With VMware vCloud, these applications can have protection and isolation from risk. Administrators have greater visibility into virtual traffic flows, so they can enforce policies and implement compliance controls on in-scope systems by implementing logical grouping and virtual firewalls.

Administrators deploying virtual desktops with the VSPEX End User Computing with VMware vSphere and VMware Horizon View solution can also benefit from vCloud Networking and Security by creating logical security around individual virtual desktops or groups of them. This ensures that users of machines deployed on the VSPEX Proven Infrastructure can access only the applications and data they are authorized for, preventing broader access to the data center. vCloud also enables rapid diagnosis of traffic and potential trouble spots. Administrators can effectively create software-defined networks that scale and move virtual workloads within their VSPEX Proven Infrastructures without physical networking or security constraints, all of which can be streamlined via VMware vCenter and VMware vCloud Director integration.
VNXe file shares

In many environments, it is important to have a common location to store files accessed by many users. CIFS or NFS file shares, available from a file server, provide this ability. The VNXe series of storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency-improvement features.

ROBO

Organizations with remote offices and branch offices (ROBO) often prefer to locate data and applications close to the users in order to provide better performance and lower latency. In these environments, IT departments need to balance the benefits of local support with the need to maintain central control. Local systems and storage should be easy for local personnel to administer, but should also support remote management and flexible aggregation tools that minimize the demands on those local resources. With VSPEX, you can accelerate the deployment of applications at remote offices and branch offices. Customers can also use Unisphere Remote to consolidate the monitoring, system alerts, and reporting of hundreds of locations while maintaining operational simplicity and unified storage functionality for local managers.

EMC Powered Backup

Overview: EMC Powered Backup, another important component in this VSPEX solution, provides data protection by backing up data files or volumes on a defined schedule, and restoring data from backup for recovery after a disaster. EMC Powered Backup is a smart approach to backup: it consists of optimally integrated protection storage and software designed to meet backup and recovery objectives now and in the future. With EMC market-leading protection storage, deep data-source integration, and feature-rich data management services, you can deploy an open, modular protection storage architecture that allows you to scale resources while lowering cost and minimizing complexity.

EMC Avamar deduplication: EMC Avamar provides fast, efficient backup and recovery through a complete software and hardware solution. Equipped with integrated variable-length deduplication technology, Avamar facilitates fast, daily full backups for virtual environments, remote offices, enterprise applications, NAS servers, and desktops/laptops.

EMC Data Domain deduplication storage systems: EMC Data Domain deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication for backup and archive workloads.

VMware vSphere Data Protection: vSphere Data Protection (VDP) is a proven solution for backing up and restoring VMware virtual machines. VDP is based on EMC's award-winning Avamar product and has many integration points with vSphere 5.5, providing simple discovery of your virtual machines and efficient policy creation. One of the challenges that traditional backup systems have with virtual machines is the large amount of data that their files contain. VDP's use of a variable-length deduplication algorithm ensures that a minimum amount of disk space is used, and reduces ongoing backup storage growth.
Data is deduplicated across all virtual machines associated with the VDP virtual appliance. VDP uses the vSphere Storage APIs for Data Protection (VADP) and sends only the daily changed blocks of data, resulting in less data being sent over the network. VDP enables up to eight virtual machines to be backed up concurrently. Because VDP resides in a dedicated virtual appliance, all backup processing is offloaded from the production virtual machines.

VDP can alleviate the burden of restore requests on administrators by enabling end users to restore their own files with a web-based tool called vSphere Data Protection Restore Client. Users can browse their system's backups in an easy-to-use interface that provides search and version-control features, and can restore individual files or directories without any intervention from IT. This frees up valuable time and resources, and provides a better end-user experience.
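Variable-length deduplication, as used by Avamar and VDP, splits a data stream at content-defined boundaries rather than at fixed offsets, so identical content produces identical chunks wherever it appears. The sketch below illustrates the idea with the simplest possible boundary rule (cutting after a sentinel byte); real implementations such as Avamar use a rolling hash over a sliding window to choose boundaries, and the function names here are illustrative:

```python
import hashlib

def chunks(data: bytes, sentinel: int = ord(" ")):
    """Split data at content-defined boundaries (here, after each sentinel byte)."""
    start = 0
    for i, b in enumerate(data):
        if b == sentinel:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]  # trailing remainder, if any

def dedup_store(data: bytes):
    """Index unique chunks by digest; repeated content adds nothing new."""
    store = {}
    for c in chunks(data):
        store.setdefault(hashlib.sha256(c).hexdigest(), c)
    return store

payload = b"the quick brown fox " * 200          # 4,000 bytes of repetitive data
unique = dedup_store(payload)
print(len(payload), sum(len(c) for c in unique.values()))  # 4000 20
```

Because chunk identity is content-based, identical blocks arriving from many virtual machines deduplicate against one another, which is why only changed blocks need to cross the network.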

For backup and recovery options, refer to the following documents:

EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide

EMC Backup and Recovery Options for VSPEX Private Clouds

vSphere Replication

vSphere Replication is a feature of the vSphere 5.5 platform that provides business continuity. vSphere Replication copies a virtual machine defined in your VSPEX infrastructure to a second VSPEX instance, or within the clustered servers of a single VSPEX system. vSphere Replication protects the virtual machine on an ongoing basis, replicating changes to the copied virtual machine. This ensures that the virtual machine remains protected and is available for recovery without requiring restoration from backup.

Replicated application virtual machines are defined in VSPEX to ensure application-consistent data with a single click when replication is set up. Administrators who manage virtualized Microsoft applications running on VSPEX can use the automatic integration of vSphere Replication with Microsoft's Volume Shadow Copy Service (VSS) to ensure that applications such as Microsoft Exchange or Microsoft SQL Server databases are quiescent and consistent when replica data is generated. A quick call to the virtual machine's VSS layer flushes the database writers for an instant, ensuring that the replicated data is static and fully recoverable. This automated approach simplifies management and increases the efficiency of your VSPEX-based virtual environment.

EMC RecoverPoint

EMC RecoverPoint is an enterprise-scale solution that protects application data on heterogeneous SAN-attached servers and storage arrays. RecoverPoint runs on a dedicated appliance (RPA) and combines industry-leading continuous data protection technology with a bandwidth-efficient, no-data-loss replication technology.
This technology enables RPAs to protect data locally (continuous data protection, CDP), remotely (continuous remote replication, CRR), or both (concurrent local and remote replication, CLR), offering the following advantages:

RecoverPoint CDP replicates data within the same site, or to a local bunker site some distance away, and transfers the data via Fibre Channel (FC).

RecoverPoint CRR uses either FC or an existing IP network to send the data snapshots to the remote site, using techniques that preserve write order.

In a CLR configuration, RecoverPoint replicates to both a local and a remote site simultaneously.

RecoverPoint uses lightweight splitting technology to mirror application writes to the RecoverPoint cluster, and supports the following write-splitter types:

Array-based

Intelligent fabric-based

Host-based

Other technologies

Overview: In addition to the required technical components of EMC VSPEX solutions, other items may provide additional value depending on the specific use case. These include, but are not limited to, the following technologies.

VMware vCloud Automation Center: VMware vCloud Automation Center, part of vCloud Suite Enterprise, orchestrates the provisioning of software-defined data center services as complete virtual data centers that are ready for consumption in a matter of minutes. vCloud Automation Center is a software solution that enables customers to build secure, private clouds by pooling infrastructure resources from VSPEX into virtual data centers and exposing them to users through web-based portals and programmatic interfaces as fully automated, catalog-based services. VMware vCloud Automation Center uses pools of resources abstracted from the underlying physical, virtual, and cloud-based resources to automate the deployment of virtual resources when and where required.

VSPEX with vCloud Automation Center enables customers to build complete virtual data centers that deliver computing, networking, storage, security, and the complete set of services necessary to make workloads operational in minutes. Software-defined data center services and virtual data centers fundamentally simplify infrastructure provisioning, and enable IT to move at the speed of business.

VMware vCloud Automation Center integrates with existing or new VSPEX Private Cloud with VMware vSphere 5.5 deployments, and supports existing and future applications by providing elastic, standard storage and networking interfaces, such as Layer 2 connectivity and broadcasting between virtual machines. VMware vCloud Automation Center uses open standards to preserve deployment flexibility and pave the way to the hybrid cloud.
The key features of VMware vcloud Automation Center include:
- Self-service provisioning
- Life-cycle management
- Unified cloud management
- Multi-virtual machine blueprints
- Context-aware, policy-based governance
- Intelligent resource management

All VSPEX Proven Infrastructures can use vcloud Automation Center to orchestrate deployment of virtual data centers based on single-VSPEX or multi-VSPEX deployments. These infrastructures enable simple and efficient deployment of virtual machines, applications, and virtual networks.

VMware vcenter Operations Management Suite

The VMware vcenter Operations Manager Suite provides unparalleled visibility into VSPEX virtual environments. The suite collects and analyzes data, correlates abnormalities, identifies the root cause of performance problems, and provides administrators with the information needed to optimize and tune their VSPEX virtual infrastructures. vcenter Operations Manager provides an automated approach to optimizing your VSPEX-powered virtual environment by delivering self-learning analytic tools that are integrated to provide better performance, capacity usage, and configuration management.

The Operations Manager Suite delivers a comprehensive set of management capabilities, including:
- Performance
- Capacity
- Adaptability
- Configuration and compliance management
- Application discovery and monitoring
- Cost metering

The VMware vcenter Operations Manager Suite includes five components:
- VMware vcenter Operations Manager
- VMware vcenter Configuration Manager
- VMware vcenter Hyperic
- VMware vcenter Infrastructure Navigator
- VMware vcenter Chargeback Manager

vcenter Operations Manager is the foundation of the suite and provides the operational dashboard interface that makes visualizing issues in your VSPEX virtual environment simple. vcenter Configuration Manager helps to automate configuration and compliance of physical, virtual, and cloud environments, which ensures security and configuration consistency across the ecosystem. vcenter Hyperic monitors physical hardware resources, operating systems, middleware, and applications that you may have deployed on VSPEX. vcenter Infrastructure Navigator provides visibility into the application services running over the virtual-machine infrastructure and their interrelationships for day-to-day operational management. vcenter Chargeback Manager enables accurate cost measurement, analysis, and reporting of virtual machines.
It provides visibility into the cost of the virtual infrastructure that you have defined on VSPEX as being required to support business services.

VMware vcenter Single Sign-On

With the introduction of VMware vcenter Single Sign-On (SSO) in VMware vsphere 5.5, administrators now have a deeper level of authentication services available for managing their VSPEX Proven Infrastructures. Authentication by vcenter SSO makes the VMware cloud infrastructure platform more secure. This function allows the vsphere software components to communicate with each other through a secure token exchange mechanism, instead of requiring each component to authenticate a user separately with a directory service such as Active Directory.

When users log in to the vsphere Web Client with user names and passwords, the vcenter SSO server receives their credentials. The credentials are then authenticated against the back-end identity source(s) and exchanged for a security token, which is returned to the client to access the solutions within the environment. SSO translates into time and cost savings which, when factored across the entire organization, can result in savings and streamlined workflows.

With vsphere 5.5, users have a unified view of their entire vcenter Server environment because multiple vcenter Server instances and their inventories are now displayed. This does not require Linked Mode unless users share roles, permissions, and licenses among vsphere 5.x vcenter Server instances. Administrators can now deploy multiple solutions within an environment with true single sign-on that creates trust between solutions without requiring authentication every time a user accesses a solution.

VSPEX Private Cloud with VMware vsphere 5.5 is simple, efficient, and flexible. VMware SSO makes authentication simpler, workers can be more efficient, and administrators have the flexibility to make SSO servers local or global.

Public-key infrastructure

The ability to secure data and ensure the identity of devices and users is critical in today's enterprise IT environment. This is particularly true in regulated sectors such as healthcare, financial services, and government.
VSPEX solutions can offer hardened computing platforms in many ways, most commonly by implementing a public-key infrastructure (PKI). The VSPEX solutions can be engineered with a PKI solution designed to meet the security criteria of your organization, and the solution can be implemented via a modular process, where layers of security are added as needed. The general process involves first implementing a PKI infrastructure by replacing generic self-signed certificates with trusted certificates from a third-party certificate authority. Services that support PKI can then be enabled using the trusted certificates to ensure a high degree of authentication and encryption where supported. Depending on the scope of PKI services needed, it may become necessary to implement a PKI infrastructure dedicated to those needs. Many third-party tools offer these services, including end-to-end solutions from RSA that can be deployed within a VSPEX environment. For additional information, visit the RSA website.

PowerPath/VE (for block)

EMC PowerPath/VE for VMware vsphere 5.5 is a module that provides multipathing extensions for vsphere and works in combination with SAN storage to intelligently manage FC, iscsi, and Fibre Channel over Ethernet (FCoE) I/O paths.

PowerPath/VE is installed on the vsphere host and scales to the maximum number of virtual machines on the host, improving I/O performance. The virtual machines do not have PowerPath/VE installed, nor are they aware that PowerPath/VE is managing I/O to storage. PowerPath/VE dynamically balances I/O load requests and automatically detects and recovers from path failures.

EMC XtremCache

EMC XtremCache is a server flash caching solution that reduces latency and increases throughput to improve application performance by using intelligent caching software and PCIe flash technology.

Server-side flash caching for maximum speed

XtremCache performs the following functions to improve system performance:
- Caches the most frequently referenced data on the server-based PCIe card to put the data closer to the application.
- Automatically adapts to changing workloads by determining the most frequently referenced data and promoting it to the server flash card. This means that the most active data automatically resides on the PCIe card in the server for faster access.
- Offloads the read traffic from the storage array, which allocates greater processing power to other applications. While one application accelerates with XtremCache, the array performance for other applications remains the same or is slightly enhanced.

Write-through caching to the array for total protection

XtremCache accelerates reads and protects data by using a write-through cache to the storage array to deliver persistent high availability, integrity, and disaster recovery.

Application agnostic

XtremCache is transparent to applications; there is no need to rewrite, retest, or recertify to deploy XtremCache in the environment.

Integration with vsphere 5.5

XtremCache enhances both virtualized and physical environments. Integration with the VSI plug-in for VMware vsphere 5.5 simplifies the management and monitoring of XtremCache.
Minimal impact on system resources

Unlike other caching solutions on the market, XtremCache does not consume a significant amount of memory or CPU cycles: all flash and wear-leveling management is done on the PCIe card without using server resources, so there is no significant overhead on the server from using XtremCache.

XtremCache creates the most efficient and intelligent I/O path from the application to the datastore, which results in an infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments.

XtremCache active/passive clustering support

The configuration of XtremCache clustering scripts ensures that stale data is never retrieved. The scripts use cluster management events to trigger a mechanism that purges the cache. The XtremCache-enabled active/passive cluster ensures data integrity and accelerates application performance.

XtremCache performance considerations

XtremCache performance considerations are:
- On a write request, XtremCache first writes to the array, then to the cache, and then completes the application I/O.
- On a read request, XtremCache satisfies the request with cached data or, when the data is not present, retrieves the data from the array, writes it to the cache, and then returns it to the application.
- The trip to the array can be on the order of milliseconds; therefore, the array limits how fast the cache can work. As the number of writes increases, XtremCache performance decreases.
- XtremCache is most effective for workloads with a 70 percent or greater read/write ratio, and with small, random I/O (8 KB is ideal). I/O greater than 128 KB is not cached in XtremCache 1.5.

Note: For more information, refer to the white paper titled Introduction to XtremCache.
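The write-through behavior in the considerations above (writes go to the array first, then the cache; reads are served from the cache or promoted on a miss) can be sketched in a few lines. This is an illustration of the general write-through pattern only, not EMC's XtremCache implementation; the dict-backed "array" and the hit/miss counters are stand-ins added for the example.

```python
class WriteThroughCache:
    """Minimal sketch of a write-through read cache: the array is always
    written first, so the cache never holds dirty data (illustrative only)."""

    def __init__(self, array_store: dict):
        self.array = array_store   # stand-in for the back-end storage array
        self.cache = {}            # stand-in for the server-side PCIe flash
        self.hits = 0
        self.misses = 0

    def write(self, block: int, data: bytes) -> None:
        # Write-through: persist to the array first, then update the cache,
        # then the application I/O completes.
        self.array[block] = data
        self.cache[block] = data

    def read(self, block: int) -> bytes:
        if block in self.cache:
            self.hits += 1
            return self.cache[block]
        # Cache miss: fetch from the array and promote into the cache.
        self.misses += 1
        data = self.array[block]
        self.cache[block] = data
        return data
```

Because every write lands on the array before the I/O completes, a failed server or flash card loses no data, which is the "total protection" property described above; the cost is that write latency is bounded by the array, matching the performance considerations listed.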

Chapter 4 Solution Architecture Overview

This chapter presents the following topics:
- Overview
- Solution architecture
- Server configuration guidelines
- Network configuration guidelines
- Storage configuration guidelines
- High availability and failover
- Validation test profile
- Backup and recovery configuration guidelines
- Sizing guidelines
- Reference workload
- Applying the reference workload
- Implementing the solution
- Quick assessment

Overview

This chapter provides a comprehensive guide to the major aspects of this solution. Server capacity is presented in generic terms for required minimums of CPU, memory, and network resources; you can select server and networking hardware that meets or exceeds the stated minimums. The specified storage architecture, along with a system that meets the server and network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your private cloud deployment.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines validated by EMC. Each virtual machine has its own set of requirements that rarely fit a pre-defined idea of a virtual machine. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

Solution architecture

Overview

The VSPEX Private Cloud solution for VMware vsphere with EMC VNXe validates the configuration for up to 125 virtual machines.

Note: VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This document describes the process in Applying the reference workload.

Logical architecture

The architecture diagrams in this section show the layout of the major components in the solution. Storage for block-based and file-based systems is shown in the following diagrams.

Figure 10 characterizes the infrastructure validated with block-based storage, where an 8 Gb FC or 10 Gb iSCSI SAN carries storage traffic, and 10 GbE carries management and application traffic.

Figure 10. Logical architecture for block storage

Figure 11 characterizes the infrastructure validated with file-based storage, where 10 GbE carries storage traffic and all other traffic.

Figure 11. Logical architecture for file storage

Key components

This architecture includes the following key components:
- VMware vsphere 5.5: Provides a common virtualization layer to host a server environment. The specifics of the validated environment are listed in Table 2. vsphere 5.5 provides highly available infrastructure through features such as:
  - vmotion: Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption.
  - Storage vmotion: Provides live migration of virtual machine disk files within and across storage arrays, with no virtual machine downtime or service disruption.
  - vsphere High Availability (HA): Detects and provides rapid recovery for a failed virtual machine in a cluster.
  - Distributed Resource Scheduler (DRS): Provides load balancing of computing capacity in a cluster.
  - Storage Distributed Resource Scheduler (SDRS): Provides load balancing across multiple datastores based on space usage and I/O latency.
- VMware vcenter Server: Provides a scalable and extensible platform that forms the foundation for virtualization management for the VMware vsphere cluster. vcenter manages all vsphere hosts and their virtual machines.
- Microsoft SQL Server: VMware vcenter Server requires a database service to store configuration and monitoring details. This solution uses a Microsoft SQL Server 2012 database.

- DNS server: Use DNS services for the various solution components to perform name resolution. This solution uses the Microsoft DNS Service running on Windows Server 2012 R2.
- Active Directory server: Various solution components require Active Directory services to function properly. The Microsoft AD Service runs on a Windows Server 2012 server.
- Shared infrastructure: Add DNS and authentication/authorization services, such as AD Service, with existing infrastructure or set them up as part of the new virtual infrastructure.
- IP network: A standard Ethernet network carries all network traffic with redundant cabling and switching. A shared IP network carries user and management traffic.
- Storage network: The storage network is an isolated network that provides hosts with access to the storage array. VSPEX offers different options for block-based and file-based storage.
  - Storage network for block: This solution provides two options for block-based storage networks.
    - Fibre Channel (FC): A set of standards that define protocols for performing high-speed serial data transfer. FC provides a standard data transport frame among servers and shared storage devices.
    - 10 Gb Ethernet (iscsi): Enables the transport of SCSI blocks over a TCP/IP network. iscsi works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network.
  - Storage network for file: With file-based storage, a private, non-routable 10 GbE subnet carries the storage traffic.
- VNXe storage array: The VSPEX private cloud configuration begins with the VNXe series storage arrays, including the EMC VNXe3200 array, which provides storage to vsphere hosts for up to 125 virtual machines.

VNXe series storage arrays include the following components:
- Storage processors (SPs): Support block data with EMC UltraFlex I/O technology that supports FC and iscsi. The SPs provide access for all external hosts, and for the file side of the VNXe array.
- Standby power supply (SPS): Is 1U in size and provides enough power to each SP to ensure that any data in flight de-stages to the vault area in the event of a power failure. This ensures that no writes are lost. On restart of the array, the pending writes are reconciled and persisted.
- Disk array enclosures (DAEs): House the drives used in the array.

Hardware resources

Table 2 lists the hardware used in this solution.

Table 2. Solution hardware

VMware vsphere servers:
- CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core*. For 125 virtual machines: 125 vCPUs, minimum of 32 physical CPUs.
- Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per VMware vsphere host. For 125 virtual machines: minimum of 250 GB RAM, plus 2 GB for each physical server.
- Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server.
- Network (file): 4 x 10 GbE NICs per server.

Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vsphere High Availability (HA) functionality and to meet the listed minimums.

Network infrastructure (minimum switching capacity):
- Block: 2 physical switches; 2 x 10 GbE ports per VMware vsphere server; 1 x 1 GbE port per storage processor for management; 2 ports per VMware vsphere server for the storage network; 2 ports per SP for storage data.
- File: 2 physical switches; 4 x 10 GbE ports per VMware vsphere server; 1 x 1 GbE port per storage processor for management; 2 x 10 GbE ports per storage processor for data.

EMC backup:
- Avamar: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.
- Data Domain: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

EMC VNXe series storage array:
- Block (common): 1 x 1 GbE interface per storage processor for management; 2 front-end ports per storage processor; system disks for the VNXe operating environment (OE). For 125 virtual machines: EMC VNXe3200 with 40 x 600 GB 10k rpm 2.5-inch SAS drives, plus 2 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares.
- File (common): 2 x 10 GbE interfaces per storage processor; 1 x 1 GbE interface per storage processor for management; system disks for the VNXe OE. For 125 virtual machines: EMC VNXe3200 with 40 x 600 GB 10k rpm 2.5-inch SAS drives, plus 2 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares.

Shared infrastructure: In most cases, a customer environment already has infrastructure services such as AD and DNS services configured. The setup of these services is beyond the scope of this document. If implemented without existing infrastructure, the minimum requirements are:
- 2 physical servers
- 16 GB RAM per server
- 4 processor cores per server
- 2 x 1 GbE ports per server

Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

* For Ivy Bridge or later processors, use 8 vCPUs per physical core.

Note: The solution recommends using a 10 GbE network, or an equivalent 1 GbE network infrastructure, as long as the underlying requirements around bandwidth and redundancy are fulfilled.
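The CPU and memory minimums in Table 2 follow simple per-VM rules (1 vCPU and 2 GB RAM per reference virtual machine, a 4:1 vCPU:pCPU ratio, and a 2 GB reservation per vsphere host), so they can be recomputed for other scale points. The sketch below applies those rules; the function name and the single-host default for the per-host RAM reservation are assumptions for illustration, not part of the VSPEX specification.

```python
import math

def compute_layer_sizing(virtual_machines: int,
                         vcpu_per_core: int = 4,
                         vcpus_per_vm: int = 1,
                         ram_per_vm_gb: int = 2,
                         hosts: int = 1) -> dict:
    """Apply the per-VM compute rules from Table 2: vCPU count, physical
    cores at the given consolidation ratio, and RAM including the 2 GB
    reservation per vsphere host."""
    vcpus = virtual_machines * vcpus_per_vm
    cores = math.ceil(vcpus / vcpu_per_core)
    ram = virtual_machines * ram_per_vm_gb + hosts * 2
    return {"vcpus": vcpus, "physical_cores": cores, "ram_gb": ram}

# 125 reference VMs at the default 4:1 ratio -> 125 vCPUs, 32 cores
print(compute_layer_sizing(125))
# Ivy Bridge or later (8:1 per the footnote) halves the core count
print(compute_layer_sizing(125, vcpu_per_core=8))
```

Running it for the validated scale point reproduces the table's figures: 125 vCPUs and a minimum of 32 physical cores, with 250 GB of VM RAM plus the per-host reservation.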

Software resources

Table 3 lists the software used in this solution.

Table 3. Solution software

- VMware vsphere 5.5:
  - vsphere Server: Enterprise Edition
  - vcenter Server: Standard Edition
  - Operating system for vcenter Server: Windows Server 2012 R2 Standard Edition (Note: any operating system that is supported for vcenter can be used)
- Microsoft SQL Server: Version 2012 R2 Standard Edition (Note: any supported database for vcenter can be used)
- EMC VNXe: VNXe OE 3.0
- EMC VSI for VMware vsphere: Unified Storage Management: use latest version
- EMC VSI for VMware vsphere: Storage Viewer: use latest version
- EMC PowerPath/VE: use latest version
- EMC backup:
  - Avamar: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.
  - Data Domain OS: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.
- Virtual machines (used for validation, not required for deployment):
  - Base operating system: Microsoft Windows Server 2012 R2 Datacenter Edition

Server configuration guidelines

Overview

When designing and ordering the compute/server layer of this VSPEX solution, several factors may impact the final purchase. From a virtualization perspective, if a system workload is well understood, features such as memory ballooning and transparent page sharing can reduce the aggregate memory requirement. If the virtual machine pool does not have a high level of peak or concurrent usage, reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, increase the number of CPUs and the amount of memory purchased.

Ivy Bridge updates

Testing on Intel's Ivy Bridge series processors has shown significant increases in virtual machine density from the server resource perspective. If your server deployment comprises Ivy Bridge processors, we recommend increasing the vCPU:pCPU ratio from 4:1 to 8:1. This essentially halves the number of server cores required to host the reference virtual machines. Figure 12 demonstrates results from tested configurations.

Figure 12. Ivy Bridge processor guidance

Current VSPEX sizing guidelines specify a virtual CPU core to physical CPU core ratio of 4:1 (8:1 for Ivy Bridge or later processors). This ratio was based upon an average sampling of CPU technologies available at the time of testing. As CPU technologies advance, OEM server vendors that are VSPEX partners may suggest differing (normally higher) ratios. Follow the updated guidance supplied by your OEM server vendor.

Table 4 lists the hardware resources used for the compute layer.

Table 4. Hardware resources for the compute layer

VMware vsphere servers:
- CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core. For 125 virtual machines: 125 vCPUs, minimum of 32 physical CPUs.
- Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per VMware vsphere host. For 125 virtual machines: minimum of 250 GB RAM, plus 2 GB for each physical server.
- Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server.
- Network (file): 4 x 10 GbE NICs per server.

Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vsphere High Availability (HA) functionality and to meet the listed minimums.

Note: The solution recommends using a 10 GbE network, or an equivalent 1 GbE network infrastructure, as long as the underlying requirements around bandwidth and redundancy are fulfilled.

VMware vsphere memory virtualization for VSPEX

VMware vsphere 5.5 has a number of advanced features that help maximize performance and overall resource utilization. The most important of these are in the area of memory management. This section describes some of these features and the items to consider when using them in the environment.

In general, virtual machines on a single hypervisor consume memory as a pool of resources, as shown in Figure 13.

Figure 13. Hypervisor memory consumption

Understanding the technologies in this section makes it easier to understand this basic concept.

Memory compression

Memory over-commitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vsphere host. Using sophisticated techniques such as ballooning and transparent page sharing, vsphere 5.5 can handle memory over-commitment without any performance degradation. However, if memory usage exceeds server capacity, vsphere might resort to swapping out portions of the memory of a virtual machine.

Non-Uniform Memory Access (NUMA)

vsphere 5.5 uses a NUMA load balancer to assign a home node to a virtual machine. Because the home node allocates virtual machine memory, memory access is local and provides the best possible performance. Applications that do not directly support NUMA also benefit from this feature.

Transparent page sharing

Virtual machines running similar operating systems and applications typically have similar sets of memory content. Page sharing enables the hypervisor to reclaim any redundant copies of memory pages and keep only one copy, which reduces total host memory consumption. If most of your application virtual machines run the same operating system and application binaries, total memory usage can be reduced, increasing consolidation ratios.

Memory ballooning

By using a balloon driver loaded in the guest operating system, the hypervisor can reclaim host physical memory if memory resources are under contention, with little or no impact to application performance.

Memory configuration guidelines

This section provides guidelines for allocating memory to virtual machines. These guidelines take into account vsphere memory overhead and the virtual machine memory settings.

vsphere memory overhead

Some overhead is required for the virtualization of memory resources. The memory space overhead has two components:
- The fixed system overhead for the VMkernel
- Additional overhead for each virtual machine

Memory overhead depends on the number of virtual CPUs and the configured memory for the guest operating system.

Allocating memory to virtual machines

Many factors determine the proper sizing for virtual machine memory in VSPEX architectures. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments for optimal results.
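As a rough planning aid, the two overhead components described above can be combined with the per-VM configured memory to estimate total host memory. The overhead figures used here are placeholder assumptions for illustration, not published VMware values; actual per-VM overhead depends on the vCPU count and configured guest memory, so consult the vsphere documentation when sizing a real host.

```python
def host_memory_required_mb(vm_memories_mb,
                            per_vm_overhead_mb=150,
                            vmkernel_overhead_mb=1024):
    """Estimate host memory as: sum of configured VM memory, plus a
    per-VM overhead, plus the fixed VMkernel overhead. The two overhead
    values are illustrative placeholders, not VMware-published figures."""
    vm_total = sum(vm_memories_mb)
    overhead = len(vm_memories_mb) * per_vm_overhead_mb + vmkernel_overhead_mb
    return vm_total + overhead

# Ten 2 GB reference virtual machines on one host
print(host_memory_required_mb([2048] * 10))
```

The point of the sketch is the structure of the calculation, not the constants: a baseline configuration should still be tested and adjusted as the section above recommends.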

Network configuration guidelines

Overview

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines consider jumbo frames, VLANs, and LACP on EMC unified storage. For detailed network resource requirements, refer to Table 5.

Table 5. Hardware resources for the network layer

Network infrastructure (minimum switching capacity):
- Block (iscsi): 2 physical LAN switches; 2 x 10 GbE ports per VMware vsphere server; 1 x 1 GbE port per storage processor for management.
- Block (FC): 2 physical LAN switches and 2 physical SAN switches; 2 FC ports per VMware vsphere server; 1 x 1 GbE port per storage processor for management.
- File: 2 physical switches; 4 x 10 GbE ports per VMware vsphere server; 1 x 1 GbE port per storage processor for management; 2 x 10 GbE ports per storage processor for data.

Note: The solution may use a 1 GbE network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.

VLANs

Isolate network traffic so that the traffic between hosts and storage, hosts and clients, and management traffic all move over isolated networks. In some cases physical isolation may be required for regulatory or policy compliance reasons, but in many cases logical isolation with VLANs is sufficient. This solution uses a minimum of three VLANs for:
- Client access
- Storage (for iscsi, NFS, and vmotion)
- Management

Figure 14 depicts the VLANs and the network connectivity requirements for a block-based VNXe array.

Figure 14. Required networks for block storage

Figure 15 depicts the VLANs for file and the network connectivity requirements for a file-based VNXe array.

Figure 15. Required networks for file storage

Note: Figure 15 demonstrates the network connectivity requirements for a VNXe array using 10 GbE connections. Create a similar topology for 1 GbE network connections.

The client access network is for users of the system, or clients, to communicate with the infrastructure. The storage network provides communication between the compute layer and the storage layer. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts.

Note: Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. Implement these additional networks if necessary.

Enable jumbo frames (for iscsi and NFS)

This solution recommends setting the MTU to 9,000 (jumbo frames) for efficient storage and migration traffic. Refer to the switch vendor guidelines to enable jumbo frames on switch ports for storage and host ports on the switches.

Link aggregation (for NFS)

A link aggregation resembles an Ethernet channel but uses the IEEE 802.3ad Link Aggregation Control Protocol (LACP) standard, which supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on the VNXe, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.

Storage configuration guidelines

Overview

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance.

VMware vsphere 5.5 allows more than one method of storage when hosting virtual machines. The tested solutions use different block protocols (FC/iSCSI) and NFS (for file), and the storage layout described in this section adheres to all current best practices. A customer or architect with the necessary training and background can make modifications based on their understanding of the system usage and load if required. However, the building blocks described in this document ensure acceptable performance. The VSPEX storage building blocks section documents specific recommendations for customization.

Table 6 lists the hardware resources used for storage.

Table 6. Hardware resources for the storage layer

EMC VNXe series storage array:
- Block (common): 1 x 1 GbE interface per storage processor for management; 2 front-end ports per storage processor; system disks for the VNXe3200 OE. For 125 virtual machines: EMC VNXe3200 with 40 x 600 GB 10k rpm 2.5-inch SAS drives, plus 2 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares.
- File (common): 2 x 10 GbE interfaces per storage processor; 1 x 1 GbE interface per storage processor for management; system disks for the VNXe3200 OE. For 125 virtual machines: EMC VNXe3200 with 40 x 600 GB 10k rpm 2.5-inch SAS drives, plus 2 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares.

VMware vSphere storage virtualization for VSPEX

VMware ESXi provides host-level storage virtualization: it virtualizes the physical storage and presents the virtualized storage to the virtual machines. A virtual machine stores its operating system and all other files related to its activities in a virtual disk, which itself consists of one or more files. VMware uses a virtual SCSI controller to present virtual disks to the guest operating system running inside the virtual machine.

Virtual disks reside on a datastore. Depending on the protocol used, a datastore can be either a VMware VMFS datastore or an NFS datastore. An additional option, raw device mapping (RDM), allows the virtual infrastructure to connect a physical device directly to a virtual machine.

Figure 16. VMware virtual disk types

VMFS

VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local or networked storage.

Raw device mapping (RDM)

VMware also provides RDM, which allows a virtual machine to directly access a volume on the physical storage. Use RDM only with FC or iSCSI.

NFS

VMware supports using NFS from an external NAS storage system or device as a virtual machine datastore.

VSPEX storage building blocks

Sizing the storage system to meet virtual server IOPS is a complicated process. When I/O reaches the storage array, several components serve that I/O, including the storage processors (SPs), the back-end dynamic random access memory (DRAM) cache, FAST Cache or FAST VP (if used), and the disks. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications.

VSPEX uses a building block approach to reduce complexity. A building block is a set of disk spindles that can support a certain number of virtual servers in the VSPEX architecture. Each building block combines several disk spindles to create a storage pool that supports the needs of the private cloud environment.

VSPEX solutions are engineered to provide a variety of sizing configurations that afford flexibility when designing the solution. Customers can start by deploying smaller configurations and scale up as their needs grow. At the same time, customers can avoid over-purchasing by choosing a configuration that closely meets their needs. To accomplish this, VSPEX solutions can be deployed using one or both of the scale points below to obtain the ideal configuration, all while guaranteeing a given performance level.

Building block for 15 virtual servers

The first building block can contain up to 15 virtual servers, with five SAS drives in a storage pool, as shown in Figure 17.

Figure 17. Storage layout building block for 15 virtual machines

This is the smallest building block qualified for the VSPEX architecture. It can be expanded by adding five SAS drives and allowing the pool to restripe, adding support for 15 more virtual servers.

Building block for 125 virtual servers

The second building block can contain up to 125 virtual servers. It contains 40 SAS drives, as shown in Figure 18. This figure also shows the four drives required for the VNXe operating system. The preceding sections outline an approach to grow from 15 virtual machines to 125 virtual machines in a pool.

Figure 18. Storage layout building block for 125 virtual machines

Implement this building block with all of the resources in the pool initially, or expand the pool over time as the environment grows. Table 7 lists the SAS drive requirements in a pool for different numbers of virtual servers.

Table 7. Number of disks required for different numbers of virtual machines

Virtual servers    SAS drives
15                 5
30                 10
45                 15
60                 20
75                 25
90                 30
105                35
125                40

Note: Due to increased efficiency with larger stripes, the building block with 40 SAS drives can support up to 125 virtual servers.

VSPEX private cloud validated maximums

VSPEX private cloud configurations are validated on the VNXe3200 platform. Each platform has different capabilities in terms of processors, memory, and disks. For each array, there is a recommended maximum VSPEX private cloud configuration. In addition to the VSPEX private cloud building blocks, each storage array must contain the drives used for the VNXe Operating Environment (OE) and hot spare disks for the environment.

Note: Allocate at least one hot spare for every 30 disks of a given type and size.

VNXe3200

The VNXe3200 is validated for up to 125 virtual servers. Figure 19 shows a typical configuration.

Figure 19. Storage layout for 125 virtual machines using VNXe3200

This configuration uses the following storage layout:
- Forty 600 GB SAS drives are allocated to one block-based storage pool for 125 virtual machines.

Note: To meet the load recommendations, all drives in the storage pool must be 10k rpm and the same size.

- Two 600 GB SAS drives are configured as hot spares.
- For block, allocate at least two LUNs to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.
- For file, allocate at least two NFS shares to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.
- Optionally, configure flash drives as FAST Cache (up to 200 GB) in the array. LUNs or storage pools hosting virtual machines with higher-than-average I/O requirements can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite.

Using this configuration, the VNXe3200 can support 125 virtual servers as defined in Reference workload.

Conclusion

The scale levels listed in Figure 20 highlight the entry points and supported maximum values for the arrays in the VSPEX private cloud environment. The entry points represent optimal model demarcations in terms of the number of virtual machines within the environment. This helps you determine which VNXe array to choose based upon your requirements. You can choose to configure any of the listed arrays with fewer virtual machines than the supported maximum by using the building block approach described earlier.

Figure 20. Maximum scale levels and entry points of different arrays
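The building-block arithmetic above (five SAS drives per 15 virtual servers, with the validated 40-drive pool covering the full 125, plus one hot spare per 30 drives) can be sketched as a small sizing helper. This is an illustrative sketch of the guidance in this chapter, not an EMC tool; the function names and structure are assumptions.

```python
import math

DRIVES_PER_BLOCK = 5    # SAS drives per building block
VMS_PER_BLOCK = 15      # virtual servers per building block
VALIDATED_MAX = 125     # VNXe3200 validated maximum
MAX_POOL_DRIVES = 40    # the validated 40-drive pool covers all 125 VMs

def pool_drives(virtual_servers):
    """SAS drives needed in the storage pool for a given VM count."""
    if not 1 <= virtual_servers <= VALIDATED_MAX:
        raise ValueError("VNXe3200 is validated for 1-125 virtual servers")
    blocks = math.ceil(virtual_servers / VMS_PER_BLOCK)
    # Larger stripes are more efficient: the 40-drive layout is
    # validated for the full 125 VMs, so cap the pool at 40 drives.
    return min(blocks * DRIVES_PER_BLOCK, MAX_POOL_DRIVES)

def hot_spares(drive_count):
    """At least one hot spare for every 30 drives of a given type and size."""
    return max(1, math.ceil(drive_count / 30))

print(pool_drives(15), pool_drives(45), pool_drives(125))  # 5 15 40
print(hot_spares(40))  # 2, matching the two hot spares in Figure 19
```

Note how the 125-VM case reproduces the validated layout: 40 pool drives plus 2 hot spares, with the OE system disks counted separately.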

High availability and failover

Overview

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, business operations survive single-unit failures with little or no impact.

Virtualization layer

Configure high availability in the virtualization layer, and enable the hypervisor to automatically restart failed virtual machines. Figure 21 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 21. High availability at the virtualization layer

By implementing high availability at the virtualization layer, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible.

Compute layer

While the choice of servers to implement in the compute layer is flexible, use enterprise-class servers designed for the data center. This type of server has redundant power supplies, as shown in Figure 22. Connect these servers to separate power distribution units (PDUs) in accordance with your server vendor's best practices.

Figure 22. Redundant power supplies

To configure high availability in the virtualization layer, configure the compute layer with enough resources to meet the needs of the environment even with a server failure, as demonstrated in Figure 21.

Network layer

The advanced networking features of the VNXe series provide protection against network connection failures at the array. Each vSphere host has multiple connections to user and storage Ethernet networks to guard against link failures, as shown in Figure 23. Spread these connections across multiple Ethernet switches to guard against component failure in the network.

Figure 23. Network layer high availability (VNXe)

Ensure there is no single point of failure, so that the compute layer can access storage and communicate with users even if a component fails.

Storage layer

The VNXe series is designed for five 9s availability by using redundant components throughout the array. All of the array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array provides protection against data loss caused by individual disk failures, and the available hot spare drives can replace a failing disk, as shown in Figure 24.

Figure 24. VNXe series high availability

EMC storage arrays are highly available by default. When configured according to the directions in their installation guides, no single-unit failure results in data loss or unavailability.

Validation test profile

Profile characteristics

The VSPEX solution was validated with the environment profile described in Table 8.

Table 8. Profile characteristics

Profile characteristic                                         Value
Number of virtual machines                                     125
Virtual machine OS                                             Windows Server 2012 R2 Datacenter Edition
Processors per virtual machine                                 1
Number of virtual processors per physical CPU core             4*
RAM per virtual machine                                        2 GB
Average storage available for each virtual machine             100 GB
Average IOPS per virtual machine                               25 IOPS
Number of LUNs or NFS shares to store virtual machine disks    2
Number of virtual machines per LUN or NFS share                62 or 63
Disk and RAID type for LUNs or NFS shares                      RAID 5, 600 GB, 10k rpm, 2.5-inch SAS disks

* For Ivy Bridge or later processors, use 8 vCPUs per physical core.

Note: This solution was tested and validated with Windows Server 2012 R2 as the operating system for vSphere virtual machines, but it also supports Windows Server 2008; Windows Server 2008 on vSphere 5.5 uses the same configuration and sizing.

Backup and recovery configuration guidelines

For details regarding backup and recovery configuration for this VSPEX Private Cloud solution, refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Sizing guidelines

The following sections define the reference workload used to size and implement the VSPEX architectures. They include instructions on how to correlate those reference workloads to customer workloads, and how that correlation may change the end delivery from the server and network perspective.

Modify the storage definition by adding drives for greater capacity and performance, and by adding features such as FAST Cache and FAST VP. The disk layouts provide support for the appropriate number of virtual machines at the defined performance level and for typical operations such as snapshots. Decreasing the number of recommended drives or stepping down an array type can result in lower IOPS per virtual machine and a reduced user experience caused by higher response times.

Reference workload

Overview

When you move an existing server to a virtual infrastructure, you can gain efficiency by right-sizing the virtual hardware resources assigned to that system. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. In practice, each virtual machine has its own requirements that rarely fit a pre-defined idea of a virtual machine.
In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that considers every possible combination of workload characteristics.

Defining the reference workload

To simplify the discussion, this section presents a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can determine which reference architecture to choose.

For the VSPEX solutions, the reference workload is a single virtual machine. Table 9 lists the characteristics of this virtual machine.

Table 9. Virtual machine characteristics

Characteristic                                   Value
Virtual machine operating system                 Microsoft Windows Server 2012 R2 Datacenter Edition
Virtual processors per virtual machine           1
RAM per virtual machine                          2 GB
Available storage capacity per virtual machine   100 GB
IOPS per virtual machine                         25
I/O pattern                                      Random
I/O read/write ratio                             2:1

This specification for a virtual machine does not represent any specific application. Rather, it represents a single common point of reference by which to measure other virtual machines.

Applying the reference workload

Overview

When you consider moving an existing server into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system.

The solution creates a pool of resources that is sufficient to host a target number of reference virtual machines with the characteristics shown in Table 9. Actual virtual machines may not exactly match these specifications. In that case, define a single specific customer virtual machine as the equivalent of some number of reference virtual machines, and assume these reference virtual machines are in use in the pool. Continue to provision virtual machines from the resource pool until no resources remain.

Example 1: Custom-built application

A small custom-built application server must move into this virtual infrastructure. The physical hardware that supports the application is not fully used. A careful analysis of the existing application reveals that it can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges from 4 IOPS at idle to a peak of 15 IOPS when busy. The entire application consumes about 30 GB of local hard drive storage.
Based on these numbers, the resource pool needs the following resources:
- CPU of one reference virtual machine
- Memory of two reference virtual machines
- Storage of one reference virtual machine

- I/Os of one reference virtual machine

In this example, the corresponding virtual machine uses the resources of two reference virtual machines. If implemented on a VNXe3200 storage system, which can support up to 125 virtual machines, resources for 123 reference virtual machines remain.

Example 2: Point-of-sale system

The database server for a customer's point-of-sale system must move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle.

The requirements to virtualize this application are:
- CPUs of four reference virtual machines
- Memory of eight reference virtual machines
- Storage of two reference virtual machines
- I/Os of eight reference virtual machines

In this case, the corresponding virtual machine uses the resources of eight reference virtual machines. If implemented on a VNXe3200 storage system, which can support up to 125 virtual machines, resources for 117 reference virtual machines remain.

Example 3: Web server

The customer's web server must move into this virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle.

The requirements to virtualize this application are:
- CPUs of two reference virtual machines
- Memory of four reference virtual machines
- Storage of one reference virtual machine
- I/Os of two reference virtual machines

In this case, the corresponding virtual machine uses the resources of four reference virtual machines. If implemented on a VNXe3200 storage system, which can support up to 125 virtual machines, resources for 121 reference virtual machines remain.

Example 4: Decision-support database

The database server for a customer's decision-support system must move into this virtual infrastructure.
It is currently running on a physical system with 10 CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle.

The requirements to virtualize this application are:
- CPUs of 10 reference virtual machines
- Memory of 32 reference virtual machines
- Storage of 52 reference virtual machines

- I/Os of 28 reference virtual machines

In this case, the corresponding virtual machine uses the resources of 52 reference virtual machines. If implemented on a VNXe3200 storage system, which can support up to 125 virtual machines, resources for 73 reference virtual machines remain.

Summary of examples

These four examples illustrate the flexibility of the resource pool model. In all four cases, the workloads reduce the amount of available resources in the pool. All four examples can be implemented on the same virtual infrastructure with an initial capacity for 125 reference virtual machines, and resources for 59 reference virtual machines remain in the resource pool, as shown in Figure 25.

Figure 25. Resource pool flexibility

In more advanced cases, there may be tradeoffs between memory and I/O or other relationships where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex and are beyond the scope of this document. Examine the change in resource balance, determine the new level of requirements, and add these virtual machines to the infrastructure with the method described in the examples.

Implementing the solution

Overview

The solution described in this document requires a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are general requirements that are independent of any particular implementation, except that the requirements grow linearly with the target level of scale. This section describes some considerations for implementing the requirements.

Resource types

The solution defines the hardware requirements in terms of four basic types of resources:
- CPU resources
- Memory resources
- Network resources
- Storage resources

This section describes the resource types, their use in the solution, and key implementation considerations in a customer environment.
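Returning to the four examples in the previous section, the running pool bookkeeping can be sketched in a few lines. The workload names and per-workload costs below are taken directly from Examples 1 through 4; the script itself is only an illustration of the accounting.

```python
POOL_CAPACITY = 125  # reference VMs available on the VNXe3200

# Reference-VM cost of each example workload, from Examples 1-4
workloads = {
    "custom-built application": 2,
    "point-of-sale database": 8,
    "web server": 4,
    "decision-support database": 52,
}

remaining = POOL_CAPACITY - sum(workloads.values())
print(remaining)  # 59 reference VMs remain, matching Figure 25
```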
CPU resources

The solution defines the number of CPU cores required, but not a specific type or configuration. New deployments should use recent revisions of common processor technologies; it is assumed that these perform as well as, or better than, the systems used to validate the solution. In any running system, monitor the utilization of resources and adapt as needed.

The reference virtual machine and required hardware resources in the solution assume that there are four virtual CPUs for each physical processor core (a 4:1 ratio). For Ivy Bridge or later processors, use 8 vCPUs per physical core. Usually, this provides an appropriate level of resources for the hosted virtual machines; however, this ratio may not be appropriate in all use cases. Monitor the CPU utilization at the hypervisor layer to determine if more resources are required.

Memory resources

Each virtual server in the solution must have 2 GB of memory. Because of budget constraints, it is common in a virtual environment to provision virtual machines with more memory than is installed on the physical hypervisor server. The memory overcommitment technique takes advantage of the fact that each virtual machine does not use all of its allocated memory. Oversubscribing memory usage to some degree makes business sense. The administrator has the responsibility to proactively monitor the oversubscription rate so that it does not shift the bottleneck away from the server and become a burden to the storage subsystem.

If VMware ESXi runs out of memory for the guest operating systems, paging takes place and results in extra I/O activity going to the vSwap files. If the storage subsystem is sized correctly, occasional spikes due to vSwap activity may not cause performance issues, as transient bursts of load can be absorbed. However, if the memory oversubscription rate is so high that the storage subsystem is severely impacted by a continuing overload of vSwap activity, add more disks for increased performance.
The administrator must decide whether it is more cost-effective to add more physical memory to the server or to increase the amount of storage. With memory modules being a commodity, it is likely less expensive to choose the former option.

This solution is validated with statically assigned memory and no over-commitment of memory resources. If a real-world environment uses memory over-commitment, consistently monitor the system memory utilization and the associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.

Network resources

The solution outlines the minimum needs of the system. If additional bandwidth is needed, add capability to both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports, and you can add ports using EMC UltraFlex I/O modules.

For reference purposes in the validated environment, each virtual machine generates 25 I/O operations per second with an average size of 8 KB. This means that each virtual machine generates at least 200 KB/s of traffic on the storage network. For an environment rated for 100 virtual machines, this is calculated as a minimum of approximately 20 MB/s, which is well within the bounds of modern networks. However, this does not consider other operations. For example, additional bandwidth is needed for:
- User network traffic
- Virtual machine migration

- Administrative and management operations

The requirements for each network depend on how it will be used. It is not practical to provide precise numbers in this context. However, the network described in the reference architecture for each solution must be sufficient to handle average workloads for the previously described use cases.

Regardless of the network traffic requirements, always have at least two physical network connections shared by a logical network so that a single link failure does not affect the availability of the system. Design the network so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload.

Storage resources

The storage building blocks described in this solution contain layouts for the disks used in the validation of the system. Each layout balances the available storage capacity with the performance capability of the drives. Consider a few factors when examining storage sizing. Specifically, the array has a collection of disks assigned to a storage pool. From that storage pool, provision datastores to the VMware vSphere cluster. Each layer has a specific configuration defined for the solution and documented in Chapter 5: VSPEX Configuration Guidelines.

It is acceptable to:
- Replace drives with larger-capacity drives of the same type and performance characteristics, or with higher-performance drives of the same type and capacity.
- Change the placement of drives in the drive shelves to comply with updated or new drive shelf arrangements.
- Increase the scale using the building blocks with a larger number of drives, up to the limit defined in the VSPEX private cloud validated maximums section.

Observe the following best practices:
- Use the latest best practices guidance from EMC regarding drive placement within the shelf. Refer to Applied Best Practices Guide: EMC VNX Unified Best Practice for Performance.
- When expanding the capability of a storage pool using the building blocks described in this document, use the same type and size of drive in the pool. Create a new pool to use different drive types and sizes. This prevents uneven performance across the pool.
- Configure at least one hot spare for every type and size of drive on the system.
- Configure at least one hot spare for every 30 drives of a given type.

In other cases where there is a need to deviate from the proposed number and type of drives specified, or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system and conforms to EMC published best practices.

Implementation summary

The requirements in the reference architecture are what EMC considers the minimum set of resources to handle the workloads, based on the stated definition of a reference virtual machine. In any customer implementation, the load of a system varies over time as users interact with the system. Add resources to a system if the customer virtual machines differ significantly from the reference definition and vary in the same resource group.

Quick assessment

Overview

An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment.

First, summarize the applications planned for migration into the VSPEX private cloud. For each application, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual machines required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each application, as listed in Table 10.

Table 10. Blank worksheet row

Application                                CPU (virtual CPUs)   Memory (GB)   IOPS   Capacity (GB)   Equivalent reference virtual machines
Example application
  Resource requirements                                                                              NA
  Equivalent reference virtual machines

Resource requirements

Fill out the resource requirements for the application. The row requires inputs for four different resources: CPU, memory, IOPS, and capacity.

CPU requirements

Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between physical CPU cores and virtual CPU cores, regardless of the physical CPU utilization. In reality, consider whether the target application can effectively use all of the CPUs presented.
Use a performance-monitoring tool, such as esxtop, on vSphere hosts to examine the CPU utilization counter for each CPU. If all CPUs are used equivalently, implement that number of virtual CPUs when moving the application into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs required.

In any operation involving performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or the 95th percentile value of the resource requirements for planning purposes.

Memory requirements

Server memory plays a key role in ensuring application functionality and performance, and each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the current memory available to the system, and monitor the free memory by using a performance-monitoring tool, such as VMware esxtop, to determine memory efficiency.

In any operation involving performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or the 95th percentile value of the resource requirements for planning purposes.

Storage performance requirements

The storage performance requirements for an application are usually the least understood aspect of performance. Three components become important when discussing the I/O performance of a system:
- The number of requests coming in, or IOPS
- The size of the requests, or I/O size; for example, a request for 4 KB of data is easier and faster to process than a request for 4 MB of data
- The average I/O response time, or I/O latency

I/O operations per second

The reference virtual machine calls for 25 IOPS. To monitor this on an existing system, use a performance-monitoring tool such as VMware esxtop, which provides several counters that can help.
The most common counters are:

For block:
- Physical Disk\Commands/sec
- Physical Disk\Reads/sec
- Physical Disk\Writes/sec
- Physical Disk\Average Guest MilliSec/Command

For file:
- Physical Disk NFS Volume\Commands/sec
- Physical Disk NFS Volume\Reads/sec
- Physical Disk NFS Volume\Writes/sec
- Physical Disk NFS Volume\Average Guest MilliSec/Command

The reference virtual machine assumes a 2:1 read:write ratio. Use these counters to determine the total number of IOPS and the approximate ratio of reads to writes for the customer application.
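As a quick illustration of the reference 2:1 read:write ratio applied to a measured total, the helper below splits a total IOPS figure into its read and write components. It is a sketch for back-of-the-envelope checks, not part of the esxtop tooling; the function name is an assumption.

```python
def split_read_write(total_iops, ratio=(2, 1)):
    """Split a measured IOPS total into reads and writes using the
    reference virtual machine's 2:1 read:write ratio."""
    reads_part, writes_part = ratio
    reads = total_iops * reads_part / (reads_part + writes_part)
    return reads, total_iops - reads

# A host measuring 150 total IOPS at the reference ratio:
reads, writes = split_read_write(150)
print(reads, writes)  # 100.0 50.0
```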

I/O size

The I/O size is important because smaller I/O requests are faster and easier to process than large I/O requests. The reference virtual machine assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are powers of two; 4 KB, 8 KB, 16 KB, 32 KB, and so on are common. Because the performance counter reports a simple average, it is common to see values such as 11 KB or 15 KB instead of the even I/O sizes.

The reference virtual machine assumes an 8 KB I/O size. If the average customer I/O size is less than 8 KB, use the observed IOPS number. However, if the average I/O size is significantly higher, apply a scaling factor to account for the larger I/O size. A safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if the application uses mostly 32 KB I/O requests, use a factor of four (32 KB / 8 KB = 4). If that application generates 100 IOPS at 32 KB, plan for 400 IOPS, since the reference virtual machine assumes 8 KB I/O sizes.

I/O latency

The average I/O response time, or I/O latency, is a measurement of how quickly the storage system processes I/O requests. The VSPEX solutions meet a target average I/O latency of 20 ms. The recommendations in this document allow the system to continue to meet that target; however, monitor the system and re-evaluate the resource pool utilization if needed.

To monitor I/O latency, use the Physical Disk\Average Guest MilliSec/Command counter (block storage) or the Physical Disk NFS Volume\Average Guest MilliSec/Command counter (file storage) in esxtop. If the I/O latency is continuously over the target, re-evaluate the virtual machines in the environment to ensure that these machines do not use more resources than intended.
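The 8 KB scaling rule above can be captured in a couple of lines. This is an illustrative sketch of the guidance in this section (divide the observed I/O size by 8 KB and use that factor); rounding the factor up is a conservative reading of the "safe estimate" advice, and the function name is an assumption.

```python
import math

REFERENCE_IO_KB = 8  # the reference virtual machine assumes 8 KB I/Os

def scaled_iops(observed_iops, avg_io_kb):
    """Scale observed IOPS to reference-equivalent (8 KB) IOPS.
    I/O sizes at or below 8 KB use the observed IOPS unchanged."""
    factor = max(1, math.ceil(avg_io_kb / REFERENCE_IO_KB))
    return observed_iops * factor

# The 32 KB example from the text: 100 IOPS -> plan for 400 IOPS.
print(scaled_iops(100, 32))  # 400
print(scaled_iops(100, 4))   # 100 (smaller I/Os: use the observed number)
```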
Storage capacity requirements

The storage capacity requirement for a running application is usually the easiest resource to quantify. Determine the disk space used, and add an appropriate factor to accommodate growth. For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, requires 48 GB. In addition, reserve space for regular maintenance patches and swap files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full.

Determining equivalent reference virtual machines

With all of the resources defined, determine an appropriate value for the equivalent reference virtual machines line by using the relationships in Table 11. Round all values up to the closest whole number.

Table 11. Reference virtual machine resources

Resource | Value for reference virtual machine | Relationship between requirements and equivalent reference virtual machines
CPU | 1 | Equivalent reference virtual machines = resource requirements

Table 11. Reference virtual machine resources (continued)

Resource | Value for reference virtual machine | Relationship between requirements and equivalent reference virtual machines
Memory | 2 | Equivalent reference virtual machines = (resource requirements)/2
IOPS | 25 | Equivalent reference virtual machines = (resource requirements)/25
Capacity | 100 | Equivalent reference virtual machines = (resource requirements)/100

For example, the point of sale system database used in Example 2: Point of sale system requires four CPUs, 16 GB of memory, 200 IOPS, and 200 GB of storage. This translates to four reference virtual machines of CPU, eight reference virtual machines of memory, eight reference virtual machines of IOPS, and two reference virtual machines of capacity. Table 12 demonstrates how that machine fits into the worksheet row.

Table 12. Example worksheet row

Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent reference virtual machines
Example application: Resource requirements | 4 | 16 | 200 | 200 | N/A
Example application: Equivalent reference virtual machines | 4 | 8 | 8 | 2 | 8

Use the highest value in the row to fill in the Equivalent reference virtual machines column. As shown in Figure 26, the example requires eight reference virtual machines.
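The Table 11 relationships reduce to a round-up-and-take-the-maximum calculation. The sketch below is an assumption of how that worksheet step could be automated; the names are illustrative, not part of the VSPEX Sizing Tool:

```python
import math

# Table 11 values: one reference VM = 1 vCPU, 2 GB memory, 25 IOPS, 100 GB
REFERENCE_VM = {"cpu": 1, "memory": 2, "iops": 25, "capacity": 100}

def equivalent_reference_vms(cpu: int, memory_gb: int,
                             iops: int, capacity_gb: int) -> dict:
    """Per-resource equivalents, rounded up, plus the worksheet value,
    which is the highest of the four."""
    required = {"cpu": cpu, "memory": memory_gb,
                "iops": iops, "capacity": capacity_gb}
    equivalents = {name: math.ceil(required[name] / REFERENCE_VM[name])
                   for name in required}
    equivalents["worksheet"] = max(equivalents.values())
    return equivalents

# Point of sale example: 4 vCPUs, 16 GB, 200 IOPS, 200 GB -> worksheet value 8
print(equivalent_reference_vms(4, 16, 200, 200))
```

Taking the maximum rather than the sum reflects that one reference virtual machine supplies all four resources at once, so the tightest resource dictates the count.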

Figure 26. Required resource from the reference virtual machine pool

Implementation example: stage 1

A customer wants to build a virtual infrastructure to support one custom-built application, one point of sale system, and one web server. The customer computes the sum of the Equivalent reference virtual machines column on the right side of the worksheet, as shown in Table 13, to calculate the total number of reference virtual machines required. The table shows the result of the calculation, rounded up to the nearest whole number.

Table 13. Example applications: stage 1

The worksheet lists the server resources (CPU, memory) and storage resources (IOPS, capacity), with a Resource requirements row and an Equivalent reference virtual machines row for each application:

Example application #1: Custom-built application
Example application #2: Point of sale system
Example application #3: Web server

Total equivalent reference virtual machines: 14

This example requires 14 reference virtual machines. According to the sizing guidelines, one storage pool with 5 SAS drives provides sufficient resources for the current needs and room for growth. You can use a VNXe3200, which supports up to 125 reference virtual machines. Figure 27 shows that one reference virtual machine is available after implementing a VNXe3200 with five SAS drives.

Figure 27. Aggregate resource requirements: stage 1

Figure 28 shows the pool configuration in this example.

Figure 28. Pool configuration: stage 1

Implementation example: stage 2

Next, the customer must add a decision support database to the virtual infrastructure. Using the same strategy, calculate the number of reference virtual machines required, as shown in Table 14.

Table 14. Example applications: stage 2

As in Table 13, the worksheet lists a Resource requirements row and an Equivalent reference virtual machines row for each application:

Example application #1: Custom-built application
Example application #2: Point of sale system
Example application #3: Web server
Example application #4: Decision support database

Total equivalent reference virtual machines: 66

This example requires 66 reference virtual machines. According to the sizing guidelines, one storage pool with 25 SAS drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with a VNXe3200, which supports up to 125 reference virtual machines. Figure 29 shows that nine reference virtual machines are available after implementing a VNXe3200 with 25 SAS drives.

Figure 29. Aggregate resource requirements: stage 2

Figure 30 shows the pool configuration in this example.

Figure 30. Pool configuration: stage 2

Fine-tuning hardware resources

Usually, the process described determines the recommended hardware size for servers and storage. However, in some cases, you may want to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this document; however, additional customization can be done at this point.

Storage resources

In some applications, application data must be separated from other workloads. The storage layouts in the VSPEX architectures put all of the virtual machines in a single resource pool. To achieve workload separation, purchase additional disk drives for the application workload and add them to a dedicated pool. With the method outlined in Determining equivalent reference virtual machines, it is easy to build a virtual infrastructure scaling from 15 reference virtual machines to 125 reference virtual machines with the building blocks described in VSPEX storage

building blocks, while keeping in mind the recommended limits of each storage array documented in the VSPEX private cloud validated maximums.

Server resources

For some workloads, the relationship between server needs and storage needs does not match what is outlined in the reference virtual machine. In this scenario, size the server and storage layers separately.

Figure 31. Customizing server resources

To do this, first total the resource requirements for the server components, as shown in Table 15. In the Server resource component totals line at the bottom of the worksheet, add up the server resource requirements from the applications in the table.

Note: When customizing resources in this way, confirm that storage sizing is still appropriate. The Storage component totals line at the bottom of Table 15 describes the required amount of storage.

Table 15. Server resource component totals

As in Table 14, the worksheet lists a Resource requirements row and an Equivalent reference virtual machines row for each application, plus a Server resource component totals line:

Example application #1: Custom-built application
Example application #2: Point of sale system

Example application #3: Web server
Example application #4: Decision support database

Total equivalent reference virtual machines: 66

Note: To get the server and storage component totals, calculate the sum of the Resource requirements rows for each application, not the Equivalent reference virtual machines rows.

In this example, the target architecture requires 17 virtual CPUs and 155 GB of memory. If four virtual machines per physical processor core are used, and memory over-provisioning is not necessary, the architecture requires 5 physical processor cores and 155 GB of memory. With these numbers, the solution can be implemented effectively with fewer server and storage resources.

Note: Keep high-availability requirements in mind when customizing the resource pool hardware.

Appendix C provides a blank Server Resource Component Worksheet.

EMC VSPEX Sizing Tool

To simplify the sizing of this solution, EMC has produced the VSPEX Sizing Tool. This tool uses the same sizing process described in this chapter, and also incorporates sizing for other VSPEX solutions. The VSPEX Sizing Tool enables you to input the resource requirements from the customer's answers in the qualification worksheet. After you complete the inputs, the tool generates a series of recommendations that allow you to validate your sizing assumptions, and provides platform configuration information that meets those requirements. You can access this tool at the following location: EMC VSPEX Sizing Tool.
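The server-side arithmetic above can be sketched as follows. The function name and the 4:1 consolidation-ratio default are assumptions drawn from the worked example, not EMC tooling:

```python
import math

def server_totals(total_vcpus: int, total_memory_gb: int,
                  vcpus_per_core: int = 4,
                  memory_overprovision: float = 1.0) -> tuple:
    """Translate summed virtual resource requirements into physical
    requirements: whole processor cores (rounded up) and memory."""
    cores = math.ceil(total_vcpus / vcpus_per_core)
    memory = math.ceil(total_memory_gb / memory_overprovision)
    return cores, memory

# The worked example: 17 vCPUs and 155 GB with no memory over-provisioning
print(server_totals(17, 155))
```

Raising `memory_overprovision` above 1.0 would model a deliberate memory over-commit; the example in the text keeps it at 1.0, so the 155 GB requirement passes through unchanged.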

Chapter 5 VSPEX Configuration Guidelines

This chapter presents the following topics:

Overview
Pre-deployment tasks
Customer configuration data
Prepare switches, connect network, and configure switches
Prepare and configure storage array
Install and configure vSphere hosts
Install and configure SQL Server database
Install and configure VMware vCenter Server
Summary

Overview

The deployment process consists of the stages listed in Table 16, with references to the sections that contain the relevant procedures. After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure.

Table 16. Deployment process overview

Stage | Description | Reference
1 | Verify prerequisites | Pre-deployment tasks
2 | Obtain the deployment tools | Deployment prerequisites
3 | Gather customer configuration data | Customer configuration data
4 | Rack and cable the components | Refer to the vendor documentation
5 | Configure the switches and networks, connect to the customer network | Prepare switches, connect network, and configure switches
6 | Install and configure the VNXe | Prepare and configure the storage array
7 | Configure virtual machine datastores | Prepare and configure the storage array
8 | Install and configure the servers | Install and configure the VMware vSphere hosts
9 | Set up Microsoft SQL Server (used by VMware vCenter) | Install and configure Microsoft SQL Server database
10 | Install and configure vCenter Server and virtual machine networking | Configure database for VMware vCenter

Pre-deployment tasks

Overview

The pre-deployment tasks include procedures that are not directly related to environment installation and configuration, but whose results are needed at the time of installation. Examples of pre-deployment tasks are collection of hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required onsite.

Table 17. Tasks for pre-deployment

Task | Description | Reference
Gather documents | Gather the related documents listed in Appendix D. These documents provide detail on setup procedures and deployment best practices for the various components of the solution. | Appendix D: References
Gather tools | Gather the required and optional tools for the deployment. Use Table 18 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process. | Table 18: Deployment prerequisites checklist
Gather data | Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer Configuration Data Sheet for reference during the deployment process. | Appendix B: Customer Configuration Data Sheet

Deployment prerequisites

Table 18 lists the hardware, software, and licenses required to configure the solution. For additional information, refer to Table 2 and Table 3.

Table 18. Deployment prerequisites checklist

Hardware (Reference: Table 2: Solution hardware):
Physical servers to host virtual servers: sufficient physical server capacity to host 125 virtual servers.
VMware vSphere servers to host virtual infrastructure servers. Note: The existing infrastructure may already meet this requirement.
Switch port capacity and capabilities as required by the virtual server infrastructure.
EMC VNXe3200 (125 virtual machines): multiprotocol storage array with the required disk layout.

Software:
VMware ESXi installation media.

VMware vCenter Server installation media.
EMC VSI for VMware vSphere: Unified Storage Management. (Reference: EMC Online Support)
EMC VSI for VMware vSphere: Storage Viewer.
Microsoft Windows Server 2012 installation media (suggested OS for VMware vCenter).
Microsoft SQL Server 2008 R2 or newer installation media. Note: This requirement may be covered in the existing infrastructure.
VMware Storage API for Array Integration plug-in. (Reference: EMC Online Support)
Microsoft Windows Server 2012 R2 Datacenter Edition installation media (suggested OS for virtual machine guests).

Licenses:
VMware vCenter license key.
VMware ESXi license keys.
Microsoft Windows Server 2012 R2 Standard Edition (or higher) license keys.
Microsoft Windows Server 2012 R2 Datacenter Edition license keys. Note: An existing Microsoft Key Management Server (KMS) may cover this requirement.
Microsoft SQL Server license key. Note: The existing infrastructure may already meet this requirement.

Customer configuration data

Assemble information such as IP addresses and hostnames as part of the planning process to reduce time onsite. Appendix B provides a set of tables to maintain a record of relevant customer information. Add, record, and modify information as needed during the deployment process.

Prepare switches, connect network, and configure switches

Overview

This section lists the network infrastructure requirements needed to support this architecture. Table 19 summarizes the tasks for switch and network configuration, with references for further information.

Table 19. Tasks for switch and network configuration

Task | Description | Reference
Configure infrastructure network | Configure storage array and ESXi host infrastructure networking. | Prepare and configure the storage array; Install and configure the VMware vSphere hosts
Configure VLANs | Configure private and public VLANs as required. | Your vendor's switch configuration guide
Complete network cabling | Connect the switch interconnect ports, the VNXe ports, and the ESXi server ports. |

Prepare network switches

For validated levels of performance and high availability, this solution requires the switching capacity listed in Table 2. New hardware is not required if the existing infrastructure meets these requirements.

Configure infrastructure network

The infrastructure network requires redundant network links for each ESXi host, the storage array, the switch interconnect ports, and the switch uplink ports, to provide both redundancy and additional network bandwidth. This configuration is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution. Figure 32 and Figure 33 show a sample redundant infrastructure for this solution.
The diagrams illustrate the use of redundant switches and links to ensure that there are no single points of failure. In Figure 32, converged switches provide customers with different protocol options (FC or iSCSI) for block storage networks. While existing FC switches are acceptable for the FC protocol option, use 10 Gb Ethernet network switches for iSCSI.

Figure 32. Sample network architecture: block storage

Figure 33 shows a sample redundant Ethernet infrastructure for file storage. The diagram illustrates the use of redundant switches and links to ensure that no single points of failure exist in the network connectivity.

Figure 33. Sample Ethernet network architecture: file storage

Configure VLANs

Ensure there are adequate switch ports for the storage array and ESXi hosts. Use a minimum of three VLANs for:

Client access
Storage networking (iSCSI and NFS only) and vMotion (these are customer-facing networks; separate them if required)
Management

Configure jumbo frames (iSCSI and NFS only)

Use jumbo frames for the iSCSI and NFS protocols. Set the MTU to 9,000 on the switch ports for the iSCSI or NFS storage network. Consult your switch configuration guide for instructions.

Complete network cabling

Ensure the following:

All servers, storage arrays, switch interconnects, and switch uplinks plug into separate switching infrastructures and have redundant connections.
There is a complete connection to the existing customer network.

Note: Ensure that unforeseen interactions do not cause service issues when you connect the new equipment to the customer network.

Prepare and configure the storage array

Implementation instructions and best practices may vary depending on the storage network protocol selected for the solution. Follow these steps in each case:

1. Configure the VNXe.
2. Provision storage to the hosts.
3. Optionally, configure FAST VP.
4. Optionally, configure FAST Cache.

The following sections explain the options for each step, depending on whether one of the block protocols (FC, iSCSI) or the file protocol (NFS) is selected:

For FC or iSCSI, refer to the instructions marked for block protocols.
For NFS, refer to the instructions marked for file protocols.

VNXe configuration for block protocols

This section describes how to configure the VNXe storage array for host access with block protocols such as FC and iSCSI. In this solution, the VNXe provides data storage for VMware hosts.

Table 20. Tasks for VNXe configuration

Task | Description | Reference
Prepare the VNXe | Physically install the VNXe hardware with the procedures in the product documentation. | VNXe3200 Unified Installation Guide
Set up the initial VNXe configuration | Configure the IP addresses and other key parameters on the VNXe. | Unisphere System Getting Started Guide; your vendor's switch configuration guide
Provision storage for VMware hosts | Create the storage areas required for the solution. |

Prepare the VNXe

The VNXe3200 Unified Installation Guide provides instructions to assemble, rack, cable, and power up the VNXe. There are no solution-specific setup steps.

Set up the initial VNXe configuration

After completing the initial VNXe setup, configure key information about the existing environment so that the storage array can communicate with other devices in the environment. Configure the following common items in accordance with your IT data center policies and existing infrastructure information:

DNS
NTP
Storage network interfaces

For data connections using the FC protocol: ensure that one or more servers are connected to the VNXe storage system, either directly or through qualified FC switches. Refer to the EMC Host Connectivity Guide for VMware ESX Server for more detailed instructions.

For data connections using the iSCSI protocol: connect one or more servers to the VNXe storage system, either directly or through qualified IP switches. Refer to the EMC Host Connectivity Guide for VMware ESX Server for more detailed instructions.

Additionally, configure the following items in accordance with your IT data center policies and existing infrastructure information:

1. Set up a storage network IP address.
Logically isolate the other networks in the solution as described in Chapter 3: Solution Technology Overview. This ensures that other network traffic does not impact traffic between hosts and storage.

2. Enable jumbo frames on the VNXe iSCSI ports.

Use jumbo frames for iSCSI networks to permit greater network bandwidth. Apply the MTU size specified below across all network interfaces in the environment:

a. In Unisphere, select Settings > Network > More Configuration > Port Settings.
b. Select the appropriate iSCSI network interface.
c. On the right panel, set the MTU size to 9,000.
d. Click Apply to apply the changes.

The reference documents listed in Appendix D provide more information on how to configure the VNXe platform. Storage configuration guidelines provide more information on the disk layout.

Provision storage for VMware hosts

This section describes provisioning block storage for VMware hosts. To provision file storage, refer to VNXe configuration for file protocols. Complete the following steps in Unisphere to configure LUNs on the VNXe array to store virtual servers:

1. Create the number of storage pools required for the environment, based on the sizing information in Chapter 4: Solution Architecture Overview. This example uses the array recommended maximums described in that chapter.

a. Log in to Unisphere.
b. Select Storage > Storage Configuration > Storage Pools.
c. Click the List View tab.
d. Click Create.

Note: The pool does not use system drives for additional storage.

Table 21. Storage allocation table for block data

For the 125-virtual-server configuration, the table lists the number of pools, the number of 10K SAS drives per pool, the number of LUNs per pool, the LUN size (TB), and the total capacity provided by the 7 TB LUNs.

Note: Each virtual machine occupies 102 GB in this solution, with 100 GB for the OS and user space, and a 2 GB swap file. Figure 18 depicts the target storage layout for 125 virtual machines.

2. Use the pool created in Step 1 to provision thin LUNs:

a. Click Storage > VMware Datastores.
b. Click Create.

c. Specify the appropriate Datastore Type.
d. Specify the Datastore Name.
e. Select the pool created in Step 1. Always create two thin LUNs in one physical storage pool. User Capacity depends on the specific number of virtual machines; refer to Table 21 for more information.
f. Configure an appropriate Snapshot Schedule.
g. Configure appropriate Host Access for each host.
h. Review the Summary of the Datastore Configuration and click Finish to create the datastores.

VNXe configuration for file protocols

This section describes file storage provisioning for VMware.

Table 22. Tasks for storage configuration

Task | Description | Reference
Prepare the VNXe | Physically install the VNXe hardware with the procedures in the product documentation. | VNXe3200 Unified Installation Guide
Set up the initial VNXe configuration | Configure the IP address information and other key parameters on the VNXe. | Unisphere System Getting Started Guide; your vendor's switch configuration guide
Create a network interface | Configure the IP address and network interface information for the NFS server. |
Create a storage pool for file | Create the pool structure and LUNs to contain the file system. |
Create file systems | Establish the file system that is shared with the NFS protocol and export it to the VMware hosts. |

Prepare the VNXe

The VNXe3200 Unified Installation Guide provides instructions to assemble, rack, cable, and power up the VNXe. There are no solution-specific setup steps.

Set up the initial VNXe configuration

After the initial VNXe setup, configure key information about the existing environment to allow the storage array to communicate with other devices in the environment. Ensure that one or more servers connect to the VNXe storage system, either directly or through qualified IP switches.
Configure the following items in accordance with your IT data center policies and existing infrastructure information:

DNS
NTP
Storage network interfaces
Storage network IP address

CIFS services and Active Directory domain membership

Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions.

Enable jumbo frames on the VNXe storage network interfaces

Use jumbo frames for storage networks to permit greater network bandwidth. Apply the MTU size specified below across all network interfaces in the environment:

1. In Unisphere, click Settings > More Configuration > Port Settings.
2. Select the network interfaces from the I/O Modules panel.
3. On the right panel, set the MTU size to 9,000.
4. Click Apply to apply the changes.

Create link aggregation on the VNXe storage network interfaces

Link aggregation provides network redundancy on the VNXe3200 system. Complete the following steps to create a network interface link aggregation:

1. Log in to the VNXe.
2. Select a network interface from the I/O Modules panel.
3. On the right panel, select Aggregate with another network interface.
4. Click the Create Aggregation button.
5. Click Yes to apply the changes.

The reference documents listed in Appendix D provide more information on how to configure the VNXe platform. Storage configuration guidelines provide more information on the disk layout.

Create a NAS Server

A network interface maps to an NFS export. File shares provide access through this interface. Complete the following steps to create a NAS Server:

1. Log in to the VNXe.
2. Click Settings > NAS Servers.
3. Click Create.

In the Create NAS Server wizard, complete the following steps, as shown in Figure 34:

1. Specify the Server Name.
2. Select the Storage Pool that will provide the file share.
3. Type an IP Address for the interface.
4. Type a Server Name for the interface.
5. Type the Subnet Mask for the interface.
6. Click Show Advanced.
7. Select a storage processor that will support the file share.

8. Select the Ethernet Port that corresponds to the link-aggregated interface created in Create link aggregation on the VNXe storage network interfaces.
9. If required, specify the VLAN ID.
10. Click Next.

Figure 34. Configure NAS Server Address

11. Select Linux/Unix shares (NFS).
12. Type in the DNS/NIS information if required.
13. Review the NAS Server Summary and click Finish to complete the wizard.

Provision storage for VMware hosts

This section describes provisioning the storage pools used for the file systems. Complete the following steps in Unisphere:

1. Create the number of storage pools required for the environment, based on the sizing information in Chapter 4: Solution Architecture Overview. This example uses the array recommended maximums described in that chapter.

a. Log in to Unisphere.
b. Select Storage > Storage Configuration > Storage Pools.
c. Click the List View tab.
d. Click Create.

Note: The pool does not use system drives for additional storage.

Table 23. Storage allocation table for file data

For the 125-virtual-server configuration, the table lists the number of pools, the number of 10K SAS drives per pool, the number of file systems per pool, the file system size (TB), and the total capacity provided by the 7 TB file systems.

Create file systems

A file system exports an NFS file share. Create a file system before creating the NFS file share. The VNXe requires a storage pool and a NAS Server to create a file system. If no storage pools or NAS Servers exist, follow the steps in Provision storage for VMware hosts and Create a NAS Server to create a storage pool and a network interface. Create two thin file systems from each storage pool for file. Refer to Table 23 for details on the number of file systems.

Complete the following steps to create file systems on the VNXe for NFS file shares:

1. Log in to Unisphere.
2. Select Storage > File Systems.
3. Click Create. The file system creation wizard appears.
4. Select a NAS server.
5. Specify the file system name.
6. Specify the storage pool and size. The size depends on the specific number of virtual machines; refer to Table 23 for more information.
7. Specify the share name of the file system.
8. Configure host access for each host.
9. Select an appropriate snapshot schedule.
10. Review the File System Creation Summary and click Finish to complete the wizard.

FAST VP configuration (optional)

This optional procedure applies to both file and block storage implementations. Complete the following steps to configure FAST VP. Assign two flash drives in the storage pool:

1. Navigate to Storage > Storage Configuration > Storage Pools.
2. Select the pool created when provisioning file or block storage and click Details.
3. Click FAST VP. This tab shows the amount of data already relocated, or awaiting relocation, in each tier. You can either click Start Data Relocation to start relocation manually, or go to FAST VP Settings for further configuration. Figure 35 shows the FAST VP relocation tab.

Figure 35. FAST VP relocation tab

Note: The Tier Status area shows FAST VP relocation information specific to the selected pool.

4. In FAST VP Settings, click General, select Enable Scheduled Relocations to enable scheduled relocations, and select an appropriate Data Relocation Rate, as shown in Figure 36.

Figure 36. Scheduled FAST VP relocation

Use the dialog box to control the Data Relocation Rate. The default rate is Medium, so as not to significantly affect host I/O.

5. Click the Schedule tab, and select appropriate days and times for scheduled relocation. Figure 37 shows an example of a FAST VP relocation schedule.

Figure 37. FAST VP relocation schedule

Note: FAST VP is an automated tool that provides the ability to create a relocation schedule. Schedule relocations during off-hours to minimize any potential performance impact.

FAST Cache configuration (optional)

Optionally, configure FAST Cache on the storage pools for this solution by completing the following steps:

Note: FAST Cache is an optional component of this solution that can provide improved performance, as outlined in Chapter 3.

1. Configure flash drives as FAST Cache:
   a. Click Storage > Storage Configuration > FAST Cache to configure FAST Cache.
   b. Click Create to start the configuration wizard. The wizard shows whether the system is licensed to use the FAST Cache feature and has eligible flash drives.
   c. Click Next. The wizard shows the number of drives and the RAID type.
   d. Click Finish to complete the configuration.

Figure 38 shows the steps to create FAST Cache.
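The wizard's eligibility check can be pictured as follows. The assumption here, that flash drives are consumed in mirrored pairs and that configuration is refused when no full pair is available, is illustrative of why the wizard disables Next with too few drives, not a statement of documented VNXe internals:

```python
# Sketch of the FAST Cache wizard's drive check (assumed pairing rule).
def fast_cache_config(flash_drives):
    """Return (usable_drives, raid_type), or None if not configurable."""
    pairs = flash_drives // 2          # mirrored pairs (assumption)
    if pairs == 0:
        return None                    # too few drives: Next stays disabled
    return (pairs * 2, "RAID 1")

print(fast_cache_config(1))  # -> None
print(fast_cache_config(2))  # -> (2, 'RAID 1')
```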

Figure 38. Create FAST Cache

Note: If a sufficient number of flash drives is not available, the Next button is greyed out.

2. Enable FAST Cache on the storage pool. FAST Cache for a LUN can be configured only at the storage pool level: all of the LUNs created in a storage pool have FAST Cache either enabled or disabled together. Configure FAST Cache for a pool in the Create Storage Pool wizard, as shown in Figure 39. After FAST Cache is installed on the VNXe series, it is enabled by default at storage pool creation.
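The pool-level scope of the setting can be sketched as simple inheritance: the flag lives on the pool, and every LUN in the pool follows it. The class below is an illustration of that behaviour, not a Unisphere API object:

```python
# Illustrative model: FAST Cache is a pool property inherited by all LUNs.
class StoragePool:
    def __init__(self, name, fast_cache=True):  # enabled by default at creation
        self.name = name
        self.fast_cache = fast_cache
        self.luns = []

    def create_lun(self, lun_name):
        self.luns.append(lun_name)

    def lun_uses_fast_cache(self, lun_name):
        # No per-LUN override: the pool setting applies to every LUN.
        return lun_name in self.luns and self.fast_cache

pool = StoragePool("Pool 0")
pool.create_lun("LUN 1")
print(pool.lun_uses_fast_cache("LUN 1"))  # -> True
pool.fast_cache = False                   # toggling the pool affects all LUNs
print(pool.lun_uses_fast_cache("LUN 1"))  # -> False
```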

Figure 39. Advanced tab in the Create Storage Pool dialog box

If a storage pool was created before FAST Cache was installed, use the Settings tab in the Storage Pool Detail dialog box to configure FAST Cache, as shown in Figure 40.

Figure 40. Advanced tab in the Storage Pool Properties dialog box

Note: The VNXe FAST Cache feature does not provide an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours, during which the performance of the array steadily improves.
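The warm-up behaviour described in the note can be made concrete with a toy simulation: a block is promoted into the cache only after it has been accessed repeatedly, so early reads miss and the hit rate climbs over time. The promotion threshold of three accesses below is an assumption for illustration only:

```python
# Toy warm-up model: promote a block to cache after repeated accesses.
PROMOTE_AFTER = 3  # assumed threshold, for illustration

def simulate(accesses):
    """Return the number of cache hits over a sequence of block accesses."""
    counts, cache, hits = {}, set(), 0
    for block in accesses:
        if block in cache:
            hits += 1
        else:
            counts[block] = counts.get(block, 0) + 1
            if counts[block] >= PROMOTE_AFTER:
                cache.add(block)  # promoted to flash
    return hits

# One hot block read 10 times: the first 3 reads miss while the access
# history builds, and the remaining 7 reads hit the cache.
print(simulate(["b1"] * 10))  # -> 7
```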

Install and configure the VMware vSphere hosts

Overview

This section provides the requirements for installing and configuring the ESXi hosts and the infrastructure servers required to support the architecture. Table 24 describes the tasks that must be completed.

Table 24. Tasks for server installation

- Install ESXi: Install the ESXi hypervisor on the physical servers that are deployed for the solution. Reference: vSphere Installation and Setup Guide.
- Configure ESXi networking: Configure ESXi networking, including NIC trunking, VMkernel ports, virtual machine port groups, and jumbo frames. Reference: vSphere Networking.
- Install and configure PowerPath/VE (block storage only): Install and configure PowerPath/VE to manage multipathing for VNXe LUNs. Reference: PowerPath/VE for VMware vSphere Installation and Administration Guide.
- Connect VMware datastores: Connect the VMware datastores to the ESXi hosts deployed for the solution. Reference: vSphere Storage Guide.
- Plan virtual machine memory allocations: Ensure that VMware memory management technologies are configured properly for the environment. Reference: vSphere Installation and Setup Guide.

Install ESXi

When starting the servers being used for ESXi, confirm or enable the hardware-assisted CPU virtualization and hardware-assisted MMU virtualization settings in the BIOS of each server. If the servers have a RAID controller, configure mirroring on the local disks. Boot the ESXi installation media and install the hypervisor on each of the servers. ESXi requires hostnames, IP addresses, and a root password for installation. In addition, install the HBA drivers or configure iSCSI initiators on each ESXi host. For details, refer to the EMC Host Connectivity Guide for VMware ESX Server.

Configure ESXi networking

During the installation of VMware ESXi, a standard virtual switch (vSwitch) is created. By default, ESXi chooses only one physical NIC as a virtual switch uplink.
To maintain redundancy and meet bandwidth requirements, add an additional NIC, either by using the ESXi console or by connecting to the ESXi host from the vSphere Client. Each VMware ESXi server must have multiple network interface cards for each virtual network to ensure redundancy and to provide network load balancing and network adapter failover.
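The value of the second uplink is easiest to see as failover logic: traffic can use any live uplink in the team, and when one fails the survivors carry the load. The sketch below illustrates that behaviour; the vmnic names are assumptions:

```python
# Sketch of NIC-team failover: traffic uses whichever uplinks are live.
def active_paths(uplinks):
    """uplinks: {name: link_up}. Return the uplinks traffic can use."""
    alive = [name for name, up in uplinks.items() if up]
    if not alive:
        raise RuntimeError("all uplinks down: no redundancy left")
    return alive

team = {"vmnic0": True, "vmnic1": True}
print(active_paths(team))   # both uplinks share the load
team["vmnic0"] = False      # cable pull or NIC failure
print(active_paths(team))   # -> ['vmnic1']
```

With a single uplink, the same failure would have raised the error instead of failing over, which is why each virtual network needs multiple adapters.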

VMware ESXi networking configuration, including load balancing and failover options, is described in vSphere Networking. Choose the appropriate load balancing option based on what the network infrastructure supports.

Create VMkernel ports as required, based on the infrastructure configuration:
- VMkernel port for the storage network (iSCSI and NFS protocols)
- VMkernel port for VMware vMotion
- Virtual server port groups (used by the virtual servers to communicate on the network)

vSphere Networking describes the procedure for configuring these settings. Refer to Appendix D for more information.

Jumbo frames (iSCSI and NFS only)

Enable jumbo frames on a NIC if the NIC carries iSCSI or NFS data, and set the MTU to 9,000. Consult your NIC vendor's configuration guide for instructions.

Install and configure PowerPath/VE (block only)

To improve the performance and capabilities of the VNXe storage array, install PowerPath/VE on the VMware vSphere hosts. For detailed installation steps, refer to the PowerPath/VE for VMware vSphere Installation and Administration Guide.

Connect VMware datastores

Connect the datastores configured earlier in this chapter to the appropriate ESXi servers. These include the datastores configured for:
- Virtual server storage
- Infrastructure virtual machine storage (if required)
- SQL Server storage (if required)

The vSphere Storage Guide provides instructions on how to connect the VMware datastores to the ESXi host. Refer to Appendix E for more information.

Plan virtual machine memory allocations

Server capacity in the solution is required for two purposes:
- To support the new virtualized server infrastructure
- To support the required infrastructure services, such as authentication/authorization, DNS, and databases

For information on minimum infrastructure requirements, refer to Table 2.
If existing infrastructure services already meet these requirements, the hardware listed for infrastructure services is not required.

Memory configuration

Proper sizing and configuration of server memory is an important part of the solution. This section provides an overview of memory allocation for the virtual servers and factors in vSphere overhead and the virtual machine configuration.

ESXi memory management

Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources, such as memory, to provide resource isolation across multiple virtual machines and to avoid resource exhaustion. Where advanced processors are deployed, such as Intel processors with EPT support, this abstraction takes place within the CPU. Otherwise, it occurs within the hypervisor itself.

vSphere employs the following memory management techniques:
- Memory over-commitment: allocation of memory resources greater than those physically available to the virtual machines.
- Transparent page sharing: identical memory pages shared across virtual machines are merged, and duplicate pages are returned to the host free memory pool for reuse.
- Memory compression: ESXi stores pages that would otherwise be swapped out to disk through host swapping in a compressed cache located in main memory.
- Memory ballooning: relieves host resource exhaustion by requesting that free pages be released from the virtual machine to the host for reuse.
- Hypervisor swapping: the host forces arbitrary virtual machine pages out to disk.

Additional information can be obtained from the Understanding Memory Resource Management in VMware vSphere 5.0 white paper.

Virtual machine memory concepts

Figure 41 shows the memory settings in the virtual machine.

Figure 41. Virtual machine memory settings

The memory settings are:
- Configured memory: physical memory allocated to the virtual machine at the time of creation.
- Reserved memory: memory that is guaranteed to the virtual machine.
- Touched memory: memory that is active or in use by the virtual machine.
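When planning host memory with no over-commitment, each virtual machine consumes its configured memory plus a small per-VM hypervisor overhead. The sketch below shows that arithmetic; the 0.15 GB overhead figure and the 2 GB-per-VM example are illustrative assumptions, not vSphere-published values:

```python
# Host memory planning sketch: configured memory plus per-VM overhead.
PER_VM_OVERHEAD_GB = 0.15  # assumed hypervisor overhead per VM

def host_memory_needed(vm_mem_gb):
    """Memory (GB) needed, with no over-commitment, for a list of VMs."""
    return sum(mem + PER_VM_OVERHEAD_GB for mem in vm_mem_gb)

# Example: twenty-five 2 GB virtual machines on a single host.
print(round(host_memory_needed([2] * 25), 2))  # -> 53.75
```

Techniques such as transparent page sharing and ballooning let the host run with less physical memory than this figure, at the cost of the reclamation overheads described above.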


More information

EMC Business Continuity for Microsoft SQL Server 2008

EMC Business Continuity for Microsoft SQL Server 2008 EMC Business Continuity for Microsoft SQL Server 2008 Enabled by EMC Celerra Fibre Channel, EMC MirrorView, VMware Site Recovery Manager, and VMware vsphere 4 Reference Architecture Copyright 2009, 2010

More information

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS Server virtualization offers tremendous benefits for enterprise IT organizations server

More information

Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture. Dell Compellent Product Specialist Team

Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture. Dell Compellent Product Specialist Team Dell Compellent Storage Center SAN & VMware View 1,000 Desktop Reference Architecture Dell Compellent Product Specialist Team THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL

More information

EMC AVAMAR INTEGRATION WITH EMC DATA DOMAIN SYSTEMS

EMC AVAMAR INTEGRATION WITH EMC DATA DOMAIN SYSTEMS EMC AVAMAR INTEGRATION WITH EMC DATA DOMAIN SYSTEMS A Detailed Review ABSTRACT This white paper highlights integration features implemented in EMC Avamar with EMC Data Domain deduplication storage systems

More information

Evaluation of Enterprise Data Protection using SEP Software

Evaluation of Enterprise Data Protection using SEP Software Test Validation Test Validation - SEP sesam Enterprise Backup Software Evaluation of Enterprise Data Protection using SEP Software Author:... Enabling you to make the best technology decisions Backup &

More information

VMware vsphere-6.0 Administration Training

VMware vsphere-6.0 Administration Training VMware vsphere-6.0 Administration Training Course Course Duration : 20 Days Class Duration : 3 hours per day (Including LAB Practical) Classroom Fee = 20,000 INR Online / Fast-Track Fee = 25,000 INR Fast

More information

Whitepaper. NexentaConnect for VMware Virtual SAN. Full Featured File services for Virtual SAN

Whitepaper. NexentaConnect for VMware Virtual SAN. Full Featured File services for Virtual SAN Whitepaper NexentaConnect for VMware Virtual SAN Full Featured File services for Virtual SAN Table of Contents Introduction... 1 Next Generation Storage and Compute... 1 VMware Virtual SAN... 2 Highlights

More information

EMC PERFORMANCE OPTIMIZATION FOR MICROSOFT FAST SEARCH SERVER 2010 FOR SHAREPOINT

EMC PERFORMANCE OPTIMIZATION FOR MICROSOFT FAST SEARCH SERVER 2010 FOR SHAREPOINT Reference Architecture EMC PERFORMANCE OPTIMIZATION FOR MICROSOFT FAST SEARCH SERVER 2010 FOR SHAREPOINT Optimize scalability and performance of FAST Search Server 2010 for SharePoint Validate virtualization

More information

EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, Symmetrix Management Console, and VMware vcenter Converter

EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, Symmetrix Management Console, and VMware vcenter Converter EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, VMware vcenter Converter A Detailed Review EMC Information Infrastructure Solutions Abstract This white paper

More information

IMPROVING VMWARE DISASTER RECOVERY WITH EMC RECOVERPOINT Applied Technology

IMPROVING VMWARE DISASTER RECOVERY WITH EMC RECOVERPOINT Applied Technology White Paper IMPROVING VMWARE DISASTER RECOVERY WITH EMC RECOVERPOINT Applied Technology Abstract EMC RecoverPoint provides full support for data replication and disaster recovery for VMware ESX Server

More information

VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS

VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS Successfully configure all solution components Use VMS at the required bandwidth for NAS storage Meet the bandwidth demands of a 2,200

More information

VMware vsphere 5.1 Advanced Administration

VMware vsphere 5.1 Advanced Administration Course ID VMW200 VMware vsphere 5.1 Advanced Administration Course Description This powerful 5-day 10hr/day class is an intensive introduction to VMware vsphere 5.0 including VMware ESX 5.0 and vcenter.

More information

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest

More information

Deep Dive on SimpliVity s OmniStack A Technical Whitepaper

Deep Dive on SimpliVity s OmniStack A Technical Whitepaper Deep Dive on SimpliVity s OmniStack A Technical Whitepaper By Hans De Leenheer and Stephen Foskett August 2013 1 Introduction This paper is an in-depth look at OmniStack, the technology that powers SimpliVity

More information

New Generation of IT self service vcloud Automation Center

New Generation of IT self service vcloud Automation Center New Generation of IT self service vcloud Automation Center Maciej Kot, Senior SE Warszawa 2014 VMware Delivers: The Foundation for the Software-Defined Enterprise End User Computing Desktop Virtual Workspace

More information

EMC Virtual Infrastructure for Microsoft SQL Server

EMC Virtual Infrastructure for Microsoft SQL Server Microsoft SQL Server Enabled by EMC Celerra and Microsoft Hyper-V Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication is accurate

More information

EMC Integrated Infrastructure for VMware

EMC Integrated Infrastructure for VMware EMC Integrated Infrastructure for VMware Enabled by Celerra Reference Architecture EMC Global Solutions Centers EMC Corporation Corporate Headquarters Hopkinton MA 01748-9103 1.508.435.1000 www.emc.com

More information

Private cloud computing advances

Private cloud computing advances Building robust private cloud services infrastructures By Brian Gautreau and Gong Wang Private clouds optimize utilization and management of IT resources to heighten availability. Microsoft Private Cloud

More information

VMware vsphere 5.0 Boot Camp

VMware vsphere 5.0 Boot Camp VMware vsphere 5.0 Boot Camp This powerful 5-day 10hr/day class is an intensive introduction to VMware vsphere 5.0 including VMware ESX 5.0 and vcenter. Assuming no prior virtualization experience, this

More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Enabled by EMC VNX and EMC Data Protection EMC VSPEX Abstract This describes how to design an EMC VSPEX End-User-Computing solution for Citrix XenDesktop 7.5.

More information

VNX HYBRID FLASH BEST PRACTICES FOR PERFORMANCE

VNX HYBRID FLASH BEST PRACTICES FOR PERFORMANCE 1 VNX HYBRID FLASH BEST PRACTICES FOR PERFORMANCE JEFF MAYNARD, CORPORATE SYSTEMS ENGINEER 2 ROADMAP INFORMATION DISCLAIMER EMC makes no representation and undertakes no obligations with regard to product

More information

VMware Software-Defined Storage & Virtual SAN 5.5.1

VMware Software-Defined Storage & Virtual SAN 5.5.1 VMware Software-Defined Storage & Virtual SAN 5.5.1 Peter Keilty Sr. Systems Engineer Software Defined Storage pkeilty@vmware.com @keiltypeter Grant Challenger Area Sales Manager East Software Defined

More information

EMC PERSPECTIVE: THE POWER OF WINDOWS SERVER 2012 AND EMC INFRASTRUCTURE FOR MICROSOFT PRIVATE CLOUD ENVIRONMENTS

EMC PERSPECTIVE: THE POWER OF WINDOWS SERVER 2012 AND EMC INFRASTRUCTURE FOR MICROSOFT PRIVATE CLOUD ENVIRONMENTS EMC PERSPECTIVE: THE POWER OF WINDOWS SERVER 2012 AND EMC INFRASTRUCTURE FOR MICROSOFT PRIVATE CLOUD ENVIRONMENTS EXECUTIVE SUMMARY It s no secret that organizations continue to produce overwhelming amounts

More information

EMC XtremSF: Delivering Next Generation Performance for Oracle Database

EMC XtremSF: Delivering Next Generation Performance for Oracle Database White Paper EMC XtremSF: Delivering Next Generation Performance for Oracle Database Abstract This white paper addresses the challenges currently facing business executives to store and process the growing

More information

OPTIMIZING SERVER VIRTUALIZATION

OPTIMIZING SERVER VIRTUALIZATION OPTIMIZING SERVER VIRTUALIZATION HP MULTI-PORT SERVER ADAPTERS BASED ON INTEL ETHERNET TECHNOLOGY As enterprise-class server infrastructures adopt virtualization to improve total cost of ownership (TCO)

More information

EMC XTREMIO EXECUTIVE OVERVIEW

EMC XTREMIO EXECUTIVE OVERVIEW EMC XTREMIO EXECUTIVE OVERVIEW COMPANY BACKGROUND XtremIO develops enterprise data storage systems based completely on random access media such as flash solid-state drives (SSDs). By leveraging the underlying

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information

WHITE PAPER 1 WWW.FUSIONIO.COM

WHITE PAPER 1 WWW.FUSIONIO.COM 1 WWW.FUSIONIO.COM WHITE PAPER WHITE PAPER Executive Summary Fusion iovdi is the first desktop- aware solution to virtual desktop infrastructure. Its software- defined approach uniquely combines the economics

More information