
EMC VSPEX PRIVATE CLOUD
Microsoft Windows Server 2012 R2 with Hyper-V for up to 1,000 Virtual Machines
Enabled by EMC VNX Series and EMC Powered Backup
EMC VSPEX

Abstract
This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with Microsoft Hyper-V, EMC VNX Series, and EMC Powered Backup for up to 1,000 virtual machines.

April 2014

Copyright 2014 EMC Corporation. All rights reserved. Published in the USA. Published April 2014. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC Online Support website.
EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 1,000 Virtual Machines, Enabled by EMC VNX Series and EMC Powered Backup. Part Number H

3 Contents Chapter 1 Executive Summary 15 Introduction Target audience Document purpose Business needs Chapter 2 Solution Overview 19 Introduction Virtualization Compute Network Storage EMC VNX Series EMC backup and recovery Chapter 3 Solution Technology Overview 31 Overview Summary of key components Virtualization Overview Microsoft Hyper-V Virtual Fibre Channel ports Microsoft System Center Virtual Machine Manager High availability with Hyper-V Failover Clustering Hyper-V Replica Hyper-V snapshot Cluster-Aware Updating EMC Storage Integrator Compute Network Overview EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 3

4 Contents Storage Overview EMC VNX series EMC VNX Snapshots EMC VNX SnapSure EMC VNX Virtual Provisioning Windows Offloaded Data Transfer EMC PowerPath EMC FAST Cache VNX file shares ROBO SMB 3.0 features Overview SMB versions and negotiations VNX and VNXe storage support SMB 3.0 VHD/VHDX storage support SMB 3.0 Continuous Availability SMB Multichannel SMB 3.0 Copy Offload SMB 3.0 BranchCache SMB 3.0 Remote VSS SMB 3.0 encryption SMB 3.0 PowerShell cmdlets SMB 3.0 Directory Leasing Summary of feature defaults Backup and recovery Overview EMC Avamar deduplication EMC Data Domain deduplication storage systems VMware vsphere data protection Continuous availability EMC RecoverPoint EMC VNX Replicator Other technologies EMC XtremCache Chapter 4 Solution Architecture Overview 71 Overview Solution architecture EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

5 Contents Overview Logical architecture Key components Hardware resources Software resources Server configuration guidelines Overview Ivy Bridge Updates Hyper-V memory virtualization Memory configuration guidelines Network configuration guidelines Overview VLAN Enable jumbo frames (iscsi, FCoE, or SMB only) Link aggregation (SMB only) Storage configuration guidelines Overview Hyper-V storage virtualization for VSPEX VSPEX storage building blocks VSPEX private cloud validated maximums High-availability and failover Overview Virtualization layer Compute layer Network layer Storage layer Validation test profile Profile characteristics Backup and recovery configuration guidelines Sizing guidelines Reference workload Overview Defining the reference workload Applying the reference workload Overview Example 1: Custom-built application Example 2: Point-of-Sale system Example 3: Web server Example 4: Decision-support database EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 5

6 Contents Summary of examples Implementing the solution Overview Resource types CPU resources Memory resources Network resources Storage resources Implementation summary Quick assessment of customer environment Overview CPU requirements Memory requirements Storage performance requirements IOPS I/O size I/O latency Storage capacity requirements Determining equivalent reference virtual machines Fine-tuning hardware resources EMC VSPEX Sizing Tool Chapter 5 VSPEX Configuration Guidelines 127 Overview Pre-deployment tasks Overview Deployment prerequisites Customer configuration data Prepare switches, connect network, and configure switches Overview Prepare network switches Configure infrastructure network Configure VLANs Configure jumbo frames (iscsi or SMB only) Complete network cabling Prepare and configure storage array VNX configuration for block protocols VNX configuration for file protocols FAST VP configuration EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

7 Contents FAST Cache configuration Install and configure Hyper-V hosts Overview Install Windows hosts Install Hyper-V and configure failover clustering Configure Windows host networking Install PowerPath on Windows servers Plan virtual machine memory allocations Install and configure SQL Server database Overview Create a virtual machine for Microsoft SQL Server Install Microsoft Windows on the virtual machine Install SQL Server Configure a SQL Server for SCVMM System Center Virtual Machine Manager server deployment Overview Create a SCVMM host virtual machine Install the SCVMM guest OS Install the SCVMM server Install the SCVMM Management Console Install the SCVMM agent locally on a host Add a Hyper-V cluster into SCVMM Add file share storage to SCVMM (file variant only) Create a virtual machine in SCVMM Perform partition alignment, and assign File Allocation Unite Size Create a template virtual machine Deploy virtual machines from the template virtual machine Summary Chapter 6 Verifying the Solution 159 Overview Post-install checklist Deploy and test a single virtual server Verify the redundancy of the solution components Block environments File environments Chapter 7 System Monitoring 163 Overview Key areas to monitor EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 7

8 Contents Performance baseline Servers Networking Storage VNX resources monitoring guidelines Monitoring block storage resources Monitoring file storage resources Summary Chapter 8 Validation with Microsoft Fast Track v3 181 Overview Business case for validation Process requirements Step 1: Core prerequisites Step 2: Select the VSPEX Proven Infrastructure platform Step 3: Define additional Microsoft Hyper-V Fast Track Program components Step 4: Build a detailed bill of materials Step 5: Test the environment Step 6: Document and publish the solution Additional resources Appendix A Bill of Materials 187 Bill of materials Appendix B Customer Configuration Data Sheet 197 Customer configuration data sheet Appendix C Server Resources Component Worksheet 201 Server resources component worksheet Appendix D References 203 References EMC documentation Other documentation Appendix E About VSPEX 207 About VSPEX EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

9 Figures Figure 1. Next-Generation VNX with multicore optimization Figure 2. Active/active processors increase performance, resiliency, and efficiency Figure 3. New Unisphere Management Suite Figure 4. Storage Processor utilization using Windows deduplication Figure 5. Disk IOPS using Windows deduplication Figure 6. Disk latency using Windows deduplication Figure 7. Deduplication efficiency using VNX deduplication Figure 8. Deduplication efficiency using Windows Server 2012 R2 deduplication28 Figure 9. EMC backup and recovery solutions Figure 10. VSPEX private cloud components Figure 11. Compute layer flexibility Figure 12. Example of highly available network design for block Figure 13. Example of highly available network design for file Figure 14. Storage pool rebalance progress Figure 15. Thin LUN space utilization Figure 16. Examining storage pool space utilization Figure 17. Defining storage pool utilization thresholds Figure 18. Defining automated notifications - for block Figure 19. SMB 3.0 baseline performance comparison point Figure 20. SMB 3.0 Continuous Availability Figure 21. CA application performance Figure 22. SMB Multichannel fault tolerance Figure 23. Multichannel network throughput Figure 24. Copy Offload Figure 25. Enabling the Encrypt Data parameter Figure 26. Enabling encryption: Client CPU utilization Figure 27. Enabling encryption: Data Mover CPU utilization Figure 28. PowerShell execution of Show Shares Figure 29. PowerShell execution of Get-SmbServerConfiguration EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 9

10 Figures Figure 30. SMB 3.0 Directory Leasing Figure 31. Logical architecture for block storage Figure 32. Logical architecture for file storage Figure 33. Ivy Bridge processor guidance Figure 34. Hypervisor memory consumption Figure 35. Required networks for block storage Figure 36. Required networks for file storage Figure 37. Hyper-V virtual disk types Figure 38. Building block for 13 virtual servers Figure 39. Building block for 125 virtual servers Figure 40. Storage layout for 200 virtual machines using VNX Figure 41. Storage layout for 300 virtual machines using VNX Figure 42. Storage layout for 600 virtual machines using VNX Figure 43. Storage layout for 1,000 virtual machines using VNX Figure 44. Maximum scale levels and entry points of different arrays Figure 45. High availability at the virtualization layer Figure 46. Redundant power supplies Figure 47. Network layer high availability (VNX) block variant Figure 48. Network layer high availability (VNX) file variant Figure 49. VNX series HA components Figure 50. Resource pool flexibility Figure 51. Required resource from the reference virtual machine pool Figure 52. Aggregate resource requirements stage Figure 53. Pool configuration stage Figure 54. Aggregate resource requirements - stage Figure 55. Pool configuration stage Figure 56. Aggregate resource requirements for stage Figure 57. Pool configuration stage Figure 58. Customizing server resources Figure 59. Sample Ethernet network architecture - block variant Figure 60. Sample Ethernet network architecture - file variant Figure 61. Network Settings for File dialog box Figure 62. The Create Interface dialog box Figure 63. The Create CIFS Server dialog box Figure 64. The Create File System dialog box Figure 65. The File System Properties dialog box Figure 66. The Create File Share dialog box Figure 67. The Storage Pool Properties dialog box Figure 68. Manage Auto-Tiering dialog box EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

11 Figures Figure 69. The Storage System Properties dialog box Figure 70. The Create FAST Cache dialog box Figure 71. Advanced tab in the Create Storage Pool dialog Figure 72. Advanced tab in the Storage Pool Properties dialog Figure 73. Storage Pool Alerts area Figure 74. Storage Pools panel Figure 75. LUN Properties dialog box Figure 76. Monitoring and Alerts panel Figure 77. IOPS on the LUNs Figure 78. IOPS on the disks Figure 79. Latency on the LUNs Figure 80. SP utilization Figure 81. Data Mover statistics Figure 82. Front-end Data Mover network statistics Figure 83. Storage Pools for File panel Figure 84. File Systems panel Figure 85. File System Properties window Figure 86. File System I/O Statistics window Figure 87. CIFS Statistics window EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 11

12 Tables Tables Table 1. VNX customer benefits Table 2. Thresholds and settings under VNX OE Block Release Table 3. SMB dialect used between client and server Table 4. Storage migration improvement with Copy Offload Table 5. Microsoft PowerShell cmdlets Table 6. EMC-provided PowerShell cmdlets Table 7. Default status of SMB 3.0 features Table 8. Solution hardware Table 9. Solution software Table 10. Hardware resources for compute layer Table 11. Hardware resources for network Table 12. Hardware resources for storage Table 13. Number of disks required for different number of virtual machines Table 14. Profile characteristics Table 15. Virtual machine characteristics Table 16. Blank worksheet row Table 17. Reference virtual machine resources Table 18. Example worksheet row Table 19. Example applications stage Table 20. Example applications - stage Table 21. Example applications - stage Table 22. Server resource component totals Table 23. Deployment process overview Table 24. Tasks for pre-deployment Table 25. Deployment prerequisites checklist Table 26. Tasks for switch and network configuration Table 27. Tasks for VNX configuration for block protocols Table 28. Storage allocation table for block Table 29. Tasks for storage configuration for file protocols Table 30. Storage allocation table for file EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

13 Tables Table 31. Tasks for server installation Table 32. Tasks for SQL Server database setup Table 33. Tasks for SCVMM configuration Table 34. Hyper-V Fast Track component classification Table 35. Table 36. Table 37. Table 38. List of components used in the VSPEX solution for 200 virtual machines List of components used in the VSPEX solution for 300 virtual machines List of components used in the VSPEX solution for 600 virtual machines List of components used in the VSPEX solution for 1,000 virtual machines Table 39. Common server information Table 40. Hyper-V server information Table 41. Array information Table 42. Network infrastructure information Table 43. VLAN information Table 44. Service accounts Table 45. Blank worksheet for determining server resources EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 13


15 Chapter 1 Executive Summary This chapter presents the following topics: Introduction Target audience Document purpose Business needs EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 15

Introduction
Validated EMC VSPEX modular architectures are built with proven superior technologies to create complete virtualization solutions. These solutions enable you to make an informed decision in the hypervisor, compute, backup, storage, and networking layers. VSPEX helps to reduce virtualization planning and configuration burdens. When embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, expanded choices, greater efficiency, and lower risk. This document is a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware that meets or exceeds the stated minimums.

Target audience
The readers of this document should have the necessary training and background to install and configure a VSPEX computing solution based on Microsoft Hyper-V as a hypervisor, EMC VNX series storage systems, and associated infrastructure as required by this implementation. External references are provided where applicable, and the readers should be familiar with these documents. Readers should also be familiar with the infrastructure and database security policies of the customer's existing installation. Individuals focusing on selling and sizing a VSPEX end-user computing solution for a Microsoft Hyper-V private cloud infrastructure must pay particular attention to the first four chapters of this document. After purchase, implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose
This proven infrastructure guide includes an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the system. The VSPEX private cloud architecture provides the customer with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on the Microsoft Hyper-V virtualization layer backed by the highly available VNX family of storage. The compute and network components, which are defined by the VSPEX partners, are laid out to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment. The environments for 200, 300, 600, and 1,000 virtual machines are based on a defined reference workload. Since not every virtual machine has the same requirements, this document contains methods and guidance to adjust your system to be cost-effective when deployed. For smaller environments, solutions for up to 100 virtual machines based on the EMC VNXe series are described in EMC VSPEX Private Cloud: Microsoft Windows Server 2012 with Hyper-V for up to 125 Virtual Machines.
A private cloud architecture is a complex system offering. This document facilitates its setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component has been installed, validation tests and monitoring instructions ensure that your customer's system is running correctly. Following the instructions in this document ensures an efficient and expedited journey to the cloud.

Business needs
Business applications are moving into consolidated compute, network, and storage environments. EMC VSPEX private cloud solutions using Microsoft Hyper-V reduce the complexity of configuring every component of a traditional deployment model. The complexity of integration management is reduced while maintaining the application design flexibility and implementation options. Administration is unified, while process separation can be adequately controlled and monitored. The business needs for the VSPEX private cloud solutions for Microsoft Hyper-V architectures are:
Provide an end-to-end virtualization solution to effectively utilize the capabilities of the unified infrastructure components.
Provide a VSPEX private cloud solution for Microsoft Hyper-V to efficiently virtualize up to 1,000 virtual machines for varied customer use cases.
Provide a reliable, flexible, and scalable reference design.


19 Chapter 2 Solution Overview This chapter presents the following topics: Introduction Virtualization Compute Network Storage EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 19

Introduction
The EMC VSPEX private cloud for Microsoft Hyper-V provides a complete system architecture capable of supporting up to 1,000 virtual machines with a redundant server and network topology and highly available storage. The core components that make up this particular solution are virtualization, compute, backup, storage, and networking.

Virtualization
Microsoft Hyper-V is a key virtualization platform in the industry. For years, Hyper-V has provided flexibility and cost savings to end users by consolidating large, inefficient server farms into nimble, reliable cloud infrastructures. Features such as Live Migration, which enables a virtual machine to move between different servers with no disruption to the guest operating system, and Dynamic Optimization, which performs Live Migrations automatically to balance loads, make Hyper-V a solid business choice. With the release of Windows Server 2012 R2, a Microsoft virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual random access memory (RAM).

Compute
VSPEX provides the flexibility to design and implement the customer's choice of server components. The infrastructure must conform to the following attributes:
Sufficient cores and memory to support the required number and types of virtual machines
Sufficient network connections to enable redundant connectivity to the system switches
Excess capacity to withstand a server failure and failover within the environment

Network
VSPEX provides the flexibility to design and implement the customer's choice of network components. The infrastructure must conform to the following attributes:
Redundant network links for the hosts, switches, and storage
Traffic isolation based on industry-accepted best practices
Support for link aggregation
IP network switches used to implement this reference architecture must have a minimum non-blocking backplane capacity that is sufficient for the target number of virtual machines and their associated workloads. Enterprise-class switches with advanced features such as Quality of Service are highly recommended.

Storage
The VNX storage series provides both file and block access with a broad feature set, which makes it an ideal choice for any private cloud implementation. VNX storage includes the following components, sized for the stated reference architecture workload:
Host adapter ports (for block): Provide host connectivity through the fabric to the array
Storage processors: The compute components of the storage array, which are used for all aspects of data moving into, out of, and between arrays
Disk drives: Disk spindles and solid state drives (SSDs) that contain the host or application data, and their enclosures
Data Movers (for file): Front-end appliances that provide file services to hosts (optional if CIFS services are provided)
Note: The term Data Mover refers to a VNX hardware component, which has a CPU, memory, and I/O ports. It enables the Common Internet File System (CIFS/SMB) and Network File System (NFS) protocols on the VNX.
The Microsoft Hyper-V private cloud solutions for 200, 300, 600, and 1,000 virtual machines described in this document are based on the EMC VNX5200, VNX5400, VNX5600, and VNX5800 storage arrays, respectively. The VNX5200 array can support a maximum of 125 drives, the VNX5400 array can support a maximum of 250 drives, the VNX5600 can host up to 500 drives, and the VNX5800 can host up to 750 drives.
The VNX series supports a wide range of business-class features that are ideal for the private cloud environment, including:
EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP)
EMC FAST Cache
File-level data deduplication and compression
Block deduplication
Thin provisioning
Replication
Snapshots or checkpoints
File-level retention
Quota management
Block compression

22 Solution Overview EMC VNX Series Features and Enhancements The EMC VNX flash-optimized unified storage platform delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today s virtualized application environments. VNX includes many features and enhancements designed and built upon the first generation s success. These features and enhancements include: More capacity with multicore optimization through the use of Multicore Cache, Multicore RAID, and Multicore FAST Cache (MCx) Greater efficiency with a flash-optimized hybrid array Better protection by increasing application availability with active/active storage processors Easier administration and deployment by increasing productivity with a new Unisphere Management Suite VSPEX is built with the next generation of VNX to deliver even greater efficiency, performance, and scale than ever before. Flash-optimized hybrid array VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your critical data, while intelligently moving less frequently accessed data to lower-cost disks. In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. A flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high performance scalability. The EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and promotes the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance. Data is typically used most frequently at the time it is created; therefore new data is first stored on flash drives for the best performance. As that data ages and becomes less active over time, FAST VP moves the data from high-performance to high-capacity drives automatically, based on customer-defined policies. EMC has enhanced this functionality with four times better granularity and with new FAST VP solid-state disks (SSDs) based on enterprise multi-level cell (emlc) technology to lower the cost per gigabyte. FAST Cache assists performance by dynamically absorbing unpredicted spikes in system workloads. All VSPEX use cases benefit from the increased efficiency. VSPEX Proven Infrastructures deliver private cloud, end-user computing, and virtualized application solutions. With VNX, customers can realize an even greater return on their investment. VNX also provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier. 22 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

23 Solution Overview VNX Intel MCx Code Path Optimization The advent of flash technology has been a catalyst in totally changing the requirements of midrange storage systems. EMC redesigned the midrange storage platform to efficiently optimize multicore CPUs to provide the highest performing storage system at the lowest cost in the market. MCx distributes all VNX data services across all cores up to 32, as shown in Figure 1. The VNX series with MCx has dramatically improved the file performance for transactional applications like databases or virtual machines over network-attached storage (NAS). Figure 1. Next-Generation VNX with multicore optimization Multicore Cache The cache is the most valuable asset in the storage subsystem; its efficient use is key to the overall efficiency of the platform in handling variable and changing workloads. The cache engine has been modularized to take advantage of all the cores available in the system. Multicore RAID Another important part of the MCx redesign is the handling of I/O to the permanent back-end storage hard disk drives (HDDs) and SSDs. Greatly increased performance improvements in VNX come from the modularization of the back-end data management processing, which enables MCx to seamlessly scale across all processors. VNX performance Performance enhancements VNX storage, enabled with the MCx architecture, is optimized for FLASH 1 st and provides unprecedented overall performance, optimizing for transaction performance (cost per IOPS), bandwidth performance (cost per GB/s) with low latency, and providing optimal capacity efficiency (cost per GB). EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 23

VNX provides the following performance improvements:
Up to four times more file transactions when compared with dual-controller arrays
Increased file performance for transactional applications by up to three times, with a 60 percent better response time
Up to four times more Oracle and Microsoft SQL Server OLTP transactions
Up to six times more virtual machines

Active/active array storage processors
The new VNX architecture provides active/active array storage processors, as shown in Figure 2, which eliminate application timeouts during path failover because both paths actively serve I/O.
Figure 2. Active/active processors increase performance, resiliency, and efficiency
Load balancing is also improved, and applications can achieve up to a two times improvement in performance. Active/active for block is ideal for applications that require the highest levels of availability and performance, but do not require tiering or efficiency services such as compression or deduplication. With this VNX release, VSPEX customers can use virtual Data Movers (VDMs) and VNX Replicator to perform automated and high-speed file system migrations between systems. This process migrates all snaps and settings automatically, and enables the clients to continue operation during the migration.
Note: The active/active processors are only available for RAID logical unit numbers (LUNs), not for pool LUNs.

Unisphere Management Suite
The new Unisphere Management Suite extends Unisphere's easy-to-use interface to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in Figure 3, the suite also includes Unisphere Remote for centrally managing up to thousands of VNX and VNXe systems, with new support for XtremCache products.
Figure 3. New Unisphere Management Suite

Virtualization Management
EMC Storage Integrator
EMC Storage Integrator (ESI) is targeted towards the Windows and application administrator. ESI is easy to use, delivers end-to-end monitoring, and is hypervisor agnostic. Administrators can provision in both virtual and physical environments for a Windows platform, and troubleshoot by viewing the topology of an application from the underlying hypervisor to the storage.

Microsoft Hyper-V
With Windows Server 2012, Microsoft provides Hyper-V 3.0, an enhanced hypervisor for the private cloud that can run on NAS protocols for simplified connectivity.

Offloaded Data Transfer
The Offloaded Data Transfer (ODX) feature of Microsoft Hyper-V enables data transfers during copy operations to be offloaded to the storage array, freeing up host cycles. For example, using ODX for a live migration of a SQL Server virtual machine doubled performance, decreased migration time by 50 percent, reduced CPU on the Hyper-V server by 20 percent, and eliminated network traffic.

Block deduplication
Native block deduplication was introduced in Windows Server 2012, and the R2 release contained minor improvements to the offering. It is important to understand the impact of using OS-based deduplication on overall VSPEX performance, and this becomes critical if array-based deduplication is enabled. Lab testing produced the following guidance:
If deduplication is enabled, either within the array or within the OS, FAST Cache significantly reduces the overhead impact and minimizes the impact on latency; it is considered a best practice to enable FAST Cache if deduplication is enabled within a VSPEX environment.

26 Solution Overview VNX array based deduplication provided significantly better deduplication results (~2x improvement in space savings) and proved beneficial to a wider range of workloads than OS-based deduplication. Do not enable OS-based and VNX array-based deduplication on the same LUNs Ensure that the allocation unit size matches the I/O size of the workload. Failure to do so may result in non-optimal deduplication savings. Windows deduplication will not start if the LUN contains less than 64 GB of data. Windows deduplication consumes both host and storage array resources and requires monitoring to ensure other storage services on the array are not adversely affected. The following three figures show SP resources consumption values, IOPS, and latency when implementing Windows deduplication. Figure 4. Storage processor utilization using Windows deduplication 26 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

27 Solution Overview Figure 5. Disk IOPS using Windows deduplication Figure 6. Disk latency using Windows deduplication EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 27

Figure 7. Deduplication efficiency using VNX deduplication
Figure 8. Deduplication efficiency using Windows Server 2012 R2 deduplication

EMC backup and recovery
EMC backup and recovery solutions, EMC Avamar and EMC Data Domain, deliver the protection confidence needed to accelerate the deployment of VSPEX private clouds. Optimized for virtual environments, EMC backup and recovery reduces backup times by 90 percent and increases recovery speeds by 30 times, even offering instant access to virtual machines for worry-free protection. EMC backup appliances add another layer of assurance with end-to-end verification and self-healing to ensure successful recoveries. Our solutions also deliver big savings. With industry-leading deduplication, you can reduce backup storage by 10 to 30 times, backup management time by 81 percent, and WAN bandwidth by 99 percent for efficient disaster recovery, delivering a seven-month payback period on average. You will be able to scale storage easily and efficiently as your environment grows.

Figure 9. EMC backup and recovery solutions
The EMC backup and recovery solutions used in this VSPEX solution include EMC Avamar deduplication software and systems and EMC Data Domain deduplication storage systems.


Chapter 3 Solution Technology Overview This chapter presents the following topics: Overview Summary of key components Virtualization Compute Network Storage SMB 3.0 features Backup and recovery Continuous availability Other technologies

32 Solution Technology Overview Overview This solution uses the VNX array and Microsoft Hyper-V to provide storage and server hardware consolidation in a VSPEX private cloud. The new virtualized infrastructure is centrally managed, to provide efficient deployment and management of a scalable number of virtual machines and associated shared storage. Figure 10 depicts the solution components. Figure 10. VSPEX private cloud components The following sections describe the components in more detail. 32 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

33 Solution Technology Overview Summary of key components This section briefly describes the key components of this solution. Virtualization The virtualization layer decouples the physical implementation of resources from the applications that use them. The application s view of the available resources is no longer directly tied to the hardware. This enables many key features in the private cloud concept. Compute The compute layer provides memory and processing resources for the virtualization layer software, and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and enables the customer to implement the solution by using any server hardware that meets these requirements. Network The network layer connects the users of the private cloud to the resources in the cloud, and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables the customer to implement the solution by using any network hardware that meets these requirements. Storage The storage layer is critical for the implementation of the private cloud. With multiple hosts accessing shared data, many of the use cases defined in the private cloud can be implemented. The VNX used in this solution provides high-performance data storage while maintaining high availability. Backup and recovery The backup and recovery components of the solution provide data protection when the data in the primary system is deleted, damaged, or unusable. Solution architecture provides details on all the components that make up the reference architecture. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 33

Virtualization
Overview
The virtualization layer is a key component of any server virtualization or private cloud solution. It decouples the application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the system to physically change without affecting the hosted applications. In a server virtualization or private cloud use case, it enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.

Microsoft Hyper-V
Microsoft Hyper-V is a Windows Server role that was introduced in Windows Server 2008. Hyper-V virtualizes computer hardware resources, such as CPU, memory, storage, and networking. This transformation creates fully functional virtual machines that run their own operating systems and applications like physical computers. Hyper-V works with Failover Clustering and Cluster Shared Volumes (CSVs) to provide high availability in a virtualized infrastructure. Live migration and live storage migration enable seamless movement of virtual machines or virtual machine files between Hyper-V servers or storage systems, transparently and with minimal performance impact.

Virtual Fibre Channel ports
Windows Server 2012 provides virtual Fibre Channel (FC) ports within a Hyper-V guest operating system. The virtual FC port uses the standard N-port ID virtualization (NPIV) process to address the virtual machine WWNs within the Hyper-V host's physical host bus adapter (HBA). This provides virtual machines with direct access to external storage arrays over FC, enables clustering of guest operating systems over FC, and offers an important new storage option for the hosted servers in the virtual infrastructure. Virtual FC in Hyper-V guest operating systems also supports related features, such as virtual SANs, live migration, and multipath I/O (MPIO). Prerequisites for virtual FC include:
One or more installations of Windows Server 2012 with the Hyper-V role
One or more FC HBAs installed on the server, each with an appropriate HBA driver that supports virtual FC
An NPIV-enabled SAN
Virtual machines using the virtual FC adapter must use Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 as the guest operating system.

Microsoft System Center Virtual Machine Manager
Microsoft System Center Virtual Machine Manager (SCVMM) is a centralized management platform for the virtualized data center. SCVMM allows administrators to configure and manage the virtualized host, networking, and storage resources, and to create and deploy virtual machines and services to private clouds. SCVMM simplifies provisioning, management, and monitoring in the Hyper-V environment.
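To make the Virtual Fibre Channel description above more concrete, the following is a minimal sketch of how a virtual SAN and a virtual FC adapter might be configured with the Hyper-V PowerShell module on a Windows Server 2012 R2 host. The SAN name, virtual machine name, and WWN values are placeholders for illustration only, not values prescribed by this solution.

# Create a virtual SAN bound to a physical FC HBA port on the Hyper-V host
# (the WWNN/WWPN below identify that physical port and are illustrative)
New-VMSan -Name "Production_SAN_A" -WorldWideNodeName "C003FF0000FFFF00" -WorldWidePortName "C003FF5778E50002"
# Attach a virtual FC adapter for the guest to that virtual SAN
Add-VMFibreChannelHba -VMName "SQLVM01" -SanName "Production_SAN_A"
# Confirm the adapter and the virtual port WWNs generated for NPIV zoning
Get-VMFibreChannelHba -VMName "SQLVM01"

The virtual port WWNs reported by the last command are the addresses to zone and register on the NPIV-enabled SAN so that the guest can reach the VNX over FC.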

35 Solution Technology Overview High availability with Hyper-V Failover Clustering The Windows Server 2012 Failover Clustering feature provides high-availability in Hyper-V. High availability is impacted by both planned and unplanned downtime, and Failover Clustering significantly increases the availability of virtual machines during planned and unplanned downtimes. Configure Windows Server 2012 Failover Clustering on the Hyper-V host to monitor virtual machine health, and migrate virtual machines between cluster nodes. The advantages of this configuration are: Enables migration of virtual machines to a different cluster node if the cluster node where they reside must be updated, changed, or rebooted. Allows other members of the Windows Failover Cluster to take ownership of the virtual machines if the cluster node where they reside suffers a failure or significant degradation. Minimizes downtime due to virtual machine failures. Windows Server Failover Cluster detects virtual machine failures and automatically takes steps to recover the failed virtual machine. This allows the virtual machine to be restarted on the same host server, or migrated to a different host server. Hyper-V Replica Hyper-V Replica was introduced in Windows Server 2012 to provide asynchronous virtual machine replication over the network from one Hyper-V host at a primary site to another Hyper-V host at a replica site. Hyper-V replicas protect business applications in the Hyper-V environment from downtime associated with an outage at a single site. Hyper-V Replica tracks the write operations on the primary virtual machine and replicates the changes to the replica server over the network with HTTP and HTTPS. The amount of network bandwidth required is based on the transfer schedule and data change rate. If the primary Hyper-V host fails, you can manually fail over the production virtual machines to the Hyper-V hosts at the replica site. Manual failover brings the virtual machines back to a consistent point from which they can be accessed with minimal impact on the business. After recovery, the primary site can receive changes from the replica site. You can perform a planned failback to manually revert the virtual machines back to the Hyper-V host at the primary site. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 35
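To make the Hyper-V Replica workflow described above more concrete, here is a minimal PowerShell sketch that authorizes a replica server and enables replication for a single virtual machine over Kerberos (HTTP). The host names, virtual machine name, and storage path are illustrative assumptions; certificate-based (HTTPS) authentication uses different parameters.

# On the replica-site Hyper-V host: accept inbound replication over Kerberos (port 80)
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\HyperV\Replicas"

# On the primary-site Hyper-V host: enable replication for one virtual machine and seed it
Enable-VMReplication -VMName "App01" -ReplicaServerName "hv-replica01.contoso.local" -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "App01"

In a failover cluster, the Hyper-V Replica Broker role would be configured first so that replication settings follow the virtual machines as they move between nodes.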

Hyper-V snapshot
A Hyper-V snapshot creates a consistent point-in-time view of a virtual machine. Snapshots function as a source for backups or other use cases. Virtual machines do not have to be running to take a snapshot. Snapshots are completely transparent to the applications running on the virtual machine. The snapshot saves the point-in-time status of the virtual machine, and enables users to revert the virtual machine to a previous point in time if necessary.
Note: Snapshots require additional storage space. The amount of additional storage space depends on the frequency of data change on the virtual machine and the number of snapshots being retained.

Cluster-Aware Updating
Cluster-Aware Updating (CAU) was introduced in Windows Server 2012. It provides a way of updating cluster nodes with little or no disruption. CAU transparently performs the following tasks during the update process:
1. Puts one cluster node into maintenance mode and takes it offline (virtual machines are live-migrated to other cluster nodes).
2. Installs the updates.
3. Performs a restart if necessary.
4. Brings the node back online (migrated virtual machines are moved back to the original node).
5. Updates the next node in the cluster.
The node managing the update process is called the Orchestrator. The Orchestrator can work in two different modes:
Self-updating mode: The Orchestrator runs on the cluster node being updated.
Remote-updating mode: The Orchestrator runs on a standalone Windows operating system and remotely manages the cluster update.
CAU is integrated with Windows Server Update Services (WSUS). PowerShell can automate the CAU process, as shown in the sketch that follows this section.

EMC Storage Integrator
EMC Storage Integrator (ESI) is an agentless, free plug-in that enables application-aware storage provisioning for Microsoft Windows Server applications, Hyper-V, VMware, and Xen Server environments. Administrators can provision block and file storage for Microsoft Windows or Microsoft SharePoint sites by using wizards in ESI. ESI supports the following functions:
Provisioning, formatting, and presenting drives to Windows servers
Provisioning new cluster disks, and automatically adding them to the cluster
Provisioning shared CIFS storage, and mounting it to Windows servers
Provisioning SharePoint storage, sites, and databases in a single wizard
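The on-demand (remote-updating) CAU run referenced above can be scripted; the following is a minimal sketch using the ClusterAwareUpdating PowerShell module. The cluster name and failure thresholds are illustrative assumptions.

# Run an on-demand CAU updating run against the Hyper-V failover cluster
Import-Module ClusterAwareUpdating
Invoke-CauRun -ClusterName "HVCLUSTER01" -MaxFailedNodes 1 -MaxRetriesPerNode 2 -RequireAllNodesOnline -EnableFirewallRules -Force

For the self-updating mode, the CAU clustered role would instead be added to the cluster with Add-CauClusterRole and given an update schedule.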

Compute
The choice of a server platform for a VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance, management features, and many other factors. For this reason, VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX documents the minimum requirements for the number of processor cores and the amount of RAM. This can be implemented with two or twenty servers, and still be considered the same VSPEX solution.
In the example shown in Figure 11, the compute layer requirements for a specific implementation are 25 processor cores and 200 GB of RAM. One customer might want to implement this by using white-box servers containing 16 processor cores and 64 GB of RAM, while another customer chooses a higher-end server with 20 processor cores and 144 GB of RAM.
Figure 11. Compute layer flexibility

The first customer needs four of the chosen servers, while the other customer needs two.
Note: To enable high availability at the compute layer, each customer needs one additional server to ensure that the system has enough capability to maintain business operations when a server fails.
Use the following best practices in the compute layer:
Use several identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies, which may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.
If you implement high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.
Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This enables the implementation of minimal-downtime upgrades and tolerance for single unit failures.
Within the boundaries of these recommendations and best practices, the compute layer for VSPEX can be flexible enough to meet your specific needs. Ensure that there are sufficient processor cores and RAM per core to meet the needs of the target environment.
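Returning to the Figure 11 example, the arithmetic behind the server counts (including the extra server called out in the note above) can be expressed as a small, purely illustrative PowerShell calculation; the function name and server specifications are hypothetical.

# How many servers of a given specification satisfy 25 cores and 200 GB of RAM, plus one spare for HA?
function Get-RequiredServerCount {
    param([int]$CoresNeeded, [int]$RamNeededGB, [int]$CoresPerServer, [int]$RamPerServerGB)
    $byCores = [math]::Ceiling($CoresNeeded / $CoresPerServer)
    $byRam   = [math]::Ceiling($RamNeededGB / $RamPerServerGB)
    return ([math]::Max($byCores, $byRam)) + 1   # +1 server for single-server failure tolerance
}
Get-RequiredServerCount -CoresNeeded 25 -RamNeededGB 200 -CoresPerServer 16 -RamPerServerGB 64    # 5 (4 + 1 spare)
Get-RequiredServerCount -CoresNeeded 25 -RamNeededGB 200 -CoresPerServer 20 -RamPerServerGB 144   # 3 (2 + 1 spare)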

39 Solution Technology Overview Network Overview The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This is a required configuration regardless of whether the network infrastructure for the solution already exists, or you are deploying it alongside other components of the solution. Figure 12 and Figure 13 depict examples of this highly available network topology. Figure 12. Example of highly available network design for block EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 39

Figure 13. Example of highly available network design for file
This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.
For block, EMC unified storage platforms provide network high availability or redundancy by using two ports per storage processor. If a link is lost on a storage processor front-end port, the link fails over to another port. All network traffic is distributed across the active links.
For file, EMC unified storage platforms provide network high availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX array, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.
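The LACP and VLAN settings above are configured on the VNX through Unisphere; as a hypothetical host-side counterpart, the sketch below shows how a Hyper-V host might be given an LACP team, a converged virtual switch with a VLAN-tagged virtual adapter, and jumbo frames. The adapter names, VLAN ID, and switch name are placeholders and must match the physical switch configuration; jumbo frames apply only to iSCSI or SMB traffic, per the configuration guidelines.

# Team two physical NICs with LACP and build a converged Hyper-V virtual switch on the team
New-NetLbfoTeam -Name "HostTeam01" -TeamMembers "NIC1","NIC2" -TeamingMode LACP -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "vSwitch-Converged" -NetAdapterName "HostTeam01" -AllowManagementOS $true
# Add a host virtual NIC for live migration traffic and tag it with its VLAN
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "vSwitch-Converged"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 30
# Enable jumbo frames on the physical team members (iSCSI or SMB traffic only)
Set-NetAdapterAdvancedProperty -Name "NIC1","NIC2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014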

Storage
Overview
The storage layer is also a key component of any cloud infrastructure solution that serves data generated by applications and operating systems in data center storage processing systems. This increases storage efficiency and management flexibility, and reduces total cost of ownership. In this VSPEX solution, EMC VNX series arrays provide features and performance to enable and enhance any virtualization environment.

EMC VNX series
The EMC VNX family is optimized for virtual applications and delivers innovation and enterprise capabilities for file and block storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises. Intel Xeon processors power the VNX series for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security. It is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises. Table 1 shows the customer benefits that are provided by the VNX series.

Table 1. VNX customer benefits
Feature: Next-generation unified storage, optimized for virtualized applications. Benefit: Tight integration with Microsoft Windows Server 2012 R2 and Microsoft System Center 2012 R2 allows for advanced array features and centralized management.
Feature: Capacity optimization features including compression, deduplication, thin provisioning, and application-consistent copies. Benefit: Reduced storage costs, more efficient use of resources, and easier recovery of applications.
Feature: High availability, designed to deliver five 9s availability. Benefit: Higher levels of uptime and reduced outage risk.
Feature: Automated tiering with FAST VP and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously. Benefit: More efficient use of storage resources without complicated planning and configuration.
Feature: Simplified management with EMC Unisphere for a single management interface for all NAS, SAN, and replication needs. Benefit: Reduced management overhead and toolsets required to manage the environment.
Feature: Up to three times improvement in performance with the latest Intel Xeon multicore processor technology, optimized for flash. Benefit: Reduced latency and increased bandwidth and IOPS result in more headroom for demanding workloads.

Different software suites and packs are also available for the VNX series, which provide multiple features for enhanced protection and performance.

Software suites
The following VNX software suites are available:
FAST Suite: Automatically optimizes for the highest system performance and the lowest storage cost simultaneously.
Local Protection Suite: Practices safe data protection and repurposing.
Remote Protection Suite: Protects data against localized failures, outages, and disasters.
Application Protection Suite: Automates application copies and provides compliance.
Security and Compliance Suite: Keeps data safe from changes, deletions, and malicious activity.

Software packs
The following VNX software packs are available:
Total Efficiency Pack: Includes all five software suites.
Total Protection Pack: Includes the local, remote, and application protection suites.

EMC VNX Snapshots
VNX Snapshots is a software feature that creates point-in-time data copies. VNX Snapshots can be used for data backups, software development and testing, repurposing, data validation, and local rapid restores. VNX Snapshots improves on the existing EMC VNX SnapView snapshot functionality by integrating with storage pools.
Note: LUNs created on physical RAID groups, also called RAID LUNs, support only SnapView snapshots. This limitation exists because VNX Snapshots requires pool space as part of its technology.
VNX Snapshots supports 256 writeable snapshots per pool LUN. It supports branching, also called Snap of a Snap, as long as the total number of snapshots for any primary LUN is less than 256, which is a hard limit. VNX Snapshots uses redirect-on-write (ROW) technology. ROW redirects new writes destined for the primary LUN to a new location in the storage pool. Such an implementation is different from the copy-on-first-write (COFW) approach used in SnapView, which holds the writes to the primary LUN until the original data is copied to the reserved LUN pool to preserve a snapshot.
This release also supports consistency groups (CGs). Several pool LUNs can be combined into a CG and snapped concurrently. When a snapshot of a CG is initiated, all writes to the member LUNs are held until the snapshots have been created. Typically, CGs are used for LUNs that belong to the same application.

43 Solution Technology Overview EMC VNX SnapSure EMC VNX SnapSure is an EMC VNX File software feature that enables you to create and manage checkpoints that are point-in-time, logical images of a production file system (PFS). SnapSure uses a copy-on-first-modify principle. A PFS consists of blocks. When a block within the PFS is modified, a copy containing the block s original contents is saved to a separate volume called the SavVol. Subsequent changes made to the same block in the PFS are not copied into the SavVol. The original blocks from the PFS in the SavVol and the unchanged PFS blocks remaining in the PFS are read by SnapSure according to a bitmap and block map data-tracking structure. These blocks combine to provide a complete point-in-time image called a checkpoint. A checkpoint reflects the state of a PFS at the time the checkpoint was created. SnapSure supports these types of checkpoints: Read-only checkpoints Read-only file systems created from a PFS Writeable checkpoints Read/write file systems created from a read-only checkpoint SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data. Note: Each writeable checkpoint associates with a read-only checkpoint, referred to as the baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint. For more detailed information, refer to the document Using VNX SnapSure. EMC VNX Virtual Provisioning EMC VNX Virtual Provisioning enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and reduce capital expenditures. Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes the utilization of your storage by allocating storage only as needed. Thick LUNs provide both high performance and predictable performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning. Pools and pool LUNs are also the building blocks for advanced data services such as FAST VP, VNX Snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and User Capacity Threshold setting. EMC VNX Virtual Provisioning allows you to expand the capacity of a storage pool from the Unisphere GUI after disks are physically attached to the system. VNX systems have the ability to rebalance allocated data elements across all member drives to use new drives after the pool is expanded. The rebalance function starts automatically and runs in the background after an expand action. You can monitor EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 43

44 Solution Technology Overview the progress of a rebalance operation from the General tab of the Pool Properties window in Unisphere, as shown in Figure 14. Figure 14. Storage pool rebalance progress LUN expansion Use pool LUN expansion to increase the capacity of existing LUNs. It allows for provisioning larger capacity as business needs grow. The VNX family has the capability to expand a pool LUN without disrupting user access. You can expand pool LUNs with a few simple clicks and the expanded capacity is immediately available. However, you cannot expand a pool LUN if it is part of a data-protection or LUN-migration operation. For example, snapshot LUNs or migrating LUNs cannot be expanded. LUN shrink Use LUN shrink to reduce the capacity of existing thin LUNs. VNX can shrink a pool LUN. This capability is only available for LUNs served by Windows Server 2008 and later. The shrinking process involves these steps: 1. Shrink the file system from Windows Disk Management. 2. Shrink the pool LUN using a command window and the DISKRAID utility. The utility is available through the VDS Provider, which is part of the EMC Solutions Enabler package. The new LUN size appears as soon as the shrink process is complete. A background task reclaims the deleted or shrunk space and returns it to the storage pool. Once the task is complete, any other LUN in that pool can use the reclaimed space. For more detailed information on LUN expansion/shrinkage, refer to EMC VNX Virtual Provisioning Applied Technology White Paper. 44 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to
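The file system shrink in step 1 above can also be scripted rather than performed in Windows Disk Management. The following PowerShell sketch, which assumes the Windows Server 2012 Storage module and a hypothetical drive letter and target size, checks the supported range before resizing; the pool LUN itself is still shrunk afterward with DISKRAID as described in step 2.

    # Check the minimum and maximum supported sizes for the volume (drive letter is an example)
    Get-PartitionSupportedSize -DriveLetter F

    # Shrink the file system and partition to 500 GB before shrinking the pool LUN
    Resize-Partition -DriveLetter F -Size 500GB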

45 Solution Technology Overview Alerting the user through the Capacity Threshold setting You must configure proactive alerts when using a file system or storage pools based on thin pools. Monitor these resources so that storage is available for provisioning when needed and capacity shortages can be avoided. Figure 15 explains why provisioning with thin pools requires monitoring. Figure 15. Thin LUN space utilization Monitor the following values for thin pool utilization: Total capacity is the total physical capacity available to all LUNs in the pool. Total allocation is the total physical capacity currently assigned to all pool LUNs. Subscribed capacity is the total host-reported capacity supported by the pool. Over-subscribed capacity is the amount of user capacity configured for LUNs that exceeds the physical capacity in a pool. Total allocation must never exceed the total capacity, but if it nears that point, add storage to the pools proactively before reaching a hard limit. Figure 16 shows the Storage Pool Properties dialog box in Unisphere, which displays parameters such as Free, Percent Full, Total Allocation, Total Subscription of physical capacity, Percent Subscribed and Oversubscribed By of virtual capacity. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 45

46 Solution Technology Overview Figure 16. Examining storage pool space utilization When storage pool capacity becomes exhausted, any requests for additional space allocation on thin-provisioned LUNs fail. Applications attempting to write data to these LUNs usually fail as well, and an outage is the likely result. To avoid this situation, monitor pool utilization, and be alerted when thresholds are reached, set the Percentage Full Threshold to allow enough buffer to correct the situation before an outage situation occurs. Edit this setting by selecting Advanced in the Storage Pool Properties dialog box, as seen in Figure 17. This alert is only active if there are one or more thin LUNs in the pool, because thin LUNs are the only way to oversubscribe a pool. If the pool only contains thick LUNs, the alert is not active because there is no risk of running out of space due to oversubscription. You also can specify the value for Percent Full Threshold, which equals Total Allocation/Total Capacity, when a pool is created. 46 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to
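As a quick illustration of how these values relate, the following PowerShell fragment computes the same percentages that Unisphere reports; the capacity figures are hypothetical and are expressed in gigabytes.

    # Hypothetical pool figures (GB)
    $totalCapacity   = 10000   # physical capacity available to all LUNs in the pool
    $totalAllocation = 6500    # physical capacity currently assigned to pool LUNs
    $subscribed      = 14000   # host-reported capacity supported by the pool

    $percentFull       = [math]::Round(($totalAllocation / $totalCapacity) * 100, 1)  # 65
    $percentSubscribed = [math]::Round(($subscribed / $totalCapacity) * 100, 1)       # 140
    $oversubscribedBy  = [math]::Max($subscribed - $totalCapacity, 0)                 # 4000 GB

    "Percent Full: $percentFull%  Percent Subscribed: $percentSubscribed%  Oversubscribed By: $oversubscribedBy GB"

In this example the pool is 140 percent subscribed, so an alert threshold such as the 70 percent default leaves time to add capacity before total allocation approaches total capacity.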

47 Solution Technology Overview Figure 17. Defining storage pool utilization thresholds View alerts by Alert in Unisphere. Figure 18 shows the Unisphere Event Monitor wizard, where you can also select the option of receiving alerts through , a paging service, or an SNMP trap. Figure 18. Defining automated notifications - for block EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 47

Table 2 lists the thresholds and their settings. Table 2. Thresholds and settings under VNX OE for Block Release 33

Threshold type | Threshold range | Threshold default | Alert severity | Side effect
User settable  | 1% to 84%       | 70%               | Warning        | None
Built-in       | N/A             | 85%               | Critical       | Clears the user-settable alert

If you allow total allocation to exceed 90 percent of total capacity, you are at risk of running out of space and affecting all applications that use thin LUNs in the pool.
Windows Offloaded Data Transfer
Windows Offloaded Data Transfer (ODX) provides the ability to offload data transfer from the server to the storage arrays. This feature is enabled by default in Windows Server 2012, and VNX arrays are compatible with Windows ODX on Windows Server 2012. ODX supports the following protocols:
iSCSI
Fibre Channel (FC)
FC over Ethernet (FCoE)
Server Message Block (SMB) 3.0
The following data-transfer operations currently support ODX:
Transferring large amounts of data through Hyper-V Manager, such as creating a fixed-size VHD, merging a snapshot, or converting VHDs
Copying files in File Explorer
Using the Copy commands in Windows PowerShell
Using the Copy commands in the Windows command prompt
Because ODX offloads the file transfer to the storage array, host CPU and network utilization are significantly reduced. ODX minimizes latency and improves transfer speed by letting the storage array perform the data movement, which is especially beneficial for large files such as database or video files. Because ODX is enabled by default in Windows Server 2012, data transfers are automatically offloaded to the storage array whenever an ODX-supported file operation occurs. The ODX process is transparent to users.
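To confirm that ODX is still active on a Hyper-V host, you can read the file system filter setting that Microsoft documents for this purpose. This is only a sketch of the check; a value of 0 means ODX is enabled (the Windows Server 2012 default) and 1 means it has been disabled.

    # Query ODX status on the host (0 = enabled, 1 = disabled)
    Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem -Name "FilterSupportedFeaturesMode"

    # Re-enable ODX if it was previously turned off (requires administrative rights and may require a restart)
    Set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem -Name "FilterSupportedFeaturesMode" -Value 0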

49 Solution Technology Overview EMC PowerPath EMC PowerPath is a host-based software package that provides automated data path management and load-balancing capabilities for heterogeneous server, network, and storage deployed in physical and virtual environments. It offers the following benefits for the VSPEX Proven Infrastructure: Standardized data management across physical and virtual environments. Automated multipathing policies and load balancing to provide predictable and consistent application availability and performance across physical and virtual environments. Improved service-level agreements by eliminating application impact from I/O failures. EMC FAST Cache VNX file shares ROBO EMC FAST Cache, a part of the EMC FAST Suite, enables flash drives to function as an expanded cache layer for the array. FAST Cache is an array-wide, nondisruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache in 64 KB increments and subsequent reads and/or writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of highly active data to flash drives. This dramatically improves the response time for the active data and reduces data hot spots that can occur within a LUN. The FAST Cache feature is an optional component of this solution. In many environments it is important to have a common location to store files accessed by many different individuals. This is implemented as CIFS or NFS file shares from a file server. VNX storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency improvement features. For more information, refer to the document Configuring and Managing CIFS on VNX. Organizations with remote office and branch offices (ROBO) often prefer to locate data and applications close to the users in order to provide better performance and lower latency. In these environments, IT departments need to balance the benefits of local support with the need to maintain central control. Local Systems and storage should be easy for local personnel to administer, but also support remote management and flexible aggregation tools that minimize the demands on those local resources. With VSPEX, you can accelerate the deployment of applications at remote offices and branch offices. Customers can also leverage Unisphere Remote to consolidate the monitoring, system alerts, and reporting of hundreds of locations while maintaining simplicity of operation and unified storage functionality for local managers. BranchCache is a feature that allows clients to cache data stored on SMB 3.0 shares locally at the branch office. With BranchCache capability, remote users that access file shares can cache files locally, which helps future lookups, reduces network traffic, and improves scalability and performance. For more information on BranchCache, refer to SMB 3.0 features. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 49
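As a simple illustration of the VNX file shares capability described above, a CIFS (SMB) share can be exported from the VNX Control Station. The Data Mover, share, and file system names below are hypothetical, and the option spelling is assumed from standard VNX for File usage; confirm it against the VNX command reference before use.

    # Export an existing file system as a CIFS share (names are examples only)
    server_export server_2 -P cifs -name eng_share /eng_fs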

50 Solution Technology Overview SMB 3.0 features Overview SMB 3.0 supports Hyper-V and Microsoft SQL Server storage. Microsoft also introduced several key features that improve the performance of these applications, and simplify application management tasks. This section describes SMB 3.0 features supported on VNX storage arrays, and indicates how these features affect the performance of applications or data stored on SMB 3.0 file shares. For more information, refer to the EMC VNX Series: Introduction to SMB 3.0 Support White Paper. SMB versions and negotiations The SMB protocol follows the client-server model. The protocol level is negotiated by client request and server response when establishing a new SMB connection. The SMB versions for various Windows operating systems are as follows: CIFS Windows NT 4.0 SMB 1.0 Windows 2000, Windows XP, Windows Server 2003, and Windows Server 2003 R2 SMB 2.0 Windows Vista (SP1 or later) and Windows Server 2008 SMB 2.1 Windows 7 and Windows Server 2008 R2 SMB 3.0 Windows 8 and Windows Server 2012 Before establishing a session between the client and server, a common SMB dialect is negotiated. Table 3 shows the common dialect used based on the SMB versions supported by the client and server. Table 3. SMB dialect used between client and server Client-server SMB 3.0 SMB 2.1 SMB 2.0 SMB 3.0 SMB 3.0 SMB 2.1 SMB 2.0 SMB 2.1 SMB 2.1 SMB 2.1 SMB 2.0 SMB 2.0 SMB 2.0 SMB 2.0 SMB 2.0 SMB 1.0 SMB 1.0 SMB 1.0 SMB 1.0 For more information on SMB versions and negotiations, refer to the Microsoft TechNet technical document entitled Server Message Block (SMB) Protocol Versions 2 and 3. VNX and VNXe storage support All features mentioned in this document are supported in the latest releases of VNX operating environment (OE) for File and VNXe OE. 50 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to
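To see which dialect a given Windows Server 2012 host actually negotiated with the VNX CIFS server (per Table 3), the built-in SMB client cmdlets can be used. This is a sketch only; the drive letter, server, and share names are hypothetical.

    # Map the share, then inspect the negotiated SMB dialect
    New-SmbMapping -LocalPath X: -RemotePath \\vnx-cifs01\hyperv_share
    Get-SmbConnection | Select-Object ServerName, ShareName, Dialect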

51 Solution Technology Overview SMB 3.0 VHD/VHDX storage support With Virtual Hard Disk file format (VHD and VHDX) storage support, Hyper-V can store virtual machines, and files such as configuration files, virtual hard drives, and snapshots on SMB 3.0 shares. This applies to standalone and clustered servers. Feature benefit With SMB 3.0 support for storing Hyper-V virtual machines, Microsoft supports block storage protocols and file storage protocols. This provides Hyper-V users with additional storage options to store Hyper-V virtual machine files. Baseline comparison point Support for VHD and VHDX files on a VNX storage array is enabled by default, without the need for additional configuration. Figure 19 shows the performance of 100 Hyper-V reference virtual machines on VNX SMB 3.0 file shares. Each virtual machine was driving 25 IOPS. The acceptable latency limit is 20 ms, and the average latency observed during the test was 12 ms. Figure 19. SMB 3.0 baseline performance comparison point Note: This performance result serves as a baseline comparison point for all other SMB 3.0 features discussed later in this chapter. SMB 3.0 Continuous Availability The SMB 3.0 Continuous Availability (CA) feature ensures the transparent failover of the file server (serviced by the VNX storage array) when faults occur. It enables clients connected to SMB 3.0 shares to transparently reconnect to another file server node when one node fails. All open file handles from the faulted server node are transferred to the new server node, which eliminates application errors. Figure 20 shows the sequence of events for a Data Mover failover with CA enabled: 1. The client (Windows Server 2012) requests a persistent handle by opening a file with associated leases and locks on a CIFS share. 2. The CIFS server saves the open state and persistent handle to disk. 3. If the primary Data Mover (Data Mover 2) fails, it fails over to the standby Data Mover (Data Mover 3). 4. The Data Mover reads and restores the persistent open state from the disk before starting the CIFS service. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 51

52 Solution Technology Overview 5. Using the persistent handle, the client re-establishes the connection to the same CIFS server, and recovers the same context associated with the open file as before the failover occurred. Figure 20. SMB 3.0 Continuous Availability Feature benefit When a Data Mover fails, clients accessing SMB 3.0 shares created with CA do not perceive any application errors. Instead, they experience a small I/O delay due to the primary Data Mover failing over to the standby Data Mover. After the failover, the application may experience a brief spike in latency but soon resumes normal operation. Enabling the feature This feature is required for Hyper-V environments. To enable this feature, run the following commands from the VNX Control Station. 1. To mount the file system through which the share will be exported with the smbca option: server_mount <server_name> -o smbca <fsname> /<fsmountpoint> 2. To export the share with the CA option: server_export <server_name> -P cifs n <sharename> o type=ca /<fsmountpoint> 52 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to
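A worked example of the two commands above, using hypothetical names (Data Mover server_2, file system hyperv_fs, share hyperv_ca). The leading dashes on the -name and -o options are assumed, consistent with the other VNX for File commands in this chapter; confirm the exact syntax in the VNX command reference.

    # Mount the file system with the SMB CA option (names are examples only)
    server_mount server_2 -o smbca hyperv_fs /hyperv_fs

    # Export the share with Continuous Availability enabled
    server_export server_2 -P cifs -name hyperv_ca -o type=CA /hyperv_fs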

53 Solution Technology Overview Performance impact This feature does not impact storage, server, or network performance. The only time that performance changes is after a failover or failback operation, when there is a spike in IOPS and latency for a brief period before normal operation resumes. Figure 21 shows the performance of VDbench on host when the primary Data Mover panics. There is an I/O delay during the failover operation. When the failover completes, the standby is active, and the VDbench returns to normal operation after a short spike in I/O and latency. Figure 21. CA application performance SMB Multichannel The SMB Multichannel feature utilizes multiple network interfaces and connections to provide higher throughput and fault tolerance. This is achieved without any additional configuration steps for the network interfaces. Feature benefits SMB Multichannel provides network high-availability. If one of the network interface cards (NICs) fails, the applications and clients continue operating at a lower throughput potential without any errors. SMB Multichannel is automatically configured. All network paths are automatically detected, and connections are added dynamically. SMB Multichannel works as follows: Multichannel connections on a single NIC for improved throughput: SMB Multichannel does not provide any additional throughput if the single NIC does not support RSS Receive Side Scaling (RSS). RSS allows multiple connections to spread across the CPU cores automatically and hence can distribute load between the CPU cores by creating multiple TCP/IP connections. Multichannel connections on multiple NICs for improved throughput: EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 53

54 Solution Technology Overview SMB Multichannel creates multiple TCP/IP sessions one for each available interface. If the NICs are RSS-capable, many TCP/IP connections per NIC are created. Enabling the feature SMB Multichannel is enabled by default on the VNX storage array. No parameter needs to be set on the system to use this feature. This feature is also enabled by default on Windows 8 and Windows 2012 clients. Performance impact SMB Multichannel provides additional network throughput by creating more TCP/IP connections (at least one per NIC). If the network is underutilized, no performance degradation is observed when one NIC fails. However, if the network is being heavily utilized, the application continues functioning at a lower throughput. Figure 22 shows the network-resiliency test result on an SMB 3.0 client when one out of two NICs is disabled. The application does not experience any errors or faults, and continues to perform normally even when the interface is enabled again. Figure 22. SMB Multichannel fault tolerance The application does not have an impact on performance because the network was not the bottleneck during the test. If it were a bottleneck, the response time would have been higher. However, the application would have continued functioning without any errors if the higher response time was acceptable. 54 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to
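To verify from a Windows Server 2012 host that SMB Multichannel is in effect and that the client NICs are RSS-capable, the built-in SMB cmdlets can be queried; no VNX-side configuration is required. This is an illustrative check only.

    # List the client network interfaces and their RSS capability
    Get-SmbClientNetworkInterface

    # Show the TCP/IP connections SMB Multichannel has established
    Get-SmbMultichannelConnection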

55 Solution Technology Overview Figure 23 shows the SMB 3.0 client s network throughput on both interfaces. Figure 23. Multichannel network throughput Each SMB 3.0 client in the test environment has two network interfaces. When one interface is disabled, the surviving interface services the traffic. This is evident from the chart, which shows the throughput doubling on one NIC, and the throughput dropping to zero on the disabled NIC. After the disabled NIC is enabled again, the load balances equally on both NICs. SMB 3.0 Copy Offload Copy Offload enables the array to copy large amounts of data without involving server, network, or CPU resources. The server offloads the copy operation to the physical array where the data resides. Note: Copy Offload requires that the source and the destination file system be on the same Data Mover. Figure 24. Copy Offload EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 55
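One operation that benefits most from Copy Offload, as discussed under the feature benefits that follow, is moving virtual machine storage between shares on the same Data Mover. The following is a minimal sketch using the Hyper-V PowerShell module; the virtual machine name and destination share path are hypothetical.

    # Move a virtual machine's storage from one SMB 3.0 share to another (names are examples only)
    Move-VMStorage -VMName "RefVM01" -DestinationStoragePath "\\vnx-cifs01\hyperv_share2\RefVM01"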

Feature benefits
Copy Offload enables faster data transfer from source to destination because it does not use any client CPU cycles. This feature is most beneficial for the following operations:
Deployment operations: Deploy multiple virtual machines faster. The baseline VHDX can reside on an SMB 3.0 share, with new virtual machines deployed on SMB 3.0 shares from Hyper-V Manager by pointing to the baseline VHDX.
Cloning operations: Clone virtual machines from one SMB 3.0 share to another in minutes.
Migration operations: Migrate virtual machines between file shares on the same Data Mover in 10 minutes, as opposed to almost 40 minutes without the Copy Offload feature.
Table 4 shows the time taken to move virtual machine storage with and without the Copy Offload feature. Table 4. Storage migration improvement with Copy Offload

Number of virtual machines (100 GB each) | Time spent for storage migration with Copy Offload enabled | Time spent for storage migration with Copy Offload disabled
1 | 10 mins | 37 mins
2 | 13 mins | 82 mins
5 | 26 mins | More than 4 hours
  |    mins | More than 8 hours

Enabling the feature
This feature is enabled by default on the VNX storage array and on Windows 8 and Windows Server 2012 clients.
Performance impact
Because the array handles the entire copy operation, the Copy Offload feature increases the utilization of the Data Mover CPU and other array resources. The performance of the feature is limited by the array read/write bandwidth.
SMB 3.0 BranchCache
BranchCache enables clients to cache data stored on SMB 3.0 shares locally at the branch office. The cached content is encrypted between peers, clients, and hosted cache servers. This feature was first introduced with Windows 7 and Windows Server 2008 R2. SMB 3.0 supports BranchCache v2. Implement BranchCache in one of two modes:
Distributed cache mode: Distributes the cache between the client computers at the branch office.

57 Solution Technology Overview Hosted cache mode: Maintains cached content on a separate computer at the branch office. For more information on BranchCache, refer to the Microsoft TechNet Library topic Branch Cache Overview. Feature benefit With BranchCache capability, remote users who access file shares can cache files locally at the branch office. This helps future lookups, reduces network traffic, and improves scalability and performance. Enabling the feature TheBranchCache feature is not enabled by default on the VNX storage array. Run the following command on the VNX Control Station to enable BranchCache: server_cifs <server_name> smbhash service enable To create the share with type=hash, run the following command: server_export <server_name> -o type=hash On a DC of a Windows Server 2012 domain where the VNX is connected, edit the default domain policy as follows to activate: Computer Configuration\Policies\Administrative Templates\Network\Lanman Server\Hash Publication for BranchCache. Performance impact This feature reduces network traffic, as the cached data is available locally at the branch office. Client performance also improves due to faster access to data, but there is some overhead involved to encrypt and decrypt data between BranchCache members. SMB 3.0 Remote VSS Remote VSS (RVSS) is a Remote Procedure Call (RPC) based protocol, which enables application-consistent shadow copies of VSS-aware server applications. RVSS stores data on SMB 3.0 file shares. RVSS supports application backup across multiple file servers and shares. VSS-aware backup applications can perform snapshots of server applications that store data on the VNX CIFS shares. Hyper-V has the ability to store virtual machine files on CIFS shares, and RVSS can take point-in-time copies of the share contents. Some examples of shadow copy uses are: Creating backups Recovering data Testing scenarios Mining data EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 57

58 Solution Technology Overview Feature benefit RVSS uses the existing Microsoft VSS infrastructure to integrate with VSS-aware backup software and applications. Backup applications read directly from shadowcopy file shares instead of involving the server application computer. Enabling the feature RVSS is enabled by default on the VNX storage array, without a need for additional configuration. Performance impact RVSS increases the load on the VNX storage array because it takes applicationconsistent copies (or snapshots) of applications running on the file shares. SMB 3.0 encryption SMB 3.0 allows in-flight, end-to-end encryption of data, and protects it on untrusted networks. Enable this feature for an individual share, or for the entire CIFS server node. This feature only works with SMB 3.0 clients. If the share is encrypted, deny access, or allow unencrypted access for non-smb 3.0 clients. Feature benefit SMB encryption does not require any additional software or hardware. It protects data on the network from attacks and eavesdropping. Enabling the feature This feature is not enabled by default on the VNX storage array. Enabling encryption on all shares To configure encryption on all shares, set the Encrypt Data parameter in the VNX CIFS server registry to 0x1.To configure this parameter, complete the following steps: 1. Open the Registry Editor (regedit.exe) on a computer. 2. Select File > Connect Network Registry. 3. Enter the hostname or IP address of the CIFS server, and click Check Names. 4. When the server is recognized, click OK to close the window. 5. Edit the Encrypt Data parameter (0x1 is enabled, and 0x0 is disabled)under HKEY\System\CurrentControlSet\Services\LanmanServer\Parameters as shown in Figure EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

59 Solution Technology Overview Figure 25. Enabling the Encrypt Data parameter By default only SMB 3.0 clients can access encrypted VNX file shares. In order to allow pre-smb 3.0 clients to access encrypted shares, the RejectUnencryptedAccess value under the VNX CIFS server registry location shown in Figure 16 must be set to 0x0. Enabling encryption on a specific share To enable encryption for a particular share, run the following command on the VNX Control Station: server_export <server_name> -P cifs n <sharename> o type=encrypted /<fsmountpoint> Performance impact With encryption enabled on the shares, Data Mover CPU, and SMB 3.0 client utilization increases because encryption and decryption require additional overhead. Figure 26 shows an increase in CPU utilization with encryption enabled on the SMB 3.0 shares. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 59

60 Solution Technology Overview Figure 26. Enabling encryption: Client CPU utilization Figure 27 shows the increase in Data Mover utilization with encryption enabled on the SMB 3.0 shares. Figure 27. Enabling encryption: Data Mover CPU utilization SMB 3.0 PowerShell cmdlets SMB 3.0 PowerShell cmdlets are PowerShell commands that allow file share management through Windows PowerShell CLI. SMB 3.0 Windows Powershell cmdlets use WMIv2 classes, so not all commands are compatible with VNX-hosted file shares. However, VNX provides a set of PowerShell commands to install and execute from a Windows 8 or Server2012 client. Download these commands from EMC Online Support. For more information on Windows PowerShell commands for SMB 3.0, refer to the Microsoft TechNet topic SMB Share CMDlets in Windows PowerShell. Table 5 lists Microsoft SMB 3.0 PowerShell cmdlets to execute from the clients. 60 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

61 Solution Technology Overview Table 5. Microsoft PowerShell cmdlets Command Get-SmbServerNetworkInterface Get-SmbServerConfiguration Get-SmbMultichannelConnection New-SmbMultichannelConstraint Get-SmbMultichannelConstraint Update-SmbMultichannelConnection Remove-SmbMultichannelConstraint Get-SmbMapping Remove-SmbMapping New-SmbMapping Get-SmbConnection Get-SmbClientNetworkInterface Get-SmbClientConfiguration Description Lists the network interfaces available to the SMB server Lists the SMB server configuration Lists the connections currently in use by SMB Multichannel Creates a new multichannel constraint Lists the constraints on multichannel connections Updates the constraint on the multichannel connection Removes the multichannel constraint Displays a list of drives mapped by an SMB client Removes an existing mapping Creates a new mapping Lists the SMB connections on the server Displays the client network interface Displays the current SMB client configuration settings Table 6 lists the EMC-provided SMB 3.0 PowerShell cmdlets to manage shares. Table 6. EMC-provided PowerShell cmdlets Command Add-LG Add-LGMember Add-Share Add-ShareAcl Add-SharePerms Remove-LG Remove-LGMember Remove-Session Remove-Share Remove-ShareAcl Remove-SharePerms Description Adds a new local group on a server name Adds a member in a specified local group on a server name Creates a share on a server name Adds an ACE in a share's ACL on a server name Adds an access in share's permissions on a server name Deletes a local group on a server name Deletes a member of a Local Group on a server name Deletes a session open on a server name Removes a share on a server name Removes an ACE in a share's ACL on a server Removes an access in share's permissions on a server name EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 61

62 Solution Technology Overview Command Set-ShareFlags Show-AccountSid Show-ACL Show-LG Show-LGMembers Show-RootDirMembers Show-SecurityEventLog Show-Sessions Show-Shares Show-ShareAcl Show-ShareFlags Show-SharePerms Description Sets share flags on a specified server name Displays SID of a specified user Displays the share's ACL on a server name Enumerates local group on a server name Enumerates members of a local group on a server name Lists the root directory members of a server name Displays the eventlogs of a server name Enumerates open sessions on a server name Displays all shares on a server name Displays the share's ACL on a server name Displays the share's flags values on a server name Enumerates access contained in a share's permissions on a server name The following are some examples of the PowerShell cmdlets: Show Shares command Figure 28 shows a list of all the SMB 3.0 shares on the VNX from the Show Shares command. Figure 28. PowerShell execution of Show Shares Get-SmbServerConfiguration command Figure 29 shows the SMB 3.0 server configuration from the Get- SMBServerConfiguration command. 62 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

63 Solution Technology Overview Figure 29. PowerShell execution of Get-SmbServerConfiguration Feature benefit PowerShell cmdlets enable clients and administrators to easily manage SMB 3.0 shares from a single location. Enabling the feature PowerShell commands are enabled by default on Windows 2012 and Windows 8 clients. Download the EMC PowerShell commands from EMC Online Support to use them. Performance impact The execution of these cmdlets has no impact on storage, server, or network resources. SMB 3.0 Directory Leasing SMB 3.0 Directory Leasing enables clients to cache directory metadata locally. All future metadata requests are serviced from the same cache. Cache coherency is maintained because clients are notified when directory information changes on the server. There are several types of leases: Read-caching lease (R) allows a client to cache reads, and can be granted to multiple clients. Write-caching lease (W) allows a client to cache writes. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 63

64 Solution Technology Overview A handle-caching lease (H) allows a client to cache open handles, and can be granted to multiple clients. Figure 30. SMB 3.0 Directory Leasing Feature benefit Directory leasing improves application response time in branch offices. This feature is useful in scenarios where a client in the branch office does not want to go over the high-latency WAN to fetch the same metadata information repeatedly. Instead, they can cache the same data and rely on the SMB server to notify them when information changes on the server. The typical usage includes: Home folders (read/write) Publication (read-only) Enabling the feature This feature is enabled by default on the Data Mover without a need for additional configuration. Performance impact This feature improves application response time, reduces network traffic and client processor utilization. 64 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

65 Solution Technology Overview Summary of feature defaults Table 7 summarizes the default status of the features. Table 7. Default status of SMB 3.0 features Feature Hyper-V storage support Continuous Availability Multichannel Copy Offload BranchCache Remote VSS Encryption PowerShell cmdlets Directory leasing Data Mover support Supported by default on the Data Mover Must be enabled on the Data Mover Enabled by default on the Data Mover Enabled by default on the Data Mover Must be enabled on the Data Mover Enabled by default on the Data Mover Must be enabled on the Data Mover Enabled by default on the Data Mover. EMC SMB PowerShell cmdlets for VNX can be downloaded from powerlink.emc.com Enabled by default on the Data Mover Backup and recovery Overview Backup and recovery, another important component in this VSPEX solution, provides data protection by backing up data files or volumes on a defined schedule, and then restores data from backup for recovery after a disaster. EMC backup and recovery is a smart method of data protection. It consists of best of class, integrated protection storage and software designed to meet backup and recovery objectives now and in the future. With EMC market-leading protection storage, deep data source integration, and feature-rich data management services, you can deploy an open, modular protection storage architecture that allows you to scale while lowering cost and complexity. EMC Avamar deduplication EMC Data Domain deduplication storage systems VMware vsphere data protection EMC Avamar provides fast, efficient backup and recovery through a complete software and hardware solution. Equipped with integrated variable-length deduplication technology, Avamar facilitates fast, daily full backups for virtual environments, remote offices, enterprise applications, network-attached storage (NAS) servers, and desktops/laptops. Learn more: EMC Data Domain Deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication for backup and archive workloads. Learn more: vsphere Data Protection (VDP) is a proven solution for backing up and restoring VMware virtual machines. VDP is based on EMC s award-winning Avamar product and has many integration points with vsphere 5.5, providing simple discovery of your EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 65

66 Solution Technology Overview Continuous availability virtual machines and efficient policy creation. One of challenges that traditional systems have with virtual machines is the large amount of data that these files contain. VDP s usage of a variable-length deduplication algorithm ensures a minimum amount of disk space is used and reduces ongoing backup storage growth. Data is deduplicated across all virtual machines associated with the VDP virtual appliance. VDP uses vstorage APIs for Data Protection (VADP), which sends only the changed blocks of data, resulting in only a fraction of the data being sent over the network. VDP enables up to eight virtual machines to be backed up concurrently. Because VDP resides in a dedicated virtual appliance, all the backup processes are offloaded from the production virtual machines. VDP can alleviate the burdens of restore requests from administrators by enabling end users to restore their own files using a web-based tool called vsphere Data Protection Restore Client. Users can browse their system s backups in an easy to use interface that provides search and version control features. The users can restore individual files or directories without any intervention from IT, freeing up valuable time and resources, resulting in a better end user experience. For backup and recovery options, refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide. EMC RecoverPoint EMC RecoverPoint is an enterprise-scale solution that protects application data on heterogeneous SAN-attached servers and storage arrays. EMC RecoverPoint runs on a dedicated appliance (RPA) and combines industry-leading continuous data protection technology with a bandwidth-efficient, no-data-loss replication technology, allowing it to protect data locally (continuous data protection, CDP), remotely (continuous remote replication, CRR), or both (local and remote replication, CLR). RecoverPoint CDP replicates data within the same site or to a local bunker site some distance away, and the data is transferred by FC. RecoverPoint CRR uses either FC or an existing IP network to send the data snapshots to the remote site using techniques that preserve write-order. In a CLR configuration, RecoverPoint replicates to both a local and a remote site simultaneously. RecoverPoint uses lightweight splitting technology on the application server, in the fabric or in the array, to mirror application writes to the RecoverPoint cluster. RecoverPoint supports several types of write splitters: Array-based Intelligent fabric-based Host-based 66 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

67 Solution Technology Overview EMC VNX Replicator EMC VNX Replicator is a powerful, easy-to-use asynchronous replication solution. With its WAN-aware functionality, simple management interface, and advanced DR capability, it provides a complete replication solution. Replication between a primary and a secondary file system or iscsi LUN can be on the same VNX system, or on a remote system. EMC VNX Replicator supports application-consistent iscsi replication. The host can initiate the replication via the VSS interface in Windows environments or Replication Manager. For CIFS environments, the Virtual Data Mover (VDM) functionality replicates the necessary context to the remote site along with the file systems. This includes CIFS server data, audit logs, and local groups. For asynchronous data recovery, the secondary copy can be read/write, and production can continue at the remote site. If the primary system becomes available, incremental changes at the secondary copy can be played back to the primary with the resynchronization function. This operates as described above, with a role reversal between primary and secondary. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 67

68 Solution Technology Overview Other technologies EMC XtremCache In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case. EMC XtremCache is a server flash caching solution that reduces latency and increases throughput to improve application performance by using intelligent caching software and PCIe flash technology. Server-side flash caching for maximum speed XtremCache performs the following functions to improve system performance: Caches the most frequently referenced data on the server-based PCIe card to put the data closer to the application. Automatically adapts to changing workloads by determining the most frequently referenced data and promoting it to the server flash card. This means that the hottest data (most active data) automatically resides on the PCIe card in the server for faster access. Offloads the read traffic from the storage array, which allocates greater processing power to other applications. While one application accelerates with XtremCache, the array performance for other applications remains the same or slightly enhanced. Write-through caching to the array for total protection XtremCache accelerates reads and protects data by using a write-through cache to the storage to deliver persistent high-availability, integrity, and disaster recovery. Application agnostic XtremCache is transparent to applications; there is no need to rewrite, retest, or recertify to deploy XtremCache in the environment. Minimum impact on system resources Unlike other caching solutions on the market, XtremCache does not require a significant amount of memory or CPU cycles, as all flash and wear-leveling management are done on the PCIe card without using server resources. Unlike other PCIe solutions, there is no significant overhead from using XtremCache on server resources. XtremCache creates the most efficient and intelligent I/O path from the application to the datastore, which results in an infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments. XtremCache active/passive clustering support The configuration of XtremCache clustering scripts ensures that stale data is never retrieved. The scripts use cluster management events to trigger a mechanism that purges the cache. The XtremCache-enabled active/passive cluster ensures data integrity, and accelerates application performance. 68 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

69 Solution Technology Overview XtremCache performance considerations XtremCache performance considerations include: On a write request, XtremCache first writes to the array, then to the cache, and then completes the application I/O. On a read request, XtremCache satisfies the request with cached data, or, when the data is not present, retrieves the data from the array, writes it to the cache, and then returns it to the application. The trip to the array can be in the order of milliseconds; therefore, the array limits how fast the cache can work. As the number of writes increases, XtremCache performance decreases. XtremCache is most effective for workloads with a 70 percent or greater read/write ratio, with small, random I/O (8 K is ideal). I/O greater than 128 K is not cached in XtremCache 1.5. Note: For more information, refer to the Introduction to EMC XtremCache White Paper. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 69


71 Chapter 4 Solution Architecture Overview This chapter presents the following topics: Overview Solution architecture Server configuration guidelines Network configuration guidelines Storage configuration guidelines High-availability and failover Validation test profile Backup and recovery configuration guidelines Sizing guidelines Reference workload Applying the reference workload EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 71

72 Solution Architecture Overview Overview This chapter is a comprehensive guide to the major architectural aspects of this solution. Server capacity is presented in generic terms for required minimums of CPU, memory, and network resources; the customer is free to select the server and networking hardware that meet or exceed the stated minimums. The specified storage architecture, along with a system meeting the server and network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your private cloud deployment. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a number of virtual machines validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a predefined idea of a virtual machine. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics. Solution architecture Overview The VSPEX private cloud solution for Microsoft Hyper-V with VNX validates at four different points of scale: one configuration with up to 200 virtual machines, one configuration with up to 300 virtual machines, one configuration with up to 600 virtual machines, and one configuration with up to 1,000 virtual machines. The defined configurations form the basis of creating a custom solution. Note: VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This document describes the process in Applying the reference workload. 72 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

73 Solution Architecture Overview Logical architecture The architecture diagrams in this section show the layout of the major components in the solutions. Two types of storage, block-based and file-based, are shown in the following diagrams. Figure 31 shows the infrastructure validated with block-based storage, where an 8 Gb FC, FCoE, or 10 Gb-iSCSI SAN carries storage traffic, and 10 GbE carries management and application traffic. Figure 31. Logical architecture for block storage EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 73

74 Solution Architecture Overview Figure 32 shows the infrastructure validated with file-based storage, where 10 GbE carries storage traffic and all other traffic. Figure 32. Logical architecture for file storage Key components The architectures include the following key components: Microsoft Hyper-V Provides a common virtualization layer to host a server environment. The specifics of the validated environment are listed in Table 8. Hyper-V provides highly available infrastructure through features such as: Live Migration Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption. Live Storage Migration Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption. Failover Clustering High Availability (HA) Detects and provides rapid recovery for a failed virtual machine in a cluster. Dynamic Optimization (DO) Provides load balancing of computing capacity in a cluster with support of SCVMM. Microsoft System Center Virtual Machine Manager (SCVMM) SCVMM is not required for this solution. However, if deployed, it (or its corresponding functionality in Microsoft System Center Essentials) simplifies provisioning, management, and monitoring of the Hyper-V environment. Microsoft SQL Server 2012 SCVMM, if used, requires a SQL Server database instance to store configuration and monitoring details. 74 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

75 Solution Architecture Overview DNS Server Use DNS services for the various solution components to perform name resolution. This solution uses Microsoft DNS service running on Windows Server 2012 R2. Active Directory Server Various solution components require Active Directory services to function properly. The Microsoft AD Service runs on a Windows Server 2012 R2. IP network A standard Ethernet network carries all network traffic with redundant cabling and switching. A shared IP network carries user and management traffic. Storage network The storage network is an isolated network that provides hosts with access to the storage arrays. VSPEX offers different options for block-based and file-based storage. Storage network for block This solution provides three options for block-based storage networks. Fibre Channel (FC) A set of standards that define protocols for performing high speed serial data transfer. FC provides a standard data transport frame among servers and shared storage devices. Fibre Channel over Ethernet (FCoE) A newer storage networking protocol that supports FC natively over Ethernet, by encapsulating FC frames into Ethernet frames. This allows the encapsulated FC frames to run alongside traditional Internet Protocol (IP) traffic. 10 Gb Ethernet (iscsi) Enables the transport of SCSI blocks over a TCP/IP network. iscsi works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network. Storage network for file With file-based storage, a private, non-routable 10 GbE subnet carries the storage traffic. VNX storage array The VSPEX private cloud configuration begins with the VNX family storage arrays, including: EMC VNX5200 array Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SMB 3.0) shares (for file) to Hyper-V hosts for up to 200 virtual machines. EMC VNX5400 array Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SMB 3.0) shares (for file) to Hyper-V hosts for up to 300 virtual machines. EMC VNX5600 array Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SM B3.0) shares (for file) to Hyper-V hosts for up to 600 virtual machines. EMC VNX5800 array Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SMB 3.0) shares (for file) to Hyper-V hosts for up to 1,000 virtual machines. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 75

76 Solution Architecture Overview VNX family storage arrays include the following components: Storage processors (SPs) support block data with UltraFlex I/O technology that supports Fibre Channel, iscsi, and FCoE protocols. The SPs provide access for all external hosts, and for the file side of the VNX array. Disk processor enclosure (DPE) is 3U in size, and houses the SPs and the first tray of disks. The VNX 5200, VNX5400, VNX5600 and VNX5800 use this component. X-Blades (or Data Movers) access data from the backend and provide host access using the same UltraFlex I/O technology that supports the NFS, CIFS, MPFS, and pnfs protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists. Data Mover enclosure (DME) is 2U in size and houses the Data Movers (X-Blades). All VNX for File models use a DME. Standby power supply (SPS) is 1U in size and provides enough power to each SP to ensure that any data in-flight de-stages to the array s vault area in the event of a power failure. This ensures that no writes are lost. On restart of the array, the pending writes are reconciled and made persistent. Control Station is 1U in size and provides management functions to the X-Blades. The Control Station is responsible for X-Blade failover. An optional secondary Control Station ensures redundancy on the VNX array. Disk-array enclosures (DAE) house the drives used in the array. Hardware resources Table 8 lists the hardware used in this solution. Table 8. Solution hardware Component Microsoft Hyper-V servers CPU Configuration 1 vcpu per virtual machine 4 vcpus per physical core 8 vcpus per physical core (Ivy Bridge or later) 76 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

77 Solution Architecture Overview Component Configuration For 200 virtual machines: 200 vcpus Minimum of 50 physical CPUs Minimum of 25 physical CPUs (Ivy Bridge or later) For 300 virtual machines: 300 vcpus Minimum of 75 physical CPUs Minimum of 38 physical CPUs (Ivy Bridge or later) For 600 virtual machines: 600 vcpus Minimum of 150 physical CPUs Minimum of 75 physical CPUs (Ivy Bridge or later) For 1,000 virtual machines: 1,000 vcpus Minimum of 250 physical CPUs Minimum of 125 physical CPUs (Ivy Bridge or later) Memory 2 GB RAM per virtual machine 2 GB RAM reservation per Hyper-V host For 200 virtual machines: Minimum of 400 GB RAM Add 2GB for each physical server For 300 virtual machines: Minimum of 600 GB RAM Add 2GB for each physical server For 600 virtual machines: Minimum of 1200 GB RAM Add 2GB for each physical server For 1,000 virtual machines: Minimum of 2000 GB RAM Add 2GB for each physical server Network Block 2 x 10 GbE NICs per server 2 HBAs per server File 4 x 10 GbE NICs per server Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement Microsoft Hyper-V High-Availability (HA) and meet the listed minimums. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 77

78 Solution Architecture Overview Component Configuration Network infrastructure Minimum switching capacity Block 2 physical switches 2 x 10 GbE ports per Hyper-V server 1 x 1 GbE port per Control Station for management 2 ports per Hyper-V server, for storage network 2 ports per SP, for storage data File 2 physical switches 4 x 10 GbE ports per Hyper-V server 1 x 1 GbE port per Control Station for management 2 x 10 GbE ports per Data Mover for data EMC Backup Avamar Refer to EMC Backup and Recovery Options for VSPEX Private Clouds White Paper. Data Domain Refer to EMC Backup and Recovery Options for VSPEX Private Clouds White Paper. 78 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

79 Solution Architecture Overview Component EMC VNX series storage array Block Configuration Common: 1 x 1 GbE interface per Control Station for management 1 x 1 GbE interface per SP for management 2 front end ports per SP system disks for VNX OE For 200 virtual machines: EMC VNX x 600 GB 15k rpm 3.5-inch Serial-attached SCSI (SAS) drives 4 x 200 GB flash drives. 3 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares 1 x 200 GB flash drive as a hot spare For 300 virtual machines: EMC VNX x 600 GB 15k rpm 3.5-inch Serial-attached SCSI (SAS) drives 6 x 200 GB flash drives. 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares 1 x 200 GB flash drive as a hot spare For 600 virtual machines: EMC VNX x 600 GB 15k rpm 3.5-inch SAS drives 10 x 200 GB flash drives. 8x 600 GB 15k rpm 3.5-inch SAS drives as hot spares 1 x 200 GB flash drive as a hot spare For 1,000 virtual machines: EMC VNX x 600 GB 15k rpm 3.5-inch SAS drives 16 x 200 GB flash drives. 12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares 1 x 200 GB flash drive as a hot spare EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 79

80 Solution Architecture Overview Component File Configuration Common: 2 x 10 GbE interfaces per Data Mover 1 x 1 GbE interface per Control Station for management 1 x 1 GbE interface per SP for management System disks for VNX OE For 200 virtual machines EMC VNX Data Movers (active/standby) 75 x 600 GB 15k rpm 3.5-inch SAS drives 4 x 200 GB flash drives. 3 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares 1 x 200 GB flash drive as a hot spare For 300 virtual machines EMC VNX Data Movers (active/standby) 110 x 600 GB 15k rpm 3.5-inch SAS drives 6 x 200 GB flash drives. 5 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares 1 x 200 GB flash drive as a hot spare For 600 virtual machines EMC VNX Data Movers (active/standby) 220 x 600 GB 15k rpm 3.5-inch SAS drives 10 x 200 GB flash drives. 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares 1 x 200 GB flash drive as a hot spare For 1,000 virtual machines EMC VNX Data Movers (2 active/1 standby) 360 x 600 GB 15k rpm 3.5-inch SAS drives 16 x 200 GB flash drives. 12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares 1 x 200 GB flash drive as a hot spare 80 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

81 Solution Architecture Overview Component Shared infrastructure Configuration In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document. If implemented without existing infrastructure, add the following: 2 physical servers 16 GB RAM per server 4 processor cores per server 2 x 1 GbE ports per server Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed. Note: The solution recommends using a 10 GbE network or an equivalent 1GbE network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled. Software resources Table 9 lists the software used in this solution. Table 9. Solution software Software Microsoft Hyper-V Microsoft Windows Server Microsoft System Center Virtual Machine Manager Configuration Windows Server 2012 Data Center Edition (Data Center Edition is necessary to support the number of virtual machines in this solution) Version 2012 SP1 Microsoft SQL Server EMC VNX EMC VNX OE for file 8.0 EMC VNX OE for block Version 2012 Enterprise Edition Note: Any supported database for SCVMM is acceptable. EMC Storage Integrator (ESI) EMC PowerPath Next-Generation Backup EMC Avamar Check for latest version Check for latest version 6.1 SP1 EMC Data Domain OS 5.2 Virtual machines (used for validation not required for deployment) Base operating system Microsoft Windows Server 2012 Data Center Edition EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 81

82 Solution Architecture Overview Server configuration guidelines Overview When designing and ordering the compute/server layer of the VSPEX solution, several factors may impact the final purchase. From a virtualization perspective, if a system workload is well understood, features such as Dynamic Memory and Smart Paging can reduce the aggregate memory requirement. If the virtual machine pool does not have a high level of peak or concurrent usage, reduce the number of vcpus. Conversely, if the applications being deployed are highly computational in nature, increase the number of CPUs and memory purchased. Ivy Bridge Updates Testing on Intel s Ivy Bridge series processors has shown significant increases in virtual machine density from the server resource perspective. If your server deployment comprises Ivy Bridge processors, we recommend increasing the vcpu/pcpu ratio from 4:1 to 8:1. This essentially halves the number of server cores required to host the reference virtual machines. Figure 33 demonstrates results from tested configurations: Figure 33. Ivy Bridge processor guidance Current VSPEX sizing guidelines specify a virtual CPU core to physical CPU core ratio of 4:1 (8:1 for Ivy Bridge or later processors). This ratio was based upon an average sampling of CPU technologies available at the time of testing. As CPU technologies advance, OEM server vendors that are VSPEX partners may suggest differing (normally higher) ratios. Please follow the updated guidance supplied by your OEM server vendor. Table 10 lists the hardware resources that are used for the compute layer. 82 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

83 Solution Architecture Overview

Table 10. Hardware resources for compute layer

Component: Microsoft Hyper-V servers

CPU:
1 vCPU per virtual machine
4 vCPUs per physical core (8 vCPUs per physical core for Ivy Bridge or later)
For 200 virtual machines: 200 vCPUs; minimum of 50 physical CPU cores (25 for Ivy Bridge or later)
For 300 virtual machines: 300 vCPUs; minimum of 75 physical CPU cores (38 for Ivy Bridge or later)
For 600 virtual machines: 600 vCPUs; minimum of 150 physical CPU cores (75 for Ivy Bridge or later)
For 1,000 virtual machines: 1,000 vCPUs; minimum of 250 physical CPU cores (125 for Ivy Bridge or later)

Memory:
2 GB RAM per virtual machine
2 GB RAM reservation per Hyper-V host
For 200 virtual machines: minimum of 400 GB RAM, plus 2 GB for each physical server
For 300 virtual machines: minimum of 600 GB RAM, plus 2 GB for each physical server
For 600 virtual machines: minimum of 1,200 GB RAM, plus 2 GB for each physical server
For 1,000 virtual machines: minimum of 2,000 GB RAM, plus 2 GB for each physical server

Network:
Block: 2 x 10 GbE NICs per server; 2 HBAs per server
File: 4 x 10 GbE NICs per server

84 Solution Architecture Overview

Note: Add at least one additional server to the infrastructure beyond the minimum requirements so that Hyper-V high availability can be implemented while the listed minimums are still met.
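The CPU and memory minimums in Table 10 follow directly from the reference virtual machine assumptions: 1 vCPU and 2 GB RAM per virtual machine, a 4:1 vCPU-to-physical-core ratio (8:1 for Ivy Bridge or later), and a 2 GB reservation per Hyper-V host. The short sketch below simply reproduces that arithmetic; the function name and the host count used in the example are illustrative and not part of the validated configuration.

```python
import math

def compute_layer_minimums(vm_count, host_count, ivy_bridge_or_later=False):
    """Reproduce the Table 10 arithmetic for the compute layer.

    Assumptions taken from this chapter:
      - 1 vCPU and 2 GB RAM per reference virtual machine
      - 4 vCPUs per physical core, or 8 for Ivy Bridge or later processors
      - 2 GB RAM reserved per Hyper-V host for the parent partition
    """
    vcpus_per_core = 8 if ivy_bridge_or_later else 4
    min_cores = math.ceil(vm_count / vcpus_per_core)
    min_ram_gb = vm_count * 2 + host_count * 2
    return min_cores, min_ram_gb

# 300 reference virtual machines on, for example, 10 Hyper-V hosts
print(compute_layer_minimums(300, 10))        # (75, 620)
print(compute_layer_minimums(300, 10, True))  # (38, 620)
```

Remember that these minimums exclude the extra server recommended above for Hyper-V high availability.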

85 Solution Architecture Overview Hyper-V memory virtualization Microsoft Hyper-V has a number of advanced features to maximize performance, and overall resource utilization. The most important features relate to memory management. This section describes some of these features, and the items to consider when using these features in the VSPEX environment. In general, virtual machines on a single hypervisor consume memory as a pool of resources, as shown in Figure 34. Figure 34. Hypervisor memory consumption Understanding the technologies in this section enhances this basic concept. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 85

86 Solution Architecture Overview Dynamic Memory Dynamic Memory was introduced in Windows Server 2008 R2 SP1 to increase physical memory efficiency by treating memory as a shared resource, and dynamically allocating it to virtual machines. The amount of memory used by each virtual machine is adjustable at any time. Dynamic Memory reclaims unused memory from idle virtual machines, which allows more virtual machines to run at any given time. In Windows Server 2012, Dynamic Memory enables administrators to dynamically increase the maximum memory available to virtual machines. Smart Paging Even with Dynamic Memory, Hyper-V allows more virtual machines than the available physical memory can support. In most cases, there is a memory gap between minimum memory and startup memory. Smart Paging is a memory management technique that uses disk resources as temporary memory replacement. It swaps out less-used memory to disk storage, and swaps in when needed. Performance degradation is a potential drawback of Smart Paging. Hyper-V continues to use the guest paging when the host memory is oversubscribed because it is more efficient than Smart Paging. Non-Uniform Memory Access Non-Uniform Memory Access (NUMA) is a multi-node computer technology that enables a CPU to access remote-node memory. This type of memory access degrades performance, so Windows Server 2012 employs a process known as processor affinity, which pins threads to a single CPU to avoid remote-node memory access. In previous versions of Windows, this feature is only available to the host. Windows Server 2012 extends this functionality to the virtual machines, which provides improved performance in symmetrical multiprocessor (SMP) environments. Memory configuration guidelines The memory configuration guidelines take into account Hyper-V memory overhead, and the virtual machine memory settings. Hyper-V memory overhead Virtualized memory has some associated overhead, which includes the memory consumed by Hyper-V, the parent partition, and additional overhead for each virtual machine. Leave at least 2 GB memory for the Hyper-V parent partition in this solution. Virtual machine memory In this solution, each virtual machine gets 2 GB memory in fixed mode. 86 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

87 Solution Architecture Overview

Network configuration guidelines

Overview

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines outlined here consider jumbo frames, VLANs, and LACP on EMC unified storage. For detailed network resource requirements, refer to Table 11.

Table 11. Hardware resources for network

Component: Network infrastructure

Minimum switching capacity (block):
2 physical switches
2 x 10 GbE ports per Hyper-V server
1 x 1 GbE port per Control Station for management
2 ports per Hyper-V server for the storage network
2 ports per SP for storage data

Minimum switching capacity (file):
2 physical switches
4 x 10 GbE ports per Hyper-V server
1 x 1 GbE port per Control Station for management
2 x 10 GbE ports per Data Mover for data

Note: The solution may use a 1 GbE network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.

VLAN

Isolate network traffic so that the traffic between hosts and storage, hosts and clients, and management traffic all move over isolated networks. In some cases, physical isolation may be required for regulatory or policy compliance reasons; but in many cases logical isolation with VLANs is sufficient.

This solution calls for a minimum of three VLANs for the following usage:
Client access
Storage (for iSCSI or SMB only)
Management

88 Solution Architecture Overview

Figure 35 depicts the VLANs and the network connectivity requirements for a block-based VNX array.

Figure 35. Required networks for block storage

89 Solution Architecture Overview

Figure 36 depicts the VLANs and the network connectivity requirements for a file-based VNX array.

Figure 36. Required networks for file storage

The client access network is for users of the system, or clients, to communicate with the infrastructure. The storage network provides communication between the compute layer and the storage layer. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts.

Note: Some best practices call for additional network isolation for cluster traffic, virtualization layer communication, and other features. Implement these additional networks if necessary.

Enable jumbo frames (iSCSI, FCoE, or SMB only)

This solution recommends setting the MTU to 9,000 (jumbo frames) for efficient storage and virtual machine migration traffic. Most switch vendors also suggest enabling baby jumbo frames (setting the MTU to 2158) to prevent frame fragmentation.

90 Solution Architecture Overview Refer to the switch vendor guidelines to enable jumbo frames for storage and host ports on the switches. Link aggregation (SMB only) Link aggregation resembles an Ethernet channel, but uses the LACP IEEE 802.3ad standard. The IEEE 802.3ad standard supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost in the Ethernet port, the link fails over to another port. All network traffic is distributed across the active links. Storage configuration guidelines Overview This section provides guidelines for setting up the storage layer of the solution to provide high-availability and the expected level of performance. Hyper-V allows more than one method of using storage when hosting virtual machines. The tested solutions described below use different block protocols (FC/FCoE/iSCSI) and CIFS (for file), and the storage layout described adheres to all current best practices. A customer or architect with the necessary training and background can make modifications based upon their understanding of the system usage and load if required. However, the building blocks described in this guide ensure acceptable performance. The VSPEX storage building blocks section provides specific recommendations for customization. Table 12 lists hardware resources for storage. 90 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

91 Solution Architecture Overview

Table 12. Hardware resources for storage

Component: EMC VNX series storage array, block configuration

Common:
1 x 1 GbE interface per Control Station for management
1 x 1 GbE interface per SP for management
2 front-end ports per SP
System disks for VNX OE

For 200 virtual machines (EMC VNX5200):
75 x 600 GB 15k rpm 3.5-inch SAS drives
4 x 200 GB flash drives
3 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB flash drive as a hot spare

For 300 virtual machines (EMC VNX5400):
110 x 600 GB 15k rpm 3.5-inch SAS drives
6 x 200 GB flash drives
4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB flash drive as a hot spare

For 600 virtual machines (EMC VNX5600):
220 x 600 GB 15k rpm 3.5-inch SAS drives
10 x 200 GB flash drives
8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB flash drive as a hot spare

For 1,000 virtual machines (EMC VNX5800):
360 x 600 GB 15k rpm 3.5-inch SAS drives
16 x 200 GB flash drives
12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB flash drive as a hot spare

92 Solution Architecture Overview

Component: EMC VNX series storage array, file configuration

Common:
2 x 10 GbE interfaces per Data Mover
1 x 1 GbE interface per Control Station for management
1 x 1 GbE interface per SP for management
System disks for VNX OE

For 200 virtual machines (EMC VNX5200):
2 x Data Movers (active/standby)
75 x 600 GB 15k rpm 3.5-inch SAS drives
4 x 200 GB flash drives
3 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB flash drive as a hot spare

For 300 virtual machines (EMC VNX5400):
2 x Data Movers (active/standby)
110 x 600 GB 15k rpm 3.5-inch SAS drives
6 x 200 GB flash drives
4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB flash drive as a hot spare

For 600 virtual machines (EMC VNX5600):
2 x Data Movers (active/standby)
220 x 600 GB 15k rpm 3.5-inch SAS drives
10 x 200 GB flash drives
8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB flash drive as a hot spare

For 1,000 virtual machines (EMC VNX5800):
3 x Data Movers (2 x active/1 x standby)
360 x 600 GB 15k rpm 3.5-inch SAS drives
16 x 200 GB flash drives
12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB flash drive as a hot spare

Note: For the VNX5800, EMC recommends that you run no more than 600 virtual machines on a single active Data Mover. Configure two active Data Movers (2 x active/1 x standby) when scaling to 600 virtual machines or more in that case.

93 Solution Architecture Overview Hyper-V storage virtualization for VSPEX This section provides guidelines to set up the storage layer of the solution to provide high-availability and the expected level of performance. Windows Server 2012 Hyper-V and Failover Clustering use Cluster Shared Volumes v2 and VHDX features to virtualize storage presented from external shared storage system to host virtual machines. In Figure 37, the storage array presents either blockbased LUNs (as CSV), or file-based CIFS share (as SMB shares) to the Windows hosts to host virtual machines. Figure 37. Hyper-V virtual disk types CIFS Windows Server 2012 supports using CIFS (SMB 3.0) file shares as shared storage for a Hyper-V virtual machine. CSV A Cluster Shared Volume (CSV) is a shared disk containing a New Technology File System (NTFS) volume that is made accessible by all nodes of a Windows Failover Cluster. It can be deployed over any SCSI-based local or network storage. Pass Through Windows 2012 also supports Pass Through, which allows a virtual machine to access a physical disk mapped to the host that does not have a volume configured on it. SMB 3.0 (file-based storage only) The SMB protocol is the file sharing protocol that is used by default in Windows. With the introduction of Windows Server 2012, it provides a vast set of new SMB features with an updated (SMB 3.0) protocol. Some of the key features available with Windows Server 2012 SMB 3.0 are: SMB Transparent Failover SMB Scale Out SMB Multichannel EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 93

94 Solution Architecture Overview

SMB Direct
SMB Encryption
VSS for SMB file shares
SMB Directory Leasing
SMB PowerShell

With these new features, SMB 3.0 offers richer capabilities that, when combined, provide organizations with a high-performance storage alternative to traditional Fibre Channel storage solutions at a lower cost.

Note: For more details about SMB 3.0, refer to Chapter 3.

ODX

Offloaded Data Transfer (ODX) is a feature of the storage stack in Microsoft Windows Server 2012 that gives users the ability to use the investment in external storage arrays to offload data transfers from the server to the storage arrays. When used with storage hardware that supports the ODX feature, file copy operations are initiated by the host but performed by the storage device. ODX eliminates the data transfer between the storage and the Hyper-V hosts by using a token-based mechanism for reading and writing data within the storage array, and reduces the load on your network and hosts.

Using ODX helps to enable rapid cloning and migration of virtual machines. Because the file transfer is offloaded to the storage array when using ODX, host resource usage, such as CPU and network, is significantly reduced. By maximizing the use of the storage array, ODX minimizes latencies and improves the transfer speed of large files, such as database or video files. When performing file operations that are supported by ODX, data transfers are automatically offloaded to the storage array and are transparent to users. ODX is enabled by default in Windows Server 2012.

VHDX

Hyper-V in Windows Server 2012 contains an update to the VHD format called VHDX, which has much larger capacity and built-in resiliency. The main features of the VHDX format are:

Support for virtual hard disk storage with a capacity of up to 64 TB.
Additional protection against data corruption during power failures by logging updates to the VHDX metadata structures.
Optimal structure alignment of the virtual hard disk format to suit large sector disks.

The VHDX format also has the following features:

Larger block size for dynamic and differential disks, which enables the disks to better meet the needs of the workload.

95 Solution Architecture Overview The 4 KB logical sector virtual disk that enables increased performance when used by applications and workloads that are designed for 4 KB sectors. The ability to store custom metadata about the files that the user might want to record, such as the operating system version or applied updates. Space reclamation features that can result in smaller file size and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware). VSPEX storage building blocks Sizing the storage system to meet virtual server IOPS is a complicated process. When I/O reaches the storage array, several components such as the Data Mover (for filebased storage), SPs, back-end dynamic random access memory (DRAM) cache, FAST VP or FAST Cache (if used), and disks serve that I/O. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications. VSPEX uses a building block approach to reduce this complexity. A building block is a set of disk spindles that can support a certain number of virtual servers in the VSPEX architecture. Each building block combines several disk spindles to create a storage pool that supports the needs of the private cloud environment. Each building block storage pool, regardless of the size, contains two flash drives with FAST VP storage tiering to enhance metadata operations and performance. VSPEX solutions have been engineered to provide a variety of sizing configurations which afford flexibility when designing the solution. Customers can start out by deploying smaller configurations and scale up as their needs grow. At the same time, customers can avoid over-purchasing by choosing a configuration that closely meets their needs. To accomplish this, VSPEX solutions can be deployed using one or both of the scale-points below to obtain the ideal configuration while guaranteeing a given performance level. Building block for 13 virtual servers The first building block can contain up to 13 virtual servers. It has two flash drives and five SAS drives in a storage pool, as shown in Figure 38. Figure 38. Building block for 13 virtual servers This is the smallest building block qualified for the VSPEX architecture. This building block can be expanded by adding five SAS drives and allowing the pool to restripe to add support for 13 more virtual servers. For details about pool expansion and restriping, refer to White Paper: EMC VNX Virtual Provisioning Applied Technology. Building block for 125 virtual servers The second building block can contain up to 125 virtual servers. It contains two flash drives, and 45 SAS drives, as shown in Figure 39. The following sections outline an EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 95

96 Solution Architecture Overview

approach to grow from 13 virtual machines in a pool to 125 virtual machines in a pool. However, after reaching 125 virtual machines in a pool, do not go to 138. Create a new pool and start the scaling sequence again.

Figure 39. Building block for 125 virtual servers

Implement this building block with all of the resources in the pool initially, or expand the pool over time as the environment grows. Table 13 lists the flash and SAS requirements in a pool for different numbers of virtual servers.

Table 13. Number of disks required for different numbers of virtual machines

Virtual servers: Flash drives / SAS drives
13: 2 / 5
26: 2 / 10
39: 2 / 15
52: 2 / 20
65: 2 / 25
78: 2 / 30
91: 2 / 35
104: 2 / 40
125: 2 / 45*

* Note: Due to increased efficiency with larger stripes, the building block with 45 SAS drives can support up to 125 virtual servers.

To grow the environment beyond 125 virtual servers, create another storage pool using the building block method described here.

VSPEX private cloud validated maximums

VSPEX private cloud configurations are validated on the VNX5200, VNX5400, VNX5600, and VNX5800 platforms. Each platform has different capabilities in terms of processors, memory, and disks. For each array, there is a recommended maximum VSPEX private cloud configuration. In addition to the VSPEX private cloud building

97 Solution Architecture Overview blocks, each storage array must contain the drives used for the VNX OE, and hot spare disks for the environment. Notes: Allocate at least one hot spare for every 30 disks of a given type and size. The pool does not use system drives for additional storage. If required, substitute larger drives for more capacity. To meet the load recommendations, all drives in the storage pool must be 15k RPM and the same size. Storage layout algorithms may produce sub-optimal results with drives of different sizes. For all VSPEX private cloud solutions: Enable FAST VP to automatically tier data to use differences in performance and capacity. FAST VP : Works at the block storage pool level and automatically adjusts where data is stored based on access frequency. Promotes frequently-accessed data to higher tiers of storage in 256 MB increments, and migrates infrequently-accessed data to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is part of a regularly scheduled maintenance operation. For block storage, allocate at least two LUNs to the Windows cluster from a single storage pool to serve as Cluster Shared Volumes for the virtual servers. For file storage, allocate at least two CIFS shares to the Windows cluster from a single storage pool to serve as SMB shares for the virtual servers. Optionally configure flash drives as FAST Cache in the array. LUNs or storage pools where virtual machines reside that have a higher than average I/O requirement can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 97
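The building-block rules above (five SAS drives per 13 virtual servers, a full pool of 45 SAS drives for 125 virtual servers, two flash drives per pool for FAST VP, and at least one hot spare per 30 drives of a given type) can be expressed as a small sizing helper. The sketch below is illustrative only, assuming a new pool is started once a pool reaches 125 virtual servers; it does not replace the validated layouts shown for each array in the sections that follow.

```python
import math

def pool_layout(vm_count):
    """Flash and SAS drives for one storage pool of up to 125 virtual servers."""
    if not 0 < vm_count <= 125:
        raise ValueError("A single pool supports 1 to 125 virtual servers")
    # 5 SAS drives per 13 virtual servers, capped at the 45-drive full building block
    sas = min(45, math.ceil(vm_count / 13) * 5)
    return 2, sas            # every pool also gets 2 flash drives for FAST VP

def building_blocks(total_vms):
    """Split a target virtual machine count into pools and add hot spares."""
    pools, remaining = [], total_vms
    while remaining > 0:
        vms = min(remaining, 125)
        pools.append(pool_layout(vms))
        remaining -= vms
    flash = sum(f for f, _ in pools)
    sas = sum(s for _, s in pools)
    spares = {"sas": math.ceil(sas / 30), "flash": math.ceil(flash / 30)}
    return pools, spares

print(building_blocks(300))
# ([(2, 45), (2, 45), (2, 20)], {'sas': 4, 'flash': 1})
```

For 300 virtual machines this returns two 45-drive pools plus one 20-drive pool, six flash drives, four SAS hot spares, and one flash hot spare, which matches the VNX5400 layout shown later in this section.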

98 Solution Architecture Overview VNX5200 The VNX5200 is validated for up to 200 virtual servers. Figure 40 shows a typical configuration. Figure 40. Storage layout for 200 virtual machines using VNX5200 This configuration uses the following storage layout: Seventy-five 600 GB SAS drives are allocated to two block-based storage pools: one RAID-5 (4+1) pool with 45 SAS disks for 125 virtual machines and one RAID-5 (4+1) pool with 30 SAS disks for 75 virtual machines. Note: To meet the load recommendations, all drives in the storage pool must be 15k rpm and the same size. Storage layout algorithms may produce sub-optimal results with drives of different sizes. Four 200 GB flash drives are configured for Fast VP, two for each pool configured as RAID 1/0. Three 600 GB SAS drives are configured as hot spares. One 200 GB flash drive is configured as a hot spare. Enable FAST VP to automatically tier data to leverage differences in performance and capacity. FAST VP: Works at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Promotes frequently accessed data to higher tiers of storage in 256-MB increments and migrates infrequently accessed data to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is part of a regularly scheduled maintenance operation. 98 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

99 Solution Architecture Overview

For block, allocate at least two LUNs to the Windows cluster from a single storage pool to serve as Cluster Shared Volumes for the virtual servers. For file, allocate at least two CIFS shares to the Windows cluster from a single storage pool to serve as SMB shares for the virtual servers.

Optionally configure flash drives as FAST Cache (up to 600 GB) in the array. LUNs or storage pools where virtual machines reside that have a higher than average I/O requirement can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite.

Using this configuration, the VNX5200 can support 200 virtual servers as defined in Figure 40.

VNX5400

VNX5400 is validated for up to 300 virtual servers. There are multiple ways to achieve this configuration with the building blocks. Figure 41 shows one potential configuration.

Figure 41. Storage layout for 300 virtual machines using VNX5400

This configuration uses the following storage layout:

One hundred and ten 600 GB SAS disks are allocated to three block-based storage pools: two pools with 45 SAS disks for 125 virtual machines each and one pool with 20 SAS disks for 50 virtual machines.
Four 600 GB SAS disks are configured as hot spares.

100 Solution Architecture Overview Six 200 GB flash drives are configured for Fast VP, two for each pool. One 200 GB flash drive is allocated as a hot spare. Using this configuration, the VNX5400 can support 300 virtual servers as defined in the Reference workload. 100 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

101 Solution Architecture Overview VNX5600 VNX5600 has been validated for up to 600 virtual servers. There are multiple ways to achieve this configuration with the building block approach. Figure 42 shows one potential configuration. Figure 42. Storage layout for 600 virtual machines using VNX5600 This configuration uses the following storage layout: Two hundred and twenty 600 GB SAS disks are allocated to five block-based storage pools: four pools with 45 SAS disks for 125 virtual machines each and one pool with 40 SAS disks for 100 virtual machines. Eight 600 GB SAS disks are configured as hot spares. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 101

102 Solution Architecture Overview Ten 200 GB flash drives are configured for Fast VP, two for each pool One 200 GB flash drive is allocated as a hot spare. Using this configuration, the VNX5600 can support 600 virtual servers as defined in Reference workload. VNX5800 VNX5800 is validated for up to 1,000 virtual servers. There are multiple ways to achieve this configuration with the building blocks. Figure 43 shows one potential configuration. 102 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

103 Solution Architecture Overview Figure 43. Storage layout for 1,000 virtual machines using VNX5800 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 103

104 Solution Architecture Overview This configuration uses the following storage layout: Three hundred and sixty 600 GB SAS disks are allocated to eight block-based storage pools: each with 45 SAS disks for 125 virtual machines. Twelve 600 GB SAS disks are configured as hot spares. Sixteen 200 GB flash drives are configured for Fast VP, two for each pool. One 200 GB flash drive is allocated as a hot spare. Using this configuration, the VNX5800 can support 1,000 virtual servers as defined in the Reference workload. Conclusion The scale levels listed in Figure 44 highlight the entry points and supported maximums for the arrays in the VSPEX private cloud environment. The entry points represent optimal model demarcations in terms of the number of virtual machines within the environment. This aids in providing a frame of reference to determine which VNX array to choose based upon your requirements. It is acceptable to configure any of the listed arrays with a smaller number of virtual machines than the maximums supported using the building block approach described earlier. Figure 44. Maximum scale levels and entry points of different arrays 104 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to
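Because each array model has a validated maximum (200 virtual machines on the VNX5200, 300 on the VNX5400, 600 on the VNX5600, and 1,000 on the VNX5800), a first-pass platform choice for a given target is a simple threshold lookup. The helper below is a sketch only; the entry points shown in Figure 44, expected growth, and OEM guidance should still drive the final choice.

```python
# Validated VSPEX private cloud maximums per VNX model (from this chapter)
VALIDATED_MAXIMUMS = [
    ("VNX5200", 200),
    ("VNX5400", 300),
    ("VNX5600", 600),
    ("VNX5800", 1000),
]

def smallest_array_for(vm_count):
    """Return the smallest validated VNX model that covers the target VM count."""
    for model, maximum in VALIDATED_MAXIMUMS:
        if vm_count <= maximum:
            return model
    raise ValueError("Scale beyond 1,000 virtual machines is outside this solution")

print(smallest_array_for(250))   # VNX5400
print(smallest_array_for(660))   # VNX5800
```

Note that the worked examples later in this chapter deliberately choose a larger array than the bare minimum (for example, a VNX5400 for an initial 14 reference virtual machines) to leave room for growth.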

105 Solution Architecture Overview High-availability and failover Overview Virtualization layer This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive single-unit failures with little or no impact on business operations. Configure high availability in the virtualization layer, and configure the hypervisor to automatically restart failed virtual machines. Figure 45 illustrates the hypervisor layer responding to a failure in the compute layer. Figure 45. High availability at the virtualization layer By implementing high availability at the virtualization layer, even in a hardware failure, the infrastructure attempts to keep as many services running as possible. Compute layer While the choice of servers to implement in the compute layer is flexible, use enterprise class servers designed for the data center. This type of server has redundant power supplies, as shown in Figure 46. Connect the servers to separate power distribution units (PDUs) in accordance with your server vendor s best practices. Figure 46. Redundant power supplies EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 105

106 Solution Architecture Overview To configure HA in the virtualization layer, configure the compute layer with enough resources that meet the needs of the environment, even with a server failure, as demonstrated in Figure 45. Network layer The advanced networking features of VNX provide protection against network connection failures at the array. Each Windows host has multiple connections to user and storage Ethernet networks to guard against link failures, as shown in Figure 47 and Figure 48. Spread these connections across multiple Ethernet switches to guard against component failure in the network. Figure 47. Network layer high availability (VNX) block variant Figure 48. Network layer high availability (VNX) file variant Ensure there is no single point of failure to allow the compute layer to access storage, and communicate with users even if a component fails. Storage layer The VNX design is for five 9s availability by using redundant components throughout the array. All of the array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array provides protection against data loss caused by individual disk failures, and the available hot spare drives can be dynamically allocated to replace a failing disk, as shown in Figure EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

107 Solution Architecture Overview

Figure 49. VNX series HA components

EMC storage arrays support HA by default. When configured according to the directions in their installation guides, no single unit failures result in data loss or unavailability.

Validation test profile

Profile characteristics

The VSPEX solution was validated with the environment profile described in Table 14.

Table 14. Profile characteristics

Number of virtual machines: 200/300/600/1,000
Virtual machine OS: Windows Server 2012 Datacenter Edition
Processors per virtual machine: 1
Number of virtual processors per physical CPU core: 4
RAM per virtual machine: 2 GB
Average storage available for each virtual machine: 100 GB
Average IOPS per virtual machine: 25 IOPS
Number of LUNs or CIFS shares to store virtual machine disks: 6/10/16

108 Solution Architecture Overview

Number of virtual machines per LUN or CIFS share: 62 or 63 per LUN or CIFS share
Disk and RAID type for LUNs or CIFS shares: RAID 5, 600 GB, 15k rpm, 3.5-inch SAS disks

Note: This solution was tested and validated with Windows Server 2012 R2 as the operating system for Hyper-V hosts and virtual machines; however, it also supports Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012. The sizing and configuration for Hyper-V hosts is the same for all supported versions of Windows Server.

Backup and recovery configuration guidelines

For complete backup and recovery guidelines for this VSPEX Private Cloud solution, refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Sizing guidelines

The following sections provide definitions of the reference workload used to size and implement the VSPEX architectures. There is guidance on how to correlate those reference workloads to customer workloads, and how that may change the end delivery from the server and network perspective.

Modify the storage definition by adding drives for greater capacity and performance, and by adding features such as FAST Cache and FAST VP. The disk layouts provide support for the appropriate number of virtual machines at the defined performance level and typical operations such as snapshots. Decreasing the number of recommended drives or stepping down an array type can result in lower IOPS per virtual machine, and a reduced user experience caused by higher response times.

Reference workload

Overview

When you move an existing server to a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system. In any discussion about virtual infrastructures, first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that considers every possible combination of workload characteristics.

Defining the reference workload

To simplify the discussion, this section presents a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can decide which reference architecture to choose. For VSPEX solutions, the reference workload is a single virtual machine. Table 15 lists the characteristics of this virtual machine.

109 Solution Architecture Overview

Table 15. Virtual machine characteristics

Virtual machine operating system: Microsoft Windows Server 2012 R2 Datacenter Edition
Virtual processors per virtual machine: 1
RAM per virtual machine: 2 GB
Available storage capacity per virtual machine: 100 GB
I/O operations per second (IOPS) per virtual machine: 25
I/O pattern: Random
I/O read/write ratio: 2:1

This specification for a virtual machine does not represent any specific application. Rather, it represents a single common point of reference to measure other virtual machines.

Server processor capabilities are constantly evolving. Server providers aligned with the VSPEX program may specify updated compute expectations based on recent technology changes. This guidance may override the compute requirements specified in the reference workload.

Applying the reference workload

Overview

The solution creates a pool of resources that are sufficient to host a target number of reference virtual machines with the characteristics shown in Table 15. The customer virtual machines may not exactly match the specifications. In that case, define a single specific customer virtual machine as the equivalent of some number of reference virtual machines together, and assume these virtual machines are in use in the pool. Continue to provision virtual machines from the resource pool until no resources remain.

Example 1: Custom-built application

A small custom-built application server must move into this virtual infrastructure. The physical hardware that supports the application is not fully utilized. A careful analysis of the existing application reveals that the application can use one processor and needs 3 GB memory to run normally. The I/O workload ranges from 4 IOPS at idle to a peak of 15 IOPS when busy. The entire application consumes about 30 GB of local hard drive storage.

Based on these numbers, the resource pool needs the following resources:
CPU of one reference virtual machine
Memory of two reference virtual machines
Storage of one reference virtual machine
I/Os of one reference virtual machine

110 Solution Architecture Overview In this example, an appropriate virtual machine uses the resources for two of the reference virtual machines. If implemented on a VNX5400 storage system which can support up to 300 virtual machines, resources for 298 reference virtual machines remain. Example 2: Pointof-Sale system The database server for a customer s Point-of-Sale system must move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB memory. It uses 200 GB storage and generates 200 IOPS during an average busy cycle. The requirements to virtualize this application are: CPUs of four reference virtual machines Memory of eight reference virtual machines Storage of two reference virtual machines I/Os of eight reference virtual machines In this case, the one appropriate virtual machine uses the resources of eight reference virtual machines. If implemented on a VNX5400 storage system which can support up to 300 virtual machines, resources for 292 reference virtual machines remain. Example 3: Web server The customer s web server must move into this virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB memory. It uses 25 GB storage and generates 50 IOPS during an average busy cycle. The requirements to virtualize this application are: CPUs of two reference virtual machines Memory of four reference virtual machines Storage of one reference virtual machine I/Os of two reference virtual machines In this case, the one appropriate virtual machine uses the resources of four reference virtual machines. If implemented on a VNX5400 storage system which can support up to 300 virtual machines, resources for 296 reference virtual machines remain. Example 4: Decision-support database The database server for a customer s decision support system must move into this virtual infrastructure. It is currently running on a physical system with 10 CPUs and 64 GB memory. It uses 5 TB storage and generates 700 IOPS during an average busy cycle. The requirements to virtualize this application are: CPUs of 10 reference virtual machines Memory of 32 reference virtual machines Storage of 52 reference virtual machines 110 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

111 Solution Architecture Overview I/Os of 28 reference virtual machines In this case, one virtual machine uses the resources of 52 reference virtual machines. If implemented on a VNX5400 storage system which can support up to 300 virtual machines, resources for 248 reference virtual machines remain. Summary of examples These four examples illustrate the flexibility of the resource pool model. In all four cases, the workloads reduce the amount of available resources in the pool. All four examples can be implemented on the same virtual infrastructure with an initial capacity for 300 reference virtual machines, and resources for 234 reference virtual machines remain in the resource pool as shown in Figure 50. Figure 50. Resource pool flexibility In more advanced cases, there may be tradeoffs between memory and I/O or other relationships where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex, and are beyond the scope of the document. Examine the change in resource balance and determine the new level of requirements. Add these virtual machines to the infrastructure with the method described in the examples. Implementing the solution Overview Resource types CPU resources The solution described in this guide requires a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are general requirements that are independent of any particular implementation except that the requirements grow linearly with the target level of scale. This section describes some considerations for implementing the requirements. The solution defines the hardware requirements for the solution in terms of these basic resources: CPU resources Memory resources Network resources Storage resources This section describes the resource types, their use in the solution, and key implementation considerations in a customer environment. The solution defines the number of CPU cores that are required, but not a specific type or configuration. New deployments should use recent revisions of common EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 111

112 Solution Architecture Overview processor technologies. It is assumed that these perform as well as, or better than, the systems used to validate the solution. In any running system, monitor the utilization of resources and adapt as needed. The reference virtual machine and required hardware resources in the solution assume that there are four virtual CPUs for each physical processor core (4:1 ratio). Usually, this provides an appropriate level of resources for the hosted virtual machines; however, this ratio may not be appropriate in all use cases. Monitor the CPU utilization at the hypervisor layer to determine if more resources are required. Memory resources Each virtual server in the solution must have 2 GB of memory. In a virtual environment, it is common to provision virtual machines with more memory than is installed on the physical hypervisor server because of budget constraints. Memory over-commitment assumes that each virtual machine does not use all its allocated memory. To oversubscribe the memory usage to some degree makes business sense. The administrator has the responsibility to proactively monitor the oversubscription rate such that it does not shift the bottleneck away from the server and become a burden to the storage subsystem via page file swapping. This solution is validated with statically assigned memory and no over-commitment of memory resources. If a real-world environment uses over-committed memory, monitor the system memory utilization and associated page file I/O activity consistently to ensure that a memory shortfall does not cause unexpected results. Network resources The solution outlines the minimum needs of the system. If the system requires additional bandwidth, add capability at both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports, and can add ports using EMC UltraFlex I/O modules. For reference purposes in the validated environment, each virtual machine generates 25 IOPS with an average size of 8 KB. This means that each virtual machine is generating at least 200 KB/s traffic on the storage network. For an environment rated for 300 virtual machines, this comes out to a minimum of approximately 60 MB/sec. This is well within the bounds of modern networks. However, this does not consider other operations. For example, additional bandwidth is needed for: User network traffic Virtual machine migration Administrative and management operations The requirements for each of these depend on the use of the environment. It is not practical to provide precise numbers in this context. However, the network described in the solution should be sufficient to handle average workloads for the above use cases. Regardless of the network traffic requirements, always have at least two physical network connections shared for a logical network so that a single link failure does not affect the availability of the system. Design the network so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload. 112 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to
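The 60 MB/s figure above comes straight from multiplying the per-virtual-machine I/O assumptions. A quick back-of-the-envelope calculation like the one below (illustrative only, and ignoring user traffic, virtual machine migration, and management overhead) can be repeated for other scale points or other I/O sizes.

```python
def storage_traffic_mb_per_sec(vm_count, iops_per_vm=25, io_size_kb=8):
    """Minimum steady-state storage network traffic for the reference workload."""
    kb_per_sec = vm_count * iops_per_vm * io_size_kb   # 25 IOPS x 8 KB = 200 KB/s per VM
    return kb_per_sec / 1000                            # approximate MB/s, as in the text

print(storage_traffic_mb_per_sec(300))    # ~60 MB/s for 300 reference virtual machines
print(storage_traffic_mb_per_sec(1000))   # ~200 MB/s at the 1,000 virtual machine scale point
```

As the text notes, this minimum must be supplemented with headroom for the other traffic types and sized so that the aggregate bandwidth is still sufficient after a link failure.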

113 Solution Architecture Overview Storage resources The storage building blocks described in this solution contain layouts for the disks used in the system validation. Each layout balances the available storage capacity with the performance capability of the drives. Consider a few factors when examining storage sizing. Specifically, the array has a collection of disks assigned to a storage pool. From that storage pool, provision CIFS shares to the Windows cluster. Each layer has a specific configuration that is defined for the solution and documented in Chapter 5. It is acceptable to replace drives with larger capacity drives of the same type and performance characteristics, or with higher performance drives of the same type and capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves in order to comply with updated or new drive shelf arrangements. Moreover, it is acceptable to scale up using the building blocks with larger numbers of drives up to the limit defined in the VSPEX private cloud validated maximums. Observe the following best practices: Use the latest best practices guidance from EMC regarding drive placement within the shelf. Refer to Applied Best Practices Guide: EMC VNX Unified Best Practices for Performance. When expanding the capability of a storage pool using the building blocks described in this document, use the same type and size of drive in the pool. Create a new pool to use different drive types and sizes. This prevents uneven performance across the pool. Configure at least one hot spare for every type and size of drive on the system. Configure at least one hot spare for every 30 drives of a given type. In other cases where there is a need to deviate from the proposed number and type of drives specified, or the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system and conforms to EMC published best practices. Implementation summary The requirements in the reference architecture are what EMC considers the minimum set of resources to handle the workloads required based on the stated definition of a reference virtual machine. In any customer implementation, the load of a system varies over time as users interact with the system. However, if the customer virtual machines differ significantly from the reference definition, and vary in the same resource group, add more of that resource type to the system to compensate. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 113

114 Solution Architecture Overview Quick assessment of customer environment Overview An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and assess the customer environment. First, summarize the applications planned for migration into the VSPEX private cloud. For each application, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual machines required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each application, as listed in Table 16. Table 16. Blank worksheet row Application CPU (virtual CPUs) Memory (GB) IOPS Capacity (GB) Equivalent reference virtual machines Example application Resource requirements Equivalent reference virtual machines N/A Fill out the resource requirements for the application. The row requires inputs on four different resources: CPU Memory IOPS Capacity CPU requirements Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between physical CPU cores and virtual CPU cores regardless of the physical CPU utilization. In reality, consider whether the target application can effectively use all CPUs presented. Use a performance-monitoring tool, such as perfmon in Microsoft Windows to examine the CPU utilization counter for each CPU. If they are equivalent, implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs required. 114 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

115 Solution Architecture Overview

In any operation that involves performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or 95th percentile value of the resource requirements for planning purposes.

Memory requirements

Server memory plays a key role in ensuring application functionality and performance. Therefore, each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the current memory available to the system and monitor the free memory by using a performance-monitoring tool, such as Microsoft Windows perfmon, to determine memory efficiency.

Storage performance requirements

The storage performance requirements for an application are usually the least understood aspect of performance. Several components become important when discussing the I/O performance of a system. The first is the number of requests coming in, or IOPS. Equally important is the size of the request, or I/O size: a request for 4 KB of data is easier and faster to process than a request for 4 MB of data. That distinction becomes important along with another factor, the average I/O response time, or I/O latency.

IOPS

The reference virtual machine calls for 25 IOPS. To monitor this on an existing system, use a performance-monitoring tool such as Microsoft Windows perfmon. Perfmon provides several counters that can help. The most common are Disk Transfers/sec, Disk Reads/sec, and Disk Writes/sec, under the Logical Disk object.

Note: At the time of publication, Windows perfmon does not provide counters to expose IOPS and latency for CIFS-based VHDX storage. Monitor these areas from the VNX array as discussed in Chapter 7.

The reference virtual machine assumes a 2:1 read/write ratio. Use these counters to determine the total number of IOPS and the approximate ratio of reads to writes for the customer application.

I/O size

The I/O size is important because smaller I/O requests are faster and easier to process than large I/O requests. The reference virtual machine assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are even powers of 2, such as 4 KB, 8 KB, 16 KB, or 32 KB. The performance counter calculates a simple average; it is common to see 11 KB or 15 KB instead of the actual I/O sizes.

If the average customer I/O size is less than 8 KB, use the observed IOPS number. However, if the average I/O size is significantly higher, apply a scaling factor to account for the large I/O size. A safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if the application is using mostly 32 KB I/O requests, use a factor of four (32 KB/8 KB = 4).

116 Solution Architecture Overview If that application is doing 100 IOPS at 32 KB, the factor indicates to plan for 400 IOPS since the reference virtual machine assumed 8 KB I/O sizes. I/O latency Storage capacity requirements Determining equivalent reference virtual machines You can use the average I/O response time, or I/O latency, to measure how quickly the storage system processes I/O requests. The VSPEX solutions meet a target average I/O latency of 20 ms. The recommendations in this document allow the system to continue to meet that target, and at the same time, monitor the system and reevaluate the resource pool utilization if needed. To monitor I/O latency, use the Logical Disk\Avg. Disk sec/transfer counter in Microsoft Windows perfmon. If the I/O latency is continuously over the target, reevaluate the virtual machines in the environment to ensure that these machines do not use more resources than intended. The storage capacity requirement for a running application is usually the easiest resource to quantify. Determine the disk space used, and add an appropriate factor to accommodate growth. For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive with anticipated growth of approximately 20 percent over the next year, requires 48 GB. In addition, reserve space for regular maintenance patches and swapping files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full. With all of the resources defined, determine an appropriate value for the equivalent reference virtual machines line by using the relationships in Table 17. Round all values up to the closest whole number. Table 17. Reference virtual machine resources Resource Value for reference virtual machine Relationship between requirements and equivalent reference virtual machines CPU 1 Equivalent reference virtual machines = resource requirements Memory 2 Equivalent reference virtual machines = (resource requirements)/2 IOPS 25 Equivalent reference virtual machines = (resource requirements)/25 Capacity 100 Equivalent reference virtual machines = (resource requirements)/100 For example, the Point of Sale system used in Example 2: Point-of-Sale system requires four CPUs, 16 GB memory, 200 IOPS, and 200 GB storage. This translates to four reference virtual machines of CPU, eight reference virtual machines of memory, eight reference virtual machines of IOPS, and two reference virtual machines of capacity. Table 18 demonstrates how that machine fits into the worksheet row. 116 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to

117 Solution Architecture Overview

Table 18. Example worksheet row

Example application (the Point-of-Sale system from Example 2):
  Resource requirements: 4 virtual CPUs, 16 GB memory, 200 IOPS, 200 GB capacity
  Equivalent reference virtual machines: 4 (CPU), 8 (memory), 8 (IOPS), 2 (capacity)

Use the highest value in the row to fill in the Equivalent reference virtual machines column. As shown in Figure 51, the example requires eight reference virtual machines.

Figure 51. Required resource from the reference virtual machine pool

Implementation example stage 1

A customer wants to build a virtual infrastructure to support one custom-built application, one Point-of-Sale system, and one web server. The customer computes the sum of the Equivalent reference virtual machines column on the right side of the worksheet, as listed in Table 19, to calculate the total number of reference virtual machines required. The table shows the result of the calculation, with each value rounded up to the nearest whole number.

118 Solution Architecture Overview

Table 19. Example applications, stage 1

Example application #1: Custom-built application
  Resource requirements: 1 virtual CPU, 3 GB memory, 15 IOPS, 30 GB capacity
  Equivalent reference virtual machines: 1 (CPU), 2 (memory), 1 (IOPS), 1 (capacity); reference virtual machines = 2

Example application #2: Point-of-Sale system
  Resource requirements: 4 virtual CPUs, 16 GB memory, 200 IOPS, 200 GB capacity
  Equivalent reference virtual machines: 4 (CPU), 8 (memory), 8 (IOPS), 2 (capacity); reference virtual machines = 8

Example application #3: Web server
  Resource requirements: 2 virtual CPUs, 8 GB memory, 50 IOPS, 25 GB capacity
  Equivalent reference virtual machines: 2 (CPU), 4 (memory), 2 (IOPS), 1 (capacity); reference virtual machines = 4

Total equivalent reference virtual machines: 14

This example requires 14 reference virtual machines. According to the sizing guidelines, one storage pool with 10 SAS drives and two or more flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with a VNX5400, which supports up to 300 reference virtual machines.

Figure 52 shows that 12 reference virtual machines are available after implementing VNX5400 with 10 SAS drives and two flash drives.

119 Solution Architecture Overview Figure 52. Aggregate resource requirements stage 1 Figure 53 shows the pool configuration in this example. Figure 53. Pool configuration stage 1 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 119
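The worksheet arithmetic used in stage 1 (and defined in Table 17) reduces to dividing each resource requirement by the reference virtual machine value, rounding up, and keeping the largest result per application. The sketch below reproduces that calculation for the three stage-1 applications; the optional I/O-size factor reflects the scaling guidance given earlier for requests larger than 8 KB. Function and field names are illustrative only.

```python
import math

# Reference virtual machine: 1 vCPU, 2 GB RAM, 25 IOPS (8 KB average), 100 GB capacity
def equivalent_reference_vms(vcpus, memory_gb, iops, capacity_gb, io_size_kb=8):
    io_factor = max(1, io_size_kb / 8)          # scale IOPS for requests larger than 8 KB
    per_resource = {
        "cpu": vcpus,
        "memory": math.ceil(memory_gb / 2),
        "iops": math.ceil(iops * io_factor / 25),
        "capacity": math.ceil(capacity_gb / 100),
    }
    return max(per_resource.values()), per_resource

apps = {
    "custom application": (1, 3, 15, 30),
    "point of sale":      (4, 16, 200, 200),
    "web server":         (2, 8, 50, 25),
}
total = sum(equivalent_reference_vms(*reqs)[0] for reqs in apps.values())
print(total)   # 14 equivalent reference virtual machines, as in Table 19
```

The same function, applied to the decision-support database (10 vCPUs, 64 GB, 700 IOPS, 5,120 GB), returns 52, which is the value used in stages 2 and 3 below.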

120 Solution Architecture Overview

Implementation example stage 2

This customer must add a decision-support database to this virtual infrastructure. Using the same strategy, calculate the number of equivalent reference virtual machines required, as shown in Table 20.

Table 20. Example applications, stage 2

Example application #1: Custom-built application
  Resource requirements: 1 virtual CPU, 3 GB memory, 15 IOPS, 30 GB capacity
  Equivalent reference virtual machines: 1 (CPU), 2 (memory), 1 (IOPS), 1 (capacity); reference virtual machines = 2

Example application #2: Point-of-Sale system
  Resource requirements: 4 virtual CPUs, 16 GB memory, 200 IOPS, 200 GB capacity
  Equivalent reference virtual machines: 4 (CPU), 8 (memory), 8 (IOPS), 2 (capacity); reference virtual machines = 8

Example application #3: Web server
  Resource requirements: 2 virtual CPUs, 8 GB memory, 50 IOPS, 25 GB capacity
  Equivalent reference virtual machines: 2 (CPU), 4 (memory), 2 (IOPS), 1 (capacity); reference virtual machines = 4

Example application #4: Decision-support database
  Resource requirements: 10 virtual CPUs, 64 GB memory, 700 IOPS, 5,120 GB capacity
  Equivalent reference virtual machines: 10 (CPU), 32 (memory), 28 (IOPS), 52 (capacity); reference virtual machines = 52

Total equivalent reference virtual machines: 66

This example requires 66 reference virtual machines. According to the sizing guidelines, one storage pool with 30 SAS drives and two or more flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with a VNX5400, which supports up to 300 reference virtual machines.

Figure 54 shows that 12 reference virtual machines are available after implementing VNX5400 with 30 SAS drives and two flash drives.

Figure 54. Aggregate resource requirements - stage 2

Figure 55 shows the pool configuration in this example.

Figure 55. Pool configuration - stage 2

Implementation example stage 3

With business growth, the customer must implement a much larger virtual environment to support one custom-built application, one Point of Sale system, two web servers, and three Decision Support System databases. Using the same strategy, calculate the number of equivalent reference virtual machines, as shown in Table 21.

Table 21. Example applications - stage 3 (server and storage resources: Application, CPU (virtual CPUs), Memory (GB), IOPS, Capacity (GB), Reference virtual machines; the first rows cover Example application #1: Custom-built application and Example application #2: Point of Sale system, each with its resource requirements and equivalent reference virtual machines; the table continues on the next page)

Table 21 (continued). The remaining rows cover Example application #3: Web server #1, Example application #4: Decision Support System database #1, Example application #5: Web server #2, Example application #6: Decision Support System database #2, and Example application #7: Decision Support System database #3, each with its resource requirements and equivalent reference virtual machines. Total equivalent reference virtual machines: 174.

This example requires 174 reference virtual machines. According to the sizing guidelines, one storage pool with 70 SAS drives and four or more flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with a VNX5400, which supports up to 300 reference virtual machines. Figure 56 shows that 16 reference virtual machines are available after implementing a VNX5400 with 70 SAS drives and four flash drives.

Figure 56. Aggregate resource requirements - stage 3

Figure 57 shows the pool configuration in this example.

Figure 57. Pool configuration - stage 3

Fine-tuning hardware resources

Usually, the process described in the section Determining equivalent reference virtual machines determines the recommended hardware size for servers and storage. However, in some cases you may need to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this guide; however, you can perform additional customization at this point.

Storage resources

Some applications need their data separated from other workloads. The storage layouts in the VSPEX architectures put all of the virtual machines in a single resource pool. To achieve workload separation, purchase additional disk drives for the application workload and add them to a dedicated pool. With the method outlined in Determining equivalent reference virtual machines, it is easy to build a virtual infrastructure scaling from 13 reference virtual machines to 1,000 reference virtual machines with the building blocks described in VSPEX storage building blocks, while keeping in mind the recommended limits of each storage array documented in VSPEX private cloud validated maximums.

Server resources

For some workloads, the relationship between server needs and storage needs does not match what is outlined in the reference virtual machine. In this scenario, size the server and storage layers separately.

Figure 58. Customizing server resources

To do this, first total the resource requirements for the server components, as shown in Table 22. In the Server Component Totals line at the bottom of the worksheet, add up the server resource requirements from the applications in the table.

Note: When customizing resources in this way, confirm that storage sizing is still appropriate. The Storage Component Totals line at the bottom of Table 22 describes the required amount of storage.

Table 22. Server resource component totals (server and storage resources: Application, CPU (virtual CPUs), Memory (GB), IOPS, Capacity (GB), Reference virtual machines; the first rows cover Example application #1: Custom-built application, Example application #2: Point of Sale system, Example application #3: Web server #1, and Example application #4: Decision Support System database #1, each with its resource requirements and equivalent reference virtual machines; the table continues on the next page)

Table 22 (continued). The remaining rows cover Example application #5: Web server #2, Example application #6: Decision Support System database #2, and Example application #7: Decision Support System database #3, each with its resource requirements and equivalent reference virtual machines. Total equivalent reference virtual machines: 174. The Server customization rows add a Server component totals line, and the Storage customization rows add a Storage component totals line and a Storage component equivalent reference virtual machines line; the total equivalent reference virtual machines for storage is 157.

Note: Calculate the sum of the Resource Requirements row for each application, not the Equivalent reference virtual machines row, to get the Server and Storage Component Totals.

In this example, the target architecture requires 39 virtual CPUs and 227 GB of memory. With the stated assumptions of four virtual machines per physical processor core and no memory over-provisioning, this translates to 10 physical processor cores and 227 GB of memory. With these numbers, the solution can be implemented effectively with fewer server and storage resources.

Note: Keep high-availability requirements in mind when customizing the resource pool hardware.
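The translation from the server component totals to physical hardware is simple arithmetic, shown below as a minimal PowerShell sketch using the figures from this example. The consolidation ratio of four virtual CPUs per physical core is the assumption stated above; substitute your own ratio and totals as appropriate.

# Totals from the worksheet in this example
$vCpuTotal   = 39
$memoryGB    = 227

# Assumed consolidation ratio (virtual CPUs per physical core)
$vCpuPerCore = 4

$physicalCores = [math]::Ceiling($vCpuTotal / $vCpuPerCore)   # 10 cores
"Requires {0} physical processor cores and {1} GB of memory" -f $physicalCores, $memoryGB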

Appendix C provides a blank server resource component totals worksheet.

EMC VSPEX Sizing Tool

To simplify the sizing of this solution, EMC has produced the VSPEX Sizing Tool. This tool uses the same sizing process described in the section above and also incorporates sizing for other VSPEX solutions. The VSPEX Sizing Tool enables you to input the resource requirements from the customer's answers in the qualification worksheet. After you complete the inputs, the tool generates a series of recommendations that allow you to validate your sizing assumptions while providing platform configuration information that meets those requirements. The tool is available at the following location: EMC VSPEX Sizing Tool.

Chapter 5 VSPEX Configuration Guidelines

This chapter presents the following topics:

Overview
Pre-deployment tasks
Customer configuration data
Prepare switches, connect network, and configure switches
Prepare and configure storage array
Install and configure Hyper-V hosts
Install and configure SQL Server database
System Center Virtual Machine Manager server deployment
Summary

Overview

The deployment process consists of the main stages listed in Table 23. After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure. The table also includes references to the sections that contain the relevant procedures.

Table 23. Deployment process overview

Stage 1: Verify prerequisites - Pre-deployment tasks
Stage 2: Obtain the deployment tools - Deployment prerequisites
Stage 3: Gather customer configuration data - Customer configuration data
Stage 4: Rack and cable the components - Refer to the vendor documentation
Stage 5: Configure the switches and networks, connect to the customer network - Prepare switches, connect network, and configure switches
Stage 6: Install and configure the VNX - Prepare and configure storage array
Stage 7: Configure virtual machine storage - Prepare and configure storage array
Stage 8: Install and configure the servers - Install and configure Hyper-V hosts
Stage 9: Set up SQL Server (used by SCVMM) - Install and configure SQL Server database
Stage 10: Install and configure SCVMM - System Center Virtual Machine Manager server deployment

Pre-deployment tasks

Overview

The pre-deployment tasks shown in Table 24 include procedures that are not directly related to environment installation and configuration, but whose results are needed at installation time. Examples of pre-deployment tasks are collecting hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required onsite.

Table 24. Tasks for pre-deployment

Gather documents: Gather the related documents listed in Appendix D. These documents provide detail on setup procedures and deployment best practices for the various components of the solution. (Reference: EMC documentation)
Gather tools: Gather the required and optional tools for the deployment. Use Table 25 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process. (Reference: Table 25: Deployment prerequisites checklist)
Gather data: Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer configuration data sheet for reference during the deployment process. (Reference: Appendix B)

Deployment prerequisites

Table 25 lists the hardware, software, and licenses required to configure the solution. For additional information, refer to Table 9.

Table 25. Deployment prerequisites checklist

Hardware:
Sufficient physical server capacity to host 200, 300, 600, or 1,000 virtual servers
Windows Server 2012 servers to host virtual infrastructure servers (Note: The existing infrastructure may already meet this requirement.)
Switch port capacity and capabilities as required by the virtual server infrastructure (Reference: Table 8)
EMC VNX5200 (200 virtual machines), VNX5400 (300 virtual machines), VNX5600 (600 virtual machines), or VNX5800 (1,000 virtual machines): multiprotocol storage array with the required disk layout

Table 25. Deployment prerequisites checklist (continued)

Software:
SCVMM 2012 SP1 installation media
Microsoft Windows Server 2012 installation media
Microsoft Windows Server 2012 installation media (optional, for the virtual machine guest OS)
Microsoft SQL Server 2012 or newer installation media (Note: The existing infrastructure may already meet this requirement.)

Licenses:
Microsoft Windows Server 2012 Standard (or higher) license keys (optional)
Microsoft Windows Server 2012 R2 Datacenter Edition license keys (Note: An existing Microsoft Key Management Server (KMS) may already meet this requirement.)
Microsoft SQL Server license key (Note: The existing infrastructure may already meet this requirement.)
SCVMM 2012 SP1 license keys

Customer configuration data

Assemble information such as IP addresses and hostnames during the planning process to reduce the onsite time. Appendix B provides a table to maintain a record of relevant customer information. Add, record, or modify information as needed during the deployment process. Additionally, complete the VNX File and Unified Worksheet, available on EMC Online Support, to record the most comprehensive array-specific information.

Prepare switches, connect network, and configure switches

Overview

This section lists the network infrastructure requirements to support this architecture. Table 26 provides a summary of the tasks for switch and network configuration, with references for further information.

Table 26. Tasks for switch and network configuration

Configure infrastructure network: Configure storage array and Windows host infrastructure networking as specified in Prepare and configure storage array and Install and configure Hyper-V hosts. (References: Prepare and configure storage array; Install and configure Hyper-V hosts)
Configure VLANs: Configure private and public VLANs as required. (Reference: your vendor's switch configuration guide)
Complete network cabling: Connect the switch interconnect ports, the VNX ports, and the Windows server ports.

Prepare network switches

For validated levels of performance and high availability, this solution requires the switching capacity listed in Appendix A. New hardware is not required if the existing infrastructure already meets these requirements.

Configure infrastructure network

The infrastructure network requires redundant network links for each Windows host, the storage array, the switch interconnect ports, and the switch uplink ports to provide both redundancy and additional network bandwidth. This configuration is required whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Figure 59 and Figure 60 show sample redundant infrastructures for this solution. The diagrams illustrate the use of redundant switches and links to ensure that there are no single points of failure.

In Figure 59, converged switches provide customers with different protocol options (FC, FCoE, or iSCSI) for the storage network. While existing FC switches are acceptable for FC or FCoE, use 10 Gb Ethernet network switches for iSCSI.

Figure 59. Sample Ethernet network architecture - block variant

Figure 60 shows a sample redundant Ethernet infrastructure for file storage, and illustrates the use of redundant switches and links to ensure that no single points of failure exist in the network connectivity.

Figure 60. Sample Ethernet network architecture - file variant

Configure VLANs

Ensure that there are adequate switch ports for the storage array and Windows hosts. Use a minimum of three VLANs for the following purposes:

Virtual machine networking and traffic management (these are customer-facing networks; separate them if required)
Live Migration networking (private network)
Storage networking (iSCSI or SMB, private network)

Configure jumbo frames (iSCSI or SMB only)

Use jumbo frames for the iSCSI and SMB protocols. Set the MTU to 9,000 on the switch ports for the iSCSI or SMB storage network. Consult your switch configuration guide for instructions.
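After the switch ports are set to the larger MTU (and the array and host interfaces are configured as described later), it is worth confirming that jumbo frames actually pass end to end. The following PowerShell sketch, run from a Windows host, sends a non-fragmentable 8,972-byte ICMP payload to a hypothetical storage interface address; 8,972 bytes of payload plus 28 bytes of ICMP and IP headers equals a 9,000-byte packet. Replace the address with a VNX iSCSI or Data Mover interface IP from your environment.

# Hypothetical storage network address; replace with your own
$storageIP = "192.168.10.20"

# -f sets the do-not-fragment flag, -l sets the payload size, -n the count
ping.exe $storageIP -f -l 8972 -n 4

If the replies come back without "Packet needs to be fragmented" errors, the 9,000-byte path is in place.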

Complete network cabling

Ensure the following:

All servers, storage arrays, switch interconnects, and switch uplinks plug into separate switching infrastructures and have redundant connections.
There is a complete connection to the existing customer network.

Note: Ensure that unforeseen interactions do not cause service interruptions when you connect the new equipment to the existing customer network.

Prepare and configure storage array

Implementation instructions and best practices vary depending on the storage network protocol selected for the solution. Each case contains the following steps:

1. Configure the VNX.
2. Provision storage to the hosts.
3. Configure FAST VP.
4. Optionally, configure FAST Cache.

The sections below cover the options for each step separately, depending on whether one of the block protocols (FC, FCoE, iSCSI) or the file protocol (CIFS) is selected. For FC, FCoE, or iSCSI, refer to VNX configuration for block protocols. For CIFS, refer to VNX configuration for file protocols.

VNX configuration for block protocols

This section describes how to configure the VNX storage array for host access using block protocols such as FC, FCoE, or iSCSI. In this solution, the VNX provides data storage for Windows hosts.

Table 27. Tasks for VNX configuration for block protocols

Prepare the VNX: Physically install the VNX hardware using the procedures in the product documentation.
Set up the initial VNX configuration: Configure the IP addresses and other key parameters on the VNX.
Provision storage for Hyper-V hosts: Create the storage areas required for the solution.
References: EMC VNX5200 Unified Installation Guide; EMC VNX5400 Unified Installation Guide; EMC VNX5600 Unified Installation Guide; EMC VNX5800 Unified Installation Guide; Unisphere System Getting Started Guide; your vendor's switch configuration guide

Prepare the VNX

The installation guides for the VNX5200, VNX5400, VNX5600, and VNX5800 provide instructions to assemble, rack, cable, and power up the VNX. There are no solution-specific setup steps.

Set up the initial VNX configuration

After the initial VNX setup, configure key information about the existing environment to enable the storage array to communicate with the other devices in the environment. Configure the following common items in accordance with your IT data center policies and existing infrastructure information:

DNS
NTP
Storage network interfaces

For data connections using FC or FCoE: Connect one or more servers to the VNX storage system, either directly or through qualified FC or FCoE switches. Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions.

For data connections using iSCSI: Connect one or more servers to the VNX storage system, either directly or through qualified IP switches. Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions. Additionally, configure the following items in accordance with your IT data center policies and existing infrastructure information:

1. Set up a storage network IP address: Logically isolate the storage network from the other networks in the solution, as described in Chapter 3. This ensures that other network traffic does not impact traffic between the hosts and the storage.
2. Enable jumbo frames on the VNX iSCSI ports: Use jumbo frames for iSCSI networks to permit greater network bandwidth. Apply the MTU size specified below across all the network interfaces in the environment:
   a. In Unisphere, select Settings > Network > Settings for Block.
   b. Select the appropriate iSCSI network interface.
   c. Click Properties.
   d. Set the MTU size to 9,000.
   e. Click OK to apply the changes.

The reference documents listed in Table 27 provide more information on how to configure the VNX platform. The Storage configuration guidelines section provides more information on the disk layout.

Provision storage for Hyper-V hosts

This section describes provisioning block storage for Hyper-V hosts. To provision file storage, refer to VNX configuration for file protocols. Complete the following steps in Unisphere to configure LUNs on the VNX array to store virtual servers:

1. Create the number of storage pools required for the environment based on the sizing information in Chapter 4. This example uses the array recommended maximums described in Chapter 4.
   a. Log in to Unisphere.
   b. Select the array for this solution.
   c. Select Storage > Storage Configuration > Storage Pools.
   d. Click Pools.
   e. Click Create.

Note: The pool does not use system drives for additional storage.

Table 28. Storage allocation table for block (columns: Configuration, Number of pools, Number of 15K SAS drives per pool, Number of flash drives per pool, Number of LUNs per pool, LUN size (TB))

200 virtual machines - Total: x 7 TB LUNs, 2 x 4 TB LUNs
300 virtual machines - Total: x 7 TB LUNs, 2 x 3 TB LUNs
600 virtual machines - Total: x 7 TB LUNs, 2 x 6 TB LUNs
1,000 virtual machines - Total: x 7 TB LUNs

Note: Each virtual machine occupies 102 GB in this solution, with 100 GB for the OS and user space, and a 2 GB swap file.

2. Create the hot spare disks at this point. Refer to the appropriate VNX installation guide for additional information.

Figure 40 depicts the target storage layout for 200 virtual machines. Figure 41 depicts the target storage layout for 300 virtual machines. Figure 42 depicts the target storage layout for 600 virtual machines. Figure 43 depicts the target storage layout for 1,000 virtual machines.

3. Use the pools created in step 1 to provision thin LUNs:
   a. Select Storage > LUNs.
   b. Click Create.
   c. Select the pool created in step 1. Always create two thin LUNs in one physical storage pool. User Capacity depends on the specific number of virtual machines; refer to Table 28 for more information.

4. Create a storage group, and add LUNs and Hyper-V servers:
   a. Select Hosts > Storage Groups.
   b. Click Create and type a name for the new storage group.
   c. Select the created storage group.
   d. Click LUNs. In the Available LUNs panel, select all the LUNs created in the previous steps. The Selected LUNs dialog appears.
   e. Configure and add the Hyper-V hosts to the storage group.

VNX configuration for file protocols

This section and Table 29 describe file storage provisioning tasks for Hyper-V hosts.

Table 29. Tasks for storage configuration for file protocols

Prepare the VNX: Physically install the VNX hardware with the procedures in the product documentation.
Set up the initial VNX configuration: Configure the IP addresses and other key parameters on the VNX.
Create a network interface: Configure the IP address and network interface information for the CIFS server.
Create a CIFS server: Create the CIFS server instance to publish the storage.
Create a storage pool for file: Create the block pool structure and LUNs to contain the file system.
Create the file systems: Establish the SMB shared file system.
Create the SMB file share: Attach the file system to the CIFS server to create an SMB share for Hyper-V storage.
References: VNX5200 Unified Installation Guide; VNX5400 Unified Installation Guide; VNX5600 Unified Installation Guide; VNX5800 Unified Installation Guide; Unisphere System Getting Started Guide; your vendor's switch configuration guide

Prepare the VNX

The installation guides for the VNX5200, VNX5400, VNX5600, and VNX5800 provide instructions to assemble, rack, cable, and power up the VNX. There are no solution-specific setup steps.

Set up the initial VNX configuration

After the initial VNX setup, configure key information about the existing environment to allow the storage array to communicate with the other devices in the environment. Ensure that one or more servers connect to the VNX storage system, either directly or through qualified IP switches. Configure the following common items in accordance with your IT data center policies and existing infrastructure information:

DNS
NTP
Storage network interfaces
Storage network IP address
CIFS services and Active Directory domain membership

Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions.

Enable jumbo frames on the VNX storage network interfaces

Use jumbo frames for storage networks to permit greater network bandwidth. Apply the MTU size specified below across all the network interfaces in the environment. Complete the following steps to enable jumbo frames:

1. In Unisphere, select Settings > Network > Settings for File.
2. Select the appropriate network interface from the Interfaces tab.
3. Click Properties.
4. Set the MTU size to 9,000.
5. Click OK to apply the changes.

The reference documents listed in Table 27 provide more information on how to configure the VNX platform. The Storage configuration guidelines section provides more information on the disk layout.

Create a network interface

A network interface maps to a CIFS server. CIFS servers provide access to file shares over the network. Complete the following steps to create a network interface:

1. Log in to the VNX.
2. In Unisphere, select Settings > Network > Settings for File.
3. On the Interfaces tab, click Create, as shown in Figure 61.

Figure 61. Network Settings for File dialog box

In the Create Network Interface wizard, complete the following steps:

1. Select the Data Mover that will provide access to the file share.
2. Select the device name where the network interface will reside.

Note: Run the following command as nasadmin on the Control Station to ensure that the selected device has a link connected:

> server_sysconfig <datamovername> -pci

This command lists the link status (UP or DOWN) for all devices on the specified Data Mover.

3. Type an IP address for the interface.
4. Type a name for the interface.
5. Type the netmask for the interface. The Broadcast Address appears automatically after you provide the IP address and netmask.
6. Set the MTU size for the interface to 9,000.

Note: Ensure that all devices on the network (switches, servers, and so on) have the same MTU size.

7. If required, specify the VLAN ID.
8. Click OK, as shown in Figure 62.

Figure 62. The Create Interface dialog box

Create a CIFS server

A CIFS server provides access to the CIFS (SMB) file share.

1. In Unisphere, select Storage > Shared Folders > CIFS > CIFS Servers.

Note: A CIFS server must exist before creating an SMB 3.0 file share.

2. Click Create. The Create CIFS Server window appears.

From the Create CIFS Server window, complete the following steps:

3. Select the Data Mover on which to create the CIFS server.
4. Set the server type to Active Directory Domain.
5. Type a Computer Name for the server. The computer name must be unique within Active Directory. Unisphere automatically assigns the NetBIOS name to the computer name.
6. Type the Domain Name for the CIFS server to join.
7. Select Join the Domain.
8. Specify the domain credentials:
   a. Type the Domain Admin User Name.
   b. Type the Domain Admin Password.
9. Select Enable Local Users to allow the creation of a limited number of local user accounts on the CIFS server:
   a. Set the Local Admin Password.
   b. Confirm the Local Admin Password.
10. Select the network interface created previously to allow access to the CIFS server.
11. Click OK. The newly created CIFS server appears under the CIFS Servers tab, as shown in Figure 63.

Figure 63. The Create CIFS Server dialog box

Create storage pools for file

Complete the following steps in Unisphere to configure LUNs on the VNX array to store virtual servers:

1. Create the number of storage pools required for the environment based on the sizing information in Chapter 4. This example uses the array recommended maximums described in Chapter 4.
   a. Log in to Unisphere.
   b. Select the array for this solution.
   c. Select Storage > Storage Configuration > Storage Pools > Pools.
   d. Click Create.

Note: The pool does not use system drives for additional storage.

Table 30. Storage allocation table for file (columns: Configuration, Number of pools, Number of 15K SAS drives per pool, Number of flash drives per pool, Number of LUNs per pool, Number of file systems per storage pool for file, LUN size (GB), FS size (TB))

200 virtual machines - Total: x 800 GB LUNs, 20 x 600 GB LUNs; 2 x 5 TB FS, 2 x 4 TB FS
300 virtual machines - Total: x 800 GB LUNs, 20 x 400 GB LUNs; 4 x 7 TB FS, 2 x 3 TB FS
600 virtual machines - Total: x 800 GB LUNs, 20 x 700 GB LUNs; 8 x 7 TB FS, 2 x 6 TB FS
1,000 virtual machines - Total: x 800 GB LUNs; 16 x 7 TB FS

2. Create the hot spare disks at this point. Refer to the appropriate VNX installation guide for additional information.

Figure 40 depicts the target storage layout for 200 virtual machines. Figure 41 depicts the target storage layout for 300 virtual machines. Figure 42 depicts the target storage layout for 600 virtual machines. Figure 43 depicts the target storage layout for 1,000 virtual machines.

3. Provision LUNs on the pool created in step 1:
   a. Select Storage > LUNs.
   b. Click Create.

   c. Select the pool created in step 1. Under LUN Properties, clear the Thin checkbox. For User Capacity, refer to Table 30 for details on the LUN size. The number of LUNs to create depends on the number of disks in the pool; refer to Table 30 for details on the number of LUNs needed in each pool.

Note: For FAST VP implementations, assign no more than 95 percent of the available storage pool capacity for file.

4. Connect the LUNs to the Data Mover for file access:
   a. Click Hosts > Storage Groups.
   b. Select filestorage.
   c. Click Connect LUNs.
   d. In the Available LUNs panel, expand SP A and SP B and select all the LUNs created in the previous steps. The Selected LUNs panel appears. Click OK.

5. Rescan storage systems to detect newly available storage:
   a. Click the Storage tab.
   b. Under the File Storage pane, click Rescan Storage Systems.
   c. Click OK to proceed in the window that opens.

Use a new storage pool for file to create multiple file systems.

Create file systems

To create an SMB file share, complete the following tasks:

1. Create a storage pool and a network interface.
2. Create a file system.
3. Export an SMB file share from the file system.

If no storage pools or interfaces exist, follow the steps in Create a network interface and Create storage pools for file to create a storage pool and a network interface. Create two thin file systems from each storage pool for file. Refer to Table 30 for details on the number of file systems.

Complete the following steps to create VNX file systems for SMB file shares:

1. Log in to Unisphere.
2. Select Storage > Storage Configuration > File Systems.
3. Click Create. The File System Creation wizard appears.
4. Specify the file system details:
   a. Select Storage Pool.
   b. Type a File System Name.
   c. Select a Storage Pool to contain the file system.

   d. Select the Storage Capacity of the file system. Refer to Table 30 for the detailed storage capacity.
   e. Select Thin Enabled.
   f. Multiply the number of terabytes specified for the file system in Table 30 by 1,048,576 to get the file system size in megabytes. Enter this figure in the Maximum Capacity (MB) field.
   g. Select the Data Mover (R/W) to own the file system.

Note: The selected Data Mover must have an interface defined on it.

   h. Click OK, as shown in Figure 64.

Figure 64. The Create File System dialog box

The new file system appears on the File Systems tab.

1. Click Mounts.
2. Select the created file system, and then click Properties.
3. Select Set Advanced Options.
4. Select Direct Writes Enabled.
5. Select CIFS Sync Writes Enabled.
6. Click OK, as shown in Figure 65.

Figure 65. The File System Properties dialog box

Create the SMB file share

After the file system is created, you can create the SMB file share. To create the share, complete the following steps:

1. From the VNX dashboard, hover over the Storage tab.
2. Select Shared Folders > CIFS.
3. From the Shares page, click Create. The Create CIFS Share window opens.
4. Select the Data Mover on which to create the share (the same Data Mover that owns the CIFS server).
5. Specify a name for the share.
6. Specify the file system for the share. Leave the default path as is.
7. Select the CIFS server to provide access to the share, as shown in Figure 66.
8. Optionally, specify a user limit or any comments about the share.
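Once the share is exported, a quick check from a Windows host confirms that the SMB path is reachable and that the expected SMB 3.0 dialect and multichannel behavior are in effect. The following PowerShell sketch uses a hypothetical CIFS server name and share name; substitute the names created in the previous steps.

# Hypothetical CIFS server and share names; replace with your own
$server    = "vnx-cifs01"
$sharePath = "\\$server\HyperVShare01"

# Confirm the share is reachable from this host
Test-Path $sharePath

# Confirm the negotiated SMB dialect for the connection (expect 3.x)
Get-SmbConnection -ServerName $server | Select-Object ServerName, ShareName, Dialect

# Confirm SMB Multichannel is using the expected storage interfaces
Get-SmbMultichannelConnection | Select-Object ServerName, ClientIpAddress, ServerIpAddress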

Figure 66. The Create File Share dialog box

FAST VP configuration

This procedure applies to both file and block storage implementations. Complete the following steps to configure FAST VP and assign two flash drives in each block-based storage pool:

1. In Unisphere, select the storage pool to configure for FAST VP.
2. Click Properties for the specific storage pool to open the Storage Pool Properties dialog. Figure 67 shows the tiering information for a specific FAST pool.

Note: The Tier Status area shows FAST relocation information specific to the selected pool.

3. Select Scheduled from the Auto-Tiering list box. The Tier Details panel shows the exact data distribution.

Figure 67. The Storage Pool Properties dialog box

You can also connect to the array-wide relocation schedule by clicking the button in the top right corner to access the Manage Auto-Tiering window, as shown in Figure 68.

Figure 68. Manage Auto-Tiering dialog box

From this status dialog, you can control the Data Relocation Rate. The default rate is Medium, which minimizes the impact on host I/O.

Note: FAST is a completely automated tool that provides the ability to create a relocation schedule. Schedule the relocations during off-hours to minimize any potential performance impact.

FAST Cache configuration

FAST Cache is an optional component of this solution that provides improved performance, as outlined in Chapter 3.

Note: Use the flash drives listed in Sizing guidelines for FAST VP configurations, as described in FAST VP configuration.

To configure FAST Cache on the storage pools for this solution, complete the following steps:

1. Configure flash drives as FAST Cache:
   a. Click Properties on the Unisphere dashboard, or Manage Cache in the left-hand pane of the Unisphere interface, to access the Storage System Properties window, as shown in Figure 69.
   b. Click the FAST Cache tab to view FAST Cache information.

Figure 69. The Storage System Properties dialog box

   c. Click Create to open the Create FAST Cache window, as shown in Figure 70. The RAID Type field displays RAID 1 when the FAST Cache is created. This window also provides the option to select the drives for the FAST Cache. The bottom of the screen shows the flash drives used to create the FAST Cache. Select Manual to choose the drives manually.
   d. Refer to Storage configuration guidelines to determine the number of flash drives required in this solution.

Note: If a sufficient number of flash drives is not available, the VNX displays an error message and does not create the FAST Cache.

Figure 70. The Create FAST Cache dialog box

2. Enable FAST Cache in the storage pool. If a LUN is created in a storage pool, you can configure FAST Cache for that LUN only at the storage pool level; all the LUNs created in the storage pool have FAST Cache either enabled or disabled. Configure the LUNs from the Advanced tab of the Create Storage Pool window, shown in Figure 71. After installation, FAST Cache is enabled by default at storage pool creation.

Figure 71. Advanced tab in the Create Storage Pool dialog

If the storage pool already exists, use the Advanced tab of the Storage Pool Properties window to configure FAST Cache, as shown in Figure 72.

Figure 72. Advanced tab in the Storage Pool Properties dialog

Note: The VNX FAST Cache feature does not cause an instant performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take several hours, and array performance gradually improves during this time.

Install and configure Hyper-V hosts

Overview

This section provides the requirements for the installation and configuration of the Windows hosts and infrastructure servers to support the architecture. Table 31 describes the required tasks.

Table 31. Tasks for server installation

Install Windows hosts: Install Windows Server 2012 on the physical servers for the solution.
Install Hyper-V and configure Failover Clustering: Add the Hyper-V server role, add the Failover Clustering feature, and create and configure the Hyper-V cluster.
Configure Windows host networking: Configure Windows host networking, including NIC teaming and the virtual switch network.
Install PowerPath on Windows servers: Install and configure PowerPath to manage multipathing for VNX LUNs. (Reference: PowerPath and PowerPath/VE for Windows Installation and Administration Guide)
Plan virtual machine memory allocations: Ensure that Windows Hyper-V guest memory management features are configured properly for the environment.

Install Windows hosts

Follow Microsoft best practices to install Windows Server 2012 and the Hyper-V role on the physical servers for this solution.

Install Hyper-V and configure failover clustering

To install and configure Failover Clustering, complete the following steps:

1. Install and patch Windows Server 2012 on each Windows host.
2. Configure the Hyper-V role and the Failover Clustering feature.
3. Install the HBA drivers, or configure iSCSI initiators on each Windows host. For details, refer to the EMC Host Connectivity Guide for Windows.

Table 31 provides the steps and references to accomplish the configuration tasks.
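These steps can be run through Server Manager or scripted. The following PowerShell sketch shows one possible sequence on Windows Server 2012; the host names, cluster name, and cluster IP address are hypothetical placeholders, and the cluster validation report should always be reviewed before the cluster is created.

# Run on each Hyper-V host: add the Hyper-V role and the Failover Clustering feature
Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

# Run once after all hosts are prepared: validate the configuration, then create the cluster
$nodes = "HyperV-Host01", "HyperV-Host02", "HyperV-Host03", "HyperV-Host04"
Test-Cluster -Node $nodes
New-Cluster -Name "VSPEX-HVCluster" -Node $nodes -StaticAddress "10.10.10.50"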

Configure Windows host networking

To ensure performance and availability, the following network interface cards (NICs) are required:

At least one NIC for virtual machine networking and management (can be separated by network or VLAN if necessary)
At least two 10 GbE NICs for the storage network
At least one NIC for Live Migration

Note: Enable jumbo frames for NICs that transfer iSCSI or SMB data. Set the MTU to 9,000. Consult the NIC configuration guide for instructions.

Install PowerPath on Windows servers

Install PowerPath on the Windows servers to improve and enhance the performance and capabilities of the VNX storage array. For the detailed installation steps, refer to the PowerPath and PowerPath/VE for Windows Installation and Administration Guide.

Plan virtual machine memory allocations

Server capacity serves two purposes in the solution:

Supports the new virtualized server infrastructure
Supports the required infrastructure services such as authentication and authorization, DNS, and databases

For information on minimum infrastructure service hosting requirements, refer to Appendix A. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required.

Memory configuration

Take care to properly size and configure the server memory for this solution. This section provides an overview of memory management in a Hyper-V environment.

Memory virtualization techniques enable the hypervisor to abstract physical host resources, such as Dynamic Memory, to provide resource isolation across multiple virtual machines and avoid resource exhaustion. With advanced processors (such as Intel processors with EPT support), this abstraction takes place within the CPU; otherwise, it occurs within the hypervisor itself.

There are multiple techniques available within the hypervisor to maximize the use of system resources such as memory. Do not substantially overcommit resources, as this can lead to poor system performance. The exact implications of memory overcommitment in a real-world environment are difficult to predict; performance degradation due to resource exhaustion increases with the amount of memory overcommitted.
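One way to script the host networking layout described above on Windows Server 2012 is shown in the following sketch. The adapter names, team name, and virtual switch name are hypothetical placeholders; adjust them to match the physical NIC layout in your environment, and note that the exact jumbo frame value accepted by a NIC (for example, 9000 or 9014) varies by driver.

# Team two NICs for virtual machine and management traffic (placeholder adapter names)
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1", "NIC2" -TeamingMode SwitchIndependent -Confirm:$false

# Create the Hyper-V virtual switch on top of the team
New-VMSwitch -Name "VM-vSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $true

# Enable jumbo frames on the dedicated 10 GbE storage NICs (iSCSI or SMB only)
Set-NetAdapterAdvancedProperty -Name "Storage1", "Storage2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014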

Install and configure SQL Server database

Overview

Most customers use a management tool to provision and manage their server virtualization solution, even though one is not required. The management tool requires a database back end. SCVMM uses SQL Server 2012 as the database platform. This section describes how to set up and configure a SQL Server database for the solution. Table 32 lists the detailed setup tasks.

Table 32. Tasks for SQL Server database setup

Create a virtual machine for Microsoft SQL Server: Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements.
Install Microsoft Windows on the virtual machine: Install Microsoft Windows Server 2012 Datacenter Edition on the virtual machine.
Install Microsoft SQL Server: Install Microsoft SQL Server on the designated virtual machine.
Configure a SQL Server for SCVMM: Configure a remote SQL Server instance for SCVMM.

Create a virtual machine for Microsoft SQL Server

Create the virtual machine with enough computing resources on one of the Windows servers designated for infrastructure virtual machines. Use the storage designated for the shared infrastructure.

Note: The customer environment may already contain a SQL Server for this role. In that case, refer to the section Configure a SQL Server for SCVMM.

Install Microsoft Windows on the virtual machine

The SQL Server service must run on Microsoft Windows. Install the required Windows version on the virtual machine, and select the appropriate network, time, and authentication settings.

Install SQL Server

Use the SQL Server installation media to install SQL Server on the virtual machine. The Microsoft TechNet website provides information on how to install SQL Server. One of the installable components in the SQL Server installer is SQL Server Management Studio (SSMS). Install this component on the SQL Server directly, and on an administrator console. To change the default path for storing data files, perform the following steps:

1. Right-click the server object in SSMS and select Database Properties. The Properties window appears.
2. Change the default data and log directories for new databases created on the server.

Configure a SQL Server for SCVMM

To use SCVMM in this solution, configure the SQL Server for remote connections. The requirements and steps to configure it correctly are available in the article Configuring a Remote Instance of SQL Server for VMM. Refer to the list of documents in Appendix D for more information.

Note: Do not use the Microsoft SQL Server Express-based database option for this solution.

Create individual login accounts for each service that accesses a database on the SQL Server.

System Center Virtual Machine Manager server deployment

Overview

This section provides information on how to configure SCVMM. Complete the tasks in Table 33.

Table 33. Tasks for SCVMM configuration

Create the SCVMM host virtual machine: Create a virtual machine for the SCVMM server. (Reference: Create a virtual machine)
Install the SCVMM guest OS: Install Windows Server 2012 Datacenter Edition on the SCVMM host virtual machine. (Reference: Install the guest operating system)
Install the SCVMM server: Install an SCVMM server. (Reference: How to Install a VMM Management Server)
Install the SCVMM Management Console: Install an SCVMM Management Console. (Reference: How to Install the VMM Console)
Install the SCVMM agent locally on the hosts: Install an SCVMM agent locally on the hosts that SCVMM manages. (Reference: Installing a VMM Agent Locally on a Host)
Add a Hyper-V cluster into SCVMM: Add the Hyper-V cluster into SCVMM. (Reference: Adding and Managing Hyper-V Hosts and Host Clusters in VMM)

Table 33. Tasks for SCVMM configuration (continued)

Add file share storage in SCVMM (file variant only): Add SMB file share storage to a Hyper-V cluster in SCVMM. (Reference: How to Assign SMB 3.0 File Shares to Hyper-V Hosts and Clusters in VMM)
Create a virtual machine in SCVMM: Create a virtual machine in SCVMM. (Reference: Creating and Deploying Virtual Machines)
Perform partition alignment and assign the file allocation unit size: Use Diskpart.exe to perform partition alignment, assign drive letters, and assign the file allocation unit size of the virtual machine's disk drive. (Reference: Disk Partition Alignment Best Practices for SQL Server)
Create a template virtual machine: Create a template virtual machine from the existing virtual machine. Create the hardware profile and guest operating system profile at this time. (Reference: How to Create a Virtual Machine Template)
Deploy virtual machines from the template virtual machine: Deploy the virtual machines from the template virtual machine. (Reference: How to Create and Deploy a Virtual Machine from a Template)

Create the SCVMM host virtual machine

To deploy the SCVMM server as a virtual machine on a Hyper-V server that is installed as part of this solution, connect directly to an infrastructure Hyper-V server by using Hyper-V Manager. Create a virtual machine on the Microsoft Hyper-V server with the customer guest OS configuration, using an infrastructure server datastore presented from the storage array. The memory and processor requirements for the SCVMM server depend on the number of Hyper-V hosts and virtual machines that SCVMM must manage.

Install the SCVMM guest OS

Install the guest OS on the SCVMM host virtual machine. Install the requested Windows Server version on the virtual machine and select the appropriate network, time, and authentication settings.

Install the SCVMM server

Set up the VMM database and the default library server, and then install the SCVMM server. Refer to the Microsoft TechNet Library topic Installing the VMM Server to install the SCVMM server.

Install the SCVMM Management Console

The SCVMM Management Console is a client tool for managing the SCVMM server. Install the VMM Management Console on the same computer as the VMM server. Refer to the Microsoft TechNet Library topic Installing the VMM Administrator Console to install the SCVMM Management Console.

Install the SCVMM agent locally on a host

If the hosts must be managed on a perimeter network, install a VMM agent locally on the host before adding it to VMM. Optionally, install a VMM agent locally on a host in a domain before adding the host to VMM. Refer to the Microsoft TechNet Library topic Installing a VMM Agent Locally on a Host to install a VMM agent locally on a host.

Add a Hyper-V cluster into SCVMM

Add the deployed Microsoft Hyper-V cluster to SCVMM, which then manages the Hyper-V cluster. Refer to the Microsoft TechNet Library topic How to Add a Host Cluster to VMM to add the Hyper-V cluster.

Add file share storage to SCVMM (file variant only)

To add file share storage to SCVMM, complete the following steps:

1. Open the VMs and Services workspace.
2. In the VMs and Services pane, right-click the Hyper-V cluster name.
3. Click Properties.
4. In the Properties window, click File Share Storage.
5. Click Add, and then add the file share storage to SCVMM.

Create a virtual machine in SCVMM

Create a virtual machine in SCVMM to use as a virtual machine template. Install the virtual machine, then install the software, and change the Windows and application settings. Refer to the Microsoft TechNet Library topic How to Create a Virtual Machine with a Blank Virtual Hard Disk to create a virtual machine.

Perform partition alignment and assign the file allocation unit size

Perform disk partition alignment on virtual machines whose operating system is prior to Windows Server 2008. It is recommended to align the disk drive with an offset of 1,024 KB and to format the disk drive with a file allocation unit (cluster) size of 8 KB. Refer to the Microsoft TechNet Library topic Disk Partition Alignment Best Practices for SQL Server to perform partition alignment, assign drive letters, and assign the file allocation unit size using diskpart.exe; a PowerShell alternative is sketched below.

Create a template virtual machine

Converting a virtual machine into a template removes the virtual machine. Back up the virtual machine, because it may be destroyed during template creation. Create a hardware profile and a guest operating system profile when creating the template. Use the profiles to deploy the virtual machines.
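As noted under Perform partition alignment and assign the file allocation unit size, the same result can be achieved with the Windows Server 2012 storage cmdlets instead of diskpart.exe. The following sketch assumes a hypothetical data disk number and drive letter inside the guest; the 8 KB allocation unit size follows the recommendation above, and New-Partition already aligns partitions at 1 MB by default on this platform.

# Hypothetical data disk inside the guest; adjust the disk number and drive letter
$diskNumber = 1

Initialize-Disk -Number $diskNumber -PartitionStyle GPT

# Alignment is stated explicitly (1,024 KB) even though it is the platform default
New-Partition -DiskNumber $diskNumber -UseMaximumSize -DriveLetter F -Alignment 1048576 |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 8192 -NewFileSystemLabel "Data" -Confirm:$false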

Refer to the Microsoft TechNet Library topic How to Create a Template from a Virtual Machine.

Deploy virtual machines from the template virtual machine

The deployment wizard enables you to save the PowerShell scripts and reuse them to deploy other virtual machines with the same configuration. Refer to the Microsoft TechNet Library topic How to Deploy a Virtual Machine.

Summary

This chapter presented the steps required to deploy and configure the various aspects of the VSPEX solution, including the physical and logical components. At this point, the VSPEX solution is fully functional.


Chapter 6 Verifying the Solution

This chapter presents the following topics:

Overview
Post-install checklist
Deploy and test a single virtual server
Verify the redundancy of the solution components

Overview

This chapter provides a list of items to review after configuring the solution. The goal of this chapter is to verify the configuration and functionality of specific aspects of the solution, and to ensure that the configuration meets core availability requirements. Complete the tasks listed in the following table.

Tasks for testing the installation

Post-install checklist:
Verify that sufficient virtual ports exist on each Hyper-V host virtual switch. (Reference: Hyper-V: How many network cards do I need?)
Verify that each Hyper-V host has access to the required datastores and VLANs. (Reference: Using a VNXe System with Microsoft Windows Hyper-V)
Verify that the Live Migration interfaces are configured correctly on all Hyper-V hosts. (Reference: Virtual Machine Live Migration Overview)

Deploy and test a single virtual server:
Deploy a single virtual machine by using the System Center Virtual Machine Manager (SCVMM) interface. (Reference: Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager)

Verify redundancy of the solution components:
Perform a reboot of each storage processor in turn, and ensure that storage connectivity is maintained. (Reference: N/A)
Disable each of the redundant switches in turn and verify that the Hyper-V host, virtual machine, and storage array connectivity remains intact. (Reference: vendor documentation)
On a Hyper-V host that contains at least one virtual machine, restart the host and verify that the virtual machine can successfully migrate to an alternate host. (Reference: Creating a Hyper-V Host Cluster in VMM Overview)

Post-install checklist

The following configuration items are critical to the functionality of the solution. On each Windows server, verify the following items prior to deployment into production:

The VLAN for virtual machine networking is configured correctly.
The storage networking is configured correctly.
Each server can access the required Cluster Shared Volumes/Hyper-V SMB shares.
A network interface is configured correctly for Live Migration.

Deploy and test a single virtual server

Deploy a virtual machine to verify that the solution functions as expected. Verify that the virtual machine is joined to the applicable domain, has access to the expected networks, and that it is possible to log in to it.

Verify the redundancy of the solution components

To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failures. On a Hyper-V host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.

Block environments

Complete the following steps to perform a reboot of each VNX storage processor in turn, and verify that connectivity to the LUNs is maintained throughout each reboot:

1. Log in to the Control Station with administrator credentials.
2. Navigate to /nas/sbin.
3. Reboot SP A by using the ./navicli -h spa rebootsp command.
4. During the reboot cycle, check for the presence of datastores on the Windows hosts.
5. When the cycle completes, reboot SP B by using the ./navicli -h spb rebootsp command.
6. Enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
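For step 4, one way to watch the datastores from a Hyper-V cluster node while a storage processor reboots is to poll the Cluster Shared Volume state with PowerShell, as in the following sketch. The 30-second interval is an arbitrary choice; stop the loop with Ctrl+C when the reboot cycle completes.

# Poll Cluster Shared Volume state during the storage processor reboot;
# every volume should remain Online throughout the cycle
while ($true) {
    Get-ClusterSharedVolume | Select-Object Name, State | Format-Table -AutoSize
    Start-Sleep -Seconds 30
}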

File environments

Perform a failover of each VNX Data Mover in turn and verify that connectivity to the SMB shares is maintained and that connections to the CIFS file systems are re-established. For simplicity, use the following approach for each Data Mover:

Note: Optionally, reboot the Data Movers through the Unisphere interface.

1. From the Control Station prompt, run the server_cpu <movername> -reboot command, where <movername> is the name of the Data Mover.
2. To verify that the network redundancy features function as expected, disable each of the redundant switching infrastructures in turn. While each switching infrastructure is disabled, verify that all the components of the solution maintain connectivity to each other and to any existing client infrastructure.
3. Enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
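The maintenance mode test in step 3 (and the equivalent step in the block procedure) can also be driven from PowerShell on a cluster node, as in the following sketch; the node name is a hypothetical placeholder.

# Drain the clustered roles from one node (the equivalent of maintenance mode)
Suspend-ClusterNode -Name "HyperV-Host01" -Drain -Wait

# Confirm where the clustered virtual machines are now running and their state
Get-ClusterGroup | Where-Object { $_.GroupType -eq "VirtualMachine" } |
    Select-Object Name, OwnerNode, State

# Return the node to service when the test is complete
Resume-ClusterNode -Name "HyperV-Host01" -Failback Immediate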

Chapter 7 System Monitoring

This chapter presents the following topics:

Overview
Key areas to monitor
VNX resources monitoring guidelines

Overview

System monitoring of the VSPEX environment is the same as monitoring any core IT system; it is a relevant and core component of administration. The monitoring levels involved in a highly virtualized infrastructure such as a VSPEX environment are somewhat more complex than in a purely physical infrastructure, as the interactions and interrelationships between the various components can be subtle and nuanced. However, those who are experienced in administering physical environments should be familiar with the key concepts and focus areas. The key differentiators are monitoring at scale and the ability to monitor end-to-end systems and data flows.

The following business requirements drive the need for proactive, consistent monitoring of the environment:

Stable, predictable performance
Sizing and capacity needs
Availability and accessibility
Elasticity: the dynamic addition, subtraction, and modification of workloads
Data protection

If self-service provisioning is enabled in the environment, the ability to monitor the system is even more critical, because clients can generate virtual machines and workloads dynamically, which can adversely affect the entire system. This chapter provides the basic knowledge necessary to monitor the key components of a VSPEX Proven Infrastructure environment. Additional resources are listed at the end of this chapter.

Key areas to monitor

Because VSPEX Proven Infrastructures are end-to-end solutions, system monitoring includes three discrete but highly interrelated areas:

Servers, including virtual machines and clusters
Networking
Storage

This chapter focuses primarily on monitoring key components of the storage infrastructure, the VNX array, but briefly describes the other components as well.

Performance baseline

When a workload is added to a VSPEX deployment, server, storage, and networking resources are consumed. As additional workloads are added, modified, or removed, resource availability and, more importantly, capabilities change, which impacts all other workloads running on the platform. Customers must fully understand their workload characteristics on all key components prior to deploying them on a VSPEX platform; this is a requirement for correctly sizing resource utilization against the defined reference virtual machine.

Deploy the first workload, and then measure the end-to-end resource consumption along with platform performance. This removes the guesswork from sizing activities and ensures that the initial assumptions were valid. As additional workloads are deployed, rerun the benchmarks to determine the cumulative load and the impact on existing virtual machines and their application workloads. Adjust resource allocation accordingly to ensure that oversubscription does not negatively impact overall system performance. Run these baselines consistently to ensure that the platform as a whole, and the virtual machines themselves, operate as expected. The following section discusses which components should comprise a core performance baseline. Servers The key resources to monitor from a server perspective include the use of: Processors Memory Disk (local, NAS, and SAN) Networking Monitor these areas from both the physical host level (the hypervisor host) and the virtual level (from within the guest virtual machine). Depending on your operating system, there are tools available to monitor and capture this data. For example, if your VSPEX deployment uses Windows servers as the hypervisor, you can use Windows perfmon to monitor and log these metrics. Follow your vendor's guidance to determine performance thresholds for specific deployment scenarios, which can vary greatly depending on the application. Detailed information about this tool is available from the Microsoft TechNet Library topic Using Performance Monitor. Keep in mind that each VSPEX Proven Infrastructure provides a guaranteed level of performance based on the number of reference virtual machines deployed and their defined workload. Networking Ensure that there is adequate bandwidth for network communications. This includes monitoring network loads at the server and virtual machine level, at the fabric (switch) level, and, if network file or block protocols such as NFS, CIFS/SMB, iSCSI, and FCoE are implemented, at the storage level. At the server and virtual machine level, the monitoring tools mentioned previously provide sufficient metrics to analyze flows into and out of the servers and guests. Key items to track include aggregate throughput or bandwidth, latency, IOPS, and I/O size. Capture additional data from network card or HBA utilities. From the fabric perspective, the tools that monitor switching infrastructure vary by vendor. Key items to monitor include port utilization, aggregate fabric utilization, processor utilization, queue depths, and inter-switch link (ISL) utilization. Network storage protocols are discussed in the following section. For detailed monitoring documentation, refer to your hypervisor or operating system vendor.
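As an example of the host-level data collection described above, the following PowerShell sketch uses Get-Counter (the same counters that perfmon exposes) to capture a one-hour baseline from a Hyper-V host; the counter selection, sampling interval, and output path are illustrative choices rather than VSPEX-mandated values.

# Capture a one-hour host baseline: CPU, memory, disk latency, and network throughput.
$counters = @(
    '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time',
    '\Memory\Available MBytes',
    '\PhysicalDisk(_Total)\Avg. Disk sec/Read',
    '\PhysicalDisk(_Total)\Avg. Disk sec/Write',
    '\Network Interface(*)\Bytes Total/sec'
)
# Sample every 15 seconds for 240 samples (one hour) and keep the raw data for later comparison.
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
    Export-Counter -Path 'C:\PerfLogs\vspex-baseline.blg' -FileFormat BLG -Force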

Storage Monitoring the storage aspect of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. Fortunately, the tools provided with the VNX storage arrays provide an easy yet powerful way to gain insight into how the underlying storage components are operating. For both block and file protocols, there are several key areas to focus on, including: Capacity IOPS Latency SP utilization For CIFS, SMB, or NFS protocols, the following additional components should be monitored: Data Mover CPU and memory usage File system latency Network interface throughput (in and out) Additional considerations (though primarily from a tuning perspective) include: I/O size Workload characteristics Cache utilization These factors are outside the scope of this document; however, storage tuning is an essential component of performance optimization. EMC offers additional guidance on the subject in the EMC VNX Unified Best Practices for Performance: Applied Best Practices Guide, available through EMC Online Support. VNX resources monitoring guidelines Monitor the VNX with the EMC Unisphere GUI by opening an HTTPS session to the Control Station IP address. Monitoring is divided into these parts: Monitoring block storage resources Monitoring file storage resources Monitoring block storage resources This section explains how to use Unisphere to monitor block storage resource usage, including capacity, IOPS, and latency. Capacity In Unisphere, two panels display capacity information. These panels provide a quick assessment of the overall free space available within the configured LUNs and the underlying storage pools. For block storage, sufficient free space should remain in the configured pools to allow for anticipated growth and activities such as snapshot creation. It is essential to maintain a free buffer, especially for thin LUNs, because out-of-space conditions usually lead to undesirable behavior on the affected host systems. Configure threshold alerts to warn storage administrators when capacity use rises above 80 percent; in that case, auto-expansion may need to be adjusted or additional space allocated to the pool. If LUN utilization is high, reclaim space or allocate additional space.

To set capacity threshold alerts for a specific pool, complete the following steps: 1. Select the pool and select Properties > Advanced. 2. In the Storage Pool Alerts area, choose a value for Percent Full Threshold of this pool, as shown in Figure 73. Figure 73. Storage Pool Alerts area To drill down into capacity for block storage, complete the following steps: 1. In Unisphere, select the VNX system to examine. 2. Select Storage > Storage Configurations > Storage Pools. This opens the Storage Pools panel. 3. Examine the Free Capacity and % Consumed columns, as shown in Figure 74.
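If you prefer to review the figures from the Storage Pools panel outside of Unisphere, a simple calculation such as the following PowerShell sketch can flag pools that are approaching the 80 percent guideline; the pool names and capacity values shown are placeholders entered by hand, not values retrieved from the array.

# Flag pools above the 80 percent consumed guideline (values copied from the Storage Pools panel).
$pools = @(
    [pscustomobject]@{ Name = 'Pool 0'; UserCapacityGB = 30000; FreeCapacityGB = 4200 },
    [pscustomobject]@{ Name = 'Pool 1'; UserCapacityGB = 12000; FreeCapacityGB = 7100 }
)
foreach ($p in $pools) {
    $pctConsumed = [math]::Round(100 * ($p.UserCapacityGB - $p.FreeCapacityGB) / $p.UserCapacityGB, 1)
    $action = if ($pctConsumed -ge 80) { 'REVIEW: expand the pool or reclaim space' } else { 'OK' }
    "{0}: {1}% consumed - {2}" -f $p.Name, $pctConsumed, $action
}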

Figure 74. Storage Pools panel Monitor capacity at the storage pool and LUN levels: 1. Click Storage > LUNs to open the LUNs panel. 2. Select a LUN to examine and click Properties. The LUN Properties dialog box appears, displaying detailed LUN information, as shown in Figure 75. 3. Verify the LUN Capacity area of the dialog box. User Capacity is the total capacity presented to the host for the LUN. Consumed Capacity is the physical capacity currently allocated to the LUN from the pool; for thin LUNs, this is normally less than the user capacity.

Figure 75. LUN Properties dialog box Examine capacity alerts and all other system events by opening the Alerts panel and the SP Event Logs panel, both of which are accessed under the Monitoring and Alerts panel, as shown in Figure 76.

Figure 76. Monitoring and Alerts panel IOPS The effects of an I/O workload serviced by an improperly configured storage system, or one whose resources are exhausted, can be felt system-wide. Monitoring the IOPS that the storage array services includes looking at metrics from the host ports on the SPs, along with the requests serviced by the back-end disks. The VSPEX solutions are carefully sized to deliver a certain performance level for a particular workload level; ensure that IOPS do not exceed the design parameters. Statistical reporting for IOPS (along with other key metrics) is available in the Statistics for Block panel, reached by selecting VNX > System > Monitoring and Alerts > Statistics for Block. Monitor the statistics online or offline using Unisphere Analyzer, which requires a license. Another metric to examine is Total Bandwidth (MB/s). An 8 Gb/s front-end SP port can process approximately 800 MB per second, and the average bandwidth must not exceed 80 percent of the link bandwidth under normal operating conditions. IOPS delivered to the LUNs are often higher than the IOPS issued by the hosts. This is particularly true with thin LUNs, because of the additional metadata associated with managing the I/O streams. Unisphere Analyzer shows the IOPS on each LUN, as shown in Figure 77.
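As a worked example of the bandwidth guideline above (a sketch only; the port count is a placeholder for your configuration):

# Front-end bandwidth headroom: 8 Gb/s is roughly 800 MB/s usable per port, and average
# utilization should stay at or below 80 percent of the link bandwidth.
$portCount        = 4        # placeholder: number of front-end ports in use
$portMBps         = 800      # approximate usable bandwidth of one 8 Gb/s FC port
$utilizationCap   = 0.8
$perPortCapMBps   = $portMBps * $utilizationCap
$aggregateCapMBps = $portCount * $perPortCapMBps
"Keep sustained throughput at or below $perPortCapMBps MB/s per port ($aggregateCapMBps MB/s across $portCount ports)."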

Figure 77. IOPS on the LUNs Certain RAID levels also impose write penalties that create additional back-end IOPS. Examine the IOPS delivered to (and serviced by) the underlying physical disks, which can also be viewed in Unisphere Analyzer, as shown in Figure 78. The guidelines for drive performance are: 180 IOPS for 15k rpm SAS drives, 120 IOPS for 10k rpm SAS drives, and 80 IOPS for NL-SAS drives.
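The write penalty and the per-drive guidelines above can be combined into a quick back-of-the-envelope sizing check, as in the following PowerShell sketch; the workload figures are placeholders, and the calculation is a simplification that ignores cache effects such as FAST Cache.

# Back-end IOPS = read IOPS + (write IOPS x RAID write penalty).
$hostIops     = 8000       # placeholder host-generated IOPS
$readRatio    = 0.7        # placeholder read percentage of the workload
$raidPenalty  = 4          # RAID 5 = 4, RAID 6 = 6, RAID 1/0 = 2
$iopsPerDrive = 180        # 15k rpm SAS; use 120 for 10k rpm SAS, 80 for NL-SAS

$backendIops  = ($hostIops * $readRatio) + ($hostIops * (1 - $readRatio) * $raidPenalty)
$drivesNeeded = [math]::Ceiling($backendIops / $iopsPerDrive)
"Back-end IOPS: $backendIops; drives required to service them (before hot spares): $drivesNeeded"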

Figure 78. IOPS on the disks Latency Latency is the byproduct of delays in processing I/O requests. This section focuses on monitoring storage latency, specifically for block-level I/O. Using the same procedures as in the previous section, view the latency at the LUN level, as shown in Figure 79. Figure 79. Latency on the LUNs

Latency can be introduced anywhere along the I/O stream, from the application layer, through the transport, and out to the final storage devices. Determining the precise causes of excessive latency requires a methodical approach. Excessive latency in an FC network is uncommon. Unless there is a defective component such as an HBA or cable, delays introduced at the network fabric layer are normally the result of misconfigured switching fabrics. An overburdened storage array can also cause latency within an FC environment. Focus primarily on the LUNs and on the ability of the underlying disk pools to service I/O requests. Requests that cannot be serviced immediately are queued, which introduces latency. The same paradigm applies to Ethernet-based protocols such as iSCSI and FCoE. However, additional factors come into play because these storage protocols use Ethernet as the underlying transport. Isolate the network traffic (either physically or logically) for storage, and preferably implement Quality of Service (QoS) in a shared or converged fabric. If network problems are not introducing excessive latency, examine the storage array. In addition to overburdened disks, excessive SP utilization can also introduce latency; SP utilization levels greater than 80 percent indicate a potential problem. Background processes such as replication, deduplication, and snapshots all compete for SP resources. Monitor these processes to ensure that they do not cause SP resource exhaustion. Possible mitigation techniques include staggering background jobs, setting replication limits, adding more physical resources, and rebalancing the I/O workloads. Growth may also mandate moving to more powerful hardware. For SP metrics, examine the data under the SP tab of Unisphere Analyzer, as shown in Figure 80. Review metrics such as Utilization (%), Queue Length, and Response Time (ms). High values for any of these metrics indicate that the storage array is under duress and likely requires mitigation. EMC best practices recommend thresholds of 70 percent utilization, 20 ms response time, and a queue length of 10.
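The thresholds above lend themselves to a simple offline review; the following PowerShell sketch assumes that SP metrics have been exported to a CSV file with columns named SP, UtilizationPct, ResponseTimeMs, and QueueLength (the file path and column names are assumptions to be adapted to your own export).

# Flag SP metrics that exceed the recommended thresholds (70% utilization, 20 ms, queue length 10).
$thresholds = @{ UtilizationPct = 70; ResponseTimeMs = 20; QueueLength = 10 }
Import-Csv -Path 'C:\PerfLogs\sp-metrics.csv' | ForEach-Object {
    foreach ($metric in $thresholds.Keys) {
        if ([double]($_.$metric) -gt $thresholds[$metric]) {
            "{0}: {1} = {2} exceeds the threshold of {3}" -f $_.SP, $metric, $_.$metric, $thresholds[$metric]
        }
    }
}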

Figure 80. SP utilization Monitoring file storage resources File-based protocols such as NFS and CIFS/SMB involve additional management processes beyond those for block storage. On VNX Unified systems, these services are provided by the Data Movers, hardware components that provide the interface between NFS, CIFS, or SMB clients and the SPs. Data Movers process file protocol requests on the client side and convert the requests to the appropriate SCSI block semantics on the array side. The additional components and protocols introduce additional monitoring requirements, such as Data Mover network link utilization, memory utilization, and Data Mover processor utilization. To examine Data Mover metrics in the Statistics for File panel, select VNX > System > Monitoring and Alerts > Statistics for File. Click the Data Mover link to display the summary metrics shown in Figure 81. Usage levels in excess of 80 percent indicate potential performance concerns and likely require mitigation through Data Mover reconfiguration, additional physical resources, or both.

Figure 81. Data Mover statistics Select Network Device from the Statistics panel to observe front-end network statistics. The Network Device Statistics window appears, as shown in Figure 82. If throughput figures exceed 80 percent of the link bandwidth to the clients, configure additional links to relieve the network saturation. Figure 82. Front-end Data Mover network statistics Capacity As with block storage monitoring, Unisphere provides panels for monitoring file storage capacity. Select Storage > Storage Configurations > Storage Pools for File to check file storage space utilization at the pool level, as shown in Figure 83.

Figure 83. Storage Pools for File panel Monitor capacity at the pool and file system levels: 1. Select Storage > File Systems. The File Systems window appears, as shown in Figure 84. Figure 84. File Systems panel 2. Select a file system to examine and click Properties, which displays detailed file system information, as shown in Figure 85. 3. Examine the File Storage area for the Used and Free capacity values.

Figure 85. File System Properties window IOPS In addition to monitoring block storage IOPS, Unisphere also provides the ability to monitor file system IOPS. Select System > Monitoring and Alerts > Statistics for File > File System I/O, as shown in Figure 86.

Figure 86. File System I/O Statistics window Latency To observe file system latency, select System > Monitoring and Alerts > Statistics for File > All Performance in Unisphere, and examine the CIFS statistics, such as CIFS:Ops/sec and the associated response times, as shown in Figure 87.
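The same CIFS/SMB traffic can also be observed from the Hyper-V host side with the Windows SMB client counters, as in the following sketch; the counter names are those exposed by Windows Server 2012 R2, but verify the exact counter set on your hosts before relying on it.

# Sample SMB client share counters for five minutes; high 'Avg. sec/Data Request' values
# point to latency on the file side of the I/O path.
Get-Counter -Counter @(
    '\SMB Client Shares(*)\Data Bytes/sec',
    '\SMB Client Shares(*)\Data Requests/sec',
    '\SMB Client Shares(*)\Avg. sec/Data Request'
) -SampleInterval 10 -MaxSamples 30 |
    ForEach-Object { $_.CounterSamples | Format-Table -Property Path, CookedValue -AutoSize }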

Figure 87. CIFS Statistics window Summary Consistent and thorough monitoring of the VSPEX Proven Infrastructure is a best practice. Having baseline performance data helps to identify problems, while monitoring key system metrics helps to ensure that the system functions optimally and within its design parameters. The monitoring process can be extended through integration with automation and orchestration tools from key partners, such as Microsoft with its System Center suite of products.
