EMC VSPEX Brocade Networking Solution for PRIVATE CLOUD


Proven Infrastructure

EMC VSPEX Brocade Networking Solution for PRIVATE CLOUD
Microsoft Windows Server 2012 with Hyper-V for up to 1,000 Virtual Machines
Enabled by Brocade VCS Fabrics, EMC VNX, and EMC Next-Generation Backup

EMC VSPEX

Abstract
This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with Brocade VDX networking, Microsoft Hyper-V, EMC Next-Generation VNX, and EMC next-generation backup for up to 1,000 virtual machines.

May 2014

Copyright 2014 EMC Corporation. All rights reserved. Published in the USA. Published May 2014.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC Online Support website.

Brocade Communications Systems, Inc. All Rights Reserved. ADX, AnyIO, Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, ICX, MLX, MyBrocade, OpenScript, VCS, VDX, and Vyatta are registered trademarks, and HyperEdge, The Effortless Network, and The On-Demand Data Center are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned may be trademarks of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.

Contents

Chapter 1 Executive Summary
Introduction
Target audience
Document purpose
Business needs

Chapter 2 Solution Overview
Introduction
Virtualization
Compute
Network
Storage
EMC Next-Generation VNX
EMC backup and recovery

Chapter 3 Solution Technology Overview
Overview
Summary of key components
Virtualization
Overview
Microsoft Hyper-V
Virtual Fibre Channel ports
Microsoft System Center Virtual Machine Manager
High availability with Hyper-V Failover Clustering
Hyper-V Replica
Hyper-V snapshot
Cluster-Aware Updating
EMC Storage Integrator
Compute
Network

Overview
Brocade 6510 Fibre Channel switch for Block Based Storage
Brocade VDX Ethernet Fabric switch for file based storage
Storage
Overview
EMC VNX family
EMC VNX Snapshots
EMC VNX SnapSure
EMC VNX Virtual Provisioning
Windows Offloaded Data Transfer
EMC PowerPath
EMC FAST Cache
VNX file shares
ROBO
SMB 3.0 features
Overview
SMB versions and negotiations
VNX and VNXe storage support
SMB 3.0 VHD/VHDX storage support
SMB 3.0 Continuous Availability
SMB Multichannel
SMB 3.0 Copy Offload
SMB 3.0 BranchCache
SMB 3.0 Remote VSS
SMB 3.0 encryption
SMB 3.0 PowerShell cmdlets
SMB 3.0 Directory Leasing
Summary of feature defaults
Backup and recovery
Overview
EMC Avamar deduplication
EMC Data Domain deduplication storage systems
VMware vSphere data protection
Continuous Availability
EMC RecoverPoint
EMC VNX Replicator
Other technologies

EMC XtremSW Cache

Chapter 4 Solution Architecture Overview
Overview
Solution architecture
Overview
Logical architecture
Key components
Hardware resources
Software resources
Server configuration guidelines
Overview
Hyper-V memory virtualization
Memory configuration guidelines
Network configuration guidelines
Overview
VLAN
Zoning (Block Storage FC only)
Enable jumbo frames (iSCSI or SMB only)
Link aggregation (SMB only)
Brocade Virtual Link Aggregation Group (vLAG)
Brocade Inter-Switch Link (ISL) Trunks
Equal-Cost Multipath (ECMP)
Pause Flow Control
Storage configuration guidelines
Overview
Hyper-V storage virtualization for VSPEX
VSPEX storage building blocks
VSPEX private cloud validated maximums
High-availability and failover
Overview
Virtualization layer
Compute layer
Brocade Network layer
Storage layer
Validation test profile
Profile characteristics
Backup and recovery configuration guidelines

Overview
Backup characteristics
Backup layout
Sizing guidelines
Reference workload
Overview
Defining the reference workload
Applying the reference workload
Overview
Example 1: Custom-built application
Example 2: Point-of-Sale system
Example 3: Web server
Example 4: Decision-support database
Summary of examples
Implementing the solution
Overview
Resource types
CPU resources
Memory resources
Network resources
Storage resources
Implementation summary
Quick assessment of customer environment
Overview
CPU requirements
Memory requirements
Storage performance requirements
IOPS
I/O size
I/O latency
Storage capacity requirements
Determining equivalent reference virtual machines
Fine-tuning hardware resources
EMC VSPEX Sizing Tool

Chapter 5 VSPEX Configuration Guidelines
Overview
Pre-deployment tasks

Overview
Deployment prerequisites
Customer configuration data
Prepare, connect, and configure Brocade network switches
Overview
Prepare Brocade Storage Network Infrastructure
Complete Network Cabling
Configure Brocade VDX 6740 switch (File Storage)
Step 1: Verify and apply Brocade VDX NOS licenses
Step 2: Configure logical chassis VCS ID and RBridge IDs on the VDXs
Step 3: Assign Switch Name
Step 4: Brocade VCS Fabric ISL Port Configuration
Step 5: Create required VLANs
Step 6: Create vLAGs for Microsoft Hyper-V hosts
Step 7: Create vLAGs for VNX ports
Step 7: Configure Switch Interfaces for VNXe
Step 8: Connecting the VCS Fabric to an existing Infrastructure through Uplinks
Step 9: Configure MTU and Jumbo Frames
Step 10: Enable Flow Control Support
Step 11: Auto QoS for NAS
Configure Brocade 6510 Switch storage network (Block Storage)
Step 1: Initial Switch Configuration
Step 2: FC Switch Licensing
Step 3: FC Zoning Configuration
Step 4: Switch Management and Monitoring
Prepare and configure storage array
VNX configuration for block protocols
VNX configuration for file protocols
FAST VP configuration
FAST Cache configuration
Install and configure Hyper-V hosts
Overview
Install Windows hosts
Install Hyper-V and configure failover clustering
Configure Windows host networking
Install PowerPath on Windows servers

Plan virtual machine memory allocations
Install and configure SQL Server database
Overview
Create a virtual machine for Microsoft SQL Server
Install Microsoft Windows on the virtual machine
Install SQL Server
Configure a SQL Server for SCVMM
System Center Virtual Machine Manager server deployment
Overview
Create a SCVMM host virtual machine
Install the SCVMM guest OS
Install the SCVMM server
Install the SCVMM Management Console
Install the SCVMM agent locally on a host
Add a Hyper-V cluster into SCVMM
Add file share storage to SCVMM (file variant only)
Create a virtual machine in SCVMM
Perform partition alignment, and assign File Allocation Unit Size
Create a template virtual machine
Deploy virtual machines from the template virtual machine
Summary

Chapter 6 Verifying the Solution
Overview
Post-install checklist
Deploy and test a single virtual server
Verify the redundancy of the solution components
Block environments
File environments

Chapter 7 System Monitoring
Overview
Key areas to monitor
Performance baseline
Servers
Brocade Networking
Storage
VNX resources monitoring guidelines
Monitoring block storage resources

Monitoring file storage resources
Summary

Chapter 8 Validation with Microsoft Fast Track v3
Overview
Business case for validation
Process requirements
Step 1: Core prerequisites
Step 2: Select the VSPEX Proven Infrastructure platform
Step 3: Define additional Microsoft Hyper-V Fast Track Program components
Step 4: Build a detailed bill of materials
Step 5: Test the environment
Step 6: Document and publish the solution
Additional resources

Appendix A Bill of Materials
Bill of materials

Appendix B Customer Configuration Data Sheet
Customer configuration data sheet

Appendix C Server Resources Component Worksheet
Server resources component worksheet

Appendix D References
References
EMC documentation
Brocade documentation
Other documentation

Appendix E About VSPEX
About VSPEX


Figures

Figure 1. Next-Generation VNX with multicore optimization
Figure 2. Active/active processors increase performance, resiliency, and efficiency
Figure 3. New Unisphere Management Suite
Figure 4. EMC backup and recovery solutions
Figure 5. VSPEX private cloud components
Figure 6. Compute layer flexibility
Figure 7. Example of highly available Brocade Block Based storage network design
Figure 8. Brocade VDX with VCS Fabrics in a highly available file based storage network design
Figure 9. Storage pool rebalance progress
Figure 10. Thin LUN space utilization
Figure 11. Examining storage pool space utilization
Figure 12. Defining storage pool utilization thresholds
Figure 13. Defining automated notifications - for block
Figure 14. SMB 3.0 baseline performance comparison point
Figure 15. SMB 3.0 Continuous Availability
Figure 16. CA application performance
Figure 17. SMB Multichannel fault tolerance
Figure 18. Multichannel network throughput
Figure 19. Copy Offload
Figure 20. Enabling the Encrypt Data parameter
Figure 21. Enabling encryption: Client CPU utilization
Figure 22. Enabling encryption: Data Mover CPU utilization
Figure 23. PowerShell execution of Show Shares
Figure 24. PowerShell execution of Get-SmbServerConfiguration
Figure 25. SMB 3.0 Directory Leasing
Figure 26. Logical architecture for block storage

Figure 27. Logical architecture for file storage
Figure 28. Hypervisor memory consumption
Figure 29. Required Brocade networks for block storage
Figure 30. Required Brocade networks for file storage
Figure 31. Hyper-V virtual disk types
Figure 32. Building block for 13 virtual servers
Figure 33. Building block for 125 virtual servers
Figure 34. Storage layout for 300 virtual machines using VNX
Figure 35. Storage layout for 600 virtual machines using VNX
Figure 36. Storage layout for 1,000 virtual machines using VNX
Figure 37. Maximum scale levels and entry points of different arrays
Figure 38. High availability at the virtualization layer
Figure 39. Redundant power supplies
Figure 40. Brocade Network layer high availability (VNX) block storage network variant
Figure 41. Brocade Network layer high availability (VNX) file storage
Figure 42. VNX series HA components
Figure 43. Resource pool flexibility
Figure 44. Required resource from the reference virtual machine pool
Figure 45. Aggregate resource requirements stage
Figure 46. Pool configuration stage
Figure 47. Aggregate resource requirements - stage
Figure 48. Pool configuration stage
Figure 49. Aggregate resource requirements for stage
Figure 50. Pool configuration stage
Figure 51. Customizing server resources
Figure 52. Sample Brocade network architecture File storage
Figure 53. Sample Brocade network architecture Block storage
Figure 54. Port types
Figure 55. Port Groups of the VDX
Figure 56. Port Groups of the VDX 6740T and Brocade VDX6740T-1G
Figure 57. Creating VLANs
Figure 58. Example VCS/VDX network topology with Infrastructure connectivity
Figure 59. Network Settings for File dialog box
Figure 60. The Create Interface dialog box
Figure 61. The Create CIFS Server dialog box
Figure 62. The Create File System dialog box

Figure 63. The File System Properties dialog box
Figure 64. The Create File Share dialog box
Figure 65. The Storage Pool Properties dialog box
Figure 66. Manage Auto-Tiering dialog box
Figure 67. The Storage System Properties dialog box
Figure 68. The Create FAST Cache dialog box
Figure 69. Advanced tab in the Create Storage Pool dialog
Figure 70. Advanced tab in the Storage Pool Properties dialog
Figure 71. Storage Pool Alerts area
Figure 72. Storage Pools panel
Figure 73. LUN Properties dialog box
Figure 74. Monitoring and Alerts panel
Figure 75. IOPS on the LUNs
Figure 76. IOPS on the disks
Figure 77. Latency on the LUNs
Figure 78. SP utilization
Figure 79. Data Mover statistics
Figure 80. Front-end Data Mover network statistics
Figure 81. Storage Pools for File panel
Figure 82. File Systems panel
Figure 83. File System Properties window
Figure 84. File System I/O Statistics window
Figure 85. CIFS Statistics window


Tables

Table 1. VNX customer benefits
Table 2. Thresholds and settings under VNX OE Block Release
Table 3. SMB dialect used between client and server
Table 4. Storage migration improvement with Copy Offload
Table 5. Microsoft PowerShell cmdlets
Table 6. EMC-provided PowerShell cmdlets
Table 7. Default status of SMB 3.0 features
Table 8. Solution hardware
Table 9. Solution software
Table 10. Hardware resources for compute layer
Table 11. Hardware resources for network
Table 12. Hardware resources for storage
Table 13. Number of disks required for different number of virtual machines
Table 14. Profile characteristics
Table 15. Virtual machine characteristics
Table 16. Blank worksheet row
Table 17. Reference virtual machine resources
Table 18. Example worksheet row
Table 19. Example applications stage
Table 20. Example applications - stage
Table 21. Example applications - stage
Table 22. Server resource component totals
Table 23. Deployment process overview
Table 24. Tasks for pre-deployment
Table 25. Deployment prerequisites checklist
Table 26. Tasks for switch and network configuration
Table 27. Brocade VDX 6740 Configuration Steps
Table 28. Brocade switch default settings

Table 29. Brocade 6510 FC switch Configuration Steps
Table 30. Brocade switch default settings
Table 31. Tasks for VNX configuration for block protocols
Table 32. Storage allocation table for block
Table 33. Tasks for storage configuration for file protocols
Table 34. Storage allocation table for file
Table 35. Tasks for server installation
Table 36. Tasks for SQL Server database setup
Table 37. Tasks for SCVMM configuration
Table 38. Tasks for testing the installation
Table 39. Hyper-V Fast Track component classification
Table 40. List of components used in the VSPEX solution for 300 virtual machines
Table 41. List of components used in the VSPEX solution for 600 virtual machines
Table 42. List of components used in the VSPEX solution for 1,000 virtual machines
Table 43. Common server information
Table 44. Hyper-V server information
Table 45. Array information
Table 46. Brocade Network infrastructure information
Table 47. VLAN information
Table 48. Service accounts
Table 49. Blank worksheet for determining server resources

Chapter 1 Executive Summary

This chapter presents the following topics:
Introduction
Target audience
Document purpose
Business needs

Introduction

VSPEX with Brocade networking solutions are validated and modular architectures built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, compute, backup, networking, and storage layers. VSPEX helps to reduce virtualization planning and configuration burdens. When embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, choice, greater efficiency, and lower risk.

This document is a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select any servers that meet or exceed the stated minimums. Brocade networking solutions are defined for the networking requirements of each of the VSPEX reference architectures covered in this document.

Target audience

The readers of this document should have the necessary training and background to install and configure Microsoft Hyper-V, Brocade VDX Ethernet Fabric or Connectrix-B Fibre Channel series switches, EMC VNX series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and readers should be familiar with these documents. Readers should also be familiar with the infrastructure and database security policies of the customer installation.

Users focusing on selling and sizing a Microsoft Hyper-V private cloud infrastructure should pay particular attention to the first four chapters of this document. After purchase, implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose

This proven infrastructure guide includes an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the system. The VSPEX private cloud architecture provides the customer with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on the Microsoft Hyper-V virtualization layer, backed by the highly available Brocade fabric network switch series and the VNX family of storage. The compute components, which are defined by the VSPEX partners, are laid out to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

The 300, 600, and 1,000 virtual machine environments are based on a defined reference workload. Since not every virtual machine has the same requirements, this document contains methods and guidance to adjust your system to be cost-effective when deployed. For smaller environments, solutions for up to 100 virtual machines based on the EMC VNXe series arrays are described in EMC VSPEX Private Cloud: Microsoft Windows Server 2012 with Hyper-V for up to 100 Virtual Machines.

A private cloud architecture is a complex system offering. This document facilitates its setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component has been installed, validation tests and monitoring instructions ensure that your customer's system is running correctly. Following the instructions in this document ensures an efficient and expedited journey to the cloud.

Business needs

VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, server, and networking layers. VSPEX solutions accelerate your IT transformation by enabling faster deployments, choice, greater efficiency, and lower risk.

Business applications are moving into consolidated compute, network, and storage environments. The EMC VSPEX Private Cloud with Brocade using Microsoft Hyper-V reduces the complexity of configuring every component of a traditional deployment model. The complexity of integration management is reduced while maintaining the application design and implementation options. Administration is unified, while process separation can be adequately controlled and monitored.

The business needs for the VSPEX private cloud for Microsoft Hyper-V architectures are:
Provide an end-to-end virtualization solution to use the capabilities of the unified infrastructure components.
Provide a VSPEX private cloud solution for Microsoft Hyper-V to efficiently virtualize up to 1,000 virtual machines for varied customer use cases.
Provide a reliable, flexible, and scalable reference design.


Chapter 2 Solution Overview

This chapter presents the following topics:
Introduction
Virtualization
Compute
Network
Storage

Introduction

The EMC VSPEX with Brocade Networking Solution for Private Cloud for Microsoft Hyper-V provides a complete system architecture capable of supporting up to 1,000 virtual machines with redundant server and network topology and highly available storage. The core components that make up this particular solution are virtualization, compute, storage, and Brocade networking.

Virtualization

Microsoft Hyper-V is a leading virtualization platform in the industry. For years, Hyper-V has provided flexibility and cost savings to end users by consolidating large, inefficient server farms into nimble, reliable cloud infrastructures. Features such as Live Migration, which enables a virtual machine to move between different servers with no disruption to the guest operating system, and Dynamic Optimization, which performs Live Migration automatically to balance loads, make Hyper-V a solid business choice. With the release of Windows Server 2012, a Microsoft virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual random access memory (RAM).

Compute

VSPEX provides the flexibility to design and implement the customer's choice of server components. The infrastructure must conform to the following attributes:
Sufficient cores and memory to support the required number and types of virtual machines.
Sufficient network connections to enable redundant connectivity to the system switches.
Excess capacity to withstand a server failure and failover in the environment.

Network

Brocade VDX Ethernet Fabric and Fibre Channel Fabric switch technology enables the implementation of high-performance, efficient, and resilient networks validated with the VSPEX proven architectures. Brocade Ethernet and Fibre Channel fabrics provide an open, standards-based solution that unleashes the full potential of high-density server virtualization, private cloud architectures, and EMC VNX storage.

The Brocade VDX Ethernet Fabric networking solution provides the following attributes:
Offers flexibility to deploy 1000BASE-T and upgrade to 10GBASE-T for higher bandwidth
Delivers high performance and reduces network congestion with 10 Gigabit Ethernet (GbE) ports, low latency, and 24 MB deep buffers
Improves capacity with the ability to create up to a 160 GbE uplink with Brocade ISL Trunking
Manages an entire multitenant Brocade VCS fabric as a single switch with Brocade VCS Logical Chassis
Provides efficiently load-balanced multipathing at Layers 1, 2, and 3, including multiple Layer 3 gateways
Simplifies Virtual Machine (VM) mobility and management with automated, dynamic port profile configuration and migration
Supports Software-Defined Networking (SDN) technologies within data, control, and management planes

The Brocade 6510 Fibre Channel Fabric switch is the purpose-built, data center-proven network infrastructure for storage, delivering unmatched reliability, simplicity, and 4/8/16 Gbps performance.

The Brocade 6510 Fibre Channel Fabric networking solution provides the following attributes:
Provides exceptional price/performance value, combining flexibility, simplicity, and enterprise-class functionality in a 48-port switch
Enables fast, easy, and cost-effective scaling from 24 to 48 ports using Ports on Demand (PoD) capabilities
Simplifies management through Brocade Fabric Vision technology, reducing operational costs and optimizing application performance
Simplifies deployment and supports high-performance fabrics by using Brocade ClearLink Diagnostic Ports (D_Ports) to identify optic and cable issues
Simplifies and accelerates deployment with the Brocade EZSwitchSetup wizard and Dynamic Fabric Provisioning (DFP)
Maximizes availability with redundant, hot-pluggable components and non-disruptive software upgrades
Simplifies server connectivity by deploying as a full-fabric switch or a Brocade Access Gateway

Storage

The EMC VNX storage family is the leading shared storage platform in the industry. VNX provides both file and block access with a broad feature set, which makes it an ideal choice for any private cloud implementation.

VNX storage includes the following components, sized for the stated reference architecture workload:
Host adapter ports (for block): Provide host connectivity through fabric to the array.
Storage processors: The compute components of the storage array, which are used for all aspects of data moving into, out of, and between arrays.
Disk drives: Disk spindles and solid state drives that contain the host or application data and their enclosures.
Data Movers (for file): Front-end appliances that provide file services to hosts (optional if CIFS services are provided).

Note: The term Data Mover refers to a VNX hardware component, which has a CPU, memory, and I/O ports. It enables Common Internet File System (CIFS/SMB) and Network File System (NFS) protocols on the VNX.

The Microsoft Hyper-V private cloud solutions for 300, 600, and 1,000 virtual machines described in this document are based on the EMC VNX5400, EMC VNX5600, and EMC VNX5800 storage arrays, respectively. The VNX5400 array can support a maximum of 250 drives, the VNX5600 can host up to 500 drives, and the VNX5800 can host up to 750 drives.

The EMC VNX series supports a wide range of business-class features that are ideal for the private cloud environment, including:
EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP)
EMC FAST Cache
File-level data deduplication and compression
Block deduplication
Thin provisioning
Replication
Snapshots or checkpoints
File-level retention
Quota management

EMC Next-Generation VNX

Features and Enhancements

The EMC VNX flash-optimized unified storage platform delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's virtualized application environments.

VNX includes many features and enhancements designed and built upon the first generation's success. These features and enhancements include:
More capacity with multicore optimization with Multicore Cache, Multicore RAID, and Multicore FAST Cache (MCx)
Greater efficiency with a flash-optimized hybrid array
Better protection by increasing application availability with active/active storage processors
Easier administration and deployment by increasing productivity with a new Unisphere Management Suite

VSPEX is built with the next generation of VNX to deliver even greater efficiency, performance, and scale than ever before.

Flash-optimized hybrid array

VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your critical data, while intelligently moving less frequently accessed data to lower-cost disks. In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. A flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high performance scalability. The EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and boosts the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance.

Data is typically used most frequently at the time it is created; therefore new data is first stored on flash drives for the best performance. As that data ages and becomes less active over time, FAST VP moves the data from high-performance to high-capacity drives automatically, based on customer-defined policies. EMC has enhanced this functionality with four times better granularity and with new FAST VP solid-state disks (SSDs) based on enterprise multi-level cell (eMLC) technology to lower the cost per gigabyte. FAST Cache dynamically absorbs unpredicted spikes in system workloads.

All VSPEX use cases benefit from the increased efficiency. VSPEX Proven Infrastructures deliver private cloud, end-user computing, and virtualized application solutions. With VNX, customers can realize an even greater return on their investment. VNX provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier.

VNX Intel MCx Code Path Optimization

The advent of flash technology has been a catalyst in totally changing the requirements of midrange storage systems. EMC redesigned the midrange storage platform to efficiently optimize multicore CPUs to provide the highest performing storage system at the lowest cost in the market. MCx distributes all VNX data services across all cores, up to 32, as shown in Figure 1. The VNX series with MCx has dramatically improved the file performance for transactional applications like databases or virtual machines over network-attached storage (NAS).

Figure 1. Next-Generation VNX with multicore optimization

Multicore Cache

The cache is the most valuable asset in the storage subsystem; its efficient use is key to the overall efficiency of the platform in handling variable and changing workloads. The cache engine has been modularized to take advantage of all the cores available in the system.

Multicore RAID

Another important part of the MCx redesign is the handling of I/O to the permanent back-end storage hard disk drives (HDDs) and SSDs. Greatly increased performance improvements in VNX come from the modularization of the back-end data management processing, which enables MCx to seamlessly scale across all processors.

VNX performance

Performance enhancements

VNX storage, enabled with the MCx architecture, is optimized for FLASH 1st and provides unprecedented overall performance, optimizing for transaction performance (cost per IOPS) and bandwidth performance (cost per GB/s) with low latency, and providing optimal capacity efficiency (cost per GB).

VNX provides the following performance improvements:
Up to four times more file transactions when compared with dual controller arrays
Increased file performance for transactional applications by up to three times, with a 60 percent better response time
Up to four times more Oracle and Microsoft SQL Server OLTP transactions
Up to six times more virtual machines

Active/active array storage processors

The new VNX architecture provides active/active array storage processors, as shown in Figure 2, which eliminate application timeouts during path failover because both paths are actively serving I/O.

Figure 2. Active/active processors increase performance, resiliency, and efficiency

Load balancing is also improved, and applications can achieve up to a two times improvement in performance. Active/active for block is ideal for applications that require the highest levels of availability and performance, but do not require tiering or efficiency services like compression, deduplication, or snapshots.

With this VNX release, VSPEX customers can use virtual Data Movers (VDMs) and VNX Replicator to perform automated and high-speed file system migrations between systems. This process migrates all snaps and settings automatically, and enables the clients to continue operation during the migration.

Note: The active/active processors are only available for classic logical unit numbers (LUNs), not for pool LUNs.

Unisphere Management Suite

The new Unisphere Management Suite extends Unisphere's easy-to-use interface to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in Figure 3, the suite also includes Unisphere Remote for centrally managing up to thousands of VNX and VNXe systems, with new support for XtremSW Cache.

Figure 3. New Unisphere Management Suite

Virtualization Management

EMC Storage Integrator

EMC Storage Integrator (ESI) is targeted towards the Windows and application administrator. ESI is easy to use, delivers end-to-end monitoring, and is hypervisor agnostic. Administrators can provision in both virtual and physical environments for a Windows platform, and troubleshoot by viewing the topology of an application from the underlying hypervisor to the storage.

Microsoft Hyper-V

With Windows Server 2012, Microsoft provides Hyper-V 3.0, an enhanced hypervisor for private cloud that can run on NAS protocols for simplified connectivity.

Offloaded Data Transfer

The Offloaded Data Transfer (ODX) feature of Microsoft Hyper-V enables data transfers during copy operations to be offloaded to the storage array, freeing up host cycles. For example, using ODX for a live migration of a SQL Server virtual machine doubled performance, decreased migration time by 50 percent, reduced CPU usage on the Hyper-V server by 20 percent, and eliminated network traffic.
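As a quick operational aside that is not drawn from this guide, the Windows ODX support state on a Hyper-V host can be confirmed from PowerShell by reading the documented FilterSupportedFeaturesMode registry value (0, or an absent value, means ODX is enabled; 1 means it has been disabled):

    # Check whether Windows ODX support is enabled on this host.
    # If the value is absent or 0, ODX is enabled; a value of 1 means it was disabled.
    Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
        -Name "FilterSupportedFeaturesMode" -ErrorAction SilentlyContinue

    # Re-enable ODX if it was previously turned off (run from an elevated session).
    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
        -Name "FilterSupportedFeaturesMode" -Value 0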

EMC backup and recovery

EMC backup and recovery solutions, EMC Avamar and EMC Data Domain, deliver the protection confidence needed to accelerate the deployment of VSPEX private clouds. Optimized for virtual environments, EMC backup and recovery reduces backup times by 90 percent and increases recovery speeds by 30 times, even offering instant access to virtual machines for worry-free protection. EMC backup appliances add another layer of assurance with end-to-end verification and self-healing to ensure successful recoveries.

Our solutions also deliver big savings. With industry-leading deduplication, you can reduce backup storage by 10 to 30 times, backup management time by 81 percent, and WAN bandwidth by 99 percent for efficient disaster recovery, delivering a seven-month payback period on average. You will be able to scale storage easily and efficiently as your environment grows.

Figure 4. EMC backup and recovery solutions

EMC backup and recovery solutions used in this VSPEX solution include the EMC Avamar deduplication software and system and the EMC Data Domain deduplication storage system.


Chapter 3 Solution Technology Overview

This chapter presents the following topics:
Overview
Summary of key components
Virtualization
Compute
Network
Storage
SMB 3.0 features
Backup and recovery
Continuous Availability
Other technologies

Overview

This solution uses the EMC VNX series, Brocade network fabric switches, and Microsoft Hyper-V to provide storage and server hardware consolidation in a private cloud. The new virtualized infrastructure is centrally managed, to provide efficient deployment and management of a scalable number of virtual machines and associated shared storage.

Figure 5 depicts the solution components.

Figure 5. VSPEX private cloud components

The following sections describe the components in more detail.

Summary of key components

This section briefly describes the key components of this solution.

Virtualization
The virtualization layer decouples the physical implementation of resources from the applications that use them. The application's view of the available resources is no longer directly tied to the hardware. This enables many key features in the private cloud concept.

Compute
The compute layer provides memory and processing resources for the virtualization layer software, and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and enables the customer to implement the solution by using any server hardware that meets these requirements.

Network
Brocade VDX Ethernet Fabric or Connectrix-B Fibre Channel Fabric switches with Brocade fabric networking technology connect the users of the private cloud and the existing customer infrastructure with the compute and storage resources of the VSPEX solution. The EMC VSPEX reference architecture with Brocade network fabric switches provides the required connectivity and scalability. The EMC VSPEX with Brocade networking solution enables the customer to implement a solution that provides a cost-effective, resilient, and operationally efficient virtualization platform.

Storage
The storage layer is critical for the implementation of the private cloud. With multiple hosts accessing shared data, many of the use cases defined in the private cloud can be implemented. The VNX used in this solution provides high-performance data storage while maintaining high availability.

Backup and recovery
The backup and recovery components of the solution provide data protection when the data in the primary system is deleted, damaged, or unusable.

Solution architecture provides details on all the components that make up the reference architecture.

Virtualization

Overview

The virtualization layer is a key component of any server virtualization or private cloud solution. It decouples the application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the system to physically change without affecting the hosted applications. In a server virtualization or private cloud use case, it enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.

Microsoft Hyper-V

Microsoft Hyper-V is a Windows Server role that was introduced in Windows Server 2008. Hyper-V virtualizes computer hardware resources, such as CPU, memory, storage, and networking. This transformation creates fully functional virtual machines that run their own operating systems and applications like physical computers.

Hyper-V works with Failover Clustering and Cluster Shared Volumes (CSVs) to provide high availability in a virtualized infrastructure. Live migration and live storage migration enable seamless movement of virtual machines or virtual machine files between Hyper-V servers or storage systems, transparently and with minimal performance impact.

Virtual Fibre Channel ports

Windows Server 2012 provides virtual Fibre Channel (FC) ports within a Hyper-V guest operating system. The virtual FC port uses the standard N_Port ID Virtualization (NPIV) process to address the virtual machine WWNs within the Hyper-V host's physical host bus adapter (HBA). This provides virtual machines with direct access to external storage arrays over FC, enables clustering of guest operating systems over FC, and offers an important new storage option for the hosted servers in the virtual infrastructure. Virtual FC in Hyper-V guest operating systems also supports related features, such as virtual SANs, live migration, and multipath I/O (MPIO).

Prerequisites for virtual FC include:
One or more installations of Windows Server 2012 with the Hyper-V role
One or more FC HBAs installed on the server, each with an appropriate HBA driver that supports virtual FC
An NPIV-enabled SAN

Virtual machines using the virtual FC adapter must use Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 as the guest operating system.
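As a brief, hedged illustration (not one of the validated configuration steps in this guide), the Hyper-V PowerShell module in Windows Server 2012 can attach a virtual FC adapter to a guest. The VM name and virtual SAN name below are placeholders and assume a virtual SAN has already been defined on the host.

    # List NPIV-capable HBA initiator ports on the Hyper-V host.
    Get-InitiatorPort

    # List the virtual SANs already defined on this host.
    Get-VMSan

    # Attach a virtual Fibre Channel adapter for an existing virtual SAN
    # ("FabricA" is a placeholder) to the guest "SQL01".
    Add-VMFibreChannelHba -VMName "SQL01" -SanName "FabricA"

    # Confirm the adapter and the WWPNs generated for it.
    Get-VMFibreChannelHba -VMName "SQL01"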

Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager (SCVMM) is a centralized management platform for the virtualized data center. SCVMM allows administrators to configure and manage the virtualized host, networking, and storage resources, and to create and deploy virtual machines and services to private clouds. SCVMM simplifies provisioning, management, and monitoring in the Hyper-V environment.

High availability with Hyper-V Failover Clustering

The Windows Server 2012 Failover Clustering feature provides high availability in Hyper-V. High availability is impacted by both planned and unplanned downtime, and Failover Clustering significantly increases the availability of virtual machines during planned and unplanned downtimes.

Configure Windows Server 2012 Failover Clustering on the Hyper-V host to monitor virtual machine health, and migrate virtual machines between cluster nodes. The advantages of this configuration are:
Enables migration of virtual machines to a different cluster node if the cluster node where they reside must be updated, changed, or rebooted.
Allows other members of the Windows Failover Cluster to take ownership of the virtual machines if the cluster node where they reside suffers a failure or significant degradation.
Minimizes downtime due to virtual machine failures. Windows Server Failover Clustering detects virtual machine failures and automatically takes steps to recover the failed virtual machine. This allows the virtual machine to be restarted on the same host server, or migrated to a different host server.

Hyper-V Replica

Hyper-V Replica was introduced in Windows Server 2012 to provide asynchronous virtual machine replication over the network from one Hyper-V host at a primary site to another Hyper-V host at a replica site. Hyper-V Replica protects business applications in the Hyper-V environment from downtime associated with an outage at a single site.

Hyper-V Replica tracks the write operations on the primary virtual machine and replicates the changes to the replica server over the network with HTTP and HTTPS. The amount of network bandwidth required is based on the transfer schedule and the data change rate.

If the primary Hyper-V host fails, you can manually fail over the production virtual machines to the Hyper-V hosts at the replica site. Manual failover brings the virtual machines back to a consistent point from which they can be accessed with minimal impact on the business. After recovery, the primary site can receive changes from the replica site. You can then perform a planned failback to manually revert the virtual machines to the Hyper-V host at the primary site.
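The following PowerShell sketch is illustrative only; the cluster, node, VM, and replica server names are placeholders, and the replica host must already be configured to accept replication. It shows how Failover Clustering and Hyper-V Replica are typically enabled for a virtual machine on Windows Server 2012.

    # Create a Hyper-V failover cluster from two example hosts (names and IP address are placeholders).
    New-Cluster -Name "HVCluster01" -Node "HyperV01","HyperV02" -StaticAddress 192.168.10.50

    # Make an existing virtual machine highly available in the cluster.
    Add-ClusterVirtualMachineRole -VMName "App01" -Cluster "HVCluster01"

    # Enable Hyper-V Replica for the same virtual machine, replicating to a
    # replica server over Kerberos-authenticated HTTP on port 80.
    Enable-VMReplication -VMName "App01" -ReplicaServerName "HyperV-DR01" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos

    # Start the initial replication over the network.
    Start-VMInitialReplication -VMName "App01"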

Hyper-V snapshot

A Hyper-V snapshot creates a consistent point-in-time view of a virtual machine. Snapshots function as a source for backups or other use cases. Virtual machines do not have to be running to take a snapshot. Snapshots are completely transparent to the applications running on the virtual machine. The snapshot saves the point-in-time status of the virtual machine, and enables users to revert the virtual machine to a previous point in time if necessary.

Note: Snapshots require additional storage space. The amount of additional storage space depends on the frequency of data change on the virtual machine and the number of snapshots being retained.

Cluster-Aware Updating

Cluster-Aware Updating (CAU) was introduced in Windows Server 2012. It provides a way of updating cluster nodes with little or no disruption. CAU transparently performs the following tasks during the update process:
1. Puts one cluster node into maintenance mode and takes it offline (virtual machines are live-migrated to other cluster nodes).
2. Installs the updates.
3. Performs a restart if necessary.
4. Brings the node back online (migrated virtual machines are moved back to the original node).
5. Updates the next node in the cluster.

The node managing the update process is called the Orchestrator. The Orchestrator can work in a couple of different modes:
Self-updating mode: The Orchestrator runs on the cluster node being updated.
Remote-updating mode: The Orchestrator runs on a standalone Windows operating system, and remotely manages the cluster update.

CAU is integrated with Windows Server Update Services (WSUS). PowerShell enables automation of the CAU process.

EMC Storage Integrator

EMC Storage Integrator (ESI) is an agentless, free plug-in that enables application-aware storage provisioning for Microsoft Windows Server applications, Hyper-V, VMware, and XenServer environments. Administrators can provision block and file storage for Microsoft Windows or Microsoft SharePoint sites by using wizards in ESI. ESI supports the following functions:
Provisioning, formatting, and presenting drives to Windows servers
Provisioning new cluster disks, and automatically adding them to the cluster
Provisioning shared CIFS storage, and mounting it to Windows servers
Provisioning SharePoint storage, sites, and databases in a single wizard
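A minimal PowerShell sketch of the snapshot and CAU features described above, with placeholder VM, snapshot, and cluster names, and assuming the Hyper-V, FailoverClusters, and ClusterAwareUpdating modules are installed:

    # Take a point-in-time snapshot (checkpoint) of a virtual machine.
    Checkpoint-VM -Name "App01" -SnapshotName "Before-patching"

    # Revert the virtual machine to that snapshot if needed.
    Restore-VMSnapshot -VMName "App01" -Name "Before-patching" -Confirm:$false

    # Run a Cluster-Aware Updating pass against the cluster in remote-updating mode.
    Invoke-CauRun -ClusterName "HVCluster01" -MaxFailedNodes 1 -MaxRetriesPerNode 2 -Force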

Compute

The choice of a server platform for a VSPEX infrastructure is not only based on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX documents the minimum requirements for the number of processor cores and the amount of RAM. This can be implemented with two or with twenty servers, and still be considered the same VSPEX solution.

In the example shown in Figure 6, the compute layer requirements for a specific implementation are 25 processor cores and 200 GB of RAM. One customer might want to implement this by using white-box servers containing 16 processor cores and 64 GB of RAM, while another customer chooses a higher-end server with 20 processor cores and 144 GB of RAM.

Figure 6. Compute layer flexibility
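To make the arithmetic behind this example explicit, the short PowerShell sketch below (illustrative only, not part of the original guidance; the server specifications are the hypothetical ones from Figure 6) computes how many servers each configuration needs to cover 25 processor cores and 200 GB of RAM.

    # Reference requirement from the example above.
    $requiredCores = 25
    $requiredRamGB = 200

    # Candidate server configurations (white-box 16-core/64 GB vs. 20-core/144 GB).
    $servers = @(
        @{ Name = "Customer A server"; Cores = 16; RamGB = 64 },
        @{ Name = "Customer B server"; Cores = 20; RamGB = 144 }
    )

    foreach ($s in $servers) {
        # The server count must satisfy both the core and the memory constraint.
        $count = [math]::Max(
            [math]::Ceiling($requiredCores / $s.Cores),
            [math]::Ceiling($requiredRamGB / $s.RamGB))
        "{0}: {1} servers (plus one more for N+1 availability)" -f $s.Name, $count
    }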

The first customer needs four of the chosen servers, while the other customer needs two.

Note: To enable high availability at the compute layer, each customer needs one additional server to ensure that the system has enough capability to maintain business operations when a server fails.

Use the following best practices in the compute layer:
Use several identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies, which may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.
If you implement high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.
Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single server failures. This enables the implementation of minimal-downtime upgrades and tolerance for single unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for VSPEX can be flexible to meet your specific needs. Ensure that there are sufficient processor cores and RAM per core to meet the needs of the target environment.

Network

Overview

The VSPEX Proven Infrastructure with Brocade networking solution provides the required redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. Brocade networking solutions provide options with Connectrix-B 6510 Fibre Channel switches for block storage and VDX 6740 Ethernet Fabric switches for file storage connectivity between compute and storage. The Brocade network is designed in the VSPEX reference architecture for block and file based storage traffic types to optimize throughput, manageability, application separation, high availability, and security.

The storage network solution is implemented with redundant network links for each host and the VNX storage array. If a link is lost with any of the Brocade network infrastructure ports, the link fails over to another port. All network traffic is distributed across the active links. Figure 7 and Figure 8 depict examples of this highly available Brocade storage network topology.

Brocade 6510 Fibre Channel switch for Block Based Storage

The Brocade 6510 with Gen 5 Fibre Channel technology simplifies the storage network infrastructure through innovative technologies and supports the VSPEX highly virtualized topology design. The Brocade validated network solution simplifies server connectivity by deploying as a full-fabric switch and enables fast, easy, and cost-effective scaling from 24 to 48 ports using Ports on Demand (PoD). The Brocade 6510 Fibre Channel switch maximizes availability with a redundant architecture for block based storage traffic, hot-pluggable components, and non-disruptive upgrades.

For block, the EMC VNX unified storage platform is attached to a highly available Brocade storage network by two ports per storage processor. If a link is lost on a storage processor front-end port, the link fails over to another port. All storage network traffic is distributed across the active links.

Figure 7 depicts an example of the Brocade network topology for block based storage.

Figure 7. Example of highly available Brocade Block Based storage network design

Brocade 6510 Fibre Channel switches provide high availability for the VSPEX SAN infrastructure, with active-active links for all traffic from the virtualized compute servers to the EMC VNX storage arrays. The Brocade 6510 switch meets the demands of hyper-scale, private cloud VSPEX storage traffic environments with market-leading Gen 5 Fibre Channel technology and capability that supports the VSPEX virtualized architecture.

The failure of a link in a route causes the network to reroute any traffic that was using that particular link, as long as an alternate path is available. Brocade Fabric Shortest Path First (FSPF) is a highly efficient routing algorithm that reroutes around failed links in less than a second. ISL Trunking improves on this concept by helping to prevent the loss of the route. A link failure merely reduces the available bandwidth of the logical ISL trunk. In other words, a failure does not completely break the pipe, but simply makes the pipe narrower. As a result, data traffic is much less likely to be affected by link failures, and the bandwidth automatically increases when the link is repaired.

Brocade VDX Ethernet Fabric switch for file based storage

The Brocade VDX with VCS Fabrics helps simplify networking infrastructures through innovative technologies and the VSPEX infrastructure topology design. The Brocade validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security with file storage traffic. Brocade VDX 6740 switches support this strategy by simplifying network architecture while increasing network performance and resiliency with Ethernet fabrics.

Brocade VDX with VCS Fabric technology supports active-active links for all traffic from the virtualized compute servers to the EMC VNX storage arrays. This validated solution for file storage with the EMC unified storage platforms attaches to the highly available Brocade network by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX array, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.

Figure 8 depicts an example of the Brocade network topology for file based storage.

Figure 8. Brocade VDX with VCS Fabrics in a highly available file based storage network design

The Brocade VDX 6740 Ethernet Fabric switches provide file-based connectivity at 10 GbE between the compute layer and VNX storage. The Brocade VDX with VCS Fabric technology helps simplify networking infrastructures through innovative technologies for the VSPEX file storage network topology design. The Brocade network validated solution supports the segregated network traffic of the VSPEX reference architecture for SMB 3.0 file storage traffic. Brocade VDX switches enable a storage network with high availability and redundancy by using link aggregation to the EMC VNX storage array.
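As an illustrative host-side counterpart that is not a step from this guide, Windows Server 2012 NIC Teaming can present two 10 GbE adapters as one logical LACP link toward a Brocade vLAG. The team and adapter names below are placeholders, the matching vLAG must exist on the VDX switches, and the array-side link aggregation itself is configured on the VNX in Unisphere.

    # Create an LACP-based NIC team from two 10 GbE adapters on a Hyper-V host.
    New-NetLbfoTeam -Name "SMB-Team" -TeamMembers "NIC1","NIC2" `
        -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts -Confirm:$false

    # Verify the team and the state of its member adapters.
    Get-NetLbfoTeam -Name "SMB-Team"
    Get-NetLbfoTeamMember -Team "SMB-Team"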

Storage

Overview

The storage layer is a key component of any cloud infrastructure solution: it serves the data generated by applications and operating systems, and a well-designed storage layer increases storage efficiency and management flexibility while reducing total cost of ownership. In this VSPEX solution, EMC VNX series arrays provide the features and performance to enable and enhance any virtualization environment.

EMC VNX family

The EMC VNX family is optimized for virtual applications and delivers industry-leading innovation and enterprise capabilities for file and block storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises. Intel Xeon processors power the VNX series for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security. It is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises. Table 1 shows the customer benefits provided by the VNX series.

Table 1. VNX customer benefits

Feature
Next-generation unified storage, optimized for virtualized applications
Capacity optimization features including compression, deduplication, thin provisioning, and application-centric copies
High availability, designed to deliver five 9s availability
Automated tiering with FAST VP (Fully Automated Storage Tiering for Virtual Pools) and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously
Simplified management with EMC Unisphere for a single management interface for all NAS, SAN, and replication needs
Up to three times improvement in performance with the latest Intel Xeon multicore processor technology, optimized for flash

Different software suites and packs are also available for the VNX series, which provide multiple features for enhanced protection and performance.

Software suites

The following VNX software suites are available:
FAST Suite: Automatically optimizes for the highest system performance and the lowest storage cost simultaneously.
Local Protection Suite: Practices safe data protection and repurposing.
Remote Protection Suite: Protects data against localized failures, outages, and disasters.
Application Protection Suite: Automates application copies and proves compliance.
Security and Compliance Suite: Keeps data safe from changes, deletions, and malicious activity.

Software packs

The following VNX software packs are available:
Total Efficiency Pack: Includes all five software suites.
Total Protection Pack: Includes the local, remote, and application protection suites.

EMC VNX Snapshots

VNX Snapshots is a software feature that creates point-in-time data copies. VNX Snapshots can be used for data backups, software development and testing, repurposing, data validation, and local rapid restores. VNX Snapshots improves on the existing EMC VNX SnapView snapshot functionality by integrating with storage pools.

Note: LUNs created on physical RAID groups, also called RAID LUNs, support only SnapView snapshots. This limitation exists because VNX Snapshots requires pool space as part of its technology.

VNX Snapshots supports 256 writeable snapshots per pool LUN. It supports branching, also called Snap of a Snap, as long as the total number of snapshots for any primary LUN is less than 256, which is a hard limit.

VNX Snapshots uses redirect-on-write (ROW) technology. ROW redirects new writes destined for the primary LUN to a new location in the storage pool. This implementation is different from the copy-on-first-write (COFW) approach used in SnapView, which holds the writes to the primary LUN until the original data is copied to the reserved LUN pool to preserve a snapshot.

This release also supports consistency groups (CGs). Several pool LUNs can be combined into a CG and snapped concurrently. When a snapshot of a CG is initiated, all writes to the member LUNs are held until the snapshots have been created. Typically, CGs are used for LUNs that belong to the same application.

44 Solution Technology Overview EMC VNX SnapSure EMC VNX SnapSure is an EMC VNX File software feature that enables you to create and manage checkpoints that are point-in-time, logical images of a production file system (PFS). SnapSure uses a copy-on-first-modify principle. A PFS consists of blocks. When a block within the PFS is modified, a copy containing the block s original contents is saved to a separate volume called the SavVol. Subsequent changes made to the same block in the PFS are not copied into the SavVol. The original blocks from the PFS in the SavVol and the unchanged PFS blocks remaining in the PFS are read by SnapSure according to a bitmap and block map data-tracking structure. These blocks combine to provide a complete point-in-time image called a checkpoint. A checkpoint reflects the state of a PFS at the time the checkpoint was created. SnapSure supports these types of checkpoints: Read-only checkpoints Read-only file systems created from a PFS Writeable checkpoints Read/write file systems created from a read-only checkpoint SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data. Note: Each writeable checkpoint associates with a read-only checkpoint, referred to as the baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint. For more detailed information, refer to the document Using VNX SnapSure. EMC VNX Virtual Provisioning EMC VNX Virtual Provisioning enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and reduce capital expenditures. Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes the utilization of your storage by allocating storage only as needed. Thick LUNs provide high performance and predictable performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning. Pools and pool LUNs are also the building blocks for advanced data services such as FAST VP, VNX Snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and User Capacity Threshold setting. 44 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

45 Solution Technology Overview Virtual Provisioning allows you to expand the capacity of a storage pool from the Unisphere GUI after disks are physically attached to the system. VNX systems have the ability to rebalance allocated data elements across all member drives to use new drives after the pool is expanded. The rebalance function starts automatically and runs in the background after an expand action. You can monitor the progress of a rebalance operation from the General tab of the Pool Properties window in Unisphere, as shown in Figure 9. Figure 9. Storage pool rebalance progress LUN expansion Use pool LUN expansion to increase the capacity of existing LUNs. It allows for provisioning larger capacity as business needs grow. The VNX family has the capability to expand a pool LUN without disrupting user access. You can expand pool LUNs with a few simple clicks and the expanded capacity is immediately available. However, you cannot expand a pool LUN if it is part of a data-protection or LUN-migration operation. For example, snapshot LUNs or migrating LUNs cannot be expanded. LUN shrink Use LUN shrink to reduce the capacity of existing thin LUNs. VNX can shrink a pool LUN. This capability is only available for LUNs served by Windows Server 2008 and later. The shrinking process involves these steps: 1. Shrink the file system from Windows Disk Management. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 45

46 Solution Technology Overview 2. Shrink the pool LUN using a command window and the DISKRAID utility. The utility is available through the VDS Provider, which is part of the EMC Solutions Enabler package. The new LUN size appears as soon as the shrink process is complete. A background task reclaims the deleted or shrunk space and returns it to the storage pool. Once the task is complete, any other LUN in that pool can use the reclaimed space. For more detailed information on LUN expansion/shrinkage, refer to EMC VNX Virtual Provisioning Applied Technology White Paper. Alerting the user through the Capacity Threshold setting You must configure proactive alerts when using a file system or storage pools based on thin pools. Monitor these resources so that storage is available for provisioning when needed and capacity shortages can be avoided. Figure 10 explains why provisioning with thin pools requires monitoring. Figure 10. Thin LUN space utilization Monitor the following values for thin pool utilization: Total capacity is the total physical capacity available to all LUNs in the pool. Total allocation is the total physical capacity currently assigned to all pool LUNs. Subscribed capacity is the total host-reported capacity supported by the pool. Over-subscribed capacity is the amount of user capacity configured for LUNs that exceeds the physical capacity in a pool. 46 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

47 Solution Technology Overview Total allocation must never exceed the total capacity, but if it nears that point, add storage to the pools proactively before reaching a hard limit. Figure 11 shows the Storage Pool Properties dialog box in Unisphere, which displays parameters such as Free, Percent Full, Total Allocation, Total Subscription of physical capacity, Percent Subscribed and Oversubscribed By of virtual capacity. Figure 11. Examining storage pool space utilization When storage pool capacity becomes exhausted, any requests for additional space allocation on thin-provisioned LUNs fail. Applications attempting to write data to these LUNs usually fail as well, and an outage is the likely result. To avoid this situation, monitor pool utilization, and be alerted when thresholds are reached, set the Percentage Full Threshold to allow enough buffer to correct the situation before an outage situation occurs. Edit this setting by selecting Advanced in the Storage Pool Properties dialog box, as seen in Figure 12. This alert is only active if there are one or more thin LUNs in the pool, because thin LUNs are the only way to oversubscribe a pool. If the pool only contains thick LUNs, the alert is not active because there is no risk of running out of space due to oversubscription. You also can specify the value for Percent Full Threshold, which equals Total Allocation/Total Capacity, when a pool is created. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 47

48 Solution Technology Overview Figure 12. Defining storage pool utilization thresholds View alerts by Alert in Unisphere. Figure 13 shows the Unisphere Event Monitor wizard, where you can also select the option of receiving alerts through , a paging service, or an SNMP trap. Figure 13. Defining automated notifications - for block 48 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

Table 2 lists the thresholds and their settings.

Table 2. Thresholds and settings under VNX OE Block Release 33

Threshold type    Threshold range    Threshold default    Alert severity    Side effect
User settable     1%-84%             70%                  Warning           None
Built-in          N/A                85%                  Critical          Clears user settable alert

If you allow total allocation to exceed 90 percent of total capacity, you are at risk of running out of space and affecting all applications that use thin LUNs in the pool.

Windows Offloaded Data Transfer

Windows Offloaded Data Transfer (ODX) provides the ability to offload data transfer from the server to the storage arrays. This feature is enabled by default in Windows Server 2012, and VNX arrays are compatible with Windows ODX on Windows Server 2012.

ODX supports the following protocols:
iSCSI
Fibre Channel (FC)
FC over Ethernet (FCoE)
Server Message Block (SMB) 3.0

The following data-transfer operations currently support ODX:
Transferring large amounts of data via Hyper-V Manager, such as creating a fixed-size VHD, merging a snapshot, or converting VHDs
Copying files in File Explorer
Using the Copy commands in Windows PowerShell
Using the Copy commands in the Windows command prompt

Because ODX offloads the file transfer to the storage array, host CPU and network utilization are significantly reduced. ODX minimizes latencies and improves transfer speed by using the storage array for data transfer. This is especially beneficial for large files, such as database or video files. ODX is enabled by default in Windows Server 2012, so when ODX-supported file operations occur, the data transfers are automatically offloaded to the storage array. The ODX process is transparent to users.
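There is no per-copy switch for ODX; it is used automatically whenever both the host and the array support it. As a troubleshooting aid only, based on Microsoft's documented host-side registry value rather than any EMC-specific setting, the ODX state on a Hyper-V host can be inspected or changed with PowerShell:

# 0 means ODX is enabled on the host; 1 means it is disabled.
Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem `
    -Name "FilterSupportedFeaturesMode"

# Temporarily disable ODX (for example, while isolating a filter-driver issue).
Set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem `
    -Name "FilterSupportedFeaturesMode" -Value 1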

50 Solution Technology Overview EMC PowerPath EMC PowerPath is a host-based software package that provides automated data path management and load-balancing capabilities for heterogeneous server, network, and storage deployed in physical and virtual environments. It offers the following benefits for the VSPEX Proven Infrastructure: Standardized data management across physical and virtual environments. Automated multipathing policies and load balancing to provide predictable and consistent application availability and performance across physical and virtual environments. Improved service-level agreements by eliminating application impact from I/O failures. EMC FAST Cache EMC FAST Cache, a part of the EMC FAST Suite, enables flash drives to function as an expanded cache layer for the array. FAST Cache is an array-wide, nondisruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache in 64 KB increments and subsequent reads and/or writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of highly active data to flash drives. This dramatically improves the response time for the active data and reduces data hot spots that can occur within a LUN. The FAST Cache feature is an optional component of this solution. VNX file shares In many environments it is important to have a common location to store files accessed by many different individuals. This is implemented as CIFS or NFS file shares from a file server. VNX storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency improvement features. For more information, refer to the document Configuring and Managing CIFS on VNX. ROBO Organizations with remote office and branch offices (ROBO) often prefer to locate data and applications close to the users in order to provide better performance and lower latency. In these environments, IT departments need to balance the benefits of local support with the need to maintain central control. Local Systems and storage should be easy for local personnel to administer, but also support remote management and flexible aggregation tools that minimize the demands on those local resources. With VSPEX, you can accelerate the deployment of applications at remote offices and branch offices. Customers can also leverage Unisphere Remote to consolidate the monitoring, system alerts, and reporting of hundreds of locations while maintaining simplicity of operation and unified storage functionality for local managers. 50 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

SMB 3.0 features

BranchCache is a feature that allows clients to cache data stored on SMB 3.0 shares locally at the branch office. With BranchCache capability, remote users that access file shares can cache files locally, which helps future lookups, reduces network traffic, and improves scalability and performance. For more information on BranchCache, refer to the SMB 3.0 BranchCache section later in this chapter.

Overview

SMB 3.0 supports Hyper-V and Microsoft SQL Server storage. Microsoft also introduced several key features that improve the performance of these applications and simplify application management tasks. This section describes the SMB 3.0 features supported on VNX storage arrays, and indicates how these features affect the performance of applications or data stored on SMB 3.0 file shares. For more information, refer to the EMC VNX Series: Introduction to SMB 3.0 Support White Paper.

SMB versions and negotiations

The SMB protocol follows the client-server model. The protocol level is negotiated by client request and server response when establishing a new SMB connection. The SMB versions for the various Windows operating systems are as follows:
CIFS: Windows NT 4.0
SMB 1.0: Windows 2000, Windows XP, Windows Server 2003, and Windows Server 2003 R2
SMB 2.0: Windows Vista (SP1 or later) and Windows Server 2008
SMB 2.1: Windows 7 and Windows Server 2008 R2
SMB 3.0: Windows 8 and Windows Server 2012

Before establishing a session between the client and server, a common SMB dialect is negotiated. Table 3 shows the dialect used based on the SMB versions supported by the client and server.

Table 3. SMB dialect used between client and server

Client-server    SMB 3.0    SMB 2.1    SMB 2.0
SMB 3.0          SMB 3.0    SMB 2.1    SMB 2.0
SMB 2.1          SMB 2.1    SMB 2.1    SMB 2.0
SMB 2.0          SMB 2.0    SMB 2.0    SMB 2.0
SMB 1.0          SMB 1.0    SMB 1.0    SMB 1.0
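The dialect actually negotiated for each connection can be confirmed from a Windows 8 or Windows Server 2012 client with the in-box SMB cmdlets; this is a read-only check and the output shown by a given environment will vary:

# Show the SMB dialect negotiated with each server, per share.
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect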

52 Solution Technology Overview For more information on SMB versions and negotiations, refer to the Microsoft TechNet technical document entitled Server Message Block (SMB) Protocol Versions 2 and 3. VNX and VNXe storage support All features mentioned in this document are supported in the latest releases of VNX operating environment (OE) for File and VNXe OE. SMB 3.0 VHD/VHDX storage support With Virtual Hard Disk file format (VHD and VHDX) storage support, Hyper-V can store virtual machines, and files such as configuration files, virtual hard drives, and snapshots on SMB 3.0 shares. This applies to standalone and clustered servers. Feature benefit With SMB 3.0 support for storing Hyper-V virtual machines, Microsoft supports block storage protocols and file storage protocols. This provides Hyper-V users with additional storage options to store Hyper-V virtual machine files. Baseline comparison point Support for VHD and VHDX files on a VNX storage array is enabled by default, without the need for additional configuration. Figure 14 shows the performance of 100 Hyper-V reference virtual machines on VNX SMB 3.0 file shares. Each virtual machine was driving 25 IOPS. The acceptable latency limit is 20 ms, and the average latency observed during the test was 12 ms. Figure 14. SMB 3.0 baseline performance comparison point Note: This performance result serves as a baseline comparison point for all other SMB 3.0 features discussed later in this chapter. 52 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup
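As a hedged illustration of the VHDX-on-SMB capability described above (the share path, VM name, and sizes below are placeholders rather than values from the validated build), a virtual machine can be created directly on a VNX SMB 3.0 share with the Hyper-V PowerShell module:

# Create a VM whose configuration files and VHDX both live on an SMB 3.0 share.
New-VM -Name "RefVM01" -MemoryStartupBytes 2GB `
    -Path "\\vnx-cifs\vspexshare\RefVM01" `
    -NewVHDPath "\\vnx-cifs\vspexshare\RefVM01\RefVM01.vhdx" `
    -NewVHDSizeBytes 100GB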

53 Solution Technology Overview SMB 3.0 Continuous Availability The SMB 3.0 Continuous Availability (CA) feature ensures the transparent failover of the file server (serviced by the VNX storage array) when faults occur. It enables clients connected to SMB 3.0 shares to transparently reconnect to another file server node when one node fails. All open file handles from the faulted server node are transferred to the new server node, which eliminates application errors. Figure 15 shows the sequence of events for a Data Mover failover with CA enabled: 1. The client (Windows Server 2012) requests a persistent handle by opening a file with associated leases and locks on a CIFS share. 2. The CIFS server saves the open state and persistent handle to disk. 3. If the primary Data Mover (Data Mover 2) fails, it fails over to the standby Data Mover (Data Mover 3). 4. The Data Mover reads and restores the persistent open state from the disk before starting the CIFS service. 5. Using the persistent handle, the client re-establishes the connection to the same CIFS server, and recovers the same context associated with the open file as before the failover occurred. Figure 15. SMB 3.0 Continuous Availability Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 53

Feature benefit

When a Data Mover fails, clients accessing SMB 3.0 shares created with CA do not perceive any application errors. Instead, they experience a small I/O delay while the primary Data Mover fails over to the standby Data Mover. After the failover, the application may experience a brief spike in latency but soon resumes normal operation.

Enabling the feature

This feature is required for Hyper-V environments. To enable this feature, run the following commands from the VNX Control Station:

1. Mount the file system through which the share will be exported with the smbca option:
server_mount <server_name> -o smbca <fsname> /<fsmountpoint>

2. Export the share with the CA option:
server_export <server_name> -P cifs -n <sharename> -o type=ca /<fsmountpoint>

Performance impact

This feature does not impact storage, server, or network performance. The only time that performance changes is after a failover or failback operation, when there is a spike in IOPS and latency for a brief period before normal operation resumes.

Figure 16 shows the performance of VDbench on the host when the primary Data Mover panics. There is an I/O delay during the failover operation. When the failover completes, the standby is active, and VDbench returns to normal operation after a short spike in I/O and latency.

Figure 16. CA application performance

SMB Multichannel

The SMB Multichannel feature uses multiple network interfaces and connections to provide higher throughput and fault tolerance. This is achieved without any additional configuration steps for the network interfaces.

Feature benefits

SMB Multichannel provides network high availability. If one of the network interface cards (NICs) fails, the applications and clients continue operating, at a lower potential throughput, without any errors. SMB Multichannel is automatically configured: all network paths are automatically detected, and connections are added dynamically.

SMB Multichannel works as follows:
Multichannel connections on a single NIC for improved throughput: SMB Multichannel does not provide any additional throughput if the single NIC does not support Receive Side Scaling (RSS). RSS allows multiple TCP/IP connections to be spread across the CPU cores automatically, which distributes the load between the cores.
Multichannel connections on multiple NICs for improved throughput: SMB Multichannel creates multiple TCP/IP sessions, one for each available interface. If the NICs are RSS-capable, many TCP/IP connections per NIC are created.

Enabling the feature

SMB Multichannel is enabled by default on the VNX storage array. No parameter needs to be set on the system to use this feature. This feature is also enabled by default on Windows 8 and Windows Server 2012 clients.

Performance impact

SMB Multichannel provides additional network throughput by creating more TCP/IP connections (at least one per NIC). If the network is underutilized, no performance degradation is observed when one NIC fails. However, if the network is being heavily utilized, the application continues functioning at a lower throughput.

Figure 17 shows the network-resiliency test result on an SMB 3.0 client when one of two NICs is disabled. The application does not experience any errors or faults, and continues to perform normally even when the interface is enabled again.

56 Solution Technology Overview Figure 17. SMB Multichannel fault tolerance The application does not have an impact on performance because the network was not the bottleneck during the test. If it were a bottleneck, the response time would have been higher. However, the application would have continued functioning without any errors if the higher response time was acceptable. Figure 18 shows the SMB 3.0 client s network throughput on both interfaces. Figure 18. Multichannel network throughput Each SMB 3.0 client in the test environment has two network interfaces. When one interface is disabled, the surviving interface services the traffic. This is evident from the chart, which shows the throughput doubling on one NIC, and the throughput dropping to zero on the disabled NIC. After the disabled NIC is enabled again, the load balances equally on both NICs. 56 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup
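Whether Multichannel is in use, and whether the client NICs are RSS-capable, can be verified from the client with the in-box SMB cmdlets. This is a read-only check rather than a configuration step:

# Show the client NICs and their RSS capability.
Get-SmbClientNetworkInterface

# Show the TCP/IP connections SMB Multichannel has established per server.
Get-SmbMultichannelConnection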

57 Solution Technology Overview SMB 3.0 Copy Offload Copy Offload enables the array to copy large amounts of data without involving server, network, or CPU resources. The server offloads the copy operation to the physical array where the data resides. Note: Copy Offload requires that the source and the destination file system be on the same Data Mover. Figure 19. Copy Offload Feature benefits Copy Offload enables faster data transfer from source to destination because it does not use any client CPU cycles. This feature is most beneficial for the following operations: Deployment operations: Deploy multiple virtual machines faster. The baseline VHDX can reside on an SMB 3.0 share, with new virtual machines deployed on SMB 3.0 shares with Hyper-V Manager, by pointing to the baseline VHDX. Cloning operations: Clone virtual machines from one SMB 3.0 share to another in minutes. Migration operations: Migrate virtual machines between file shares on the same Data Mover in 10 minutes, as opposed to almost 40 minutes without the Copy Offload feature. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 57
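As a hedged example of the migration operation listed above (the VM name and share paths are placeholders, and both shares must be served by the same Data Mover for Copy Offload to apply), storage migration is driven from the Hyper-V PowerShell module:

# Move a VM's storage from one SMB 3.0 share to another on the same Data Mover.
Move-VMStorage -VMName "RefVM01" `
    -DestinationStoragePath "\\vnx-cifs\vspexshare2\RefVM01"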

58 Solution Technology Overview Table 4 shows the time taken to move virtual machine storage with and without the Copy Offload feature. Table 4. Storage migration improvement with Copy Offload Number of virtual machines (100 GB each) Time spent for storage migration with Copy Offload enabled Time spent for storage migration with Copy Offload disabled 1 10 mins 37 mins 2 13 mins 82 mins 5 26 mins More than 4 hours mins More than 8 hours Enabling the feature This feature is enabled by default on the VNX storage array, Windows 8, and Windows Server 2012 clients. Performance impact Because the array handles the entire copy operation, the Copy Offload feature increases the utilization of the Data Mover CPU and other array resources. The performance of the feature is limited by the array read/write bandwidth. SMB 3.0 BranchCache BranchCache enables clients to cache data stored on SMB 3.0 shares locally at the branch office. The cached content is encrypted between peers, clients, and hosted cache servers. This feature was first introduced with Windows 7 and Windows 2008 R2. SMB 3.0 supports BranchCache v2. Implement BranchCache in one of two modes: Distributed cache mode: Distributes cache between the client computers at the branch office. Hosted cache mode: Maintains cached content on a separate computer at the branch office. For more information on BranchCache, refer to the Microsoft TechNet Library topic Branch Cache Overview. Feature benefit With BranchCache capability, remote users who access file shares can cache files locally at the branch office. This helps future lookups, reduces network traffic, and improves scalability and performance. 58 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup
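On the branch-office clients themselves, BranchCache is configured with Group Policy or the Windows BranchCache cmdlets; the VNX-side hash publication setup follows in the next subsection. The following is a minimal sketch for distributed cache mode, assuming Windows 8 or Windows Server 2012 clients, and is not part of the validated configuration:

# Enable distributed cache mode on a branch-office client and confirm the status.
Enable-BCDistributed
Get-BCStatus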

Enabling the feature

The BranchCache feature is not enabled by default on the VNX storage array. Run the following command on the VNX Control Station to enable BranchCache:

server_cifs <server_name> -smbhash -service enable

To create the share with type=hash, run the following command:

server_export <server_name> -o type=hash

On a domain controller of the Windows Server 2012 domain to which the VNX is joined, edit the default domain policy to enable hash publication:

Computer Configuration\Policies\Administrative Templates\Network\Lanman Server\Hash Publication for BranchCache

Performance impact

This feature reduces network traffic, because the cached data is available locally at the branch office. Client performance also improves due to faster access to data, but there is some overhead involved to encrypt and decrypt data between BranchCache members.

SMB 3.0 Remote VSS

Remote VSS (RVSS) is a Remote Procedure Call (RPC)-based protocol that enables application-consistent shadow copies of VSS-aware server applications. RVSS stores data on SMB 3.0 file shares. RVSS supports application backup across multiple file servers and shares.

VSS-aware backup applications can perform snapshots of server applications that store data on the VNX CIFS shares. Hyper-V has the ability to store virtual machine files on CIFS shares, and RVSS can take point-in-time copies of the share contents. Some examples of shadow copy uses are:
Creating backups
Recovering data
Testing scenarios
Mining data

Feature benefit

RVSS uses the existing Microsoft VSS infrastructure to integrate with VSS-aware backup software and applications. Backup applications read directly from shadow-copy file shares instead of involving the server application computer.

Enabling the feature

RVSS is enabled by default on the VNX storage array, without a need for additional configuration.

Performance impact

RVSS increases the load on the VNX storage array because it takes application-consistent copies (or snapshots) of applications running on the file shares.

SMB 3.0 encryption

SMB 3.0 allows in-flight, end-to-end encryption of data, and protects it on untrusted networks. Enable this feature for an individual share, or for the entire CIFS server node. This feature only works with SMB 3.0 clients. If a share is encrypted, access for non-SMB 3.0 clients can either be denied or allowed without encryption.

Feature benefit

SMB encryption does not require any additional software or hardware. It protects data on the network from attacks and eavesdropping.

Enabling the feature

This feature is not enabled by default on the VNX storage array.

Enabling encryption on all shares

To configure encryption on all shares, set the Encrypt Data parameter in the VNX CIFS server registry to 0x1. To configure this parameter, complete the following steps:

1. Open the Registry Editor (regedit.exe) on a computer.
2. Select File > Connect Network Registry.
3. Enter the hostname or IP address of the CIFS server, and click Check Names.
4. When the server is recognized, click OK to close the window.
5. Edit the Encrypt Data parameter (0x1 is enabled, and 0x0 is disabled) under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanServer\Parameters, as shown in Figure 20.

Figure 20. Enabling the Encrypt Data parameter

By default, only SMB 3.0 clients can access encrypted VNX file shares. To allow pre-SMB 3.0 clients to access encrypted shares, set the RejectUnencryptedAccess value under the VNX CIFS server registry location shown in Figure 20 to 0x0.

Enabling encryption on a specific share

To enable encryption for a particular share, run the following command on the VNX Control Station:

server_export <server_name> -P cifs -n <sharename> -o type=encrypted /<fsmountpoint>

Performance impact

With encryption enabled on the shares, Data Mover CPU and SMB 3.0 client utilization increase, because encryption and decryption require additional overhead.

Figure 21 shows an increase in CPU utilization with encryption enabled on the SMB 3.0 shares.

Figure 21. Enabling encryption: Client CPU utilization

Figure 22 shows the increase in Data Mover utilization with encryption enabled on the SMB 3.0 shares.

Figure 22. Enabling encryption: Data Mover CPU utilization

SMB 3.0 PowerShell cmdlets

SMB 3.0 PowerShell cmdlets are PowerShell commands that allow file share management through the Windows PowerShell CLI. The SMB 3.0 Windows PowerShell cmdlets use WMIv2 classes, so not all commands are compatible with VNX-hosted file shares. However, VNX provides a set of PowerShell commands to install and execute from a Windows 8 or Windows Server 2012 client. Download these commands from EMC Online Support.

For more information on Windows PowerShell commands for SMB 3.0, refer to the Microsoft TechNet topic SMB Share Cmdlets in Windows PowerShell.

63 Solution Technology Overview Table 5 lists Microsoft SMB 3.0 PowerShell cmdlets to execute from the clients. Table 5. Microsoft PowerShell cmdlets Command Get-SmbServerNetworkInterface Get-SmbServerConfiguration Get- SmbMultichannelConnection New-SmbMultichannelConstraint Get-SmbMultichannelConstraint Update- SmbMultichannelConnection Remove- SmbMultichannelConstraint Get-SmbMapping Remove-SmbMapping New-SmbMapping Get-SmbConnection Get-SmbClientNetworkInterface Get-SmbClientConfiguration Description Lists the network interfaces available to the SMB server Lists the SMB server configuration Lists the connections currently in use by SMB Multichannel Creates a new multichannel constraint Lists the constraints on multichannel connections Updates the constraint on the multichannel connection Removes the multichannel constraint Displays a list of drives mapped by an SMB client Removes an existing mapping Creates a new mapping Lists the SMB connections on the server Displays the client network interface Displays the current SMB client configuration settings Table 6 lists the EMC-provided SMB 3.0 PowerShell cmdlets to manage shares. Table 6. EMC-provided PowerShell cmdlets Command Add-LG Add-LGMember Add-Share Add-ShareAcl Add-SharePerms Description Adds a new local group on a server name Adds a member in a specified local group on a server name Creates a share on a server name Adds an ACE in a share's ACL on a server name Adds an access in share's permissions on a server name Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 63

64 Solution Technology Overview Command Remove-LG Remove-LGMember Remove-Session Remove-Share Remove-ShareAcl Remove-SharePerms Set-ShareFlags Show-AccountSid Show-ACL Show-LG Show-LGMembers Show- RootDirMembers Show- SecurityEventLog Show-Sessions Show-Shares Show-ShareAcl Show-ShareFlags Show-SharePerms Description Deletes a local group on a server name Deletes a member of a Local Group on a server name Deletes a session open on a server name Removes a share on a server name Removes an ACE in a share's ACL on a server Removes an access in share's permissions on a server name Sets share flags on a specified server name Displays SID of a specified user Displays the share's ACL on a server name Enumerates local group on a server name Enumerates members of a local group on a server name Lists the root directory members of a server name Displays the eventlogs of a server name Enumerates open sessions on a server name Displays all shares on a server name Displays the share's ACL on a server name Displays the share's flags values on a server name Enumerates access contained in a share's permissions on a server name The following are some examples of the PowerShell cmdlets: Show Shares command Figure 23 shows a list of all the SMB 3.0 shares on the VNX from the Show Shares command. 64 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

Figure 23. PowerShell execution of Show Shares

Get-SmbServerConfiguration command

Figure 24 shows the SMB 3.0 server configuration returned by the Get-SmbServerConfiguration command.

Figure 24. PowerShell execution of Get-SmbServerConfiguration

Feature benefit

PowerShell cmdlets enable clients and administrators to easily manage SMB 3.0 shares from a single location.
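As a simple usage sketch combining the Microsoft cmdlets listed above (the server name, share, and drive letter are placeholders, not names from the validated environment):

# Map a VNX SMB 3.0 share, list the mapping, and review the active connections.
New-SmbMapping -LocalPath "X:" -RemotePath "\\vnx-cifs\vspexshare"
Get-SmbMapping
Get-SmbConnection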

66 Solution Technology Overview Enabling the feature PowerShell commands are enabled by default on Windows 2012 and Windows 8 clients. Download the EMC PowerShell commands from EMC Online Support to use them. Performance impact The execution of these cmdlets has no impact on storage, server, or network resources. SMB 3.0 Directory Leasing SMB 3.0 Directory Leasing enables clients to cache directory metadata locally. All future metadata requests are serviced from the same cache. Cache coherency is maintained because clients are notified when directory information changes on the server. There are several types of leases: Read-caching lease (R) allows a client to cache reads, and can be granted to multiple clients. Write-caching lease (W) allows a client to cache writes. A handle-caching lease (H) allows a client to cache open handles, and can be granted to multiple clients. Figure 25. SMB 3.0 Directory Leasing 66 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

67 Solution Technology Overview Feature benefit Directory leasing improves application response time in branch offices. This feature is useful in scenarios where a client in the branch office does not want to go over the high-latency WAN to fetch the same metadata information repeatedly. Instead, they can cache the same data and rely on the SMB server to notify them when information changes on the server. The typical usage includes: Home folders (read/write) Publication (read-only) Enabling the feature This feature is enabled by default on the Data Mover without a need for additional configuration. Performance impact This feature improves application response time, reduces network traffic and client processor utilization. Summary of feature defaults Table 7 summarizes the default status of the features. Table 7. Default status of SMB 3.0 features Feature Hyper-V storage support Continuous Availability Multichannel Copy Offload BranchCache Remote VSS Encryption PowerShell cmdlets Directory leasing Data Mover support Supported by default on the Data Mover Must be enabled on the Data Mover Enabled by default on the Data Mover Enabled by default on the Data Mover Must be enabled on the Data Mover Enabled by default on the Data Mover Must be enabled on the Data Mover Enabled by default on the Data Mover. EMC SMB PowerShell cmdlets for VNX can be downloaded from powerlink.emc.com Enabled by default on the Data Mover Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 67
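From the Windows 8 or Windows Server 2012 client side, the defaults that pair with the Data Mover settings above can be reviewed in one place with Get-SmbClientConfiguration; this is a read-only sketch, and the properties selected below are only a subset of the output:

# Review the client-side SMB defaults that relate to the features in Table 7.
Get-SmbClientConfiguration |
    Select-Object EnableMultiChannel, DirectoryCacheLifetime, FileInfoCacheLifetime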

Backup and recovery

Overview

Backup and recovery, another important component in this VSPEX solution, provides data protection by backing up data files or volumes on a defined schedule, and then restoring data from backup for recovery after a disaster.

EMC backup and recovery is a smart method of backup. It consists of best-of-class, integrated protection storage and software designed to meet backup and recovery objectives now and in the future. With EMC market-leading protection storage, deep data-source integration, and feature-rich data management services, you can deploy an open, modular protection storage architecture that allows you to scale while lowering cost and complexity.

EMC Avamar deduplication

EMC Avamar provides fast, efficient backup and recovery through a complete software and hardware solution. Equipped with integrated variable-length deduplication technology, Avamar facilitates fast, daily full backups for virtual environments, remote offices, enterprise applications, network-attached storage (NAS) servers, and desktops/laptops. Learn more:

EMC Data Domain deduplication storage systems

EMC Data Domain deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication for backup and archive workloads. Learn more:

VMware vSphere data protection

For backup and recovery options, refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Continuous Availability

EMC RecoverPoint

EMC RecoverPoint is an enterprise-scale solution that protects application data on heterogeneous SAN-attached servers and storage arrays. EMC RecoverPoint runs on a dedicated appliance (RPA) and combines industry-leading continuous data protection technology with a bandwidth-efficient, no-data-loss replication technology, allowing it to protect data locally (continuous data protection, CDP), remotely (continuous remote replication, CRR), or both (local and remote replication, CLR).

RecoverPoint CDP replicates data within the same site or to a local bunker site some distance away, and the data is transferred by FC.

69 Solution Technology Overview RecoverPoint CRR uses either FC or an existing IP network to send the data snapshots to the remote site using techniques that preserve write-order. In a CLR configuration, RecoverPoint replicates to both a local and a remote site simultaneously. RecoverPoint uses lightweight splitting technology on the application server, in the fabric or in the array, to mirror application writes to the RecoverPoint cluster. RecoverPoint supports several types of write splitters: Array-based Intelligent fabric-based Host-based EMC VNX Replicator EMC VNX Replicator is a powerful, easy-to-use asynchronous replication solution. With its WAN-aware functionality, simple management interface, and advanced DR capability, it provides a complete replication solution. Replication between a primary and a secondary file system or iscsi LUN can be on the same VNX system, or on a remote system. Other technologies EMC VNX Replicator supports application-consistent iscsi replication. The host can initiate the replication via the VSS interface in Windows environments or Replication Manager. For CIFS environments, the Virtual Data Mover (VDM) functionality replicates the necessary context to the remote site along with the file systems. This includes CIFS server data, audit logs, and local groups. For asynchronous data recovery, the secondary copy can be read/write, and production can continue at the remote site. If the primary system becomes available, incremental changes at the secondary copy can be played back to the primary with the resynchronization function. This operates as described above, with a role reversal between primary and secondary. EMC XtremSW Cache In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case. EMC XtremSW Cache is a server flash caching solution that reduces latency and increases throughput to improve application performance by using intelligent caching software and PCIe flash technology. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 69

70 Solution Technology Overview Server-side flash caching for maximum speed XtremSW Cache performs the following functions to improve system performance: Caches the most frequently referenced data on the server-based PCIe card to put the data closer to the application. Automatically adapts to changing workloads by determining the most frequently referenced data and promoting it to the server flash card. This means that the hottest data (most active data) automatically resides on the PCIe card in the server for faster access. Offloads the read traffic from the storage array, which allocates greater processing power to other applications. While one application accelerates with XtremSW Cache, the array performance for other applications remains the same or slightly enhanced. Write-through caching to the array for total protection XtremSW Cache accelerates reads and protects data by using a writethrough cache to the storage to deliver persistent high-availability, integrity, and disaster recovery. Application agnostic XtremSW Cache is transparent to applications; there is no need to rewrite, retest, or recertify to deploy XtremSW Cache in the environment. Minimum impact on system resources Unlike other caching solutions on the market, XtremSW Cache does not require a significant amount of memory or CPU cycles, as all flash and wear-leveling management are done on the PCIe card without using server resources. Unlike other PCIe solutions, there is no significant overhead from using XtremSW Cache on server resources. XtremSW Cache creates the most efficient and intelligent I/O path from the application to the datastore, which results in an infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments. XtremSW Cache active/passive clustering support The configuration of XtremSW Cache clustering scripts ensures that stale data is never retrieved. The scripts use cluster management events to trigger a mechanism that purges the cache. The XtremSW Cache-enabled active/passive cluster ensures data integrity, and accelerates application performance. 70 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

71 Solution Technology Overview XtremSW Cache performance considerations XtremSW Cache performance considerations include: On a write request, XtremSW Cache first writes to the array, then to the cache, and then completes the application I/O. On a read request, XtremSW Cache satisfies the request with cached data, or, when the data is not present, retrieves the data from the array, writes it to the cache, and then returns it to the application. The trip to the array can be in the order of milliseconds; therefore, the array limits how fast the cache can work. As the number of writes increases, XtremSW Cache performance decreases. XtremSW Cache is most effective for workloads with a 70 percent or greater read/write ratio, with small, random I/O (8 K is ideal). I/O greater than 128 K is not cached in XtremSW Cache 1.5. Note: For more information, refer to the Introduction to EMC XtremSW Cache White Paper. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 71

72 Solution Technology Overview 72 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

Chapter 4 Solution Architecture Overview

This chapter presents the following topics:
Overview
Solution architecture
Server configuration guidelines
Network configuration guidelines
Storage configuration guidelines
High-availability and failover
Validation test profile
Backup and recovery configuration guidelines
Sizing guidelines
Reference workload
Applying the reference workload

74 Solution Architecture Overview Overview Solution architecture This chapter is a comprehensive guide to the major architectural aspects of this solution. Server capacity is presented in generic terms for required minimums of CPU, memory, and network resources; the customer is free to select the server hardware that meet or exceed the stated minimums. The specified storage architecture, along with a system meeting the server and Brocade storage network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your private cloud deployment. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a number of virtual machines validated by EMC. VSPEX eliminates many server virtualization planning and configuration burdens by leveraging extensive interoperability, functional, and performance testing by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a predefined idea of a virtual machine. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics. Overview The VSPEX solution for Microsoft Hyper-V private cloud with VNX was validated at three different points of scale; one configuration with up to 300 virtual machines, one configuration with up to 600 virtual machines, and one configuration with up to 1,000 virtual machines. The defined configurations form the basis of creating a custom solution. Note: VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This document describes the process in Applying the reference workload. 74 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

75 Solution Architecture Overview Logical architecture The architecture diagrams in this section show the layout of the major components in the solutions. Two types of storage, block-based and filebased, are shown in the following diagrams. Figure 26 shows the infrastructure validated with block-based storage, where an 8 Gb FC (Depicted below with Connectrix-B 6510 Fibre Channel SAN), 10 GbE FCoE, or iscsi SAN carries storage traffic, and 10 GbE carries management and application traffic. Figure 26. Logical architecture for block storage Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 75

76 Solution Architecture Overview Figure 27 shows the infrastructure validated with file-based storage, where 10 GbE carries storage traffic and all other traffic. Figure 27. Logical architecture for file storage Key components The architectures include the following key components: Microsoft Hyper-V Provides a common virtualization layer to host a server environment. The specifics of the validated environment are listed in Table 8. Hyper-V provides highly available infrastructure through features such as: Live Migration Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption. Live Storage Migration Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption. Failover Clustering High Availability (HA) Detects and provides rapid recovery for a failed virtual machine in a cluster. Dynamic Optimization (DO) Provides load balancing of computing capacity in a cluster with support of SCVMM. 76 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

77 Solution Architecture Overview Microsoft System Center Virtual Machine Manager (SCVMM) SCVMM is not required for this solution. However, if deployed, it (or its corresponding functionality in Microsoft System Center Essentials) simplifies provisioning, management, and monitoring of the Hyper-V environment. Microsoft SQL Server 2012 SCVMM, if used, requires a SQL Server database instance to store configuration and monitoring details. DNS Server Use DNS services for the various solution components to perform name resolution. This solution uses Microsoft DNS service running on Windows Server Active Directory Server Various solution components require Active Directory services to function properly. The Microsoft AD Service runs on a Windows Server 2012 server. IP network A standard Ethernet network carries all network traffic with redundant cabling and switching. A shared IP network carries user and management traffic. Storage Network VSPEX with Brocade networking offers different options for block-based and file-based storage networks. All storage traffic is carried over redundant cabling and Brocade Fabric switches. Storage network The storage network is an isolated network that provides hosts with access to the storage arrays. VSPEX offers different options for block-based and file-based storage. Brocade Storage network for block This solution provides three options for block-based storage networks. Fibre Channel (FC) is a set of standards that define protocols for performing high speed serial data transfer. FC provides a standard data transport frame among servers and shared storage devices. o Connectrix-B 6510 Fibre Channel Switch Provides fast and easy scaling from 24 to 48 Ports on Demand (PoD) and supports 2, 4, 8, or 16 Gbps for VNX series storage array. (Deployment of Connectrix-B 6510 FC switches demonstrated in Chapter 5.) Fibre Channel over Ethernet (FCoE) is a storage networking protocol that supports FC natively over Ethernet, by encapsulating FC frames into Ethernet frames. This allows the encapsulated FC Frames to run alongside traditional Internet Protocol (IP) traffic. o Brocade VDX 6740 Ethernet Fabric Switch Provides efficient, easy to configure, resiliency that scales from 24 to 64 Ports on Demand (PoD) at 10GbE for FCoE attached VNX series. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 77

10 Gb Ethernet (iSCSI) enables the transport of SCSI blocks over a TCP/IP network. iSCSI works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network.
o Brocade VDX 6740 Ethernet Fabric Switch: Provides efficient, easy-to-configure resiliency that scales from 24 to 64 Ports on Demand (PoD) at 1 GbE or 10 GbE for iSCSI-attached VNX series arrays.

Brocade storage network for file

With file-based storage, a private, non-routable 10 GbE subnet carries the storage traffic.
Brocade VDX 6740 Ethernet Fabric Switch: Provides efficient, easy-to-configure resiliency that scales from 24 to 64 Ports on Demand (PoD) at 1 GbE or 10 GbE for file-attached VNX series arrays. (Deployment of the VDX Ethernet Fabric series switches is demonstrated in Chapter 5.)

VNX storage array

The VSPEX private cloud configuration begins with the VNX family storage arrays, including:
EMC VNX5400 array: Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SMB 3.0) shares (for file) to Hyper-V hosts for up to 300 virtual machines.
EMC VNX5600 array: Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SMB 3.0) shares (for file) to Hyper-V hosts for up to 600 virtual machines.
EMC VNX5800 array: Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SMB 3.0) shares (for file) to Hyper-V hosts for up to 1,000 virtual machines.

VNX family storage arrays include the following components:
Storage processors (SPs) support block data with UltraFlex I/O technology that supports the Fibre Channel, iSCSI, and FCoE protocols. The SPs provide access for all external hosts, and for the file side of the VNX array.
Disk processor enclosure (DPE) is 3U in size, and houses the SPs and the first tray of disks. The VNX5400, VNX5600, and VNX5800 use this component.
X-Blades (or Data Movers) access data from the back end and provide host access using the same UltraFlex I/O technology that supports the NFS, CIFS, MPFS, and pNFS protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists.

- The Data Mover enclosure (DME) is 2U in size and houses the Data Movers (X-Blades). All VNX for File models use the DME.
- The standby power supply (SPS) is 1U in size and provides enough power to each SP to ensure that any data in flight is destaged to the vault area in the event of a power failure. This ensures that no writes are lost. Upon restart of the array, the pending writes are reconciled and made persistent.
- The Control Station is 1U in size and provides management functions to the X-Blades. The Control Station is responsible for X-Blade failover. An optional secondary Control Station ensures redundancy on the VNX array.
- Disk-array enclosures (DAEs) house the drives used in the array.

Hardware resources
Table 8 lists the hardware used in this solution.

Table 8. Solution hardware

Component: Microsoft Hyper-V servers
CPU:
- 1 vCPU per virtual machine
- 4 vCPUs per physical processor core
- For 300 virtual machines: 300 vCPUs; minimum of 75 physical processor cores
- For 600 virtual machines: 600 vCPUs; minimum of 150 physical processor cores
- For 1,000 virtual machines: 1,000 vCPUs; minimum of 250 physical processor cores
Memory:
- 2 GB RAM per virtual machine
- 2 GB RAM reservation per Hyper-V host
- For 300 virtual machines: minimum of 600 GB RAM, plus 2 GB for each physical server
- For 600 virtual machines: minimum of 1,200 GB RAM, plus 2 GB for each physical server
- For 1,000 virtual machines: minimum of 2,000 GB RAM, plus 2 GB for each physical server

Component: Brocade Network (per server)
- Block: 2 x 10 GbE NICs per server; 2 HBAs per server
- File: 4 x 10 GbE NICs per server
Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement Microsoft Hyper-V high availability (HA) and meet the listed minimums.

Component: Brocade network infrastructure
Minimum switching capacity:
- Block: Brocade Connectrix-B Fibre Channel switches
  - Two Brocade 6510 switches, 24 to 48 PoD
  - 2 x 8 or 16 Gbps ports per Hyper-V server, for the storage network
  - 2 x 8 Gbps ports per SP, for storage data
- File: Brocade Ethernet Fabric switches
  - Two Brocade VDX 6740 switches, 24 to 64 PoD
  - 4 x 10 GbE ports per Hyper-V server
  - 2 x 10 GbE ports per Data Mover for data
  - Management: 1 x 1 GbE port per Control Station

Component: EMC backup
- Avamar: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds White Paper.
- Data Domain: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds White Paper.

Component: EMC VNX series storage array
Block configuration:
Common:
- 1 x 1 GbE interface per Control Station for management
- 1 x 1 GbE interface per SP for management
- 2 front-end ports per SP
- System disks for VNX OE
For 300 virtual machines: EMC VNX5400
- 110 x 600 GB 15k rpm 3.5-inch serial-attached SCSI (SAS) drives
- 6 x 200 GB flash drives
- 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare
For 600 virtual machines: EMC VNX5600
- 220 x 600 GB 15k rpm 3.5-inch SAS drives
- 10 x 200 GB flash drives
- 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare
For 1,000 virtual machines: EMC VNX5800
- 360 x 600 GB 15k rpm 3.5-inch SAS drives
- 16 x 200 GB flash drives
- 12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare

File configuration:
Common:
- 2 x 10 GbE interfaces per Data Mover
- 1 x 1 GbE interface per Control Station for management
- 1 x 1 GbE interface per SP for management
- System disks for VNX OE
For 300 virtual machines: EMC VNX5400
- 2 Data Movers (active/standby)
- 110 x 600 GB 15k rpm 3.5-inch SAS drives
- 6 x 200 GB flash drives
- 5 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare
For 600 virtual machines: EMC VNX5600
- 2 Data Movers (active/standby)
- 220 x 600 GB 15k rpm 3.5-inch SAS drives
- 10 x 200 GB flash drives
- 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare
For 1,000 virtual machines: EMC VNX5800
- 3 Data Movers (2 active/1 standby)
- 360 x 600 GB 15k rpm 3.5-inch SAS drives
- 16 x 200 GB flash drives
- 12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare

Component: Shared infrastructure
In most cases, a customer environment already has infrastructure services such as Active Directory and DNS configured. The setup of these services is beyond the scope of this document. If implemented without existing infrastructure, add the following:
- 2 physical servers
- 16 GB RAM per server
- 4 processor cores per server
- 2 x 1 GbE ports per server
Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

Note: The solution recommends a 10 Gb network; an equivalent 1 Gb network infrastructure is acceptable as long as the underlying requirements around bandwidth and redundancy are fulfilled.

Software resources
Table 9 lists the software used in this solution.

Table 9. Solution software

Microsoft Hyper-V:
- Microsoft Windows Server: Windows Server 2012 Datacenter Edition (Datacenter Edition is necessary to support the number of virtual machines in this solution)
- Microsoft System Center Virtual Machine Manager: Version 2012 SP1
- Microsoft SQL Server: Version 2012 Enterprise Edition (Note: Any supported database for SCVMM is acceptable.)

Brocade Network:
- Brocade FOS for block on the Connectrix-B 6510 FC series switch: Fabric OS v7.2
- Brocade NOS for file on the VDX 6740 Ethernet Fabric series switch: Network OS

EMC VNX:
- EMC Storage Integrator (ESI): Check for the latest version
- EMC PowerPath: Check for the latest version

Next-generation backup:
- EMC Avamar: 6.1 SP1
- EMC Data Domain OS: 5.2

Virtual machines (used for validation, not required for deployment):
- Base operating system: Microsoft Windows Server 2012 Datacenter Edition

Server configuration guidelines

Overview
When designing and ordering the compute/server layer of the VSPEX solution, several factors may impact the final purchase. From a virtualization perspective, if a system workload is well understood, features such as Dynamic Memory and Smart Paging can reduce the aggregate memory requirement. If the virtual machine pool does not have a high level of peak or concurrent usage, reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, increase the number of CPUs and the amount of memory purchased.

Current VSPEX sizing guidelines specify a virtual CPU core to physical CPU core ratio of 4:1. This ratio is based upon an average sampling of CPU technologies available at the time of testing. As CPU technologies advance, OEM server vendors that are VSPEX partners may suggest different (normally higher) ratios. Follow the updated guidance supplied by your OEM server vendor.

Table 10 lists the hardware resources that are used for the compute layer.

Table 10. Hardware resources for the compute layer

Component: Microsoft Hyper-V servers
CPU:
- 1 vCPU per virtual machine
- 4 vCPUs per physical processor core
- For 300 virtual machines: 300 vCPUs; minimum of 75 physical processor cores
- For 600 virtual machines: 600 vCPUs; minimum of 150 physical processor cores
- For 1,000 virtual machines: 1,000 vCPUs; minimum of 250 physical processor cores
Memory:
- 2 GB RAM per virtual machine
- 2 GB RAM reservation per Hyper-V host
- For 300 virtual machines: minimum of 600 GB RAM, plus 2 GB for each physical server
- For 600 virtual machines: minimum of 1,200 GB RAM, plus 2 GB for each physical server
- For 1,000 virtual machines: minimum of 2,000 GB RAM, plus 2 GB for each physical server

Component: Brocade Network (per server)
- Block: 2 x 10 GbE NICs per server; 2 HBAs per server
- File: 4 x 10 GbE NICs per server

Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement Hyper-V HA and meet the listed minimums.
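The compute minimums in Table 10 follow directly from the 4:1 vCPU-to-core ratio and the 2 GB-per-virtual-machine memory assumption. The short PowerShell sketch below is illustrative only; the function name and the host-count parameter are assumptions, not part of the reference architecture.

    # Illustrative sizing helper based on the VSPEX reference workload assumptions:
    # 1 vCPU per VM, 4 vCPUs per physical core, 2 GB RAM per VM, 2 GB reserved per host.
    function Get-VspexComputeMinimum {
        param(
            [int]$VirtualMachines,
            [int]$HyperVHosts        # number of physical servers planned (assumption)
        )
        $vCpus         = $VirtualMachines              # 1 vCPU per reference VM
        $physicalCores = [math]::Ceiling($vCpus / 4)   # 4:1 vCPU-to-core ratio
        $vmMemoryGB    = $VirtualMachines * 2          # 2 GB per reference VM
        $hostReserveGB = $HyperVHosts * 2              # 2 GB parent-partition reservation per host
        [pscustomobject]@{
            vCPUs           = $vCpus
            PhysicalCores   = $physicalCores
            MemoryGBMinimum = $vmMemoryGB + $hostReserveGB
        }
    }

    # Example: 300 reference virtual machines spread across 10 Hyper-V hosts
    # returns 300 vCPUs, 75 physical cores, and a 620 GB RAM minimum.
    Get-VspexComputeMinimum -VirtualMachines 300 -HyperVHosts 10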

Hyper-V memory virtualization
Microsoft Hyper-V has a number of advanced features that maximize performance and overall resource utilization. The most important features relate to memory management. This section describes some of these features and the items to consider when using them in the VSPEX environment.

In general, virtual machines on a single hypervisor consume memory as a pool of resources, as shown in Figure 28.

Figure 28. Hypervisor memory consumption

Understanding the technologies in this section enhances this basic concept.

Dynamic Memory
Dynamic Memory was introduced in Windows Server 2008 R2 SP1 to increase physical memory efficiency by treating memory as a shared resource and dynamically allocating it to virtual machines. The amount of memory used by each virtual machine is adjustable at any time. Dynamic Memory reclaims unused memory from idle virtual machines, which allows more virtual machines to run at any given time. In Windows Server 2012, Dynamic Memory enables administrators to dynamically increase the maximum memory available to virtual machines.

Smart Paging
Even with Dynamic Memory, Hyper-V allows more virtual machines than the available physical memory can support. In most cases, there is a memory gap between minimum memory and startup memory. Smart Paging is a memory management technique that uses disk resources as a temporary memory replacement. It swaps less-used memory out to disk storage and swaps it back in when needed. Performance degradation is a potential drawback of Smart Paging. Hyper-V continues to use guest paging when host memory is oversubscribed because it is more efficient than Smart Paging.

Non-Uniform Memory Access
Non-Uniform Memory Access (NUMA) is a multi-node computer technology that enables a CPU to access remote-node memory. This type of memory access degrades performance, so Windows Server 2012 employs a technique known as processor affinity, which pins threads to a single CPU to avoid remote-node memory access. In previous versions of Windows, this feature was available only to the host. Windows Server 2012 extends this functionality to virtual machines, which provides improved performance in symmetric multiprocessing (SMP) environments.

Memory configuration guidelines
The memory configuration guidelines take into account Hyper-V memory overhead and the virtual machine memory settings.

Hyper-V memory overhead
Virtualized memory has some associated overhead, which includes the memory consumed by Hyper-V, the parent partition, and additional overhead for each virtual machine. Leave at least 2 GB of memory for the Hyper-V parent partition in this solution.

Virtual machine memory
In this solution, each virtual machine gets 2 GB of memory in fixed mode.
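The following PowerShell sketch shows how a virtual machine could be configured to match this guidance, either with the fixed 2 GB assignment validated in this solution or with Dynamic Memory for well-understood workloads. The virtual machine name and the Dynamic Memory values are placeholders; Set-VMMemory is part of the Hyper-V module in Windows Server 2012.

    # Fixed (static) memory, as validated in this solution: 2 GB per virtual machine.
    Set-VMMemory -VMName "RefVM01" -DynamicMemoryEnabled $false -StartupBytes 2GB

    # Alternative: Dynamic Memory for workloads whose peak usage is well understood.
    # Minimum, startup, maximum, and buffer values below are illustrative only.
    Set-VMMemory -VMName "RefVM01" -DynamicMemoryEnabled $true `
        -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 2GB -Buffer 20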

Network configuration guidelines

Overview
This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines cover network connectivity for access to the existing infrastructure, the management network, and the Brocade storage network between the compute layer and EMC unified storage. The client access network is for users of the system, or clients, to communicate with the infrastructure. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts. The Brocade storage network provides the communication between the compute layer and the storage layer. For detailed Brocade storage network resource requirements, refer to Table 11.

Table 11. Hardware resources for the network

Component: Brocade network infrastructure
Minimum switching capacity:
- Block: Brocade Fibre Channel switches
  - Two Brocade 6510 switches, 24 to 48 PoD
  - 2 x 10 GbE ports per Hyper-V server*
  - 1 x 1 GbE port per Control Station for management*
  - 2 ports per Hyper-V server, for the storage network
  - 2 ports per SP, for storage data
- File: Brocade Ethernet Fabric switches
  - Two Brocade VDX 6740 switches, 24 to 64 PoD
  - 4 x 10 GbE ports per Hyper-V server
  - 1 x 1 GbE port per Control Station for management*
  - 2 x 10 GbE ports per Data Mover for data

Note: The solution may use a 1 GbE network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.

Note: Separate VLANs are required for compute access to the existing client infrastructure and for the management network. The storage network is outlined for configuring both file and block with VNX unified storage: file-based storage network connectivity with VLAN, jumbo frames, and Link Aggregation Control Protocol (LACP) features; block-based storage network with 8 and 16 Gbps Fibre Channel connectivity and zoning configuration guidelines.

VLAN
Isolate network traffic so that the traffic between hosts and storage, the traffic between hosts and clients (file or iSCSI only), and management traffic all move over isolated networks. In some cases, physical isolation may be required for regulatory or policy compliance reasons, but in many cases logical isolation with VLANs is sufficient. This solution calls for a minimum of four VLANs:
- File storage network (for SMB 3.0)
- Block storage network (for iSCSI only)
- Client access
- Management

Zoning (block storage, FC only)
Zoning is the mechanism used to specify which devices in the fabric are allowed to communicate with each other for storage network traffic between host and storage (FC block-based only). Zoning is based on either port World Wide Name (pWWN) or Domain, Port (D,P). (Refer to the documentation listed in Appendix D and the Secure SAN Zoning Best Practices white paper for details.) When using pWWN zoning, SAN administrators cannot pre-provision zone assignments until the servers are connected and the WWNs of the HBAs are known. The Brocade fabric-based implementation supports a scalable solution for environments with blade and rack servers. This solution calls for a minimum of four zones for the block storage network (four zones for redundancy).
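On the Hyper-V hosts themselves, the VLAN isolation described above is typically applied per virtual network adapter. The one-line PowerShell sketch below is illustrative only; the virtual machine name and VLAN ID are placeholders, and the VLAN numbering must match the Brocade switch configuration described in Chapter 5.

    # Tag a virtual machine's network adapter with the client-access VLAN
    # (the VM name and VLAN ID are placeholders).
    Set-VMNetworkAdapterVlan -VMName "RefVM01" -Access -VlanId 100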

Figure 29 depicts the VLANs and the network connectivity requirements for a block-based VNX array.

Figure 29. Required Brocade networks for block storage

Figure 30 depicts the VLANs and the network connectivity requirements for a file-based VNX array.

Figure 30. Required Brocade networks for file storage

The client access network is for users of the system, or clients, to communicate with the infrastructure. The storage network provides communication between the compute layer and the storage layer. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts.

Note: Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. Implement these additional networks if necessary.

Enable jumbo frames (iSCSI or SMB only)
This solution recommends setting the MTU to 9216 (jumbo frames) on the switches for efficient storage and migration traffic. Refer to the switch vendor guidelines to enable jumbo frames for storage and host ports on the switches.
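On the Windows Server 2012 hosts, jumbo frames must also be enabled on the NICs that carry iSCSI or SMB storage traffic. The sketch below is illustrative only; the adapter name is a placeholder, and the exact registry keyword and value (commonly *JumboPacket and 9014) depend on the NIC driver, so verify them against the adapter vendor's documentation.

    # Enable jumbo frames on a storage-facing adapter (the adapter name is a placeholder).
    # Many drivers expose the setting as the "*JumboPacket" keyword with a value of 9014;
    # confirm the supported keyword and value for your specific NIC.
    Set-NetAdapterAdvancedProperty -Name "Storage-NIC1" `
        -RegistryKeyword "*JumboPacket" -RegistryValue 9014

    # Verify the setting.
    Get-NetAdapterAdvancedProperty -Name "Storage-NIC1" -RegistryKeyword "*JumboPacket"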

Link aggregation (SMB only)
A link aggregation resembles an Ethernet channel but uses the LACP IEEE 802.3ad standard. The IEEE 802.3ad standard supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port, and all network traffic is distributed across the active links.

Brocade Virtual Link Aggregation Group (vLAG)
Brocade Virtual Link Aggregation Groups (vLAGs) are used for the Microsoft Hyper-V hosts and the customer infrastructure. In the case of the VNX, a dynamic Link Aggregation Control Protocol (LACP) vLAG is not used with MC/S and iSCSI. While Brocade ISLs are used as interconnects between Brocade VDX switches within a Brocade VCS fabric, industry-standard LACP LAGs are supported for connecting to other network devices outside the Brocade VCS fabric. Typically, LACP LAGs can only be created using ports from a single physical switch to a second physical switch. In a Brocade VCS fabric, a vLAG can be created using ports from two Brocade VDX switches to a device to which both VDX switches are connected. This provides an additional degree of device-level redundancy while providing active-active link-level load balancing.

Brocade Inter-Switch Link (ISL) Trunks
This solution uses Brocade Inter-Switch Link (ISL) Trunking within the Brocade VCS fabric to provide additional redundancy and load balancing between the iSCSI clients and the iSCSI storage. Typically, multiple links between two switches are bundled together in a Link Aggregation Group (LAG) to provide redundancy and load balancing. Setting up a LAG requires lines of configuration on the switches and selecting a hash-based algorithm for load balancing based on source-destination IP or MAC addresses. All flows with the same hash traverse the same link, regardless of the total number of links in a LAG. This might result in some links within a LAG, such as those carrying flows to a storage target, being over-utilized and dropping packets, while other links in the LAG remain underutilized. Instead of LAG-based switch interconnects, Brocade VCS Ethernet fabrics automatically form ISL trunks when multiple connections are added between two Brocade VDX switches. Simply adding another cable increases bandwidth, providing linear scalability of switch-to-switch traffic, and this does not require any configuration on the switch. In addition, ISL trunks use a frame-by-frame load balancing technique, which evenly balances traffic across all members of the ISL trunk group.
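On the Hyper-V host side, the counterpart to switch link aggregation is Windows Server 2012 NIC teaming. The sketch below is illustrative only: team, adapter, and switch names are placeholders, and the teaming mode must match what the connected Brocade vLAG is configured for; it is not a prescribed step of this reference architecture.

    # Create an LACP NIC team on a Hyper-V host and bind a Hyper-V virtual switch to it.
    # Adapter, team, and switch names are placeholders.
    New-NetLbfoTeam -Name "Team-VMData" -TeamMembers "NIC3","NIC4" `
        -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort

    New-VMSwitch -Name "vSwitch-VMData" -NetAdapterName "Team-VMData" `
        -AllowManagementOS $false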

Equal-Cost Multipath (ECMP)
A standard link-state routing protocol that runs at Layer 2 determines whether there are equal-cost multipaths (ECMPs) between RBridges in an Ethernet fabric and load balances the traffic to make use of all available ECMPs. If a neighbor switch is reachable via several interfaces with different bandwidths, all of them are treated as equal-cost paths. While it is possible to set the link cost based on link speed, such an algorithm complicates the operation of the fabric. Simplicity is a key value of Brocade VCS Fabric technology, so the implementation chosen in the test case does not consider the bandwidth of the interface when selecting equal-cost paths. This is a key feature needed to expand network capacity and keep ahead of customer bandwidth requirements.

Pause Flow Control
Brocade VDX series switches support the Pause Flow Control feature. IEEE 802.3x Ethernet pause and Ethernet Priority-based Flow Control (PFC) are used to prevent dropped frames by slowing traffic at the source end of a link. When a port on a switch or host is not ready to receive more traffic from the source, perhaps due to congestion, it sends pause frames to the source to pause the traffic flow. When the congestion clears, the port stops requesting the source to pause traffic flow, and traffic resumes without any frame drop. When Ethernet pause is enabled, pause frames are sent to the traffic source. Similarly, when PFC is enabled, there is no frame drop; pause frames are sent to the source switch.

Storage configuration guidelines

Overview
This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance. Hyper-V allows more than one method of using storage when hosting virtual machines. The tested solutions described below use different block protocols (FC/FCoE/iSCSI) and CIFS (for file), and the storage layout described adheres to all current best practices. A customer or architect with the necessary training and background can make modifications based upon their understanding of the system usage and load if required. However, the building blocks described in this guide ensure acceptable performance. The VSPEX storage building blocks section provides specific recommendations for customization.

Table 12 lists the hardware resources for storage.

Table 12. Hardware resources for storage

Component: EMC VNX series storage array
Block configuration:
Common:
- 1 x 1 GbE interface per Control Station for management
- 1 x 1 GbE interface per SP for management
- 2 front-end ports per SP
- System disks for VNX OE
For 300 virtual machines: EMC VNX5400
- 110 x 600 GB 15k rpm 3.5-inch SAS drives
- 6 x 200 GB flash drives
- 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare
For 600 virtual machines: EMC VNX5600
- 220 x 600 GB 15k rpm 3.5-inch SAS drives
- 10 x 200 GB flash drives
- 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare
For 1,000 virtual machines: EMC VNX5800
- 360 x 600 GB 15k rpm 3.5-inch SAS drives
- 16 x 200 GB flash drives
- 12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare

File configuration:
Common:
- 2 x 10 GbE interfaces per Data Mover
- 1 x 1 GbE interface per Control Station for management
- 1 x 1 GbE interface per SP for management
- System disks for VNX OE
For 300 virtual machines: EMC VNX5400
- 2 Data Movers (active/standby)
- 110 x 600 GB 15k rpm 3.5-inch SAS drives
- 6 x 200 GB flash drives
- 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare
For 600 virtual machines: EMC VNX5600
- 2 Data Movers (active/standby)
- 220 x 600 GB 15k rpm 3.5-inch SAS drives
- 10 x 200 GB flash drives
- 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare
For 1,000 virtual machines: EMC VNX5800
- 3 Data Movers (2 x active/1 x standby)
- 360 x 600 GB 15k rpm 3.5-inch SAS drives
- 16 x 200 GB flash drives
- 12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB flash drive as a hot spare

Note: On the VNX5800, EMC recommends running no more than 600 virtual machines on a single active Data Mover. Configure two active Data Movers (2 x active/1 x standby) when scaling to 600 virtual machines or more.

Hyper-V storage virtualization for VSPEX
This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance.

Windows Server 2012 Hyper-V and Failover Clustering use Cluster Shared Volumes (CSV) v2 and the VHDX format to virtualize storage presented from an external shared storage system to host virtual machines. In Figure 31, the storage array presents either block-based LUNs (as CSVs) or file-based CIFS shares (as SMB shares) to the Windows hosts that run the virtual machines.

Figure 31. Hyper-V virtual disk types

CIFS
Windows Server 2012 supports using CIFS (SMB 3.0) file shares as shared storage for Hyper-V virtual machines.

CSV
A Cluster Shared Volume (CSV) is a shared disk containing an NTFS volume that is made accessible by all nodes of a Windows Failover Cluster. It can be deployed over any SCSI-based local or network storage.

Pass-through disks
Windows Server 2012 also supports pass-through disks, which allow a virtual machine to access a physical disk that is mapped to the host and does not have a volume configured on it.
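As a minimal illustration of the block-based option, the PowerShell sketch below adds a clustered disk as a Cluster Shared Volume and creates a virtual machine whose files live on it. The disk, path, and virtual machine names are hypothetical placeholders, not values from this architecture.

    # Add an available clustered disk as a Cluster Shared Volume (names are placeholders).
    Add-ClusterSharedVolume -Name "Cluster Disk 2"

    # Place a new virtual machine, with a 100 GB VHDX, on the CSV mount point.
    New-VM -Name "RefVM01" -MemoryStartupBytes 2GB `
        -Path "C:\ClusterStorage\Volume1\RefVM01" `
        -NewVHDPath "C:\ClusterStorage\Volume1\RefVM01\RefVM01.vhdx" -NewVHDSizeBytes 100GB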

SMB 3.0 (file-based storage only)
SMB is the file sharing protocol used by default in Windows. Windows Server 2012 introduces a vast set of new SMB features with an updated (SMB 3.0) protocol. Some of the key features available with Windows Server 2012 SMB 3.0 are:
- SMB Transparent Failover
- SMB Scale-Out
- SMB Multichannel
- SMB Direct
- SMB Encryption
- VSS for SMB file shares
- SMB Directory Leasing
- SMB PowerShell
With these new features, SMB 3.0 offers richer capabilities that, when combined, provide organizations with a high-performance storage alternative to traditional Fibre Channel storage solutions at a lower cost.

Note: For more details about SMB 3.0, refer to Chapter 3.

ODX
Offloaded Data Transfer (ODX) is a feature of the storage stack in Microsoft Windows Server 2012 that lets users leverage their investment in external storage arrays by offloading data transfers from the server to the storage array. When used with storage hardware that supports ODX, file copy operations are initiated by the host but performed by the storage device. ODX eliminates the data transfer between the storage and the Hyper-V hosts by using a token-based mechanism for reading and writing data within the storage array, which reduces the load on the network and the hosts.

Using ODX helps to enable rapid cloning and migration of virtual machines. Because the file transfer is offloaded to the storage array, host resource usage, such as CPU and network, is significantly reduced. By maximizing use of the storage array, ODX minimizes latencies and improves the transfer speed of large files, such as database or video files. When performing file operations that are supported by ODX, data transfers are automatically offloaded to the storage array and are transparent to users. ODX is enabled by default in Windows Server 2012.
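For the file variant, a CIFS (SMB 3.0) share exported by the VNX Data Movers is consumed by pointing the virtual machine paths at the UNC path of the share. The sketch below is a hedged illustration with hypothetical share, server, and VM names; the share itself, and its permissions for the Hyper-V host computer accounts, are provisioned on the VNX side as described in Chapter 5.

    # Create a virtual machine whose configuration files and VHDX live on an SMB 3.0 share.
    # The UNC path and VM name are placeholders for a share presented by the VNX Data Movers.
    New-VM -Name "RefVM02" -MemoryStartupBytes 2GB `
        -Path "\\vnx-dm01\vspex-share01\RefVM02" `
        -NewVHDPath "\\vnx-dm01\vspex-share01\RefVM02\RefVM02.vhdx" -NewVHDSizeBytes 100GB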

VHDX
Hyper-V in Windows Server 2012 contains an update to the VHD format, called VHDX, which has much larger capacity and built-in resiliency. The main features of the VHDX format are:
- Support for virtual hard disk storage capacity of up to 64 TB.
- Additional protection against data corruption during power failures by logging updates to the VHDX metadata structures.
- Optimal structure alignment of the virtual hard disk format to suit large-sector disks.
The VHDX format also has the following features:
- Larger block sizes for dynamic and differencing disks, which enable the disks to better meet the needs of the workload.
- A 4 KB logical sector virtual disk that enables increased performance when used by applications and workloads designed for 4 KB sectors.
- The ability to store custom metadata about the file that the user might want to record, such as the operating system version or applied updates.
- Space reclamation features that can result in smaller file sizes and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware).

VSPEX storage building blocks
Sizing the storage system to meet virtual server IOPS is a complicated process. When I/O reaches the storage array, several components serve that I/O, such as the Data Movers (for file-based storage), the SPs, back-end dynamic random access memory (DRAM) cache, FAST Cache (if used), and the disks. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications.

VSPEX uses a building block approach to reduce this complexity. A building block is a set of disk spindles that can support a certain number of virtual servers in the VSPEX architecture. Each building block combines several disk spindles to create a storage pool that supports the needs of the private cloud environment. Each building block storage pool, regardless of size, contains two flash drives with FAST VP storage tiering to enhance metadata operations and performance.

VSPEX solutions are engineered to provide a variety of sizing configurations that afford flexibility when designing the solution. Customers can start by deploying smaller configurations and scale up as their needs grow.

At the same time, customers can avoid over-purchasing by choosing a configuration that closely meets their needs. To accomplish this, VSPEX solutions can be deployed using one or both of the scale points below to obtain the ideal configuration while guaranteeing a given performance level.

Building block for 13 virtual servers
The first building block can contain up to 13 virtual servers. It has two flash drives and five SAS drives in a storage pool, as shown in Figure 32.

Figure 32. Building block for 13 virtual servers

This is the smallest building block qualified for the VSPEX architecture. This building block can be expanded by adding five SAS drives and allowing the pool to restripe, which adds support for 13 more virtual servers. For details about pool expansion and restriping, refer to White Paper: EMC VNX Virtual Provisioning Applied Technology.

Building block for 125 virtual servers
The second building block can contain up to 125 virtual servers. It contains two flash drives and 45 SAS drives, as shown in Figure 33. The preceding sections outline an approach to grow from 13 virtual machines in a pool to 125 virtual machines in a pool. However, after reaching 125 virtual machines in a pool, do not grow to 138; create a new pool and start the scaling sequence again.

Figure 33. Building block for 125 virtual servers

Implement this building block with all of the resources in the pool initially, or expand the pool over time as the environment grows. Table 13 lists the flash and SAS drive requirements in a pool for different numbers of virtual servers.

Table 13. Number of disks required for different numbers of virtual machines

Virtual servers / Flash drives / SAS drives
13 / 2 / 5
26 / 2 / 10
39 / 2 / 15
52 / 2 / 20
65 / 2 / 25
78 / 2 / 30
91 / 2 / 35
104 / 2 / 40
117 / 2 / 45
125 / 2 / 45*

* Note: Due to increased efficiency with larger stripes, the building block with 45 SAS drives can support up to 125 virtual servers.

To grow the environment beyond 125 virtual servers, create another storage pool using the building block method described here.

VSPEX private cloud validated maximums
VSPEX private cloud configurations are validated on the VNX5400, VNX5600, and VNX5800 platforms. Each platform has different capabilities in terms of processors, memory, and disks. For each array, there is a recommended maximum VSPEX private cloud configuration. In addition to the VSPEX private cloud building blocks, each storage array must contain the drives used for the VNX OE and the hot spare disks for the environment.

Notes:
- Allocate at least one hot spare for every 30 disks of a given type and size.
- The pool does not use the system drives for additional storage.
- If required, substitute larger drives for more capacity.
- To meet the load recommendations, all drives in the storage pool must be 15k rpm and the same size. Storage layout algorithms may produce sub-optimal results with drives of different sizes.
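The drive counts in Table 13, and the pool layouts shown later in this chapter, follow mechanically from the building-block rules: 5 SAS drives per increment of 13 virtual servers, 2 flash drives per pool, and a 45-SAS-drive pool topping out at 125 virtual servers. The PowerShell sketch below is an illustrative calculator only, not an EMC sizing tool; the function and variable names are assumptions.

    # Illustrative building-block calculator based on the rules described above:
    # each pool holds at most 125 VMs (45 SAS drives); within a pool, every 13 VMs
    # (or part thereof) adds 5 SAS drives; every pool carries 2 flash drives.
    function Get-VspexBuildingBlocks {
        param([int]$VirtualServers)

        $fullPools   = [math]::Floor($VirtualServers / 125)
        $remainderVM = $VirtualServers % 125

        $sasDrives   = $fullPools * 45
        $flashDrives = $fullPools * 2
        if ($remainderVM -gt 0) {
            $increments   = [math]::Ceiling($remainderVM / 13)
            $sasDrives   += [math]::Min($increments * 5, 45)
            $flashDrives += 2
        }
        [pscustomobject]@{
            Pools       = $fullPools + [int]($remainderVM -gt 0)
            SASDrives   = $sasDrives
            FlashDrives = $flashDrives
        }
    }

    # Example: 300 virtual servers -> 3 pools, 110 SAS drives, 6 flash drives,
    # which matches the VNX5400 layout described later in this chapter.
    Get-VspexBuildingBlocks -VirtualServers 300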

For all VSPEX private cloud solutions:
- Enable FAST VP to automatically tier data and take advantage of differences in performance and capacity. FAST VP works at the block storage pool level and automatically adjusts where data is stored based on access frequency. It promotes frequently accessed data to higher tiers of storage in 256 MB increments and migrates infrequently accessed data to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is part of a regularly scheduled maintenance operation.
- For block storage, allocate at least two LUNs to the Windows cluster from a single storage pool to serve as Cluster Shared Volumes for the virtual servers.
- For file storage, allocate at least two CIFS shares to the Windows cluster from a single storage pool to serve as SMB shares for the virtual servers.
- Optionally, configure flash drives as FAST Cache in the array. LUNs or storage pools that host virtual machines with a higher-than-average I/O requirement can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite.

VNX5400
The VNX5400 is validated for up to 300 virtual servers. There are multiple ways to achieve this configuration with the building blocks. Figure 34 shows one potential configuration.

Figure 34. Storage layout for 300 virtual machines using VNX5400

This configuration uses the following storage layout:
- One hundred and ten 600 GB SAS disks are allocated to three block-based storage pools: two pools with 45 SAS disks for 125 virtual machines each, and one pool with 20 SAS disks for 50 virtual machines.
- Four 600 GB SAS disks are configured as hot spares.
- Six 200 GB flash drives are configured for FAST VP, two for each pool.
- One 200 GB flash drive is allocated as a hot spare.

Using this configuration, the VNX5400 can support 300 virtual servers as defined in the Reference workload section.

VNX5600
The VNX5600 is validated for up to 600 virtual servers. There are multiple ways to achieve this configuration with the building block approach. Figure 35 shows one potential configuration.

Figure 35. Storage layout for 600 virtual machines using VNX5600

This configuration uses the following storage layout:
- Two hundred and twenty 600 GB SAS disks are allocated to five block-based storage pools: four pools with 45 SAS disks for 125 virtual machines each, and one pool with 40 SAS disks for 100 virtual machines.

- Eight 600 GB SAS disks are configured as hot spares.
- Ten 200 GB flash drives are configured for FAST VP, two for each pool.
- One 200 GB flash drive is allocated as a hot spare.

Using this configuration, the VNX5600 can support 600 virtual servers as defined in the Reference workload section.

VNX5800
The VNX5800 is validated for up to 1,000 virtual servers. There are multiple ways to achieve this configuration with the building blocks. Figure 36 shows one potential configuration.

Figure 36. Storage layout for 1,000 virtual machines using VNX5800

This configuration uses the following storage layout:
- Three hundred and sixty 600 GB SAS disks are allocated to eight block-based storage pools, each with 45 SAS disks for 125 virtual machines.
- Twelve 600 GB SAS disks are configured as hot spares.
- Sixteen 200 GB flash drives are configured for FAST VP, two for each pool.
- One 200 GB flash drive is allocated as a hot spare.

Using this configuration, the VNX5800 can support 1,000 virtual servers as defined in the Reference workload section.

Conclusion
The scale levels listed in Figure 37 highlight the entry points and supported maximums for the arrays in the VSPEX private cloud environment. The entry points represent optimal model demarcations in terms of the number of virtual machines within the environment. This provides a frame of reference for determining which VNX array to choose based upon your requirements. It is acceptable to configure any of the listed arrays with a smaller number of virtual machines than the supported maximums, using the building block approach described earlier.

Figure 37. Maximum scale levels and entry points of different arrays

High availability and failover

Overview
This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive single-unit failures with little or no impact on business operations.

Virtualization layer
Configure high availability in the virtualization layer, and configure the hypervisor to automatically restart failed virtual machines. Figure 38 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 38. High availability at the virtualization layer

By implementing high availability at the virtualization layer, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible.

Compute layer
While the choice of servers to implement in the compute layer is flexible, use enterprise-class servers designed for the data center. This type of server has redundant power supplies, as shown in Figure 39. Connect the servers to separate power distribution units (PDUs) in accordance with your server vendor's best practices.

Figure 39. Redundant power supplies
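At the virtualization layer, high availability means making each virtual machine a clustered role so that Failover Clustering can restart it on a surviving host. A minimal, hedged sketch (the VM name is a placeholder) using the Windows Server 2012 FailoverClusters module:

    # Make an existing virtual machine highly available as a clustered role,
    # so Failover Clustering restarts it on another node after a host failure.
    Add-ClusterVirtualMachineRole -VMName "RefVM01"

    # Verify the clustered VM roles and their current owner nodes.
    Get-ClusterGroup | Where-Object { $_.GroupType -eq "VirtualMachine" }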

To configure HA in the virtualization layer, configure the compute layer with enough resources to meet the needs of the environment even with a server failure, as demonstrated in Figure 38.

Brocade network layer
The advanced networking features of the VNX family and the Brocade VDX Ethernet fabric and Connectrix-B 6510 Fibre Channel families of switches provide protection against network connection failures at the array. Each Hyper-V host has multiple connections to the user and storage networks to guard against link failures, as shown in Figure 40 and Figure 41. Spread these connections across multiple Brocade fabric switches to guard against component failure in the network.

Figure 40. Brocade network layer high availability (VNX), block storage network variant

Figure 41. Brocade network layer high availability (VNX), file storage

Ensure there is no single point of failure, so that the compute layer can access storage and communicate with users even if a component fails.

Storage layer
The VNX is designed for five 9s (99.999 percent) availability by using redundant components throughout the array. All of the array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array provides protection against data loss caused by individual disk failures, and the available hot spare drives can be dynamically allocated to replace a failing disk, as shown in Figure 42.

Figure 42. VNX series high availability components

EMC storage arrays support high availability by default. When configured according to the directions in their installation guides, no single unit failure results in data loss or unavailability.

Validation test profile

Profile characteristics
The VSPEX solution was validated with the environment profile described in Table 14.

Table 14. Profile characteristics
- Number of virtual machines: 300/600/1,000
- Virtual machine OS: Windows Server 2012 Datacenter Edition
- Processors per virtual machine: 1
- Number of virtual processors per physical CPU core: 4
- RAM per virtual machine: 2 GB
- Average storage available for each virtual machine: 100 GB
- Average IOPS per virtual machine: 25 IOPS
- Number of LUNs or CIFS shares to store virtual machine disks: 6/10/16
- Number of virtual machines per LUN or CIFS share: 62 or 63
- Disk and RAID type for LUNs or CIFS shares: RAID 5, 600 GB, 15k rpm, 3.5-inch SAS disks

Note: This solution was tested and validated with Windows Server 2012 as the operating system for the Hyper-V hosts and virtual machines; however, it also supports Windows Server 2008 Hyper-V hosts. Hyper-V hosts on Windows Server 2008 and Windows Server 2012 use the same sizing and configuration.

Backup and recovery configuration guidelines

Overview
This section provides guidelines for setting up backup and recovery for this VSPEX solution. It describes how the EMC Powered Backup components are characterized, and the backup layout.

Backup characteristics
The solution is sized with the following application environment profile.

Backup layout
Avamar provides various deployment options depending on the specific use case and the recovery requirements. In this case, Avamar and Data Domain are managed and deployed as a single solution. This enables users to back up unstructured user data directly to the Avamar system for simple file-level recovery. Avamar manages the database and virtual machine images, and stores the backups on the Data Domain system using the embedded Boost client library. This backup solution unifies the backup process with industry-leading deduplication backup software and storage, and achieves the highest levels of performance and efficiency.

Sizing guidelines

Reference workload
The following sections define the reference workload used to size and implement the VSPEX architectures, provide guidance on how to correlate that reference workload to customer workloads, and describe how the correlation may change the end delivery from the server and network perspective.

Make modifications to the storage definition by adding drives for greater capacity and performance, and by adding features such as FAST Cache and FAST VP. The disk layouts provide support for the appropriate number of virtual machines at the defined performance level and for typical operations, such as snapshots. Decreasing the number of recommended drives or stepping down to a lower-performing array type can result in lower IOPS per virtual machine and a reduced user experience caused by higher response times.

Overview
When you move an existing server to a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. In practice, each virtual machine has its own requirements that rarely fit a pre-defined idea of a virtual machine.

Defining the reference workload
In any discussion about virtual infrastructures, first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that considers every possible combination of workload characteristics. To simplify the discussion, this section presents a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can decide which reference architecture to choose.

For VSPEX solutions, the reference workload is a single virtual machine. Table 15 lists the characteristics of this virtual machine.

Table 15. Virtual machine characteristics
- Virtual machine operating system: Microsoft Windows Server 2012 Datacenter Edition
- Virtual processors per virtual machine: 1
- RAM per virtual machine: 2 GB
- Available storage capacity per virtual machine: 100 GB
- I/O operations per second (IOPS) per virtual machine: 25
- I/O pattern: Random
- I/O read/write ratio: 2:1

This specification for a virtual machine does not represent any specific application. Rather, it represents a single common point of reference against which to measure other virtual machines.

Server processor capabilities are constantly evolving. Server providers aligned with the VSPEX program may specify updated compute expectations based on recent technology changes. This guidance may override the compute requirements specified in the reference workload.

Applying the reference workload

Overview
The solution creates a pool of resources that is sufficient to host a target number of reference virtual machines with the characteristics shown in Table 15. The customer virtual machines may not exactly match the specifications. In that case, define a single specific customer virtual machine as the equivalent of some number of reference virtual machines, and assume these virtual machines are in use in the pool. Continue to provision virtual machines from the resource pool until no resources remain.

Example 1: Custom-built application
A small custom-built application server must move into this virtual infrastructure. The physical hardware that supports the application is not fully utilized. A careful analysis of the existing application reveals that the application can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges between 4 IOPS at idle time and a peak of 15 IOPS when busy. The entire application consumes about 30 GB of local hard drive storage.

Based on these numbers, the resource pool needs the following resources:
- CPU of one reference virtual machine
- Memory of two reference virtual machines
- Storage of one reference virtual machine
- I/Os of one reference virtual machine

In this example, an appropriate virtual machine uses the resources of two of the reference virtual machines. If implemented on a VNX5400 storage system, which can support up to 300 virtual machines, resources for 298 reference virtual machines remain.

Example 2: Point-of-sale system
The database server for a customer's point-of-sale system must move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle. The requirements to virtualize this application are:
- CPUs of four reference virtual machines
- Memory of eight reference virtual machines
- Storage of two reference virtual machines
- I/Os of eight reference virtual machines

In this case, the one appropriate virtual machine uses the resources of eight reference virtual machines. If implemented on a VNX5400 storage system, which can support up to 300 virtual machines, resources for 292 reference virtual machines remain.

Example 3: Web server
The customer's web server must move into this virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle. The requirements to virtualize this application are:
- CPUs of two reference virtual machines
- Memory of four reference virtual machines
- Storage of one reference virtual machine
- I/Os of two reference virtual machines

In this case, the one appropriate virtual machine uses the resources of four reference virtual machines. If implemented on a VNX5400 storage system, which can support up to 300 virtual machines, resources for 296 reference virtual machines remain.

Example 4: Decision-support database
The database server for a customer's decision-support system must move into this virtual infrastructure. It is currently running on a physical system with 10 CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle. The requirements to virtualize this application are:
- CPUs of 10 reference virtual machines
- Memory of 32 reference virtual machines
- Storage of 52 reference virtual machines
- I/Os of 28 reference virtual machines

In this case, one virtual machine uses the resources of 52 reference virtual machines. If implemented on a VNX5400 storage system, which can support up to 300 virtual machines, resources for 248 reference virtual machines remain.

Summary of examples
These four examples illustrate the flexibility of the resource pool model. In all four cases, the workloads reduce the amount of available resources in the pool. All four examples can be implemented on the same virtual infrastructure with an initial capacity for 300 reference virtual machines, leaving resources for 234 reference virtual machines in the resource pool, as shown in Figure 43.

Figure 43. Resource pool flexibility

In more advanced cases, there may be tradeoffs between memory and I/O or other relationships where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex and are beyond the scope of this document. Examine the change in resource balance, determine the new level of requirements, and then add these virtual machines to the infrastructure with the method described in the examples.
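The equivalent reference virtual machine count in each example is simply the largest of the four per-resource ratios, each rounded up. The PowerShell sketch below is an illustrative helper only (its name and parameters are not part of the VSPEX guidance) that reproduces the arithmetic of Examples 1 through 4.

    # Illustrative helper: convert a customer VM's requirements into equivalent
    # reference virtual machines (1 vCPU, 2 GB RAM, 100 GB storage, 25 IOPS each).
    # The equivalent count is the largest per-resource requirement, rounded up.
    function Get-EquivalentReferenceVMs {
        param(
            [int]$vCPUs,
            [double]$MemoryGB,
            [double]$StorageGB,
            [double]$IOPS
        )
        $perResource = @(
            [math]::Ceiling($vCPUs / 1)
            [math]::Ceiling($MemoryGB / 2)
            [math]::Ceiling($StorageGB / 100)
            [math]::Ceiling($IOPS / 25)
        )
        ($perResource | Measure-Object -Maximum).Maximum
    }

    # Example 2 (point-of-sale database): 4 CPUs, 16 GB, 200 GB, 200 IOPS -> 8
    Get-EquivalentReferenceVMs -vCPUs 4 -MemoryGB 16 -StorageGB 200 -IOPS 200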

Implementing the solution

Overview
The solution described in this guide requires a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are general requirements that are independent of any particular implementation, except that the requirements grow linearly with the target level of scale. This section describes some considerations for implementing the requirements.

Resource types
The solution defines the hardware requirements in terms of four basic resource types:
- CPU resources
- Memory resources
- Network resources
- Storage resources
This section describes the resource types, their use in the solution, and key implementation considerations in a customer environment.

CPU resources
The solution defines the number of CPU cores that are required, but not a specific type or configuration. New deployments should use recent revisions of common processor technologies. It is assumed that these perform as well as, or better than, the systems used to validate the solution. In any running system, monitor the utilization of resources and adapt as needed.

The reference virtual machine and required hardware resources in the solution assume that there are four virtual CPUs for each physical processor core (a 4:1 ratio). Usually, this provides an appropriate level of resources for the hosted virtual machines; however, this ratio may not be appropriate in all use cases. Monitor the CPU utilization at the hypervisor layer to determine if more resources are required.

Memory resources
Each virtual server in the solution must have 2 GB of memory. In a virtual environment, it is common to provision virtual machines with more memory than is installed on the physical hypervisor server because of budget constraints. Memory over-commitment assumes that each virtual machine does not use all of its allocated memory. Oversubscribing memory usage to some degree makes business sense. The administrator has the responsibility to proactively monitor the oversubscription rate so that it does not shift the bottleneck away from the server and become a burden to the storage subsystem through page file swapping.

This solution is validated with statically assigned memory and no over-commitment of memory resources. If a real-world environment uses over-committed memory, monitor the system memory utilization and the associated page file I/O activity consistently to ensure that a memory shortfall does not cause unexpected results.

Network resources
The solution outlines the minimum needs of the system. If the system requires additional bandwidth, add capability at both the storage array and the hypervisor host to meet the requirements. The options for Brocade storage network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports and can add ports using EMC UltraFlex I/O modules.

For reference purposes in the validated environment, each virtual machine generates 25 IOPS with an average I/O size of 8 KB. This means that each virtual machine generates at least 200 KB/s of traffic on the storage network. For an environment rated for 300 virtual machines, this comes to a minimum of approximately 60 MB/s, which is well within the bounds of modern networks. However, this does not consider other operations. For example, additional bandwidth is needed for:
- User network traffic
- Virtual machine migration
- Administrative and management operations

The requirements for each of these depend on how the environment is used, so it is not practical to provide precise numbers in this context. However, the network described in this solution should be sufficient to handle average workloads for the above use cases. The specific Brocade storage network layer connectivity and deployment is defined in Chapter 5.

Regardless of the network traffic requirements, always have at least two physical network connections shared by a logical network so that a single link failure does not affect the availability of the system. Design the network so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload.

Storage resources
The storage building blocks described in this solution contain layouts for the disks used in the system validation. Each layout balances the available storage capacity with the performance capability of the drives. Consider a few factors when examining storage sizing. Specifically, the array has a collection of disks assigned to a storage pool. From that storage pool, provision LUNs (for block) or CIFS shares (for file) to the Windows cluster. Each layer has a specific configuration that is defined for the solution and documented in Chapter 5.

It is acceptable to replace drives with larger-capacity drives of the same type and performance characteristics, or with higher-performance drives of the same type and capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves in order to comply with updated or new drive shelf arrangements. Moreover, it is acceptable to scale up using the building blocks with larger numbers of drives, up to the limit defined in the VSPEX private cloud validated maximums section. Observe the following best practices:
- Use the latest best practices guidance from EMC regarding drive placement within the shelf. Refer to Applied Best Practices Guide: EMC VNX Unified Best Practices for Performance.
- When expanding the capability of a storage pool using the building blocks described in this document, use the same type and size of drive in the pool.

Storage resources

The storage building blocks described in this solution contain layouts for the disks used in the system validation. Each layout balances the available storage capacity against the performance capability of the drives. A few factors matter when examining storage sizing: the array has a collection of disks assigned to a storage pool, and from that storage pool, CIFS shares are provisioned to the Windows cluster. Each layer has a specific configuration that is defined for the solution and documented in Chapter 5.

It is acceptable to replace drives with larger-capacity drives of the same type and performance characteristics, or with higher-performance drives of the same type and capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves to comply with updated or new drive shelf arrangements, and to scale up using the building blocks with larger numbers of drives, up to the limit defined in the VSPEX private cloud validated maximums.

Observe the following best practices:
- Use the latest best-practices guidance from EMC regarding drive placement within the shelf. Refer to Applied Best Practices Guide: EMC VNX Unified Best Practices for Performance.
- When expanding the capability of a storage pool using the building blocks described in this document, use the same type and size of drive in the pool. Create a new pool to use different drive types and sizes; this prevents uneven performance across the pool.
- Configure at least one hot spare for every type and size of drive on the system.
- Configure at least one hot spare for every 30 drives of a given type.

In cases where there is a need to deviate from the proposed number and type of drives, or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system and conforms to EMC published best practices.

Implementation summary

The requirements in the reference architecture are what EMC considers the minimum set of resources needed to handle the workload defined by the reference virtual machine. In any customer implementation, the load on the system varies over time as users interact with it. If the customer virtual machines differ significantly from the reference definition, and vary in the same resource group, add more of that resource type to the system to compensate.

Quick assessment of customer environment

Overview

An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and the assessment of the customer environment.

First, summarize the applications planned for migration into the VSPEX private cloud. For each application, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual machines required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each application, as listed in Table 16.

Table 16. Blank worksheet row

  Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent reference virtual machines
  Example application - Resource requirements |  |  |  |  | N/A
  Example application - Equivalent reference virtual machines |  |  |  |  |

Fill out the resource requirements for the application. The row requires inputs for four different resources: CPU, memory, IOPS, and capacity.

CPU requirements

Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of virtualization suggests a one-to-one mapping between physical CPU cores and virtual CPU cores, regardless of physical CPU utilization. In reality, consider whether the target application can effectively use all of the CPUs presented. Use a performance-monitoring tool, such as perfmon in Microsoft Windows, to examine the CPU utilization counter for each CPU. If the CPUs are utilized roughly equally, implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs required.

In any operation that involves performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or the 95th percentile value of the resource requirements for planning purposes.

Memory requirements

Server memory plays a key role in ensuring application functionality and performance, and each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the memory currently available to the system and monitor the free memory with a performance-monitoring tool, such as Microsoft Windows perfmon, to determine memory efficiency.

As with CPU monitoring, collect data samples for a period of time that includes all operational use cases of the system, and use either the maximum or the 95th percentile value for planning purposes.
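The CPU and memory guidance above plans against the maximum or the 95th percentile of the collected samples. The sketch below shows that aggregation for counter samples that have already been exported from a monitoring tool such as perfmon into a plain list of numbers; the sample values and function name are hypothetical, not part of this solution.

# Summarize collected utilization samples (for example, exported from perfmon)
# into the maximum and 95th-percentile values used for planning, as described above.
import math

def planning_values(samples):
    """Return (maximum, 95th percentile) of a list of numeric samples (nearest-rank method)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(95 * len(ordered) / 100))
    return ordered[-1], ordered[rank - 1]

# Hypothetical CPU utilization samples (percent) gathered across a full business cycle.
cpu_samples = [12, 15, 18, 20, 22, 25, 27, 28, 30, 31,
               35, 37, 41, 44, 48, 55, 60, 68, 75, 90]
maximum, p95 = planning_values(cpu_samples)
print(f"max={maximum}%, 95th percentile={p95}%")   # max=90%, 95th percentile=75%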

Storage performance requirements

The storage performance requirements for an application are usually the least understood aspect of performance. Several components matter when discussing the I/O performance of a system. The first is the number of requests coming in, or IOPS. Equally important is the size of each request, or I/O size: a request for 4 KB of data is easier and faster to process than a request for 4 MB of data. That distinction becomes important with a third factor, the average I/O response time, or I/O latency.

IOPS

The reference virtual machine calls for 25 IOPS. To monitor this on an existing system, use a performance-monitoring tool such as Microsoft Windows perfmon, which provides several useful counters. The most common are:
- Logical Disk\Disk Transfers/sec
- Logical Disk\Disk Reads/sec
- Logical Disk\Disk Writes/sec

Note: At the time of publication, Windows perfmon does not provide counters that expose IOPS and latency for CIFS-based VHDX storage. Monitor these areas from the VNX array as discussed in Chapter 7.

The reference virtual machine assumes a 2:1 read:write ratio. Use these counters to determine the total number of IOPS and the approximate ratio of reads to writes for the customer application.

I/O size

The I/O size is important because smaller I/O requests are faster and easier to process than large ones. The reference virtual machine assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are even powers of 2, such as 4 KB, 8 KB, 16 KB, or 32 KB; however, the performance counter reports a simple average, so it is common to see values such as 11 KB or 15 KB instead of the actual I/O sizes.

If the average customer I/O size is 8 KB or less, use the observed IOPS number directly. However, if the average I/O size is significantly higher, apply a scaling factor to account for the larger I/O size. A safe estimate is to divide the observed I/O size by 8 KB and use that factor. For example, if the application mostly issues 32 KB I/O requests, use a factor of four (32 KB / 8 KB = 4). If that application is doing 100 IOPS at 32 KB, plan for 400 IOPS, since the reference virtual machine assumes 8 KB I/O sizes.

I/O latency

The average I/O response time, or I/O latency, measures how quickly the storage system processes I/O requests. The VSPEX solutions meet a target average I/O latency of 20 ms. The recommendations in this document allow the system to continue to meet that target; at the same time, monitor the system and re-evaluate the resource pool utilization if needed. To monitor I/O latency, use the Logical Disk\Avg. Disk sec/Transfer counter in Microsoft Windows perfmon. If the I/O latency is continuously over the target, re-evaluate the virtual machines in the environment to ensure that they do not use more resources than intended.
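The I/O size adjustment described above is a simple normalization of observed IOPS to the 8 KB reference I/O size. The sketch below illustrates it; the observed values are hypothetical and the function is not part of the VSPEX tooling.

# Normalize observed IOPS to the 8 KB reference I/O size, as described above.

def reference_iops(observed_iops, avg_io_size_kb, reference_io_size_kb=8):
    """Scale observed IOPS by (I/O size / 8 KB) when the average I/O size is larger."""
    if avg_io_size_kb <= reference_io_size_kb:
        return observed_iops                      # small I/O: use the observed number directly
    factor = avg_io_size_kb / reference_io_size_kb
    return observed_iops * factor

# Example from the text: 100 IOPS at an average I/O size of 32 KB.
print(reference_iops(100, 32))   # 400.0 IOPS of reference (8 KB) I/O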

Storage capacity requirements

The storage capacity requirement for a running application is usually the easiest resource to quantify. Determine the disk space currently used and add an appropriate factor to accommodate growth. For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, requires about 48 GB. In addition, reserve space for regular maintenance patches and for swap files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full.

Determining equivalent reference virtual machines

With all of the resources defined, determine an appropriate value for the Equivalent reference virtual machines line by using the relationships in Table 17. Round all values up to the closest whole number.

Table 17. Reference virtual machine resources

  Resource | Value for reference virtual machine | Relationship between requirements and equivalent reference virtual machines
  CPU      | 1   | Equivalent reference virtual machines = resource requirements
  Memory   | 2   | Equivalent reference virtual machines = (resource requirements) / 2
  IOPS     | 25  | Equivalent reference virtual machines = (resource requirements) / 25
  Capacity | 100 | Equivalent reference virtual machines = (resource requirements) / 100

For example, the Point-of-Sale system used in Example 2: Point-of-Sale system requires four CPUs, 16 GB of memory, 200 IOPS, and 200 GB of storage. This translates to four reference virtual machines of CPU, eight reference virtual machines of memory, eight reference virtual machines of IOPS, and two reference virtual machines of capacity.
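The Table 17 relationships can be applied mechanically to each worksheet row, taking the largest per-resource result and rounding up. The following is a minimal sketch of that conversion using the Point-of-Sale figures from the example above; the function and dictionary names are illustrative only.

# Convert an application's resource requirements into equivalent reference
# virtual machines using the Table 17 relationships, then take the largest value.
import math

REFERENCE_VM = {"cpu": 1, "memory_gb": 2, "iops": 25, "capacity_gb": 100}

def equivalent_reference_vms(cpu, memory_gb, iops, capacity_gb):
    per_resource = {
        "cpu": math.ceil(cpu / REFERENCE_VM["cpu"]),
        "memory_gb": math.ceil(memory_gb / REFERENCE_VM["memory_gb"]),
        "iops": math.ceil(iops / REFERENCE_VM["iops"]),
        "capacity_gb": math.ceil(capacity_gb / REFERENCE_VM["capacity_gb"]),
    }
    # The worksheet row uses the highest per-resource value.
    return per_resource, max(per_resource.values())

# Point-of-Sale example from the text: 4 vCPUs, 16 GB, 200 IOPS, 200 GB.
per_resource, total = equivalent_reference_vms(4, 16, 200, 200)
print(per_resource)   # {'cpu': 4, 'memory_gb': 8, 'iops': 8, 'capacity_gb': 2}
print(total)          # 8 equivalent reference virtual machines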

Table 18 demonstrates how that machine fits into the worksheet row.

Table 18. Example worksheet row

  Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent reference virtual machines
  Example application - Resource requirements | 4 | 16 | 200 | 200 | N/A
  Example application - Equivalent reference virtual machines | 4 | 8 | 8 | 2 | 8

Use the highest value in the row to fill in the Equivalent reference virtual machines column. As shown in Figure 44, the example requires eight reference virtual machines.

Figure 44. Required resource from the reference virtual machine pool

Implementation example - stage 1

A customer wants to build a virtual infrastructure to support one custom-built application, one Point-of-Sale system, and one web server. The customer computes the sum of the Equivalent reference virtual machines column on the right side of the worksheet, as listed in Table 19, to calculate the total number of reference virtual machines required. The table shows the result of the calculation, with each value rounded up to the nearest whole number.

Table 19. Example applications - stage 1

  (Each application row records its resource requirements and its equivalent reference virtual machines, in the format of Table 18.)
  - Example application #1: Custom-built application
  - Example application #2: Point-of-Sale system
  - Example application #3: Web server
  Total equivalent reference virtual machines: 14

This example requires 14 reference virtual machines. According to the sizing guidelines, one storage pool with 10 SAS drives and two or more flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with a VNX5400, which supports up to 300 reference virtual machines.

Figure 45 shows that 12 reference virtual machines are available after implementing a VNX5400 with 10 SAS drives and two flash drives.

Figure 45. Aggregate resource requirements - stage 1

Figure 46 shows the pool configuration in this example.

Figure 46. Pool configuration - stage 1

Implementation example - stage 2

The customer must now add a decision-support database to this virtual infrastructure. Using the same strategy, calculate the number of equivalent reference virtual machines required, as shown in Table 20.

Table 20. Example applications - stage 2

  (Each application row records its resource requirements and its equivalent reference virtual machines, in the format of Table 18.)
  - Example application #1: Custom-built application
  - Example application #2: Point-of-Sale system
  - Example application #3: Web server
  - Example application #4: Decision-support database
  Total equivalent reference virtual machines: 66

This example requires 66 reference virtual machines. According to the sizing guidelines, one storage pool with 30 SAS drives and two or more flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with a VNX5400, which supports up to 300 reference virtual machines.

Figure 47 shows that 12 reference virtual machines are available after implementing a VNX5400 with 30 SAS drives and two flash drives.

Figure 47. Aggregate resource requirements - stage 2

Figure 48 shows the pool configuration in this example.

Figure 48. Pool configuration - stage 2

Implementation example - stage 3

With business growth, the customer must implement a much larger virtual environment to support one custom-built application, one Point-of-Sale system, two web servers, and three Decision Support System databases. Using the same strategy, calculate the number of equivalent reference virtual machines required, as shown in Table 21.

Table 21. Example applications - stage 3

  (Each application row records its resource requirements and its equivalent reference virtual machines, in the format of Table 18.)
  - Example application #1: Custom-built application
  - Example application #2: Point-of-Sale system
  - Example application #3: Web server #1
  - Example application #4: Decision Support System database #1
  - Example application #5: Web server #2
  - Example application #6: Decision Support System database #2
  - Example application #7: Decision Support System database #3
  Total equivalent reference virtual machines: 174

This example requires 174 reference virtual machines. According to the sizing guidelines, one storage pool with 70 SAS drives and four or more flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with a VNX5400, which supports up to 300 reference virtual machines.

Figure 49 shows that 16 reference virtual machines are available after implementing a VNX5400 with 70 SAS drives and four flash drives.

Figure 49. Aggregate resource requirements - stage 3

Figure 50 shows the pool configuration in this example.

Figure 50. Pool configuration - stage 3

Fine-tuning hardware resources

Usually, the process described in Determining equivalent reference virtual machines determines the recommended hardware size for servers and storage. However, in some cases you may want to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this guide, but you can perform additional customization at this point.

Storage resources

Some applications need their data separated from other workloads. The storage layouts in the VSPEX architectures place all of the virtual machines in a single resource pool. To achieve workload separation, purchase additional disk drives for the application workload and add them to a dedicated pool.

With the method outlined in Determining equivalent reference virtual machines, it is easy to build a virtual infrastructure scaling from 13 reference virtual machines to 1,000 reference virtual machines with the building blocks described in VSPEX storage building blocks, while keeping in mind the recommended limits of each storage array documented in VSPEX private cloud validated maximums.

Server resources

For some workloads, the relationship between server needs and storage needs does not match what is outlined by the reference virtual machine. In this scenario, size the server and storage layers separately.

Figure 51. Customizing server resources

To do this, first total the resource requirements for the server components, as shown in Table 22. In the Server component totals line at the bottom of the worksheet, add up the server resource requirements from the applications in the table.

Note: When customizing resources in this way, confirm that storage sizing is still appropriate. The Storage component totals line at the bottom of Table 22 describes the required amount of storage.

Table 22. Server resource component totals

  (Each application row records its server resource requirements (CPU, memory), its storage resource requirements (IOPS, capacity), and its equivalent reference virtual machines, in the format of Table 18.)
  - Example application #1: Custom-built application
  - Example application #2: Point-of-Sale system
  - Example application #3: Web server #1
  - Example application #4: Decision Support System database #1
  - Example application #5: Web server #2
  - Example application #6: Decision Support System database #2
  - Example application #7: Decision Support System database #3
  Total equivalent reference virtual machines: 174
  Server customization - server component totals: 39 virtual CPUs, 227 GB of memory
  Storage customization - storage component totals (IOPS and capacity)
  Storage component equivalent reference virtual machines
  Total equivalent reference virtual machines (storage): 157

Note: To get the server and storage component totals, sum the Resource requirements rows for each application, not the Equivalent reference virtual machines rows.

In this example, the target architecture requires 39 virtual CPUs and 227 GB of memory. With the stated assumptions of four virtual CPUs per physical processor core, and no memory over-provisioning, this translates to 10 physical processor cores and 227 GB of memory. With these numbers, the solution can be implemented effectively with fewer server and storage resources.

Note: Keep high-availability requirements in mind when customizing the resource pool hardware.

Appendix C provides a blank server resource component totals worksheet.
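The server-side conversion above is again simple arithmetic: divide the total virtual CPUs by the assumed consolidation ratio and round up, and carry the memory total through unchanged when memory is not over-provisioned. The following is a minimal sketch using the figures from this example; the function name is illustrative only.

# Convert server component totals into physical host requirements, as described above.
import math

def physical_server_requirements(total_vcpus, total_memory_gb, vcpus_per_core=4):
    """Return (physical cores, memory in GB) assuming no memory over-provisioning."""
    cores = math.ceil(total_vcpus / vcpus_per_core)
    return cores, total_memory_gb

# Example from the text: 39 virtual CPUs and 227 GB of memory at a 4:1 vCPU ratio.
print(physical_server_requirements(39, 227))   # (10, 227)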

EMC VSPEX Sizing Tool

To simplify the sizing of this solution, EMC has produced the VSPEX Sizing Tool. This tool uses the same sizing process described in the section above and also incorporates sizing for other VSPEX solutions. The VSPEX Sizing Tool enables you to input the resource requirements from the customer's answers in the qualification worksheet. After you complete the inputs, the tool generates a series of recommendations that allow you to validate your sizing assumptions while providing platform configuration information that meets those requirements. This tool can be accessed at the following location: EMC VSPEX Sizing Tool.


Chapter 5  VSPEX Configuration Guidelines

This chapter presents the following topics:
- Overview
- Pre-deployment tasks
- Customer configuration data
- Prepare, connect and configure Brocade network switches
- Configure Brocade VDX 6740 switch (File Storage)
- Configure Brocade 6510 switch storage network (Block Storage)
- Prepare and configure storage array
- Install and configure Hyper-V hosts
- Install and configure SQL Server database
- System Center Virtual Machine Manager server deployment
- Summary

Overview

The deployment process consists of the main stages listed in Table 23. After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure. The table also includes references to the sections that contain the relevant procedures.

Table 23. Deployment process overview

  Stage 1  - Verify prerequisites (reference: Pre-deployment tasks)
  Stage 2  - Obtain the deployment tools (reference: Deployment prerequisites)
  Stage 3  - Gather customer configuration data (reference: Customer configuration data)
  Stage 4  - Rack and cable the components (reference: vendor documentation)
  Stage 5  - Install and configure the Brocade storage network switches and connect to the customer network (reference: Prepare, connect and configure Brocade network switches)
  Stage 6  - Install and configure the VNX (reference: Prepare and configure storage array)
  Stage 7  - Configure virtual machine storage (reference: Prepare and configure storage array)
  Stage 8  - Install and configure the servers (reference: Install and configure Hyper-V hosts)
  Stage 9  - Set up SQL Server, used by SCVMM (reference: Install and configure SQL Server database)
  Stage 10 - Install and configure SCVMM (reference: System Center Virtual Machine Manager server deployment)

Pre-deployment tasks

Overview

The pre-deployment tasks include procedures that are not directly related to environment installation and configuration, but whose results are needed at installation time. Examples include collecting hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required onsite.

Table 24. Tasks for pre-deployment

  - Gather documents: Gather the related documents listed in Appendix D. These documents provide details on setup procedures and deployment best practices for the various components of the solution. (Reference: EMC documentation, Brocade documentation)
  - Gather tools: Gather the required and optional tools for the deployment. Use Table 25 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process. (Reference: Table 25)
  - Gather data: Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer configuration data sheet for reference during the deployment process. (Reference: Appendix B)

Deployment prerequisites

Table 25 lists the hardware, software, and licenses required to configure the solution. For additional information, refer to Table 25 and Table 9.

Table 25. Deployment prerequisites checklist

Hardware:
- Physical servers to host the virtual servers: sufficient physical server capacity to host 250, 500, or 1,000 virtual servers (Reference: Appendix C)
- Windows Server 2012 servers to host the virtual infrastructure servers
  Note: The existing infrastructure may already meet this requirement.
- Brocade 6510 Fibre Channel switches (block-based storage network connectivity) or Brocade VDX 6740 Ethernet Fabric switches (file-based storage network connectivity)
- EMC VNX5400 (300 virtual machines), VNX5600 (600 virtual machines), or VNX5800 (1,000 virtual machines): multiprotocol storage array with the required disk layout

Software:
- SCVMM 2012 installation media
- Brocade VDX AMPP and integration with VMware vCenter Server
- Microsoft Windows Server 2012 installation media
- Microsoft Windows Server 2008 R2 installation media (optional, for the virtual machine guest OS)
- Microsoft SQL Server 2012 or newer installation media
  Note: The existing infrastructure may already meet this requirement.

Licenses:
- Microsoft Windows Server 2008 R2 Standard (or higher) license keys (optional)
- Microsoft Windows Server 2012 Datacenter Edition license keys
  Note: An existing Microsoft Key Management Server (KMS) may already meet this requirement.
- Microsoft SQL Server license key
  Note: The existing infrastructure may already meet this requirement.
- SCVMM 2012 license keys
- 40 GbE port upgrade licenses for Brocade VDX 6740 switches with NOS v4.1.0

Customer configuration data

Assemble information such as IP addresses and hostnames during the planning process to reduce the time spent onsite. Appendix B provides a table for maintaining a record of relevant customer information; add, record, or modify information as needed during the deployment process. Additionally, complete the VNX File and Unified Worksheet, available on EMC Online Support, to record the most comprehensive array-specific information.

Prepare, connect and configure Brocade network switches

Overview

This section lists the Brocade network infrastructure required to support the VSPEX architectures. Table 26 provides a summary of the tasks for switch and network configuration, with references for further information.

Table 26. Tasks for switch and network configuration

  - Complete network cabling: Connect the switch interconnect ports, the VNX ports, the Hyper-V host ports, and the Windows server ports. (References: Configure Brocade 6510 switch storage network (Block Storage); Configure Brocade VDX 6740 switch (File Storage))
  - Configure the Brocade network: Configure the Brocade 6510 switch storage network (block storage) or the Brocade VDX 6740 switch (file storage), and configure storage array and Windows host infrastructure networking as specified in Prepare and configure storage array and Install and configure Hyper-V hosts. (References: Configure Brocade VDX 6740 switch (File Storage); Prepare and configure storage array; Install and configure Hyper-V hosts)

Prepare Brocade storage network infrastructure

The Brocade network switches deployed with the VSPEX solution provide redundant links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This Brocade storage network configuration provides both scalable bandwidth and redundancy. The Brocade network solution can be deployed alongside the other components of a newly deployed VSPEX solution, or as an upgrade for 1 GbE to 10 GbE transitions of an existing VSPEX compute and storage solution.

This network solution has validated levels of performance and high availability. Figure 52 and Figure 53 show sample redundant network connectivity with the Brocade storage infrastructure for this solution; the diagrams illustrate the use of redundant switches and links to ensure that there are no single points of failure.

File-Based Storage Network

Figure 52 shows a sample redundant Brocade VDX Ethernet Fabric for the 10 GbE network between compute and storage. The diagram illustrates the use of redundant switches with 10 GbE/40 GbE links to ensure that no single points of failure exist in the CIFS-based storage network connectivity.

Note: The Brocade VDX Ethernet Fabric switches also support converged networks for customers needing FCoE or iSCSI block-based storage connectivity.

Figure 52. Sample Brocade network architecture - File storage

Note: Ensure there are adequate switch ports between the file-based attached storage array and the Hyper-V hosts, as well as ports to the existing customer infrastructure. Virtual machine networking and Hyper-V management are customer-facing networks; separate them if required.

Note: Use existing infrastructure that meets the requirements for the customer infrastructure and management networks. In this deployment, VLANs separate the traffic: VLAN 30 for live migration traffic, VLAN 20 for storage traffic, and VLAN 10 for management. Refer to Step 5 in the deployment section for details.

Block-Based Storage Network

Figure 53 shows a sample redundant Brocade 6510 Fibre Channel (FC) fabric switch infrastructure for the block-based storage network between the compute layer and the storage array. The diagram illustrates the use of redundant switches and links to ensure that no single points of failure exist in the network connectivity.

Brocade 6510 FC switches with Gen 5 Fibre Channel technology simplify the storage network infrastructure through innovative technologies and support the VSPEX highly virtualized topology design. The Brocade 6510 FC switches are validated for the FC protocol option.

Note: The Brocade VDX Ethernet Fabric switches also support converged networks for customers needing FCoE or iSCSI block-based storage connectivity.

Figure 53. Sample Brocade network architecture - Block storage

Note: Ensure there are adequate storage switch ports between the block-based attached storage array and the Hyper-V hosts.

Note: Use existing infrastructure that meets the requirements for the customer infrastructure and management networks.

Complete network cabling

Connect the Brocade switch ports to all servers, storage arrays, inter-switch links (ISLs), and uplinks, and ensure the following:
- All solution servers, storage arrays, switch interconnects, and switch uplinks have redundant connections.
- Servers and switch uplinks plug into separate switching infrastructures.
- The uplinks are connected to the existing customer network.

Note: The Brocade switch installation guides provide instructions on racking, cabling, and powering; refer to them for details.

Note: At this point, the new equipment is being connected to the existing customer network. Be careful that unforeseen interactions do not cause service issues on the customer network.

Configure Brocade VDX 6740 switch (File Storage)

This section describes the Brocade VDX switch configuration procedure with Hyper-V compute and VNX storage. The Brocade VDX switches provide infrastructure connectivity between the Hyper-V servers, the existing customer network, and the CIFS-attached VNX storage, as described in the following sections. In this deployment, it is assumed that the new equipment is being connected to the existing customer network and, potentially, to existing compute servers with either 1 GbE or 10 GbE attached NICs.

VSPEX with the Brocade VDX 6740 Ethernet Fabric (24/64-port) switches for 10 GbE-attached Hyper-V servers is enabled with VCS Fabric technology, which has the following salient features:
- It is an Ethernet fabric switched network. The Ethernet fabric uses an emerging standard called Transparent Interconnection of Lots of Links (TRILL) as the underlying technology.
- All switches automatically know about each other and about all connected physical and logical devices.
- All paths in the fabric are available. Traffic is always distributed across equal-cost paths, so traffic from the source to the destination can travel across multiple equal-cost paths.
- Traffic always travels across the shortest path in the fabric. If a single link fails, traffic is automatically switched to other available paths; if one of the links in Active Path #1 goes down, traffic is seamlessly switched across Active Path #2.
- Spanning Tree Protocol (STP) is not necessary because the Ethernet fabric itself is loop free and appears to connected servers, devices, and the rest of the network as a single logical switch.
- The fabric is self-forming. When two Brocade VCS Fabric mode-enabled switches are connected, the fabric is automatically created and the switches discover the common fabric configuration.
- The fabric is masterless. No single switch stores configuration information or controls fabric operations. Any switch can fail or be removed without causing disruptive fabric downtime or delayed traffic.
- The fabric is aware of all members, devices, and virtual machines (VMs). If a VM moves from one Brocade VCS Fabric port to another port in the same fabric, the port profile is automatically moved to the new port, leveraging the Brocade Automatic Migration of Port Profiles (AMPP) feature.
- All switches in an Ethernet fabric can be managed as if they were a single logical chassis. To the rest of the network, the fabric looks no different from any other Layer 2 switch (Logical Chassis feature).

VCS is enabled by default on the Brocade VDX. Brocade VDX switches are available in both port-side exhaust and port-side intake configurations; choose the appropriate airflow model for your deployment based on hot-aisle/cold-aisle considerations. For more information, refer to the Brocade VDX 6740 Hardware Reference Manual listed in Appendix D.

The following procedure deploys the Brocade VDX 6740 switches with VCS Fabric technology in the VSPEX Private Cloud solution for up to 1,000 virtual machines.

Table 27. Brocade VDX 6740 configuration steps

  Step 1:  Verify and apply Brocade VDX NOS licenses
  Step 2:  Configure the logical chassis VCS ID and RBridge IDs on the VDXs
  Step 3:  Assign switch names
  Step 4:  Brocade VCS Fabric ISL port configuration
  Step 5:  Create required VLANs
  Step 6:  Create vLAGs for Microsoft Hyper-V hosts
  Step 7:  Configure switch interfaces for the VNX
  Step 8:  Connect the VCS Fabric to an existing infrastructure through uplinks
  Step 9:  Configure MTU and jumbo frames
  Step 10: Enable flow control support
  Step 11: Auto QoS for NAS

Refer to Appendix D for related documents.

Step 1: Verify and apply Brocade VDX NOS licenses

Before starting the switch configuration, make sure you have the required licenses for the VDX 6740 switches available. With the NOS release used in this solution, the Brocade VCS Fabric license is built into the code, so you only require port upgrade licenses, depending on the port density needed in the setup. This deployment assumes that 48 x 10 GbE ports are activated on the base Brocade VDX 6740s, and one 40 GbE port upgrade license is applied on each switch, which enables two 40 GbE ports per switch. These ports are used for the inter-switch links (ISLs) between the two VDXs.

A. Display the switch license ID

The switch license ID identifies the switch for which a license is valid. You need the switch license ID when you purchase and activate a license key. To display the switch license ID, enter the show license id command in privileged EXEC mode, as shown:

sw0# show license id
Rbridge-Id    License ID
===================================================
1             10:00:00:27:F8:BB:7E:85

B. Apply the licenses to the switches

Once you have the 40 GbE port upgrade license strings generated from the Brocade licensing portal for both switches, apply them to the switches, as shown:

sw0# license add licstr "*B Iflp5mb:NvYn,E4pLcOsVJfqrXDeeu9nMwqM2bQhqtf96TiqVORiWThxA:qsmQ8L3fIB0tJbTsSuRW,Sfl60zkfbeI2IQiEjHjZFgVb1HLbwLWd3l2JXaDtvcR8DxwiC:wfU#"
2014/03/07-03:40:27, [SEC-3051], 552,, INFO, sw0, The license key *B Iflp5mb:NvYn,E4pLcOsVJfqrXDeeu9nMwqM2bQhqtf96TiqVORiWThxA:qsmQ8L3fIB0tJbTsSuRW,Sfl60zkfbeI2IQiEjHjZFgVb1HLbwLWd3l2JXaDtvcR8DxwiC:wfU# is Added.
License Added [*B Iflp5mb:NvYn,E4pLcOsVJfqrXDeeu9nMwqM2bQhqtf96TiqVORiWThxA:qsmQ8L3fIB0tJbTsSuRW,Sfl60zkfbeI2IQiEjHjZFgVb1HLbwLWd3l2JXaDtvcR8DxwiC:wfU#]
For license change to take effect, it may be necessary to enable ports...

As noted in the switch output, you may have to enable ports for the licenses to take effect. You can do that by issuing no shutdown on the interfaces you are using. The 40 GbE ports can also be used in breakout mode as four 10 GbE ports; for configuration details, refer to the Network OS Administration Guide, v4.1.0.

C. Display the licenses on the switches

You can display installed licenses with the show license command. The following example shows a Brocade VDX 6740 licensed for the full port density of 48 ports plus two 40 GbE QSFP ports. This configuration does not include FCoE features.

sw0# show license
rbridge-id: 1
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        10G Port Upgrade license
        Feature name: port_10g_upgrade
        License is valid
        Capacity: 24
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        40G Port Upgrade license
        Feature name: port_40g_upgrade
        License is valid
        Capacity: 2

Refer to the Network OS Software Licensing Guide, v4.1.0, listed in Appendix D for additional licensing information.

Step 2: Configure the logical chassis VCS ID and RBridge IDs on the VDXs

When VCS is deployed as a logical chassis, it can be managed through a single virtual IP address, and configuration changes are automatically saved across all switches in the fabric. The RBridge ID is a unique identifier for an RBridge (a physical switch in a VCS fabric), and the VCS ID is a unique identifier for a VCS fabric.

The factory default VCS ID is 1, and all switches in a VCS fabric must have the same VCS ID, so it does not need to be changed in a single-cluster implementation. The RBridge ID is also set to 1 by default on each VDX switch, but when more than one switch is added to the fabric, each switch needs its own unique RBridge ID, as in this implementation. In this deployment, VCS ID 1 is assigned on all VDXs, and RBridge IDs are assigned according to the deployment topology in Figure 58. The following example shows the configuration for a logical chassis with RB21 as the principal node; other RBridges can be configured in the same manner. For the supported value ranges of the RBridge ID and VCS ID, refer to the Network OS Administration Guide, v4.1.0.

In privileged EXEC mode, enter the vcs command with options to set the VCS ID and RBridge ID and to enable logical chassis mode for the switch. After you execute the command shown below, you are asked whether you want to apply the default configuration and reboot the switch; answer yes.

sw0# vcs vcsid 1 rbridge-id 21 logical-chassis enable
This operation will perform a VCS cluster mode transition for this local node with new parameter settings.
This will change the configuration to default and reboot the switch. Do you want to continue? [y/n]: y

Note: To create a logical chassis cluster, perform the above steps on every VDX in the VCS fabric, changing only the RBridge ID each time, based on the topology in Figure 58. Any global and local configuration changes made from this point on are distributed automatically to all nodes in the logical chassis cluster.

You can enter configuration mode for any VDX in the cluster from the cluster principal node by using its RBridge ID. Note that once in logical chassis mode, you can make configuration changes for the cluster RBridges only from the principal node; any attempt to make changes from a secondary node returns an error, as shown:

sw0(config)# int vlan 20
%Error: This operation is not supported from a secondary node
sw0(config)#

Optionally, the cluster can also be managed by a virtual IP address (in logical chassis and fabric cluster modes only), which is tied to the principal node in the cluster. The management interface of the principal switch can be accessed through this virtual IP address, as shown:

sw0(config)# vcs virtual ip address

In this way, the entire fabric can be managed with one virtual IP address.

Note: For details on logical chassis mode and the virtual cluster IP, refer to the Network OS Administration Guide, v4.1.0.

Step 3: Assign switch names

Every switch is assigned the default host name sw0; change it for easy recognition and management using the switch-attributes command, as shown:

sw0# configure terminal
sw0(config)# switch-attributes 21 host-name BRCD6740-RB21

After you have enabled logical chassis mode and assigned switch names on each node in the cluster, run the show vcs command to determine which node has been assigned as the cluster principal node. This node can be used to configure the entire VCS fabric. The arrow (>) denotes the cluster principal node; the asterisk (*) denotes the currently logged-in node.

BRCD6740-RB21# show vcs
Config Mode      : Distributed
VCS Mode         : Logical Chassis
VCS ID           : 1
VCS GUID         : 34f262b4-e64f-4a18-a986-a767d389803e
Total Number of Nodes : 2
Rbridge-Id    WWN                        Management IP    VCS Status    Fabric Status    HostName
>21           10:00:00:27:F8:BB:94:18*                    Online        Online           BRCD6740-RB21
 22           10:00:00:27:F8:BB:7E:85                     Online        Online           BRCD6740-RB22
<truncated output>

Step 4: Brocade VCS Fabric ISL port configuration

The VDX platform comes preconfigured with a default port configuration that enables ISL and trunking for easy, automatic VCS fabric formation. However, for edge-port devices, the port configuration requires editing to accommodate specific connections. The interface format is rbridge-id/slot/port, for example 21/0/49.

The default port configuration for the 40 GbE ports can be seen with the show running-config command, as shown:

BRCD6740-RB21# show running-config interface FortyGigabitEthernet 21/0/49
interface FortyGigabitEthernet 21/0/49
 fabric isl enable
 fabric trunk enable
 no shutdown

There are two types of ports in a VCS fabric: ISL ports and edge ports. ISL ports connect VCS fabric switches, whereas edge ports connect to end devices or to switches and routers that are not in VCS Fabric mode.

Figure 54. Port types

Configuring fabric ISLs and trunks

Brocade ISLs connect VDX switches in VCS mode. All ISL ports connected to the same neighboring VDX switch attempt to form a trunk. Trunk formation requires that all ports between the switches are set to the same speed and are part of the same port group. For redundancy, the recommendation is to have at least two trunks between two Brocade VDX switches, but the actual number of trunks required may vary depending on the customer's I/O bandwidth and subscription-ratio requirements. The maximum number of ports allowed per trunk group is sixteen on the Brocade VDX 6740 and eight on earlier Brocade VDX models such as the VDX 6720.

In a deployment with Brocade VDX 6740s, either the 10 GbE or the 40 GbE ports can be used for ISLs. In this example configuration, we are using the 40 GbE ports to form two ISLs, which guarantees frame-based load balancing across the ISLs.

The port groups for the VDX 6740 and VDX 6740T platforms are shown in Figure 55 and Figure 56. On both platforms, trunk groups 1 through 4 cover the 1/10 GbE ports (SFP ports on the VDX 6740, Base-T ports on the VDX 6740T), and trunk groups 3A and 4A cover the 40 GbE QSFP ports.

Figure 55. Port groups of the VDX 6740

Figure 56. Port groups of the VDX 6740T and Brocade VDX 6740T-1G

Note: On the Brocade VDX 6740, ports in groups 3 and 3A, as well as in port groups 4 and 4A, can be trunked together only when the 40 GbE QSFP ports are configured in breakout mode. On the Brocade VDX 6740T/6740T-1G models, this trunking is not allowed. For more information about Brocade trunking, refer to the Brocade Network OS Administration Guide, v4.1.0.

You can use the fabric isl enable, fabric trunk enable, no fabric isl enable, and no fabric trunk enable commands to toggle ports that are part of a trunked ISL, if needed. The following example shows the running configuration of an ISL port on RB21:

BRCD6740-RB21# show running-config interface FortyGigabitEthernet 21/0/49
interface FortyGigabitEthernet 21/0/49
 fabric isl enable
 fabric trunk enable
 no shutdown
!

You can also verify the ISL configuration using the show fabric isl and show fabric trunk commands on RB21, as shown:

BRCD6740-RB21# show fabric isl
Rbridge-id: 21    #ISLs: 2

 Src       Src              Nbr       Nbr
 Index     Interface        Index     Interface        Nbr-WWN                   BW    Trunk   Nbr-Name
 0         Fo 21/0/49       0         Fo 22/0/49       10:00:00:27:F8:BB:7E:85   40G   Yes     "BRCD6740-RB22"
 2         Fo 21/0/51       2         Fo 22/0/51       10:00:00:27:F8:BB:7E:85   40G   Yes     "BRCD6740-RB22"

BRCD6740-RB21# show fabric trunk
Rbridge-id: 21

 Trunk     Src       Source           Nbr       Nbr
 Group     Index     Interface        Index     Interface        Nbr-WWN
           0         Fo 21/0/49       0         Fo 22/0/49       10:00:00:27:F8:BB:7E:85
           2         Fo 21/0/51       2         Fo 22/0/51       10:00:00:27:F8:BB:7E:85

Step 5: Create required VLANs

It is a best practice to separate network traffic into VLANs. The steps in this section provide guidelines for creating the required VLANs. This example deployment uses VLANs 10, 20, and 30, as shown:

  VLAN name         VLAN ID   Description
  Storage VLAN      20        CIFS (storage) traffic
  Cluster VLAN      30        Cluster live-migration traffic
  Management VLAN   10        Management traffic

To create a VLAN interface, perform the following steps from privileged EXEC mode.

1. Enter the configure terminal command to access global configuration mode.

BRCD6740-RB21# configure terminal
Entering configuration mode terminal
BRCD6740-RB21(config)#

2. Enter the interface vlan command to assign the VLAN interface number.

BRCD6740-RB21(config)# interface Vlan 20
BRCD6740-RB21(config-Vlan-20)#

3. Create the other required VLANs as described in the table above. You can view the defined VLANs in the VCS cluster using the show vlan brief command.

Note: Once in logical chassis mode, you can create VLANs and make other configuration changes for the cluster RBridges only from the principal node; any attempt to make changes from a secondary node returns an error. In this deployment, RBridge 21 is the principal node, and all configuration changes must be made on this node.

Figure 57. Creating VLANs

Step 6: Create vLAGs for Microsoft Hyper-V hosts

1. Configure vLAG port-channel interfaces 44 and 55 on Brocade VDX 6740-RB21 (the principal node) for Hyper-V hosts A and B.

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Port-channel 44
BRCD6740-RB21(config-Port-channel-44)# mtu 9216
BRCD6740-RB21(config-Port-channel-44)# speed 10000
BRCD6740-RB21(config-Port-channel-44)# description Host_A-vLAG-44
BRCD6740-RB21(config-Port-channel-44)# switchport
BRCD6740-RB21(config-Port-channel-44)# switchport mode trunk
BRCD6740-RB21(config-Port-channel-44)# switchport trunk allowed vlan 20
BRCD6740-RB21(config-Port-channel-44)# no shutdown

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Port-channel 55
BRCD6740-RB21(config-Port-channel-55)# mtu 9216
BRCD6740-RB21(config-Port-channel-55)# speed 10000
BRCD6740-RB21(config-Port-channel-55)# description Host_B-vLAG-55
BRCD6740-RB21(config-Port-channel-55)# switchport
BRCD6740-RB21(config-Port-channel-55)# switchport mode trunk
BRCD6740-RB21(config-Port-channel-55)# switchport trunk allowed vlan 20
BRCD6740-RB21(config-Port-channel-55)# no shutdown

2. Configure interfaces TenGigabitEthernet 21/0/10 and 21/0/11 on Brocade VDX 6740-RB21 as members of port-channels 44 and 55.

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface TenGigabitEthernet 21/0/10
BRCD6740-RB21(conf-if-te-21/0/10)# description Host_A-vLAG-44
BRCD6740-RB21(conf-if-te-21/0/10)# channel-group 44 mode active type standard
BRCD6740-RB21(conf-if-te-21/0/10)# lacp timeout long
BRCD6740-RB21(conf-if-te-21/0/10)# no shutdown

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface TenGigabitEthernet 21/0/11
BRCD6740-RB21(conf-if-te-21/0/11)# description Host_B-vLAG-55
BRCD6740-RB21(conf-if-te-21/0/11)# channel-group 55 mode active type standard
BRCD6740-RB21(conf-if-te-21/0/11)# lacp timeout long
BRCD6740-RB21(conf-if-te-21/0/11)# no shutdown

3. Repeat steps 1 and 2 to configure interfaces TenGigabitEthernet 22/0/10 and 22/0/11 on Brocade VDX 6740-RB22 through the RB21 CLI, which is the principal node.

4. Validate vLAG port-channel interfaces 44 and 55 in the Brocade VCS cluster with RB21 and RB22 to Hyper-V hosts A and B.

BRCD6740-RB21# show interface Port-channel 44
Port-channel 44 is up, line protocol is up
Hardware is AGGREGATE, address is c.adee
    Current address is c.adee
Description: Host_A-vLAG-44
Interface index (ifindex) is
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual     : Mbit
Allowed Member Speed : Mbit
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 4d19h49m
Queueing strategy: fifo
Receive Statistics:
<truncated output>

BRCD6740-RB21# show interface Port-channel 55
Port-channel 55 is up, line protocol is up
Hardware is AGGREGATE, address is c.adec
    Current address is c.adec
Description: Host_B-vLAG-55
Interface index (ifindex) is
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual     : Mbit
Allowed Member Speed : Mbit
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 4d19h49m
Queueing strategy: fifo
Receive Statistics:
<truncated output>

5. Validate interfaces TenGigabitEthernet 21/0/10 and 21/0/11 on Brocade VDX 6740-RB21, as shown.

BRCD6740-RB21# show interface TenGigabitEthernet 21/0/10
TenGigabitEthernet 21/0/10 is up, line protocol is up
Hardware is Ethernet, address is c.adb6
    Current address is c.adb6
Pluggable media present
Description: Host_A-vLAG-44
Interface index (ifindex) is
MTU 9216 bytes
LineSpeed Actual     : Mbit, Duplex: Full
LineSpeed Configured : Auto, Duplex: Full
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 5d23h36m
Queueing strategy: fifo
Receive Statistics:
... <truncated output>

BRCD6740-RB21# show interface TenGigabitEthernet 21/0/11
TenGigabitEthernet 21/0/11 is up, line protocol is up
Hardware is Ethernet, address is c.adb8
    Current address is c.adb8
Pluggable media present
Description: Host_B-vLAG-55
Interface index (ifindex) is
MTU 9216 bytes
LineSpeed Actual     : Mbit, Duplex: Full
LineSpeed Configured : Auto, Duplex: Full
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 5d23h36m
Queueing strategy: fifo
Receive Statistics:
... <truncated output>

6. Repeat the validation in step 5 for interfaces TenGigabitEthernet 22/0/10 and 22/0/11 on Brocade VDX 6740-RB22 as well.

Step 7: Configure switch interfaces for the VNX

EMC VNX5400/5600/5800 storage arrays support LACP-based dynamic LAGs, so to provide link- and node-level redundancy, dynamic LACP-based vLAGs can be configured on the Brocade VDX 6740 switches.

Note: In some port-channel configurations, depending on the storage ports (1 GbE or 10 GbE), the speed of the port-channel might need to be set manually on the VDX 6740, as shown:

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Port-channel 33
BRCD6740-RB21(config-Port-channel-33)# speed
[1000,10000,40000] (1000):
BRCD6740-RB21(config-Port-channel-33)#

To configure dynamic vLAGs on each Brocade VDX 6740 switch interface, use the following steps:

1. Configure port-channel interface 33 on BRCD6740-RB21 for the VNX (enabled for storage VLAN 20).

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Port-channel 33
BRCD6740-RB21(config-Port-channel-33)# mtu 9216
BRCD6740-RB21(config-Port-channel-33)# description VNX-vLAG-33
BRCD6740-RB21(config-Port-channel-33)# switchport
BRCD6740-RB21(config-Port-channel-33)# switchport mode trunk
BRCD6740-RB21(config-Port-channel-33)# switchport trunk allowed vlan 20

2. Configure interfaces TenGigabitEthernet 21/0/23 and 21/0/24 on BRCD6740-RB21 for port-channel 33 and LACP.

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface TenGigabitEthernet 21/0/23
BRCD6740-RB21(conf-if-te-21/0/23)# description VNX-SPA-fxg-1-0
BRCD6740-RB21(conf-if-te-21/0/23)# channel-group 33 mode active type standard
BRCD6740-RB21(conf-if-te-21/0/23)# lacp timeout long
BRCD6740-RB21(conf-if-te-21/0/23)# no shutdown

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface TenGigabitEthernet 21/0/24
BRCD6740-RB21(conf-if-te-21/0/24)# description VNX-SPA-fxg-1-1
BRCD6740-RB21(conf-if-te-21/0/24)# channel-group 33 mode active type standard
BRCD6740-RB21(conf-if-te-21/0/24)# lacp timeout long
BRCD6740-RB21(conf-if-te-21/0/24)# no shutdown

3. Repeat steps 1 and 2 from the logical chassis principal node (RBridge 21) to configure port-channel 33 on RBridge 22's interfaces 22/0/23 and 22/0/24 (also enabled for storage VLAN 20), which go to SP B on the VNX.

4. Validate the vLAG port-channel interface on BRCD6740-RB21 and BRCD6740-RB22 to the VNX.

BRCD6740-RB21# show interface Port-channel 33
Port-channel 33 is up, line protocol is up
Hardware is AGGREGATE, address is c.adee
    Current address is c.adee
Description: VNX-vLAG-33
Interface index (ifindex) is
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual     : Mbit
Allowed Member Speed : Mbit

5. Validate interfaces TenGigabitEthernet 21/0/23-24 on BRCD6740-RB21 and interfaces TenGigabitEthernet 22/0/23-24 on BRCD6740-RB22.

BRCD6740-RB21# show interface TenGigabitEthernet 21/0/23
TenGigabitEthernet 21/0/23 is up, line protocol is up (connected)
Hardware is Ethernet, address is c.adb6
    Current address is c.adu6
Description: VNX-SPA-fxg-1-0
Interface index (ifindex) is
MTU 9216 bytes
LineSpeed : Mbit, Duplex: Full
Flowcontrol rx: on, tx: on
... <truncated output>

BRCD6740-RB21# show interface TenGigabitEthernet 21/0/24
TenGigabitEthernet 21/0/24 is up, line protocol is up (connected)
Hardware is Ethernet, address is c.adb7
    Current address is c.ade6
Description: VNX-SPA-fxg-1-1
Interface index (ifindex) is
MTU 9216 bytes
LineSpeed : Mbit, Duplex: Full
Flowcontrol rx: on, tx: on
... <truncated output>

BRCD6740-RB21# show interface TenGigabitEthernet 22/0/23
TenGigabitEthernet 22/0/23 is up, line protocol is up (connected)
Hardware is Ethernet, address is c.adb8
    Current address is c.adp6
Description: VNX-SPB-fxg-2-0
Interface index (ifindex) is
MTU 9216 bytes
LineSpeed : Mbit, Duplex: Full
Flowcontrol rx: on, tx: on
... <truncated output>

157 VSPEX Configuration Guidelines BRCD6740-RB21# show interface TenGigabitEthernet 22/0/24 TenGigabitEthernet 22/0/24 is up, line protocol is up (connected) Hardware is Ethernet, address is c.adb9 Current address is c.adh6 Description: VNX-SPB-fxg-2-1 Interface index (ifindex) is MTU 9216 bytes LineSpeed : Mbit, Duplex: Full Flowcontrol rx: on, tx: on... <truncated output> Step 8: Connecting the VCS Fabric to an existing Infrastructure through Uplinks Brocade VDX 6740 switches can be uplinked to be accessible from customer s existing network infrastructure. On VDX 6740 platforms, the user can use 40GbE or 10GbE ports for this. In this example deployment we are using 10GbE ports.the uplink should be configured to match whether or not the customer s network is using tagged or untagged traffic. The following example can be leveraged as a guideline to connect VCS fabric to existing infrastructure network: Figure 58. Example VCS/VDX network topology with Infrastructure connectivity Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 157

Creating virtual link aggregation groups (vLAGs) to the Infrastructure Network

Create vLAGs from each RBridge to the infrastructure switches that in turn provide access to resources at the core network. This example illustrates the configuration for Port-channel 4 on RB21 and RB22.

1. Create a port channel to the infrastructure switches that interface to the core. This example uses Port-channel 4 on Grp1, RB21.

BRCD6740-RB21(config)# interface port-channel 4
BRCD6740-RB21(config-Port-channel-4)# switchport
BRCD6740-RB21(config-Port-channel-4)# switchport mode trunk
BRCD6740-RB21(config-Port-channel-4)# switchport trunk allowed vlan all
BRCD6740-RB21(config-Port-channel-4)# no shutdown

2. Use the channel-group command to configure interfaces as members of Port-channel 4 to the infrastructure switches that interface to the core.

BRCD6740-RB21(config)# in te 21/0/5
BRCD6740-RB21(conf-if-te-21/0/5)# channel-group 4 mode active type standard
BRCD6740-RB21(conf-if-te-21/0/5)# in te 21/0/6
BRCD6740-RB21(conf-if-te-21/0/6)# channel-group 4 mode active type standard

3. Repeat steps 1 and 2 from the Logical Chassis principal node (RBridge 21) to configure Port-channel 4 on RBridge 22's interfaces 22/0/5 and 22/0/6.

4. Use the do show port-chan command to confirm that the vLAG comes up and is configured correctly.

Note: The LAG must be configured on the MLX MCT as well before the vLAG can become operational.

BRCD6740-RB21(config-Port-channel-4)# do show port-chan 4
LACP Aggregator: Po 4 (vLAG)
Aggregator type: Standard
Ignore-split is enabled
Member rbridges:
  rbridge-id: 21 (2)
  rbridge-id: 22 (2)
Admin Key: Oper Key 0004
Partner System ID - 0x0001,01-80-c
Partner Oper Key
Member ports on rbridge-id 21:
  Link: Te 21/0/5 (0x F) sync: 1 *
  Link: Te 21/0/6 (0x ) sync: 1
<truncated output>
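The example above allows all VLANs on the uplink vLAG (switchport trunk allowed vlan all). If the customer network expects only specific tagged VLANs on the uplink, a minimal sketch restricting the trunk instead; the VLAN IDs shown are placeholders and should match the customer's existing network:

BRCD6740-RB21(config)# interface Port-channel 4
BRCD6740-RB21(config-Port-channel-4)# switchport trunk allowed vlan add 10
BRCD6740-RB21(config-Port-channel-4)# switchport trunk allowed vlan add 20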

Step 9: Configure MTU and Jumbo Frames

Brocade VDX Series switches support the transport of jumbo frames. This solution recommends an MTU setting of 9216 (jumbo frames) for efficient NAS storage and migration traffic. Jumbo frames are enabled by default on the Brocade ISL trunks. However, to accommodate end-to-end jumbo frame support on the network for the edge systems, this feature can be enabled under the vLAG interface. Note that for end-to-end flow control, jumbo frames must also be enabled on the host servers and the storage with a matching MTU size.

Configuring MTU

Note: This must be performed on all RBridges where a given port-channel interface is located. In this example, interface Port-channel 44 is on RBridge 21 and RBridge 22, so the configuration is applied from both RBridge 21 and RBridge 22.

Example to enable jumbo frame support on the applicable VDX interfaces that require it:

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Port-channel 44
BRCD6740-RB21(config-Port-channel-44)# mtu 9216

Step 10: Enable Flow Control Support

Ethernet flow control is used to prevent dropped frames by slowing traffic at the source end of a link. When a port on a switch or host is not ready to receive more traffic from the source, perhaps due to congestion, it sends pause frames to the source to pause the traffic flow. When the congestion is cleared, the port stops requesting the source to pause the traffic flow, and traffic resumes without any frame drop. It is recommended to enable flow control on the vLAG interfaces toward the VNX on the VDX 6740s, as shown.

Enable QoS flow control for both tx and rx on the vLAG interfaces on RB21 and RB22 going to the VNX:

BRCD6740-RB21# conf t
BRCD6740-RB21(config)# interface Port-channel 33
BRCD6740-RB21(config-Port-channel-33)# qos flowcontrol tx on rx on

Step 11: Auto QoS for NAS

The Auto QoS feature introduced in NOS v4.1.0 automatically classifies traffic based on either a source or a destination IPv4 address. Once the traffic is identified, it is assigned to a separate priority queue. This allows a minimum bandwidth guarantee to be provided to the queue so that the identified traffic is less affected by network traffic congestion than other traffic.

Note: As this command was created primarily to benefit Network Attached Storage devices, the commands used in the following sections use the term NAS. However, there is no strict requirement that these nodes be actual NAS devices, as Auto QoS will prioritize the traffic for any set of specified IP addresses.

There are four steps to enabling and configuring Auto QoS for NAS:
1. Enable Auto QoS.
2. Set the Auto QoS CoS value.
3. Set the Auto QoS DSCP value.
4. Specify the NAS server IP addresses.

For detailed instructions on setting up this feature, refer to the Network OS Administration Guide.

Configure Brocade 6510 switch storage network (block storage)

Listed below is the procedure required to deploy the Brocade 6510 Fibre Channel (FC) switches in the VSPEX Private Cloud solution with Microsoft Hyper-V for up to 1,000 virtual machines for block storage. The Brocade 6510 FC switches provide infrastructure connectivity between the Hyper-V servers and the attached VNX storage of the VSPEX solution. At the point of deployment, compute nodes are connected to the FC storage network with 4, 8, or 16 Gb/s FC HBAs.

The 6510 FC switches:
- Provide flexibility, simplicity, and enterprise-class functionality in a 48-port switch for virtualized data centers and private cloud architectures
- Enable fast, easy, and cost-effective scaling from 24 to 48 ports using Ports on Demand (PoD) capabilities
- Simplify deployment with the Brocade EZSwitchSetup wizard
- Accelerate deployment and troubleshooting time with Dynamic Fabric Provisioning (DFP), critical monitoring, and advanced diagnostic features
- Maximize availability with redundant, hot-pluggable components and non-disruptive software upgrades
- Simplify server connectivity and SAN scalability by offering dual functionality as either a full-fabric SAN switch or an NPIV-enabled Brocade Access Gateway

In addition, it is important to consider the airflow direction of the switches. Brocade 6510 FC switches are available in both port-side exhaust and port-side intake configurations. Choose the appropriate airflow option for your hot-aisle/cold-aisle design. For more information, refer to the Brocade 6510 Hardware Reference Manual listed in Appendix D.

All Brocade Fibre Channel switches have the factory defaults listed in Table 28.

Table 28. Brocade switch default settings
Factory default MGMT IP:
Factory default subnet:
Factory default gateway:
Factory default admin/user password: password
Factory default domain ID: 1
Brocade switch management: CLI, Web Tools, Connectrix Manager

Listed below is the procedure required to deploy the Brocade 6510 FC switches in this VSPEX Private Cloud solution.

Table 29. Brocade 6510 FC switch configuration steps
Step 1: Initial switch configuration
Step 2: Fibre Channel switch licensing
Step 3: Zoning configuration
Step 4: Switch management and monitoring
Refer to Appendix D for related documents.

Step 1: Initial Switch Configuration

Configure HyperTerminal
1. Connect the serial cable to the serial port on the switch and to an RS-232 serial port on the workstation.
2. Open a terminal emulator application (such as HyperTerminal on a PC) and configure the application as shown in Table 30.

Table 30. Serial console settings
Bits per second: 9600
Data bits: 8
Parity: None
Stop bits: 1
Flow control: None

162 VSPEX Configuration Guidelines Configure IP Address for Management Interface Switch IP address You can configure the Brocade 6510 with a static IP address, or you can use a DHCP (Dynamic Host Configuration Protocol) server to set the IP address of the switch. DHCP is enabled by default. The Brocade 6510 supports both IPv4 and IPv6. Using DHCP to set the IP address When using DHCP, the Brocade 6510 obtains its IP address, subnet mask, and default gateway address from the DHCP server. The DHCP client can only connect to a DHCP server that is on the same subnet as the switch. If your DHCP server is not on the same subnet as the Brocade 6510, use a static IP address. Setting a static IP address 1. Log into the switch using the default password, which is password. 2. Use the ipaddrset command to set the Ethernet IP address. If you are going to use an IPv4 IP address, enter the IP address in dotted decimal notation as prompted. As you enter a value and press Enter for a line in the following example, the next line appears. For instance, the Ethernet IP Address appears first. When you enter a new IP address and press Enter or simply press Enter accept the existing value, the Ethernet Subnetmask line appears. In addition to the Ethernet IP address itself, you can set the Ethernet subnet mask, the Gateway IP address, and whether to obtain the IP address via Dynamic Host Control Protocol (DHCP) or not. SW6510:admin> ipaddrset Ethernet IP Address [ ]: Ethernet Subnetmask [ ]: Gateway IP Address [ ]: DHCP [Off]: off If you are going to use an IPv6 address, enter the network information in semicolon-separated notation as a standalone command. SW6510:admin> ipaddrset -ipv6 --add 1080::8:800:200C:417A/64 IP address is being changed Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

163 VSPEX Configuration Guidelines Configure Domain ID and Fabric Parameters RCD-FC-6510:FID128:admin> switchdisable BRCD-FC-6510:FID128:admin> configure Configure... Fabric parameters (yes, y, no, n): [no] y Domain: (1..239) [1] 10 WWN Based persistent PID (yes, y, no, n): [no] Allow XISL Use (yes, y, no, n): [no] R_A_TOV: ( ) [10000] E_D_TOV: ( ) [2000] WAN_TOV: ( ) [0] MAX_HOPS: (7..19) [7] Data field size: ( ) [2112] Sequence Level Switching: (0..1) [0] Disable Device Probing: (0..1) [0] Suppress Class F Traffic: (0..1) [0] Per-frame Route Priority: (0..1) [0] Long Distance Fabric: (0..1) [0] BB credit: (1..27) [16] Disable FID Check (yes, y, no, n): [no] Insistent Domain ID Mode (yes, y, no, n): [no] yes Disable Default PortName (yes, y, no, n): [no] Edge Hold Time(0 = Low(80ms),1 = Medium(220ms),2 = High(500ms): [220ms]): (0..2) [1] Virtual Channel parameters (yes, y, no, n): [no] F-Port login parameters (yes, y, no, n): [no] Zoning Operation parameters (yes, y, no, n): [no] RSCN Transmission Mode (yes, y, no, n): [no] Arbitrated Loop parameters (yes, y, no, n): [no] System services (yes, y, no, n): [no] Portlog events enable (yes, y, no, n): [no] ssl attributes (yes, y, no, n): [no] rpcd attributes (yes, y, no, n): [no] webtools attributes (yes, y, no, n): [no] Note: The domain ID will be changed. The port level zoning may be affected. Since Insistent Domain ID Mode is enabled, ensure that switches in fabric do not have duplicate domain IDs configured, otherwise this may cause switch to segment, if Insistent domain ID is not obtained when fabric reconfigures. BRCD-FC-6510:FID128:admin> switchenable Set Switch Name SW6510:FID128:admin> switchname BRCD-FC-6510 Committing configuration... Done. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 163

164 VSPEX Configuration Guidelines Verify Domain ID and Switch Name BRCD-FC-6510:FID128:admin> switchshow more switchname: BRCD-FC-6510 switchtype: switchstate: Online switchmode: Native switchrole: Principal switchdomain: 10 switchid: fffc0a switchwwn: 10:00:00:27:f8:61:80:8a zoning: OFF switchbeacon: OFF FC Router: OFF Allow XISL Use: OFF LS Attributes: [FID: 128, Base Switch: No, Default Switch: Yes, Address Mode 0] Date and Time Setting The Brocade 6510 maintains the current date and time inside a batterybacked real-time clock (RTC) circuit. Date and time are used for logging events. Switch operation does not depend on the date and time; a Brocade 6510 with an incorrect date and time value still functions properly. However, because the date and time are used for logging, error detection, and troubleshooting, you should set them correctly. Time Zone, Date and Clock Server can be configured on all Brocade switches. Time Zone You can set the time zone for the switch by name. You can also set country, city or time zone parameters. BRCD-FC-6510:FID128:admin> tstimezone --interactive Please identify a location so that time zone rules can be set correctly. Please select a continent or ocean. 1) Africa 2) Americas 3) Antarctica 4) Arctic Ocean 5) Asia 6) Atlantic Ocean 7) Australia 8) Europe 9) Indian Ocean 10) Pacific Ocean 11) none - I want to specify the time zone using the POSIX TZ format. Enter number or control-d to quit?2 Please select a country. 1) Anguilla 27) Honduras 2) Antigua & Barbuda 28) Jamaica 3) Argentina 29) Martinique 4) Aruba 30) Mexico 164 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

165 VSPEX Configuration Guidelines 5) Bahamas 31) Montserrat 6) Barbados 32) Netherlands Antilles 7) Belize 33) Nicaragua 8) Bolivia 34) Panama 9) Brazil 35) Paraguay 10) Canada 36) Peru 11) Cayman Islands 37) Puerto Rico 12) Chile 38) St Barthelemy 13) Colombia 39) St Kitts & Nevis 14) Costa Rica 40) St Lucia 15) Cuba 41) St Martin (French part) 16) Dominica 42) St Pierre & Miquelon 17) Dominican Republic 43) St Vincent 18) Ecuador 44) Suriname 19) El Salvador 45) Trinidad & Tobago 20) French Guiana 46) Turks & Caicos Is 21) Greenland 47) United States 22) Grenada 48) Uruguay 23) Guadeloupe 49) Venezuela 24) Guatemala 50) Virgin Islands (UK) 25) Guyana 51) Virgin Islands (US) 26) Haiti Enter number or control-d to quit?47 Please select one of the following time zone regions. 1) Eastern Time 2) Eastern Time - Michigan - most locations 3) Eastern Time - Kentucky - Louisville area 4) Eastern Time - Kentucky - Wayne County 5) Eastern Time - Indiana - most locations 6) Eastern Time - Indiana - Daviess, Dubois, Knox & Martin Counties 7) Eastern Time - Indiana - Starke County 8) Eastern Time - Indiana - Pulaski County 9) Eastern Time - Indiana - Crawford County 10) Eastern Time - Indiana - Switzerland County 11) Central Time 12) Central Time - Indiana - Perry County 13) Central Time - Indiana - Pike County 14) Central Time - Michigan - Dickinson, Gogebic, Iron & Menominee Counties 15) Central Time - North Dakota - Oliver County 16) Central Time - North Dakota - Morton County (except Mandan area) 17) Mountain Time 18) Mountain Time - south Idaho & east Oregon 19) Mountain Time - Navajo 20) Mountain Standard Time - Arizona 21) Pacific Time 22) Alaska Time 23) Alaska Time - Alaska panhandle 24) Alaska Time - Alaska panhandle neck 25) Alaska Time - west Alaska 26) Aleutian Islands 27) Hawaii Enter number or control-d to quit?21 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 165

166 VSPEX Configuration Guidelines The following information has been given: United States Pacific Time Therefore TZ='America/Los_Angeles' will be used. Local time is now: Mon Aug 12 15:04:43 PDT Universal Time is now: Mon Aug 12 22:04:43 UTC Is the above information OK? 1) Yes 2) No Enter number or control-d to quit?1 System Time Zone change will take effect at next reboot Setting the date 1. Log into the switch using the default password, which is password. 2. Enter the date command, using the following syntax (the double quotation marks are required): date "mmddhhmmyy" The values are: mm is the month; valid values are 01 through 12. dd is the date; valid values are 01 through 31. HH is the hour; valid values are 00 through 23. MM is minutes; valid values are 00 through 59. yy is the year; valid values are 00 through 99 (values greater than 69 are interpreted as 1970 through 1999, and values less than 70 are interpreted as ). switch:admin> date Fri Sep 29 17:01:48 UTC 2007 switch:admin> date " " Thu Sep 27 12:30:00 UTC 2007 switch:admin> Synchronizing local time using NTP Perform the following steps to synchronize the local time using NTP. 1. Log into the switch using the default password, which is password. 2. Enter the tsclockserver command: switch:admin> tsclockserver "<ntp1;ntp2>" Ntp1 is the IP address or DNS name of the first NTP server, which the switch must be able to access. The value ntp2 is the name of the second NTP server and is optional. The entire operand <ntp1;ntp2> is optional; by default, this value is LOCL, which uses the local clock of the principal or primary switch as the clock server. switch:admin> tsclockserver LOCL 166 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup
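As an alternative to the interactive prompts above, the time zone can be set directly by name and the NTP server configured in a single command each. A minimal sketch, assuming a reachable NTP server at 10.254.140.21 (a placeholder address); the time zone change still takes effect only at the next reboot:

BRCD-FC-6510:FID128:admin> tstimezone America/Los_Angeles
BRCD-FC-6510:FID128:admin> tsclockserver "10.254.140.21"
BRCD-FC-6510:FID128:admin> tsclockserver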

167 VSPEX Configuration Guidelines Verify Switch Component Status BRCD-FC-6510:FID128:admin> switchstatusshow Switch Health Report 08/14/ :19:56 PM Switch Name: BRCD-FC-6510 IP address: SwitchState: HEALTHY Duration: 218:52 Power supplies monitor HEALTHY Temperatures monitor HEALTHY Fans monitor HEALTHY Flash monitor HEALTHY Marginal ports monitor HEALTHY Faulty ports monitor HEALTHY Missing SFPs monitor HEALTHY Error ports monitor HEALTHY Fabric Watch is not licensed Detailed port information is not included BRCD-FC-6510:FID128:admin> Report time: Step 2: FC Switch Licensing Verify or Install Licenses Brocade GEN5 Fibre Channel switches come with preinstalled basic licenses required for FC operation. The Brocade 6510 provides 48 ports in a single (1U) height switch that enables the creation of very dense fabrics in a relatively small space. The Brocade 6510 offers Ports on Demand (POD) licensing as well. Base models of the switch contain 24 ports, and up to two additional 12-port POD licenses can be purchased. 1. licenseshow (Record License Info) if applicable 2. If POD license needs to be installed on the switch you would require a Transaction Key (From License Purchase Paper Pack) and Switch WWN (from wwn sn or switchshow command output) 3. licenseadd key can be used to add the license to the switch. Obtaining New License Keys To obtain POD license keys, contact [email protected] Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 167
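A minimal sketch of the licensing workflow described above; the key string shown is a placeholder for the key generated from the transaction paperwork and switch license ID:

BRCD-FC-6510:FID128:admin> licenseshow
BRCD-FC-6510:FID128:admin> licenseidshow
BRCD-FC-6510:FID128:admin> licenseadd "AAAAbbbbCCCCddddEEEE"
BRCD-FC-6510:FID128:admin> licenseshow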

Step 3: FC Zoning Configuration

Zone objects
A zone object is any device in a zone, such as:
- Physical port number or port index on the switch
- Node World Wide Name (N-WWN)
- Port World Wide Name (P-WWN)

Zone schemes
You can establish a zone by identifying zone objects using one or more of the following zoning schemes:
- Domain, Index: all members are specified by a Domain ID and port number (or domain, index pair) or by aliases.
- World Wide Name (WWN): all members are specified only by WWN, or by aliases of the WWN; either the node or the port WWN can be used.
- Mixed zoning: a zone containing members specified by a combination of domain, port (or domain, index) and WWN.

Configuration of zones
The following are recommendations for zoning:
- Use nsshow to list the WWNs of the host and storage (initiator and target), and record the port WWNs.
- Create an alias for each device using alicreate "alias", "WWN".
- Create the zone using zonecreate "zonename", "WWN/alias".
- Create the zone configuration using cfgcreate "cfgname", "zonename".
- Save the zone configuration using cfgsave.
- Enable the zone configuration using cfgenable "cfgname".

Use the following zoning steps to configure the Fabric-B switch.

BRCD-FC-6510:FID128:admin> nsshow
{
 Type Pid    COS     PortName                 NodeName                 TTL(sec)
 N    0a0500;    3;10:00:00:05:33:64:d6:35;20:00:00:05:33:64:d6:35; na
    FC4s: FCP
    PortSymb: [30] "Brocade "
    Fabric Port Name: 20:05:00:27:f8:61:80:8a
    Permanent Port Name: 10:00:00:05:33:64:d6:35
    Port Index: 5
    Share Area: No
    Device Shared in Other AD: No
    Redirect: No
    Partial: No
 N    0a0a00;    3;50:06:01:6c:36:60:07:c3;50:06:01:60:b6:60:07:c3; na
    FC4s: FCP
    PortSymb: [27] "CLARiiON::::SPB10::FC::::::"

169 VSPEX Configuration Guidelines NodeSymb: [25] "CLARiiON::::SPB::FC::::::" Fabric Port Name: 20:05:00:05:1e:02:93:75 Permanent Port Name: 50:06:01:6c:36:60:07:c3 Port Index: 10 Share Area: No Device Shared in Other AD: No Redirect: No Partial: No The Local Name Server has two entries. Create Alias SW6510:FID128:admin> alicreate error: Usage: alicreate "arg1", "arg2" SW6510:FID128:admin> alicreate "ESX_Host_HBA1_P0","10:00:00:05:33:64:d6:35" SW6510:FID128:admin> alicreate "VNX_SPA_P0","50:06:01:60:b6:60:07:c3" Create Zone SW6510:FID128:admin> zonecreate error: Usage: zonecreate "arg1", "arg2" SW6510:FID128:admin> zonecreate "ESX_Host_A","ESX_Host_HBA1_P0;VNX_SPA_P0" Create cfg and add zone to cfg SW6510:FID128:admin> cfgcreate error: Usage: cfgcreate "arg1", "arg2" SW6510:FID128:admin> cfgcreate "vspex", "ESX_Host_A" Save cfg and enable cfg SW6510:FID128:admin> cfgsave You are about to save the Defined zoning configuration. This action will only save the changes on Defined configuration. Any changes made on the Effective configuration will not take effect until it is re-enabled. Until the Effective configuration is reenabled, merging new switches into the fabric is not recommended and may cause unpredictable results with the potential of mismatched Effective Zoning configurations. Do you want to save Defined zoning configuration only? (yes, y, no, n): [no] y Updating flash... SW6510:FID128:admin> cfgenable "vspex" You are about to enable a new zoning configuration. This action will replace the old zoning configuration with the current configuration selected. If the update includes changes to one or more traffic isolation zones, the update may result in localized disruption to traffic on ports associated with the traffic isolation zone changes Do you want to enable 'vspex' configuration (yes, y, no, n): [no] y zone config "vspex" is in effect Updating flash... Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 169
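In practice, each host HBA port is zoned to ports on both VNX storage processors. A minimal sketch of adding a second zone to the existing configuration, assuming a hypothetical SP B port WWN (substitute the WWNs reported by nsshow on your fabric). Note that the cfgshow verification on the next page reflects only the single zone created above:

SW6510:FID128:admin> alicreate "VNX_SPB_P0","50:06:01:68:b6:60:07:c3"
SW6510:FID128:admin> zonecreate "ESX_Host_A_SPB","ESX_Host_HBA1_P0;VNX_SPB_P0"
SW6510:FID128:admin> cfgadd "vspex","ESX_Host_A_SPB"
SW6510:FID128:admin> cfgsave
SW6510:FID128:admin> cfgenable "vspex"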

170 VSPEX Configuration Guidelines Verify Zone Configuration SW6510:FID128:admin> cfgshow Defined configuration: cfg: vspex ESX_Host_A zone: ESX_Host_A ESX_Host_HBA1_P0; VNX_SPA_P0 alias: ESX_Host_HBA1_P0 10:00:00:05:33:64:d6:35 alias: VNX_SPA_P0 50:06:01:60:b6:60:07:c3 Effective configuration: cfg: vspex zone: ESX_Host_A 10:00:00:05:33:64:d6:35 50:06:01:60:b6:60:07:c3 SW6510:FID128:admin> cfgactvshow Effective configuration: cfg: vspex zone: ESX_Host_A 10:00:00:05:33:64:d6:35 50:06:01:60:b6:60:07:c3 Step 4: Switch Management and Monitoring The following table shows a list of commands that can be used to manage and monitor Brocade Fibre Channel switches in a production environment. Switch Management Switchshow Switch Monitoring 1. porterrshow 2. Portperfshow 3. Portshow 4. errshow 5. Errdump 6. Sfpshow 7. Fanshow 8. Psshow 9. Sensorshow 10. Firmwareshow 11. Fosconfig --show 12. Memshow 13. Portcfgshow 14. Supportsave to collect switch logs 170 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

Prepare and configure storage array

Implementation instructions and best practices may vary because of the storage network protocol selected for the solution. Each case contains the following steps:
1. Configure the VNX.
2. Provision storage to the hosts.
3. Configure FAST VP.
4. Optionally configure FAST Cache.

The sections below cover the options for each step separately, depending on whether one of the block protocols (FC, FCoE, iSCSI) or the file protocol (CIFS) is selected.
For FC, FCoE, or iSCSI, refer to VNX configuration for block protocols.
For CIFS, refer to VNX configuration for file protocols.

VNX configuration for block protocols

This section describes how to configure the VNX storage array for host access using block protocols such as FC, FCoE, or iSCSI. In this solution, the VNX provides data storage for Windows hosts.

Table 31. Tasks for VNX configuration for block protocols

Task: Prepare the VNX
Description: Physically install the VNX hardware using the procedures in the product documentation.

Task: Set up the initial VNX configuration
Description: Configure the IP addresses and other key parameters on the VNX.

Task: Provision storage for Hyper-V hosts
Description: Create the storage areas required for the solution.

References (all tasks): EMC VNX5400 Unified Installation Guide, EMC VNX5600 Unified Installation Guide, EMC VNX5800 Unified Installation Guide, Unisphere System Getting Started Guide, your vendor's switch configuration guide.

Prepare the VNX

The installation guides for the VNX5400, VNX5600, and VNX5800 provide instructions to assemble, rack, cable, and power up the VNX. There are no specific setup steps for this solution.

172 VSPEX Configuration Guidelines Set up the initial VNX configuration After the initial VNX setup, configure key information about the existing environment to enable the storage array to communicate with the other devices in the environment. Configure the following common items in accordance with your IT data center policies and existing infrastructure information: DNS NTP Storage network interfaces For data connections using FC or FCoE Connect one or more servers to the VNX storage system, either directly or through qualified FC or FCoE switches. Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions. For data connections using iscsi Connect one or more servers to the VNX storage system, either directly or through qualified IP switches. Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions. Additionally, configure the following items in accordance with your IT data center policies and existing infrastructure information: 1. Set up a storage network IP address: Logically isolate the storage network from the other networks in the solution, as described in Chapter 3. This ensures that other network traffic does not impact traffic between the hosts and the storage. 2. Enable jumbo frames on the VNX iscsi ports: Use jumbo frames for iscsi networks to permit greater network bandwidth. Apply the MTU size specified below across all the network interfaces in the environment: a. In Unisphere, select Settings > Network > Settings for Block. b. Select the appropriate iscsi network interface. c. Click Properties. d. Set the MTU size to 9,000. e. Click OK to apply the changes. The reference documents listed in Table 31 provide more information on how to configure the VNX platform. Storage configuration guidelines provide more information on the disk layout. 172 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

173 VSPEX Configuration Guidelines Provision storage for Hyper-V hosts This section describes provisioning block storage for Hyper-V hosts. To provision file storage, refer to VNX configuration for file protocols. Complete the following steps in Unisphere to configure LUNs on the VNX array to store virtual servers: 1. Create the number of storage pools required for the environment based on the sizing information in Chapter 4. This example uses the array recommended maximums described in Chapter 4. a. Log in to Unisphere. b. Select the array for this solution. c. Select Storage > Storage Configuration > Storage Pools. d. Click Pools. e. Click Create. Note: The pool does not use system drives for additional storage. Table 32. Storage allocation table for block Configuration 300 virtual machines Number of pools Number of 15K SAS drives per pool Number of Flash drives per pool Number of LUNs per pool LUN size (TB) Total x 7 TB LUNs 2 x 3 TB LUNs 600 virtual machines Total x 7 TB LUNs 2 x 6 TB LUNs 1000 virtual machines Total x 7 TB LUNs Note: Each virtual machine occupies 102 GB in this solution, with 100 GB for the OS and user space, and a 2 GB swap file. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 173

174 VSPEX Configuration Guidelines 2. Create the hot spare disks at this point. Refer to the appropriate VNX installation guide for additional information. Figure 34 depicts the target storage layout for 300 virtual machines. Figure 35 depicts the target storage layout for 600 virtual machines. Figure 36 depicts the target storage layout for 1,000 virtual machines. 3. Use the pools created in step 1 to provision thin LUNs: a. Select Storage > LUNs. b. Click Create. c. Select the pool created in step 1. Always create two thin LUNs in one physical storage pool. User Capacity depends on the specific number of virtual machines. Refer to Table 32 for more information. 4. Create a storage group, and add LUNs and Hyper-V servers: a. Select Hosts > Storage Groups. b. Click Create and input a name for the new storage group. c. Select the created storage group. d. Click LUNs. In the Available LUNs panel, select all the LUNs created in the previous steps. The Selected LUNs dialog appears. e. Configure and add the Hyper-V hosts to the storage pool. 174 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup
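The same block provisioning can also be scripted with the VNX naviseccli command line instead of Unisphere. A minimal sketch, assuming naviseccli is installed on a management host, the SP A management address is 10.0.0.50, and the pool, LUN, storage group, and host names are placeholders; adjust the capacities and counts to Table 32 and confirm the exact option syntax against the VNX Command Line Interface Reference:

naviseccli -h 10.0.0.50 storagepool -list
naviseccli -h 10.0.0.50 lun -create -type Thin -capacity 7 -sq tb -poolName "Pool0" -name "Pool0_LUN1"
naviseccli -h 10.0.0.50 lun -create -type Thin -capacity 7 -sq tb -poolName "Pool0" -name "Pool0_LUN2"
naviseccli -h 10.0.0.50 storagegroup -create -gname "HyperV_Cluster_SG"
naviseccli -h 10.0.0.50 storagegroup -addhlu -gname "HyperV_Cluster_SG" -hlu 1 -alu 1
naviseccli -h 10.0.0.50 storagegroup -connecthost -host "hyperv-node-01" -gname "HyperV_Cluster_SG" -o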

175 VSPEX Configuration Guidelines VNX configuration for file protocols This section and Table 33 describe file storage provisioning tasks for Hyper- V hosts Table 33. Tasks for storage configuration for file protocols Task Description Reference Prepare the VNX Set up the initial VNX configuration Create a network interface Create a CIFS server Create a storage pool for file Create the file systems Create the SMB file share Physically install the VNX hardware with the procedures in the product documentation. Configure the IP addresses and other key parameters on the VNX. Configure the IP address and network interface information for the CIFS server. Create the CIFS server instance to publish the storage. Create the block pool structure and LUNs to contain the file system. Establish the SMB shared file system. Attach the file system to the CIFS server to create an SMB share for Hyper-V storage. VNX5400 Unified Installation Guide VNX5600 Unified Installation Guide VNX5800 Unified Installation Guide Unisphere System Getting Started Guide Your vendor s switch configuration guide Prepare the VNX The installation guides for VNX5400, VNX5600, and VNX5800 provide instructions to assemble, rack, cable, and power up the VNX. There are no specific setup steps for this solution. Set up the initial VNX configuration After the initial VNX setup, configure key information about the existing environment to allow the storage array to communicate with the other devices in the environment. Ensure one or more servers connect to the VNX storage system, either directly or through qualified IP switches. Configure the following common items in accordance with your IT data center policies and existing infrastructure information: DNS NTP Storage network interfaces Storage network IP address Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 175

176 VSPEX Configuration Guidelines CIFS services and Active Directory Domain membership Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions. Enable jumbo frames on the VNX storage network interfaces Use Jumbo frames for storage networks to permit greater network bandwidth. Apply the MTU size specified below across all the network interfaces in the environment. Complete the following steps to enable jumbo frames: 1. In Unisphere, select Settings > Network > Settings for File. Select the appropriate network interface from the Interfaces tab. 2. Click Properties. 3. Set the MTU size to 9, Click OK to apply the changes. The reference documents listed in Table 31 provide more information on how to configure the VNX platform. The Storage configuration guidelines section provide more information on the disk layout. Create a network interface A network interface maps to a CIFS server. CIFS servers provide access to file shares over the network Complete the following steps to create a network interface: 1. Log in to the VNX. 2. In Unisphere, select Settings > Network > Settings For File. 3. On the Interfaces tab, click Create as shown in Figure 59. Figure 59. Network Settings for File dialog box 176 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

177 VSPEX Configuration Guidelines In the Create Network Interface wizard, complete the following steps: 1. Select the Data Mover which will provide access to the file share. 2. Select the device name where the network interface will reside. Note: Run the following command as nasadmin on the Control Station to ensure that the selected device has a link connected: > server_sysconfig <datamovername> -pci This command lists the link status (UP or DOWN) for all devices on the specified Data Mover. 3. Type an IP address for the interface. 4. Type a Name for the interface. 5. Type the netmask for the interface. The Broadcast Address appears automatically after you provide the IP address and netmask. 6. Set the MTU size for the interface to 9,000. Note: Ensure that all devices on the network (switches, servers, and so on) have the same MTU size. 7. If required, specify the VLAN ID. In this deployment, we are tagging storage traffic with tag 20. This setting needs to be maintained on the end-end connecticity for CIFS traffic. 8. Click OK, as shown in Figure 60. Figure 60. The Create Interface dialog box Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 177
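The new interface can also be checked from the Control Station command line. A minimal sketch, assuming the interface was created on Data Mover server_2 and the storage VLAN gateway is 192.168.20.1 (both placeholders):

$ server_ifconfig server_2 -all
$ server_ping server_2 192.168.20.1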

Create a CIFS server

A CIFS server provides access to the CIFS (SMB) file share.

Note: A CIFS server must exist before creating an SMB 3.0 file share.

1. In Unisphere, select Storage > Shared Folders > CIFS > CIFS Servers.
2. Click Create. The Create CIFS Server window appears.

From the Create CIFS Server window, complete the following steps:
3. Select the Data Mover on which to create the CIFS server.
4. Set the server type as Active Directory Domain.
5. Type a Computer Name for the server. The computer name must be unique within Active Directory. Unisphere automatically assigns the NetBIOS name to the computer name.
6. Type the Domain Name for the CIFS server to join.
7. Select Join the Domain.
8. Specify the domain credentials:
a. Type the Domain Admin User Name.
b. Type the Domain Admin Password.
9. Select Enable Local Users to allow the creation of a limited number of local user accounts on the CIFS server.
a. Set the Local Admin Password.
b. Confirm the Local Admin Password.
10. Select the network interface created earlier to allow access to the CIFS server.
11. Click OK. The newly created CIFS server appears under the CIFS Servers tab, as shown in Figure 61.

179 VSPEX Configuration Guidelines Figure 61. The Create CIFS Server dialog box Create storage pools for file Complete the following steps in Unisphere to configure LUNs on the VNX array to store virtual servers: 1. Create the number of storage pools required for the environment based on the sizing information in Chapter 4. This example uses the array recommended maximums as described in Chapter 4. a. Log in to Unisphere. b. Select the array for this solution. c. Select Storage > Storage Configuration > Storage Pools > Pools. d. Click Create. Note: The pool does not use system drives for additional storage. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 179

180 VSPEX Configuration Guidelines Table 34. Storage allocation table for file Configuration 300 virtual machines Number of pools Number of 15K SAS drives per pool Number of Flash drives per pool Number of LUNs per pool Number of FS per storage pool for file LUN size (GB) FS size (TB) Total X 800GB LUNs 20 X 400GB LUNs 4 x 7 TB FS 2 x 3 TB FS 600 virtual machines Total X 800GB LUNs 20 X 700GB LUNs 8 x 7 TB FS 2 x 6 TB FS 1000 virtual machines Total X 800GB LUNs 16 x 7 TB FS 2. Create the hot spare disks at this point. Refer to the appropriate VNX installation guide for additional information. Figure 34 depicts the target storage layout for 300 virtual machines. Figure 35 depicts the target storage layout for 600 virtual machines. Figure 36 depicts the target storage layout for 1,000 virtual machines. 3. Provision LUNs on the pool created in step 1: a. Select Storage > LUNs. b. Click Create. 180 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

181 VSPEX Configuration Guidelines c. Select the pool created in step 1. For User Capacity, refer to Table 34 on the details of the size of LUNs. The Number of LUNs to create depends on the disk number in the pool. Refer to Table 34 for details on the number of LUNs needed in each pool. Note: For FAST VP implementations, assign no more than 95% of the available storage pool capacity for file. 4. Connect the LUNs to the Data Mover for file access: a. Click Hosts > Storage Groups. b. Select filestorage. c. Click Connect LUNs. d. In the Available LUNs panel, select all the LUNs created in the previous steps. The Selected LUNs panel appears. Use the new storage pool to create file systems. Create file systems To create an SMB file share, complete the following tasks: 1. Create a storage pool and a network interface. 2. Create a file system. 3. Export an SMB file share from the file system. If no storage pools or interfaces exist, follow the steps in Create a network interface and Create storage pools for file to create a storage pool and a network interface. Create two thin file systems from each storage pool for file. Refer to Table 34 for details on the number of file systems. Complete the following steps to create VNX file systems for SMB file shares: 1. Log in to Unisphere. 2. Select Storage > Storage Configuration > File Systems. 3. Click Create. The File System Creation wizard appears. 4. Specify the file system details: a. Select Storage Pool. b. Type a File System Name. c. Select a Storage Pool to contain the file system. d. Select the Storage Capacity of the file system. Refer to Table 34 for detailed storage capacity. e. Select Thin Enabled. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 181

182 VSPEX Configuration Guidelines f. Select the Data Mover (R/W) to own the file system. Note: The selected Data Mover must have an interface defined on it. g. Click OK as shown in Figure 62. Figure 62. The Create File System dialog box The new file system appears on the File Systems tab. 1. Click Mounts. 2. Select the created file system and then click Properties. 3. Select Set Advanced Options. 4. Select Direct Writes Enabled. 5. Select CIFS Sync Writes Enabled. 6. Click OK as shown in Figure Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

183 VSPEX Configuration Guidelines Figure 63. The File System Properties dialog box Create the SMB file share After completing creating the file system, the SMB file share can be created. To create the share, complete the following steps: 1. From the VNX dashboard, hover over the Storage tab. 2. Select Shared folders > CIFS. 3. From the shares page click Create. The Create CIFS Share window opens. 4. Select the Data Mover on which to create the share (the same Data Mover that owns the CIFS server). 5. Specify a name for the share. 6. Specify the file system for the share. Leave the default path as is. 7. Select the CIFS server to provide access to the share as shown in Figure Optionally specify a user limit, or any comments about the share. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 183

184 VSPEX Configuration Guidelines Figure 64. The Create File Share dialog box FAST VP configuration This procedure applies to both file and block storage implementations. Complete the following steps to configure FAST VP. Assign two flash drives in each block-based storage pool: 1. In Unisphere, select the storage pool to configure for FAST VP. 2. Click Properties for a specific storage pool to open the Storage Pool Properties dialog. Figure 65 shows the tiering information for a specific FAST pool. Note: The Tier Status area shows FAST relocation information specific to the selected pool. 3. Select Scheduled from the Auto-Tiering list box. The Tier Details panel shows the exact data distribution. 184 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

185 VSPEX Configuration Guidelines Figure 65. The Storage Pool Properties dialog box You can also connect to the array-wide Relocation Schedule by clicking the button in the top right corner to access the Manage Auto-Tiering window as shown in Figure 66. Figure 66. Manage Auto-Tiering dialog box Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 185

186 VSPEX Configuration Guidelines From this status dialog, users can control the Data Relocation Rate. The default rate is Medium to minimize the impact on host I/O. Note: FAST is a completely automated tool that provides the ability to create a relocation schedule. Schedule the relocations during off-hours to minimize any potential performance impact. FAST Cache configuration You can configure FAST Cache as an option. Note: Use the flash drives listed in Sizing guidelines for FAST VP configurations as described in FAST VP configuration. FAST Cache is an optional component of this solution that provides improved performance as outlined in Chapter 3. To configure FAST Cache on the storage pools for this solution, complete the following steps: 1. Configure flash drives as FAST Cache: a. Click Properties from the Unisphere dashboard or Manage Cache in the left-hand pane of the Unisphere interface to access the Storage System Properties window as shown in Figure 67. b. Click the FAST Cache tab to view FAST Cache information. Figure 67. The Storage System Properties dialog box 186 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

187 VSPEX Configuration Guidelines c. Click Create to open the Create FAST Cache window as shown in Figure 68. The RAID Type field displays RAID 1 when the FAST Cache is created. This window also provides the option to select the drives for the FAST Cache. The bottom of the screen shows the flash drives used to create the FAST Cache. Select Manual to choose the drives manually. Refer to Storage configuration guidelines to determine the number of flash drives required in this solution. Note: If a sufficient number of flash drives are not available, VNX displays an error message and does not create the FAST Cache. Figure 68. The Create FAST Cache dialog box 2. Enable FAST Cache in the storage pool. If a LUN is created in a storage pool, you can only configure FAST Cache for that LUN at the storage pool level. All the LUNs created in the storage pool have FAST Cache enabled or disabled. Configure the LUNS from the advanced tab on the Create Storage Pool window shown in Figure 69. After installation, FAST Cache is enabled by default at storage pool creation. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 187

188 VSPEX Configuration Guidelines Figure 69. Advanced tab in the Create Storage Pool dialog If the storage pool already exists, use the Advanced tab of the Storage Pool Properties window to configure FAST Cache as shown in Figure 70. Figure 70. Advanced tab in the Storage Pool Properties dialog Note: The VNX FAST Cache feature on does not cause an instant performance improvement. The system must collect data about access patterns, and promote frequently used information into the cache. This process can take several hours. Array performance gradually improves during this time. 188 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

189 VSPEX Configuration Guidelines Install and configure Hyper-V hosts Overview This section provides the requirements for the installation and configuration of the Windows hosts and infrastructure servers to support the architecture. Table 35 describes the required tasks. Table 35. Tasks for server installation Task Description Reference Install Windows hosts Install Hyper-V and configure Failover Clustering Configure windows hosts networking Install PowerPath on Windows Servers Plan virtual machine memory allocations Install Windows Server 2012 on the physical servers for the solution. 1. Add the Hyper-V Server role. 2. Add the Failover Clustering feature. 3. Create and configure the Hyper-V cluster. Configure Windows hosts networking, including NIC teaming and the Virtual Switch network. Install and configure PowerPath to manage multipathing for VNX LUNs Ensure that Windows Hyper-V guest memory management features are configured properly for the environment PowerPath and PowerPath/VE for Windows Installation and Administration Guide. Install Windows hosts Follow Microsoft best practices to install Windows Server 2012 and the Hyper-V role on the physical servers for this solution. Install Hyper-V and configure failover clustering To install and configure Failover Clustering, complete the following steps: 1. Install and patch Windows Server 2012 on each Windows host. 2. Configure the Hyper-V role, and the Failover Clustering feature. Install the HBA drivers, or configure iscsi initiators on each Windows host. For details, refer to EMC Host Connectivity Guide for Windows. Table 35 provides the steps and references to accomplish the configuration tasks. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 189
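For hosts managed with PowerShell, the role and feature installation and the cluster creation described above can be performed as in the following sketch (host names, cluster name, and the cluster IP address are placeholders):

Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart
# After all hosts restart, validate and create the cluster from any one node
Test-Cluster -Node hyperv-01, hyperv-02
New-Cluster -Name HVCLUSTER01 -Node hyperv-01, hyperv-02 -StaticAddress 10.0.0.100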

190 VSPEX Configuration Guidelines Configure Windows host networking To ensure performance and availability, the following network interface cards (NICs) are required: At least one NIC for virtual machine networking and management (can be separated by network or VLAN if necessary). At least two 10 GbE NICs for the storage network. At least one NIC for Live Migration. Note: Enable jumbo frames for NICS that transfer iscsi or SMB data. Set the MTU to 9,000. Consult the NIC configuration guide for instruction. Install PowerPath on Windows servers Install PowerPath on Windows servers to improve and enhance the performance and capabilities of the VNX storage array. For the detailed installation steps, refer to PowerPath and PowerPath/VE for Windows Installation and Administration Guide. Plan virtual machine memory allocations Server capacity serves two purposes in the solution: Supports the new virtualized server infrastructure. Supports the required infrastructure services such as authentication or authorization, DNS, and databases. For information on minimum infrastructure service hosting requirements, refer to Appendix A. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required. Memory configuration Take care to properly size and configure the server memory for this solution. This section provides an overview of memory management in a Hyper-V environment. Memory virtualization techniques enable the hypervisor to abstract physical host resources such as Dynamic Memory to provide resource isolation across multiple virtual machines, and avoid resource exhaustion. With advanced processors (such as Intel processors with EPT support), this abstraction takes place within the CPU. Otherwise, this process occurs within the hypervisor itself. There are multiple techniques available within the hypervisor to maximize the use of system resources such as memory. Do not substantially over commit resources as this can lead to poor system performance. The exact implications of memory over commitment in a real-world environment are difficult to predict. Performance degradation due to resource-exhaustion increases with the amount of memory over-committed. 190 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup
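For the host networking requirements above, a minimal PowerShell sketch that creates a NIC team for virtual machine traffic, binds a Hyper-V virtual switch to it, and enables jumbo frames on the storage NICs. The adapter and team names are placeholders, and the jumbo frame registry keyword and value depend on the NIC driver; consult the NIC configuration guide:

New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
New-VMSwitch -Name "VM-Switch" -NetAdapterName "VMTeam" -AllowManagementOS $true
Set-NetAdapterAdvancedProperty -Name "Storage-NIC1","Storage-NIC2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014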

191 VSPEX Configuration Guidelines Install and configure SQL Server database Overview Most customers use a management tool to provision and manage their server virtualization solution even though it is not required. The management tool requires a database backend. SCVMM uses SQL Server 2012 as the database platform. This section describes how to set up and configure a SQL Server database for the solution. Table 36 lists the detailed setup tasks. Table 36. Tasks for SQL Server database setup Task Description Reference Create a virtual machine for Microsoft SQL Server Install Microsoft Windows on the virtual machine Install Microsoft SQL Server Configure a SQL Server for SCVMM Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements. Install Microsoft Windows Server 2012 Datacenter Edition on the virtual machine. Install Microsoft SQL Server on the designated virtual machine. Configure a remote SQL Server instance or SCVMM Create a virtual machine for Microsoft SQL Server Create the virtual machine with enough computing resources on one of the Windows servers designated for infrastructure virtual machines. Use the storage designated for the shared infrastructure. Note: The customer environment may already contain a SQL Server for this role. In that case, refer to the section Configure a SQL Server for SCVMM. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 191

192 VSPEX Configuration Guidelines Install Microsoft Windows on the virtual machine The SQL Server service must run on Microsoft Windows. Install the required Windows version on the virtual machine, and select the appropriate network, time, and authentication settings. Install SQL Server Use the SQL Server installation media to install SQL Server on the virtual machine. The Microsoft TechNet website provides information on how to install SQL Server. One of the installable components in the SQL Server installer is the SQL Server Management Studio (SSMS). Install this component on the SQL server directly, and on an administrator console. To change the default path for storing data files, perform the following steps: 1. Right-click the server object in SSMS and select Database Properties. The Properties window appears. 2. Change the default data and log directories for new databases created on the server. Configure a SQL Server for SCVMM To use SCVMM in this solution, configure the SQL Server for remote connections. The requirements and steps to configure it correctly are available in the article Configuring a Remote Instance of SQL Server for VMM. Refer to the list of documents in Appendix D for more information. Note: Do not use the Microsoft SQL Server Express based database option for this solution. Create individual login accounts for each service that accesses a database on the SQL Server. 192 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup
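The service login accounts mentioned above can be created with sqlcmd once SQL Server is installed. A minimal sketch, assuming a hypothetical SCVMM service account DOMAIN\svc_scvmm and SQL Server host SQLVM01; the dbcreator role shown is only an example, and the exact permissions required by VMM are listed in the Configuring a Remote Instance of SQL Server for VMM article:

sqlcmd -S SQLVM01 -Q "CREATE LOGIN [DOMAIN\svc_scvmm] FROM WINDOWS;"
sqlcmd -S SQLVM01 -Q "ALTER SERVER ROLE [dbcreator] ADD MEMBER [DOMAIN\svc_scvmm];"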

193 VSPEX Configuration Guidelines System Center Virtual Machine Manager server deployment Overview This section provides information on how to configure SCVMM. Complete the tasks in Table 37. Table 37. Tasks for SCVMM configuration Task Description Reference Create the SCVMM host virtual machine Install the SCVMM guest OS Install the SCVMM server Install the SCVMM Management Console Install the SCVMM agent locally on the hosts Create a virtual machine for the SCVMM Server. Install Windows Server 2012 Datacenter Edition on the SCVMM host virtual machine. Install an SCVMM server. Install an SCVMM Management Console. Install an SCVMM agent locally on the hosts SCVMM manages. Create a virtual machines Install the guest operating system How to Install a VMM Management Server How to Install the VMM Console Installing a VMM Agent Locally on a Host Add a Hyper-V cluster into SCVMM Add the Hyper-V cluster into SCVMM. Adding and Managing Hyper-V Hosts and Host Clusters in VMM Add file share storage in SCVMM (file variant only) Add SMB file share storage to a Hyper-V cluster in SCVMM. How to Assign SMB 3.0 File Shares to Hyper-V Hosts and Clusters in VMM Create a virtual machine in SCVMM Create a virtual machine in SCVMM. Creating and Deploying Virtual Machines Perform partition alignment, and assign File Allocation Unite Size Using Diskpart.exe to perform partition alignment, assign drive letters, and assign file allocation unit size of virtual machine s disk drive Disk Partition Alignment Best Practices for SQL Server Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 193

194 VSPEX Configuration Guidelines Task Description Reference Create a template virtual machine Deploy virtual machines from the template virtual machine Create a template virtual machine from the existing virtual machine. Create the hardware profile and Guest Operating System profile at this time. Deploy the virtual machines from the template virtual machine. How to Create a Virtual Machine Template How to Create and Deploy a Virtual Machine from a Template Create a SCVMM host virtual machine To deploy the Microsoft Hyper-V server as a virtual machine on a Hyper-V server that is installed as part of this solution, connect directly to an infrastructure Hyper-V server by using the Hyper-V manager. Create a virtual machine on the Microsoft Hyper-V server with the customer guest OS configuration by using an infrastructure server datastore presented from the storage array. The memory and processor requirements for the SCVMM server depend on the number of Hyper-V hosts and virtual machines SCVMM must manage. Install the SCVMM guest OS Install the guest OS on the SCVMM host virtual machine. Install the requested Windows Server version on the virtual machine and select appropriate network, time, and authentication settings. Install the SCVMM server Set up the VMM database and the default library server, and then install the SCVMM server. Refer to the Microsoft TechNet Library topic Installing the VMM Server to install the SCVMM server. Install the SCVMM Management Console SCVMM Management Console is a client tool to manage the SCVMM server. Install the VMM Management Console on the same computer as the VMM server. Refer to the Microsoft TechNet Library topic Installing the VMM Administrator Console to install the SCVMM Management Console. 194 Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup

195 VSPEX Configuration Guidelines Install the SCVMM agent locally on a host If the hosts must be managed on a perimeter network, install a VMM agent locally on the host before adding it to VMM. Optionally, install a VMM agent locally on a host in a domain before adding the host to VMM. Refer to the Microsoft TechNet Library topic Installing a VMM Agent Locally on a Host to install a VMM agent locally on a host. Add a Hyper-V cluster into SCVMM Add the deployed Microsoft Hyper-V cluster to SCVMM. SCVMM manages the Hyper-V cluster. Refer to the Microsoft TechNet Library topic How to Add a Host Cluster to VMM to add the Hyper-V cluster. Add file share storage to SCVMM (file variant only) Create a virtual machine in SCVMM To add file share storage to SCVMM, complete the following steps: 1. Open the VMs and Services workspace. 2. In the VMs and Services pane, right-click the Hyper-V Cluster name. 3. Click Properties. 4. In the Properties window, click File Share Storage. 5. Click Add, and then add the file share storage to SCVMM. Create a virtual machine in SCVMM to use as a virtual machine template. Install the virtual machine, then install the software, and change the Windows and application settings. Refer to the Microsoft TechNet Library topic How to Create a Virtual Machine with a Blank Virtual Hard Disk to create a virtual machine. Perform partition alignment, and assign File Allocation Unite Size Perform disk partition alignment on virtual machines whose operation system is prior to Windows Server It is recommended to align the disk drive with an offset of 1,024 KB, and format the disk drive with a file allocation unit (cluster) size of 8 KB. Refer to the Microsoft TechNet Library topic Disk Partition Alignment Best Practices for SQL Server to perform partition alignment, assign drive letters, and assign file allocation unit size using diskpart.exe Create a template virtual machine Converting a virtual machine into a template removes the virtual machine. Backup the virtual machine, because the virtual machine may be destroyed during template creation. Create a hardware profile and a Guest Operating System profile when creating a template. Use the profiler to deploy the virtual machines. Refer to the Microsoft TechNet Library topic How to Create a Template from a Virtual Machine. Machines Enabled by Brocade Network Fabrics, EMC VNX, and EMC Next-Generation VNX and EMC Powered Backup 195
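A minimal diskpart sketch of the alignment and formatting recommendation above; the disk number, drive letter, and label are placeholders, and diskpart must be run from an elevated command prompt on the virtual machine:

DISKPART> select disk 1
DISKPART> create partition primary align=1024
DISKPART> assign letter=F
DISKPART> format fs=ntfs unit=8192 label="Data" quick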

Task: Deploy virtual machines from the template virtual machine
Description: The deployment wizard enables you to save the PowerShell scripts and reuse them to deploy other virtual machines with the same configuration.
Reference: Refer to the Microsoft TechNet Library topic How to Deploy a Virtual Machine.

Summary

This chapter presented the required steps to deploy and configure the various aspects of the VSPEX solution, including the physical and logical components. At this point, the VSPEX solution is fully functional.

Chapter 6 Verifying the Solution

This chapter presents the following topics:
- Overview
- Post-install checklist
- Deploy and test a single virtual server
- Verify the redundancy of the solution components

Overview

This chapter provides a list of items to review after configuring the solution. The goal of this chapter is to verify the configuration and functionality of specific aspects of the solution, and to ensure that the configuration meets core availability requirements. Complete the tasks listed in Table 38.

Table 38. Tasks for testing the installation

Task: Post-install checklist
- Verify that sufficient virtual ports exist on each Hyper-V host virtual switch. Reference: Hyper-V: How many network cards do I need?
- Verify that each Hyper-V host has access to the required datastores and VLANs. Reference: Using a VNXe System with Microsoft Windows Hyper-V
- Verify that the Live Migration interfaces are configured correctly on all Hyper-V hosts. Reference: Virtual Machine Live Migration Overview

Task: Deploy and test a single virtual server
- Deploy a single virtual machine by using the System Center Virtual Machine Manager (SCVMM) interface. Reference: Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager

Task: Verify redundancy of the solution components
- Reboot each storage processor in turn, and ensure that storage connectivity is maintained. Reference: N/A
- Disable each of the redundant switches in turn and verify that the Hyper-V host, virtual machine, and storage array connectivity remains intact. Reference: Vendor documentation
- On a Hyper-V host that contains at least one virtual machine, restart the host and verify that the virtual machine can successfully migrate to an alternate host. Reference: Creating a Hyper-V Host Cluster in VMM Overview

Post-install checklist

The following configuration items are critical to the functionality of the solution. On each Windows Server, verify the following items prior to deployment into production (these items can also be spot-checked from PowerShell, as sketched after this section):
- The VLAN for virtual machine networking is configured correctly.
- The storage networking is configured correctly.
- Each server can access the required Cluster Shared Volumes/Hyper-V SMB shares.
- A network interface is configured correctly for Live Migration.

Deploy and test a single virtual server

Deploy a virtual machine to verify that the solution functions as expected. Verify that the virtual machine is joined to the applicable domain, has access to the expected networks, and that it is possible to log in to it.

Verify the redundancy of the solution components

To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failures. On a Hyper-V host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.

Block environments

Complete the following steps to reboot each VNX storage processor in turn and verify that connectivity to the LUNs is maintained throughout each reboot:
1. Log in to the Control Station with administrator credentials.
2. Navigate to /nas/sbin.
3. Reboot SP A by using the ./navicli -h spa rebootsp command.
4. During the reboot cycle, check for the presence of datastores on the Windows hosts.
5. When the cycle completes, reboot SP B by using the ./navicli -h spb rebootsp command.

Enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
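The following is an illustrative PowerShell sketch for spot-checking these items on each Hyper-V host. It assumes the Hyper-V and Failover Clustering modules are installed, and the output still needs to be compared against the values defined for this solution.

    # Illustrative post-install spot checks, run on each Hyper-V host.

    # Virtual switches backing the virtual machine and storage networks
    Get-VMSwitch | Format-Table Name, SwitchType, NetAdapterInterfaceDescription

    # Cluster Shared Volumes visible to this node (block variant)
    Get-ClusterSharedVolume | Format-Table Name, State, OwnerNode

    # Whether Live Migration is enabled, and which networks are allowed for it
    Get-VMHost | Format-List VirtualMachineMigrationEnabled
    Get-VMMigrationNetwork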

File environments

Perform a failover of each VNX Data Mover in turn and verify that connectivity to SMB shares is maintained and that connections to CIFS file systems are re-established. For simplicity, use the following approach for each Data Mover:

Note: Optionally, reboot the Data Movers through the Unisphere interface.

1. From the Control Station prompt, run the server_cpu <movername> -reboot command, where <movername> is the name of the Data Mover.
2. To verify that network redundancy features function as expected, disable each of the redundant switching infrastructures in turn. While each of the switching infrastructures is disabled, verify that all the components of the solution maintain connectivity to each other and to any existing client infrastructure.
3. Enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.

A PowerShell sketch for watching SMB connectivity during the failover follows these steps.
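While each Data Mover fails over, SMB connectivity can be watched from a Hyper-V host with the in-box SMB cmdlets. This is an illustrative sketch only; the CIFS server and share names are placeholders for this environment.

    # Illustrative check, run on a Hyper-V host during each Data Mover failover.
    # CIFS server and share names are placeholders.
    $cifsServer = "vnx-cifs01"
    $share      = "\\$cifsServer\HyperV_Share1"

    # Established SMB sessions from this host; these should re-establish after failover
    Get-SmbConnection -ServerName $cifsServer

    # Basic reachability of the CIFS server interface
    Test-Connection -ComputerName $cifsServer -Count 2

    # Confirm the Hyper-V SMB share is still browsable
    Get-ChildItem -Path $share | Select-Object -First 5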

Chapter 7 System Monitoring

This chapter presents the following topics:
- Overview
- Key areas to monitor
- VNX resources monitoring guidelines

Overview

System monitoring of the VSPEX environment is the same as monitoring any core IT system; it is a relevant and core component of administration. The monitoring levels involved in a highly virtualized infrastructure such as a VSPEX environment are somewhat more complex than in a purely physical infrastructure, because the interactions and interrelationships between components can be subtle and nuanced. However, those who are experienced in administering physical environments should be familiar with the key concepts and focus areas. The key differentiators are monitoring at scale and the ability to monitor end-to-end systems and data flows.

The following business requirements drive the need for proactive, consistent monitoring of the environment:
- Stable, predictable performance
- Sizing and capacity needs
- Availability and accessibility
- Elasticity: the dynamic addition, subtraction, and modification of workloads
- Data protection

If self-service provisioning is enabled in the environment, the ability to monitor the system is even more critical, because clients can generate virtual machines and workloads dynamically, which can adversely affect the entire system. This chapter provides the basic knowledge necessary to monitor the key components of a VSPEX Proven Infrastructure environment. Additional resources are listed at the end of this chapter.

Key areas to monitor

Because VSPEX Proven Infrastructures are end-to-end solutions, system monitoring includes three discrete but highly interrelated areas:
- Servers, including virtual machines and clusters
- Networking
- Storage

This chapter focuses primarily on monitoring the key components of the storage infrastructure, the VNX array, but briefly describes the other components as well.

Performance baseline

When a workload is added to a VSPEX deployment, server, storage, and networking resources are consumed. As additional workloads are added, modified, or removed, resource availability and, more importantly, resource capabilities change, which impacts all other workloads running on the platform. Customers must fully understand their workload characteristics on all key components before deploying them on a VSPEX platform; this is a requirement for correctly sizing resource utilization against the defined reference virtual machine.

Deploy the first workload, and then measure the end-to-end resource consumption along with platform performance. This removes the guesswork from sizing activities and ensures that the initial assumptions were valid. As additional workloads are deployed, rerun the benchmarks to determine the cumulative load and its impact on existing virtual machines and their application workloads. Adjust resource allocation accordingly to ensure that oversubscription does not negatively impact overall system performance. Run these baselines consistently to ensure that the platform as a whole, and the virtual machines themselves, operate as expected. The following sections discuss which components should comprise a core performance baseline.

Servers

The key resources to monitor from a server perspective include the use of:
- Processors
- Memory
- Disk (local, NAS, and SAN)
- Networking

Monitor these areas from both the physical host level (the hypervisor host) and the virtual level (from within the guest virtual machine). Depending on your operating system, tools are available to monitor and capture this data. For example, if your VSPEX deployment uses Windows servers as the hypervisor, you can use Windows perfmon to monitor and log these metrics, as shown in the sketch that follows. Follow your vendor's guidance to determine performance thresholds for specific deployment scenarios, which can vary greatly depending on the application. Detailed information about this tool is available in the Microsoft TechNet Library topic Using Performance Monitor.

Keep in mind that each VSPEX Proven Infrastructure provides a guaranteed level of performance based on the number of reference virtual machines deployed and their defined workload.
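As an illustration of such a capture, the following sketch samples a handful of host-level counters with the in-box Get-Counter cmdlet. The counter list, sample interval, and output path are examples only and should be adapted to the baseline you want to keep.

    # Illustrative baseline capture on a Hyper-V host; counters, interval, and
    # output path are examples only.
    $counters = @(
        '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time',
        '\Memory\Available MBytes',
        '\LogicalDisk(_Total)\Avg. Disk sec/Transfer',
        '\Network Interface(*)\Bytes Total/sec'
    )

    # Sample every 15 seconds for one hour and keep the log for later comparison
    Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
        Export-Counter -Path 'C:\PerfLogs\vspex-baseline.blg' -FileFormat BLG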

Brocade networking

Ensure that there is adequate bandwidth for networking communications. This includes monitoring network loads at the server and virtual machine level, at the fabric (switch) level, and, if network file or block protocols such as NFS/CIFS/SMB are implemented, at the storage level. From the server and virtual machine level, the monitoring tools mentioned previously provide sufficient metrics to analyze flows into and out of the servers and guests. Key items to track include aggregate throughput or bandwidth, latency, IOPS, and I/O size. Capture additional data from network card or HBA utilities.

From the fabric perspective, the tools that monitor the switching infrastructure vary by vendor. Key items to monitor include port utilization, aggregate fabric utilization, processor utilization, queue depths, and inter-switch link (ISL) utilization. Networking storage protocols are discussed in the following section.

Storage

Monitoring the storage aspect of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. Fortunately, the tools provided with the VNX storage arrays provide an easy yet powerful way to gain insight into how the underlying storage components are operating. For both block and file protocols, there are several key areas to focus on, including:
- Capacity
- IOPS
- Latency
- SP utilization

For CIFS, SMB, or NFS protocols, also monitor the following components:
- Data Mover CPU and memory usage
- File system latency
- Network interface throughput, inbound and outbound

Additional considerations, primarily from a tuning perspective, include:
- I/O size
- Workload characteristics
- Cache utilization

These factors are outside the scope of this document; however, storage tuning is an essential component of performance optimization. EMC offers additional guidance on the subject through EMC Online Support, in the EMC VNX Unified Best Practices for Performance - Applied Best Practices Guide.

VNX resources monitoring guidelines

Monitor the VNX with the EMC Unisphere GUI by opening an HTTPS session to the Control Station IP address. Monitoring is divided into two parts:
- Monitoring block storage resources
- Monitoring file storage resources

Monitoring block storage resources

This section explains how to use Unisphere to monitor block storage resource usage, which includes capacity, IOPS, and latency.

Capacity

In Unisphere, two panels display capacity information. These panels provide a quick assessment of the overall free space available within the configured LUNs and underlying storage pools. For block storage, sufficient free space should remain in the configured pools to allow for anticipated growth and for activities such as snapshot creation. It is essential to keep a free buffer, especially for thin LUNs, because out-of-space conditions usually lead to undesirable behavior on the affected host systems. Configure threshold alerts to warn storage administrators when capacity use rises above 80 percent; at that point, auto-expansion may need to be adjusted or additional space allocated to the pool. If LUN utilization is high, reclaim space or allocate additional space.

To set capacity threshold alerts for a specific pool, complete the following steps:
1. Select the pool and select Properties > Advanced.
2. In the Storage Pool Alerts area, choose a value for Percent Full Threshold of this pool, as shown in Figure 71.

Figure 71. Storage Pool Alerts area

To drill down into capacity for block storage, complete the following steps:
1. In Unisphere, select the VNX system to examine.
2. Select Storage > Storage Configurations > Storage Pools to open the Storage Pools panel.
3. Examine the Free Capacity and % Consumed columns, as shown in Figure 72.

Figure 72. Storage Pools panel

Monitor capacity at the storage pool and LUN levels:
1. Click Storage > LUNs to open the LUNs panel.
2. Select a LUN to examine and click Properties to display detailed LUN information, as shown in Figure 73.
3. Verify the LUN Capacity area of the dialog box. User Capacity is the total physical capacity available to all thin LUNs in the pool. Consumed Capacity is the total physical capacity currently assigned to all thin LUNs.

Figure 73. LUN Properties dialog box

Examine capacity alerts, and all other system events, by opening the Alerts panel and the SP Event Logs panel, both of which are accessed under the Monitoring and Alerts panel, as shown in Figure 74.

Figure 74. Monitoring and Alerts panel

IOPS

The effects of an I/O workload serviced by an improperly configured storage system, or by one whose resources are exhausted, can be felt system-wide. Monitoring the IOPS that the storage array services includes looking at metrics from the host ports in the SPs, along with the requests serviced by the back-end disks. The VSPEX solutions are carefully sized to deliver a certain performance level for a particular workload level; ensure that IOPS do not exceed the design parameters.

Statistical reporting for IOPS (along with other key metrics) can be examined in the Statistics for Block panel by selecting VNX > System > Monitoring and Alerts > Statistics for Block. Monitor the statistics online or offline using Unisphere Analyzer, which requires a license.

Another metric to examine is Total Bandwidth (MB/s). An 8 Gb/s front-end SP port can process 800 MB per second, and the average bandwidth must not exceed 80 percent of the link bandwidth under normal operating conditions. The IOPS delivered to the LUNs are often higher than those issued by the hosts; this is particularly true with thin LUNs, because there is additional metadata associated with managing the I/O streams. Unisphere Analyzer shows the IOPS on each LUN, as shown in Figure 75.

Figure 75. IOPS on the LUNs

Certain RAID levels also impose write penalties that generate additional back-end IOPS. Examine the IOPS delivered to (and serviced from) the underlying physical disks, which can also be viewed in Unisphere Analyzer, as shown in Figure 76. The guidelines for drive performance are:
- 180 IOPS for 15k rpm SAS drives
- 120 IOPS for 10k rpm SAS drives
- 80 IOPS for NL-SAS drives

A quick estimate of the expected back-end load, as sketched below, can be checked against these figures before investigating further.

Figure 76. IOPS on the disks
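The arithmetic behind that estimate is not spelled out in this guide, so the sketch below uses commonly cited write-penalty factors (2 for RAID 1/0, 4 for RAID 5, 6 for RAID 6) as assumptions; substitute the workload figures and the factor that apply to the pools actually deployed.

    # Illustrative back-end IOPS estimate. The workload figures and the RAID write
    # penalty are assumptions for this example, not values taken from this guide.
    $frontEndIops = 9000      # measured host IOPS for the pool
    $readRatio    = 0.67      # fraction of I/O that is reads
    $writePenalty = 4         # commonly cited factor for RAID 5 (2 for RAID 1/0, 6 for RAID 6)
    $driveRating  = 180       # IOPS guideline for a 15k rpm SAS drive (from the text above)

    $readIops    = $frontEndIops * $readRatio
    $writeIops   = $frontEndIops * (1 - $readRatio)
    $backEndIops = $readIops + ($writeIops * $writePenalty)

    $drivesNeeded = [math]::Ceiling($backEndIops / $driveRating)
    "Estimated back-end IOPS: {0}; roughly {1} x 15k SAS drives to service it" -f $backEndIops, $drivesNeeded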

Latency

Latency is the byproduct of delays in processing I/O requests. This section focuses on monitoring storage latency, specifically block-level I/O. Using procedures similar to those in the previous section, view the latency at the LUN level, as shown in Figure 77.

Figure 77. Latency on the LUNs

Latency can be introduced anywhere along the I/O stream, from the application layer, through the transport, and out to the final storage devices. Determining the precise causes of excessive latency requires a methodical approach.

Excessive latency in an FC network is uncommon. Unless there is a defective component such as an HBA or cable, delays introduced at the network fabric layer are normally the result of misconfigured switching fabrics. An overburdened storage array can also cause latency within an FC environment. Focus primarily on the LUNs and on the ability of the underlying disk pools to service I/O requests. Requests that cannot be serviced are queued, which introduces latency.

The same paradigm applies to Ethernet-based protocols such as iSCSI and FCoE; however, additional factors come into play because these storage protocols use Ethernet as the underlying transport. Isolate the network traffic (either physically or logically) for storage, and preferably implement Quality of Service (QoS) in a shared or converged fabric. If network problems are not introducing excessive latency, examine the storage array. In addition to overburdened disks, excessive SP utilization can also introduce latency.

SP utilization levels greater than 80 percent indicate a potential problem. Background processes such as replication, deduplication, and snapshots all compete for SP resources; monitor these processes to ensure that they do not cause SP resource exhaustion. Possible mitigation techniques include staggering background jobs, setting replication limits, and adding more physical resources or rebalancing the I/O workloads. Growth may also mandate moving to more powerful hardware.

For SP metrics, examine the data under the SP tab of Unisphere Analyzer, as shown in Figure 78. Review metrics such as Utilization (%), Queue Length, and Response Time (ms). High values for any of these metrics indicate that the storage array is under duress and likely requires mitigation. EMC best practices recommend thresholds of 70 percent utilization, 20 ms response time, and a queue length of 10; a scripted check against these thresholds is sketched after Figure 78.

Figure 78. SP utilization
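If the Analyzer data is exported to CSV for offline review, those thresholds can be applied in bulk with a few lines of PowerShell. This is an illustrative sketch only; the file path and column names are assumptions and must be matched to the actual Analyzer export.

    # Illustrative only: flag SP samples that exceed the recommended thresholds
    # (70% utilization, 20 ms response time, queue length of 10) in a CSV export
    # from Unisphere Analyzer. File path and column names are assumptions.
    $spStats = Import-Csv -Path 'C:\PerfLogs\sp-stats.csv'

    $spStats | Where-Object {
            [double]$_.'Utilization (%)'    -gt 70 -or
            [double]$_.'Response Time (ms)' -gt 20 -or
            [double]$_.'Queue Length'       -gt 10
        } |
        Format-Table -AutoSize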

Monitoring file storage resources

File-based protocols such as NFS and CIFS/SMB involve additional management processes beyond those required for block storage. Data Movers (the hardware components that provide the interface between NFS, CIFS, or SMB clients and the SPs) provide these management services on VNX Unified systems. Data Movers process file protocol requests on the client side and convert the requests to the appropriate SCSI block semantics on the array side. These additional components and protocols introduce additional monitoring requirements, such as Data Mover network link utilization, memory utilization, and Data Mover processor utilization.

To examine Data Mover metrics, open the Statistics for File panel by selecting VNX > System > Monitoring and Alerts > Statistics for File, as shown in Figure 79. Click the Data Mover link to display the summary metrics shown in Figure 79. Usage levels in excess of 80 percent indicate potential performance concerns and likely require mitigation through Data Mover reconfiguration, additional physical resources, or both.

Figure 79. Data Mover statistics

Select Network Device from the Statistics panel to observe front-end network statistics; the Network Device Statistics window appears, as shown in Figure 80. If throughput figures exceed 80 percent of the link bandwidth to the client, configure additional links to relieve the network saturation.

Figure 80. Front-end Data Mover network statistics

Capacity

Similar to block storage monitoring, Unisphere has a statistics panel for file storage. Select Storage > Storage Configurations > Storage Pools for File to check file storage space utilization at the pool level, as shown in Figure 81.

Figure 81. Storage Pools for File panel

Monitor capacity at the pool and file system levels:
1. Select Storage > File Systems. The File Systems window appears, as shown in Figure 82.

Figure 82. File Systems panel

2. Select a file system to examine and click Properties to display detailed file system information, as shown in Figure 83.
3. Examine the File Storage area for Used and Free capacity.

Figure 83. File System Properties window

IOPS

In addition to monitoring block storage IOPS, Unisphere provides the ability to monitor file system IOPS. Select System > Monitoring and Alerts > Statistics for File > File System I/O, as shown in Figure 84.

Figure 84. File System I/O Statistics window

Latency

To observe file system latency, select System > Monitoring and Alerts > Statistics for File > All Performance in Unisphere, and examine the value for CIFS:Ops/sec, as shown in Figure 85.

Figure 85. CIFS Statistics window

Summary

Consistent and thorough monitoring of the VSPEX Proven Infrastructure is a best practice. Baseline performance data helps to identify problems, while monitoring key system metrics helps to ensure that the system functions optimally and within its design parameters. The monitoring process can be extended through integration with automation and orchestration tools from key partners, such as Microsoft with its System Center suite of products.


Chapter 8 Validation with Microsoft Fast Track v3

This chapter presents the following topics:
- Overview
- Business case for validation
- Process requirements
- Additional resources

Overview

The Microsoft Hyper-V Fast Track Program is a reference architecture validation framework designed by Microsoft to validate end-to-end virtualization solutions built from Microsoft software products. These software products are tightly integrated and tested with specific hardware components, and are built and configured according to best practices defined by Microsoft and the hardware vendors. Customers receive a fully built, ready-to-run solution at their site. Microsoft handles primary support in conjunction with the solution owner (hardware vendors and/or system integrators) to ensure end-to-end solution support.

Unlike the VSPEX Proven Infrastructure solutions, which offer partners the flexibility to choose the solution components, Microsoft Hyper-V Fast Track Program solutions are locked configurations based on specific end-to-end architectures. Similar to the Windows Logo Program, any significant change (such as a different HBA or BIOS) invalidates the architecture unless Microsoft revalidates the configuration.

VSPEX Proven Infrastructure solutions provide a valuable platform to serve as potential Microsoft Hyper-V Fast Track Program validated solutions, because much of the effort, such as sizing and performance validation, has already been completed by EMC. Customers also benefit from a solution that has been thoroughly tested, validated, and approved by Microsoft. This section describes the steps for EMC VSPEX partners to guide a VSPEX Proven Infrastructure solution through the Microsoft Hyper-V Fast Track Program.

Business case for validation

The release of Microsoft Windows Server 2012 introduces significant product enhancements and is the first generally available cloud-optimized server operating system. Microsoft identified key areas, or pillars, to focus on, including:
- Continuous availability
- Virtualization
- Performance

Additionally, the release of the Microsoft System Center 2012 SP1 product suite introduces powerful, flexible new tools that integrate with the new features of Windows Server 2012. System Center Orchestrator, Virtual Machine Manager, Operations Manager, and Data Protection Manager provide customers with the tools to cohesively build and manage virtualized cloud infrastructures.

The Microsoft Hyper-V Fast Track Program incorporates these products into a pre-built, bundled cloud solution based on collective best practices. This eliminates design guesswork and implementation problems, and enables organizations to implement cloud-based solutions rapidly within their IT infrastructure.

Furthermore, because the end-to-end configuration is tested and validated, customers avoid many of the issues found in a complex, multitiered environment, such as driver or firmware incompatibilities.

EMC VSPEX partners that certify VSPEX Proven Infrastructures in the Microsoft Hyper-V Fast Track Program can create additional revenue streams from the services that comprise virtualization solutions. Partners can also use the VSPEX labs to validate their Microsoft Hyper-V Fast Track Program solution, drawing on EMC expertise and reducing hardware requirements.

Process requirements

Solution validation for the Microsoft Hyper-V Fast Track Program is a significant endeavor. Using a VSPEX Proven Infrastructure solution as a basis eliminates a significant portion of the required work. Any VSPEX Proven Infrastructure that uses Microsoft Windows Server 2012 (or later) as the hypervisor is a viable candidate.

Step 1: Core prerequisites

An EMC VSPEX partner must also be a Microsoft Gold partner. Obtain Microsoft Hyper-V Fast Track Program v3 documentation and program guidelines directly from Microsoft by sending a request to the program alias: [email protected]. Upon receipt, thoroughly review the documentation and program requirements to become familiar with the process. There are certain support obligations defined in the Microsoft Hyper-V Fast Track Program; contact Microsoft, or refer to the program documentation, for further details.

Step 2: Select the VSPEX Proven Infrastructure platform

Select any VSPEX Proven Infrastructure solution based on Microsoft Windows Server 2012.

Step 3: Define additional Microsoft Hyper-V Fast Track Program components

After choosing the base VSPEX Proven Infrastructure, partners must define the additional architectural requirements needed to comply with the Microsoft Hyper-V Fast Track Program guidelines. Program documentation classifies these components as described in Table 39.

Table 39. Hyper-V Fast Track component classification
- Mandatory: Required to pass Microsoft validation.
- Recommended: Optional. This is an industry-standard recommendation and is not required to pass the Microsoft validation.
- Optional: Optional. Presents an alternate method to consider and is not required to pass the Microsoft validation.

Partners must ensure that all mandatory components are included in the solution. EMC strongly advises partners to include recommended components to ensure that the solution is robust and competitive.

Partners must make the following changes to a VSPEX Proven Infrastructure:
- All hardware components must be logo certified for Windows Server 2012. Refer to the Windows Server Catalog website for device certification information. Use the WHCK process and the SysDev Dashboard portal as starting points for the certification process, and send proof of certification to the Microsoft Hyper-V Fast Track Program Team for review.
- Provide an SKU, part number, or other simple and efficient process to purchase or resell the solution. Send details of the ordering process to the Microsoft Hyper-V Fast Track Program Team for review.

Servers must meet the following minimum requirements:
- Two to four server nodes with clustering installed (cluster nodes).
- Dual processor sockets, with 6 cores per socket (12 cores total).
- 32 GB RAM (4 GB per virtual machine and management host).
- 1 Gigabit Ethernet (GbE) cluster interconnect. Additional network isolation is required for cluster heartbeat traffic.

Ensure that the environment meets the following minimum network requirements:
- Two physically separate networks. The cluster heartbeat network must be on a distinctly separate subnet from the hosted network traffic.
- 1 GbE, or greater, network adapter for internal communications, and 1 GbE, or greater, network adapter for external LAN communications, for each node.

- 1 GbE, or greater, network speed for Live Migration traffic and cluster communication. EMC recommends using a 10 GbE network dedicated to Live Migration.
- Do not share the virtual machine network adapter with the host operating system. EMC and Microsoft do not support configurations with a single network connection.
- Configure network teaming so that the solution can withstand the loss of any single adapter without losing server connectivity, and so that NIC teaming provides high availability for the virtual machine networks. Microsoft supports third-party teaming or Microsoft teaming (see the PowerShell sketch at the end of this section).

Step 4: Build a detailed bill of materials

Create a detailed bill of materials that includes the hardware manufacturer, model, firmware, BIOS, and driver versions, and the vendor part number for:
- Servers
- HBAs
- Switches
- Storage arrays
- Software
- Any other major components

Step 5: Test the environment

Install and configure the end-to-end environment. Run the Windows Cluster Validation Tool to verify the environment configuration and Failover Clustering support. Send the results of this test to the Microsoft Hyper-V Fast Track Program Team for review. Refer to Validate Hardware for a Windows Server 2012 Failover Cluster for more information about the Windows Cluster Validation Tool.

Step 6: Document and publish the solution

Use the available solution template from the Microsoft Hyper-V Fast Track Program Team, or create a solution document based on the appropriate VSPEX Proven Infrastructure Design Guide. Add the additional required content per Step 3 above, and then submit the final solution document to Microsoft and EMC for posting. An example solution created by Cisco and EMC, which follows the Microsoft Hyper-V Fast Track Program v2 guidelines, is available at Cisco Solutions for VSPEX.
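The teaming and validation requirements above can be scripted on the candidate nodes. The following is an illustrative sketch only, not part of the Fast Track requirements themselves: adapter, switch, and node names are placeholders, and the teaming mode shown is just one supported combination.

    # Illustrative only: create a switch-independent team for the virtual machine
    # network and run cluster validation. All names are placeholders.
    New-NetLbfoTeam -Name "VM-Team" `
                    -TeamMembers "NIC3", "NIC4" `
                    -TeamingMode SwitchIndependent `
                    -LoadBalancingAlgorithm HyperVPort `
                    -Confirm:$false

    # Bind the Hyper-V virtual switch to the team and keep it separate from the
    # host operating system (no shared management adapter).
    New-VMSwitch -Name "VM-Switch" -NetAdapterName "VM-Team" -AllowManagementOS $false

    # Run the full cluster validation suite and keep the report for submission.
    Test-Cluster -Node "hyperv-host01", "hyperv-host02"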

Additional resources

Microsoft Hyper-V Fast Track Program v3 documentation is available only to Microsoft partners, although some material exists on the Microsoft Partner Portal, TechNet, and various Microsoft blog sites. For the best results, contact the Microsoft Hyper-V Fast Track Program v3 Partner Program Management Team through their program alias. Alternatively, Microsoft partners can work through their Microsoft Technical Account Managers (TAMs). The public website is Private Cloud Fast Track.

Appendix A Bill of Materials

This appendix presents the following topic:
- Bill of materials

Bill of materials

Table 40 lists the hardware used in this solution.

Note: EMC recommends that you use a 10 Gb network, or an equivalent 1 Gb network infrastructure, for these solutions as long as the underlying requirements for bandwidth and redundancy are fulfilled.

Table 40. List of components used in the VSPEX solution for 300 virtual machines

Windows servers
- CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core; 300 vCPUs; minimum of 75 physical processor cores
- Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per Hyper-V host; minimum of 600 GB RAM
- Brocade network (block): 2 x 10 GbE NICs per server; 2 HBAs per server
- Brocade network (file): 4 x 10 GbE NICs per server
- Note: To implement Microsoft Hyper-V High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

The arithmetic behind these CPU and memory minimums is sketched after Table 40.

Brocade network infrastructure (minimum switching capacity)
- Block: 2 Brocade 6510 Fibre Channel switches; 2 HBA ports per Windows server for the storage network; 2 FC ports per SP for storage data; 2 x 10 GbE ports per Data Mover for data; 1 x 1 GbE port per Control Station for management
- File: 2 Brocade VDX 6740 Ethernet fabric switches; 4 x 10 GbE ports per Windows server; 1 x 1 GbE port per Control Station for management; 2 x 10 GbE ports per Data Mover for data

EMC backup
- Avamar

- Data Domain
- Refer to the document EMC Backup and Recovery Options for VSPEX Private Clouds.

EMC VNX series storage array
- Block: VNX5400; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 2 front-end ports per SP; 110 x 600 GB 15k rpm 3.5-inch SAS drives; 6 x 200 GB flash drives; 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare
- File: VNX5400; Data Movers (active/standby); 2 x 10 GbE interfaces per Data Mover; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 110 x 600 GB 15k rpm 3.5-inch SAS drives; 6 x 200 GB flash drives; 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare

Shared infrastructure
- In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document. If the solution is implemented without existing infrastructure, the following minimum additional servers are required: 2 physical servers; 16 GB RAM per server; 4 processor cores per server; 2 x 1 GbE ports per server
- Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.
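The CPU and memory minimums in Table 40 follow directly from the per-virtual-machine figures. As an illustration only, the sketch below reproduces that arithmetic for the 300-virtual-machine scale point; the same calculation applies at 600 and 1,000 virtual machines.

    # Illustrative arithmetic behind the 300-virtual-machine minimums in Table 40.
    $vmCount     = 300
    $vCpuPerVm   = 1
    $vCpuPerCore = 4          # 4:1 vCPU-to-physical-core ratio
    $ramPerVmGB  = 2

    $minCores   = ($vmCount * $vCpuPerVm) / $vCpuPerCore   # 75 physical processor cores
    $minVmRamGB = $vmCount * $ramPerVmGB                   # 600 GB RAM for the virtual machines

    # Each Hyper-V host reserves an additional 2 GB of RAM for itself, on top of
    # the 600 GB, so the installed total depends on the number of hosts deployed.
    "Minimum physical cores: $minCores"
    "Minimum RAM for virtual machines (GB): $minVmRamGB"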

Table 41. List of components used in the VSPEX solution for 600 virtual machines

Windows servers
- CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core; 600 vCPUs; minimum of 150 physical processor cores
- Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per Hyper-V host; minimum of 1,200 GB RAM
- Brocade network (block): 2 x 10 GbE NICs per server; 2 HBAs per server
- Brocade network (file): 4 x 10 GbE NICs per server
- Note: To implement Microsoft Hyper-V High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Brocade network infrastructure (minimum switching capacity)
- Block: 2 Brocade 6510 Fibre Channel switches; 2 HBA ports per Windows server for the storage network; 2 FC ports per SP for storage data; 2 x 10 GbE ports per Data Mover for data; 1 x 1 GbE port per Control Station for management
- File: 2 Brocade VDX 6740 Ethernet fabric switches; 4 x 10 GbE ports per Windows server; 1 x 1 GbE port per Control Station for management; 2 x 10 GbE ports per Data Mover for data

EMC backup
- Avamar
- Data Domain
- Refer to the document EMC Backup and Recovery Options for VSPEX Private Clouds.

EMC VNX series storage array
- Block: VNX5600; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 2 front-end ports per SP; 220 x 600 GB 15k rpm 3.5-inch SAS drives; 10 x 200 GB flash drives; 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare
- File: VNX5600; Data Movers (active/standby); 2 x 10 GbE interfaces per Data Mover; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 220 x 600 GB 15k rpm 3.5-inch SAS drives; 10 x 200 GB flash drives; 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare

Shared infrastructure
- In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document. If the solution is implemented without existing infrastructure, the following minimum additional servers are required: 2 physical servers; 16 GB RAM per server; 4 processor cores per server; 2 x 1 GbE ports per server
- Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

Table 42. List of components used in the VSPEX solution for 1,000 virtual machines

Windows servers
- CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core; 1,000 vCPUs; minimum of 250 physical processor cores
- Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per Hyper-V host; minimum of 2,000 GB RAM
- Brocade network (block): 2 x 10 GbE NICs per server; 2 HBAs per server
- Brocade network (file): 4 x 10 GbE NICs per server
- Note: To implement Microsoft Hyper-V High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Brocade network infrastructure (minimum switching capacity)
- Block: 2 Brocade 6510 Fibre Channel switches; 2 HBA ports per Windows server for the storage network; 2 FC ports per SP for storage data; 2 x 10 GbE ports per Data Mover for data; 1 x 1 GbE port per Control Station for management
- File: 2 Brocade VDX 6740 Ethernet fabric switches; 4 x 10 GbE ports per Windows server; 1 x 1 GbE port per Control Station for management; 2 x 10 GbE ports per Data Mover for data

EMC backup
- Avamar
- Data Domain
- Refer to the white paper EMC Backup and Recovery Options for VSPEX Private Clouds.

EMC VNX series storage array
- Block: VNX5800; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 2 front-end ports per SP; 360 x 600 GB 15k rpm 3.5-inch SAS drives; 16 x 200 GB flash drives; 12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare
- File: VNX5800; Data Movers (2 active/1 standby); 2 x 10 GbE interfaces per Data Mover; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 360 x 600 GB 15k rpm 3.5-inch SAS drives; 16 x 200 GB flash drives; 12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare

Shared infrastructure
- In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document. If the solution is implemented without existing infrastructure, the following minimum additional servers are required: 2 physical servers; 16 GB RAM per server; 4 processor cores per server; 2 x 1 GbE ports per server
- Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

Note: On the VNX5800, EMC recommends running no more than 600 virtual machines on a single active Data Mover. Configure two active Data Movers (2 active/1 standby) when scaling to 600 virtual machines or more.


Appendix B Customer Configuration Data Sheet

This appendix presents the following topic:
- Customer configuration data sheet

Customer configuration data sheet

Before you start the configuration, gather customer-specific network and host configuration information. The following tables provide information on assembling the required network and host addresses, numbering, and naming information. This worksheet can also be used as a leave-behind document for future reference. Cross-reference the VNX File and Unified Worksheets to confirm customer information.

Table 43. Common server information
Record the server name, purpose, and primary IP address for each of the following:
- Domain Controller
- DNS Primary
- DNS Secondary
- DHCP
- NTP
- SMTP
- SNMP
- System Center Virtual Machine Manager
- SQL Server

Table 44. Hyper-V server information
Record the server name, purpose, primary IP address, and private network (storage) addresses for each Hyper-V host:
- Hyper-V Host 1
- Hyper-V Host 2, and so on for each additional host

Table 45. Array information
Record the following for the storage array:
- Array name
- Admin account
- Management IP
- Storage pool name
- Datastore name
- Block: FC WWPN, FCoE WWPN, iSCSI IQN, iSCSI port IP
- File: CIFS server IP

Table 46. Brocade network infrastructure information
Record the name, purpose, IP address, subnet mask, and default gateway for:
- Ethernet Switch 1
- Ethernet Switch 2

Table 47. VLAN information
Record the network purpose, VLAN ID, and allowed subnets for:
- Virtual machine networking and management
- iSCSI storage network (block)
- CIFS storage network (file)
- Live Migration (optional)
- Public (client access)

Table 48. Service accounts
Record the account name, purpose, and (optionally, secured appropriately) the password for:
- Windows Server administrator
- Array administrator
- SCVMM administrator
- SQL Server administrator

Appendix C Server Resources Component Worksheet

This appendix presents the following topic:
- Server resources component worksheet

Server resources component worksheet

Table 49 is a blank worksheet for determining server and storage resources. For each application, record the server resources (CPU in virtual CPUs, memory in GB) and the storage resources (IOPS, capacity in GB) as resource requirements, and then record the equivalent reference virtual machines for that application. The worksheet then rolls up:
- Total equivalent reference virtual machines
- Server customization and server component totals
- Storage customization, storage component totals, and storage component equivalent reference virtual machines
- Total equivalent reference virtual machines - storage

A scripted version of this roll-up is sketched below.
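The worksheet can also be filled in programmatically. The sketch below is illustrative only: the application figures and the per-reference-virtual-machine characteristics are placeholders, and the aggregation rule assumed here (take the ceiling of each ratio, then the maximum across resource dimensions) should be confirmed against the sizing guidance earlier in this document before it is relied on.

    # Illustrative roll-up for the worksheet. Reference-VM characteristics and the
    # application requirements are placeholders; confirm both against the sizing
    # guidance in this document before using the result.
    $referenceVm = @{ vCPU = 1; MemoryGB = 2; IOPS = 25; CapacityGB = 100 }   # placeholders

    $applications = @(
        @{ Name = 'App1'; vCPU = 8; MemoryGB = 16; IOPS = 400; CapacityGB = 600 },
        @{ Name = 'App2'; vCPU = 4; MemoryGB = 12; IOPS = 150; CapacityGB = 350 }
    )

    $total = 0
    foreach ($app in $applications) {
        # Equivalent reference VMs per resource dimension, then take the maximum
        $equivalents = foreach ($key in $referenceVm.Keys) {
            [math]::Ceiling($app[$key] / $referenceVm[$key])
        }
        $appEquivalent = ($equivalents | Measure-Object -Maximum).Maximum
        "{0}: {1} equivalent reference virtual machines" -f $app.Name, $appEquivalent
        $total += $appEquivalent
    }
    "Total equivalent reference virtual machines: $total"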

Appendix D References

This appendix presents the following topic:
- References

References

EMC documentation

The following documents, available on EMC Online Support, provide additional and relevant information. If you do not have access to a document, contact your EMC representative.
- EMC Storage Integrator (ESI) 2.1 for Windows Suite
- EMC VNX Virtual Provisioning Applied Technology
- VNX FAST Cache: A Detailed Review
- Introduction to EMC XtremSW Cache
- VNX 5400 Unified Installation Guide
- VNX 5600 Unified Installation Guide
- VNX 5800 Unified Installation Guide
- Using EMC VNX Storage with Microsoft Windows Hyper-V
- EMC VNX Unified Best Practices for Performance - Applied Best Practices Guide
- Using VNX SnapSure
- EMC Host Connectivity Guide for Windows
- EMC VNX Series: Introduction to SMB 3.0 Support
- Configuring and Managing CIFS on VNX

Brocade documentation

Brocade VDX switch and VCS fabric documentation is available as follows.

Product data sheets for the Brocade VDX 6740 series of switches:
- Brocade VDX 6740/6740T/6740T-1G Switch Data Sheet: uct_data_sheets/vdx-6740-ds.pdf

Hardware reference manual:
- Brocade VDX 6740 Hardware Reference Manual: B_VDX/VDX6740_VDX6740T_HardwareManual.pdf

Brocade Network OS (NOS) guides:
- Network OS Administrator's Guide, Supporting Network OS v4.1.0: B_VDX/NOS_AdminGuide_v410.pdf
- Network OS Command Reference, Supporting Network OS v4.1.0:

  B_VDX/NOS_CommandRef_v301.pdf
  B_VDX/NOS_CommandRef_v410.pdf
- Brocade Network OS (NOS) Software Licensing Guide v4.1.0: B_VDX/NOS_LicensingGuide_v410.pdf

The Brocade Network Operating System (NOS) release notes are available on MyBrocade.

Brocade 65xx switch and FOS fabric documentation is available as follows.

Product data sheet for the Brocade 6510 switch:
- Brocade 6510 Switch Data Sheet: uct_data_sheets/6510-switch-ds.pdf

Hardware reference manual:
- Brocade 6510 Hardware Reference Manual: B_SAN/B6510_HardwareManual.pdf

Brocade Fabric OS (FOS) guides:
- Fabric OS Administrator's Guide, Supporting Fabric OS v7.2.0: B_SAN/FOS_AdminGd_v720.pdf
- Fabric OS Command Reference, Supporting Fabric OS v7.2.0: B_SAN/FOS_CmdRef_v720.pdf
- Brocade 6510 QuickStart Guide: B_SAN/B6510_QuickStartGuide.pdf
- SAN Fabric Administration Best Practices Guide: des/san-admin-best-practices-bp.pdf

The Brocade Fabric Operating System (FOS) release notes are available on MyBrocade.

Other documentation

The following documents, located on the Microsoft website, provide additional and relevant information:
- Installing the VMM Server
- How to Add a Host Cluster to VMM
- How to Create a Template from a Virtual Machine
- Configuring a Remote Instance of SQL Server for VMM
- Installing Virtual Machine Manager
- Installing the VMM Administrator Console
- Installing a VMM Agent Locally on a Host
- Adding Hyper-V Hosts and Host Clusters to VMM
- How to Create a Virtual Machine with a Blank Virtual Hard Disk
- How to Deploy a Virtual Machine
- Installing Windows Server 2012
- Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
- Hardware and Software Requirements for Installing SQL Server 2012
- Installing SQL Server 2012
- How to Install a VMM Management Server

Appendix E About VSPEX

This appendix presents the following topic:
- About VSPEX

About VSPEX

EMC has joined forces with industry-leading providers of IT infrastructure to create a complete virtualization solution that accelerates the deployment of cloud infrastructure. Built with best-of-breed technologies, VSPEX enables faster deployment, more simplicity, greater choice, higher efficiency, and lower risk. Validation by EMC ensures predictable performance and enables customers to select technology that uses their existing IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a proven infrastructure for customers looking to gain the simplicity that is characteristic of truly converged infrastructures, while at the same time gaining more choice in individual solution components.

VSPEX solutions are proven by EMC, and packaged and sold exclusively by EMC channel partners. VSPEX provides channel partners with more opportunity, faster sales cycles, and end-to-end enablement. By working even more closely together, EMC and its channel partners can now deliver infrastructure that accelerates the journey to the cloud for even more customers.


More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

EMC VFCACHE ACCELERATES ORACLE

EMC VFCACHE ACCELERATES ORACLE White Paper EMC VFCACHE ACCELERATES ORACLE VFCache extends Flash to the server FAST Suite automates storage placement in the array VNX protects data EMC Solutions Group Abstract This white paper describes

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 TRANSFORMING MICROSOFT APPLICATIONS TO THE CLOUD 2 22x Partner Of Year 19+ Gold And Silver Microsoft Competencies 2,700+ Consultants Worldwide Cooperative Support Agreement Joint Use Of Technology CEO

More information

EMC INTEGRATION FOR MICROSOFT PRIVATE CLOUD USING EMC VNX UNIFIED STORAGE

EMC INTEGRATION FOR MICROSOFT PRIVATE CLOUD USING EMC VNX UNIFIED STORAGE White Paper EMC INTEGRATION FOR MICROSOFT PRIVATE CLOUD USING EMC VNX UNIFIED STORAGE EMC Next-Generation VNX, EMC Storage Integrator for Windows Suite, Microsoft System Center 2012 SP1 Reduce storage

More information

Cloud Optimize Your IT

Cloud Optimize Your IT Cloud Optimize Your IT Windows Server 2012 The information contained in this presentation relates to a pre-release product which may be substantially modified before it is commercially released. This pre-release

More information

Remote/Branch Office IT Consolidation with Lenovo S2200 SAN and Microsoft Hyper-V

Remote/Branch Office IT Consolidation with Lenovo S2200 SAN and Microsoft Hyper-V Remote/Branch Office IT Consolidation with Lenovo S2200 SAN and Microsoft Hyper-V Most data centers routinely utilize virtualization and cloud technology to benefit from the massive cost savings and resource

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by Quest LiteSpeed Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING VSPEX Proven Infrastructure EMC VSPEX END-USER COMPUTING Citrix XenDesktop 5.6 and VMware vsphere 5.1 for up to 2,000 Virtual Desktops Enabled by EMC VNX and EMC Next-Generation Backup EMC VSPEX Abstract

More information

MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX

MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX White Paper MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX Abstract This white paper highlights EMC s Hyper-V scalability test in which one of the largest Hyper-V environments in the world was created.

More information

Brocade Network Advisor High Availability Using Microsoft Cluster Service

Brocade Network Advisor High Availability Using Microsoft Cluster Service Brocade Network Advisor High Availability Using Microsoft Cluster Service This paper discusses how installing Brocade Network Advisor on a pair of Microsoft Cluster Service nodes provides automatic failover

More information

EMC PERFORMANCE OPTIMIZATION FOR MICROSOFT FAST SEARCH SERVER 2010 FOR SHAREPOINT

EMC PERFORMANCE OPTIMIZATION FOR MICROSOFT FAST SEARCH SERVER 2010 FOR SHAREPOINT Reference Architecture EMC PERFORMANCE OPTIMIZATION FOR MICROSOFT FAST SEARCH SERVER 2010 FOR SHAREPOINT Optimize scalability and performance of FAST Search Server 2010 for SharePoint Validate virtualization

More information

EMC Unified Storage for Microsoft SQL Server 2008

EMC Unified Storage for Microsoft SQL Server 2008 EMC Unified Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON and EMC FAST Cache Reference Copyright 2010 EMC Corporation. All rights reserved. Published October, 2010 EMC believes the information

More information

REDEFINE SIMPLICITY TOP REASONS: EMC VSPEX BLUE FOR VIRTUALIZED ENVIRONMENTS

REDEFINE SIMPLICITY TOP REASONS: EMC VSPEX BLUE FOR VIRTUALIZED ENVIRONMENTS REDEFINE SIMPLICITY AGILE. SCALABLE. TRUSTED. TOP REASONS: EMC VSPEX BLUE FOR VIRTUALIZED ENVIRONMENTS Redefine Simplicity: Agile, Scalable and Trusted. Mid-market and Enterprise customers as well as Managed

More information

Brocade and EMC Solution for Microsoft Hyper-V and SharePoint Clusters

Brocade and EMC Solution for Microsoft Hyper-V and SharePoint Clusters Brocade and EMC Solution for Microsoft Hyper-V and SharePoint Clusters Highlights a Brocade-EMC solution with EMC CLARiiON, EMC Atmos, Brocade Fibre Channel (FC) switches, Brocade FC HBAs, and Brocade

More information

EMC Integrated Infrastructure for VMware

EMC Integrated Infrastructure for VMware EMC Integrated Infrastructure for VMware Enabled by EMC Celerra NS-120 Reference Architecture EMC Global Solutions Centers EMC Corporation Corporate Headquarters Hopkinton MA 01748-9103 1.508.435.1000

More information

TOP FIVE REASONS WHY CUSTOMERS USE EMC AND VMWARE TO VIRTUALIZE ORACLE ENVIRONMENTS

TOP FIVE REASONS WHY CUSTOMERS USE EMC AND VMWARE TO VIRTUALIZE ORACLE ENVIRONMENTS TOP FIVE REASONS WHY CUSTOMERS USE EMC AND VMWARE TO VIRTUALIZE ORACLE ENVIRONMENTS Leverage EMC and VMware To Improve The Return On Your Oracle Investment ESSENTIALS Better Performance At Lower Cost Run

More information

DEPLOYING VIRTUALIZED MICROSOFT DYNAMICS AX 2012 R2

DEPLOYING VIRTUALIZED MICROSOFT DYNAMICS AX 2012 R2 DEPLOYING VIRTUALIZED MICROSOFT DYNAMICS AX 2012 R2 EMC Solutions Abstract This document describes the reference architecture of a virtualized Microsoft Dynamics AX 2012 R2 implementation that is enabled

More information

EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, Symmetrix Management Console, and VMware vcenter Converter

EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, Symmetrix Management Console, and VMware vcenter Converter EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, VMware vcenter Converter A Detailed Review EMC Information Infrastructure Solutions Abstract This white paper

More information

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER STORAGE CENTER DATASHEET STORAGE CENTER Go Beyond the Boundaries of Traditional Storage Systems Today s storage vendors promise to reduce the amount of time and money companies spend on storage but instead

More information

EMC Business Continuity for Microsoft SQL Server 2008

EMC Business Continuity for Microsoft SQL Server 2008 EMC Business Continuity for Microsoft SQL Server 2008 Enabled by EMC Celerra Fibre Channel, EMC MirrorView, VMware Site Recovery Manager, and VMware vsphere 4 Reference Architecture Copyright 2009, 2010

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Enabled by EMC VNX and EMC Data Protection EMC VSPEX Abstract This describes how to design an EMC VSPEX End-User-Computing solution for Citrix XenDesktop 7.5.

More information

Virtual SAN Design and Deployment Guide

Virtual SAN Design and Deployment Guide Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information

The Benefits of Brocade Gen 5 Fibre Channel

The Benefits of Brocade Gen 5 Fibre Channel The Benefits of Brocade Gen 5 Fibre Channel The network matters for storage. This paper discusses key server and storage trends and technology advancements and explains how Brocade Gen 5 Fibre Channel

More information

EMC Virtual Infrastructure for Microsoft SQL Server

EMC Virtual Infrastructure for Microsoft SQL Server Microsoft SQL Server Enabled by EMC Celerra and Microsoft Hyper-V Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication is accurate

More information

STORAGE CENTER WITH NAS STORAGE CENTER DATASHEET

STORAGE CENTER WITH NAS STORAGE CENTER DATASHEET STORAGE CENTER WITH STORAGE CENTER DATASHEET THE BENEFITS OF UNIFIED AND STORAGE Combining block and file-level data into a centralized storage platform simplifies management and reduces overall storage

More information

Brocade Fabric Vision Technology Frequently Asked Questions

Brocade Fabric Vision Technology Frequently Asked Questions Brocade Fabric Vision Technology Frequently Asked Questions Introduction This document answers frequently asked questions about Brocade Fabric Vision technology. For more information about Fabric Vision

More information

EMC VPLEX FAMILY. Continuous Availability and data Mobility Within and Across Data Centers

EMC VPLEX FAMILY. Continuous Availability and data Mobility Within and Across Data Centers EMC VPLEX FAMILY Continuous Availability and data Mobility Within and Across Data Centers DELIVERING CONTINUOUS AVAILABILITY AND DATA MOBILITY FOR MISSION CRITICAL APPLICATIONS Storage infrastructure is

More information

ENTERPRISE STORAGE WITH THE FUTURE BUILT IN

ENTERPRISE STORAGE WITH THE FUTURE BUILT IN ENTERPRISE STORAGE WITH THE FUTURE BUILT IN Breakthrough Efficiency Intelligent Storage Automation Single Platform Scalability Real-time Responsiveness Continuous Protection Storage Controllers Storage

More information

VCS Monitoring and Troubleshooting Using Brocade Network Advisor

VCS Monitoring and Troubleshooting Using Brocade Network Advisor VCS Monitoring and Troubleshooting Using Brocade Network Advisor Brocade Network Advisor is a unified network management platform to manage the entire Brocade network, including both SAN and IP products.

More information

Improving IT Operational Efficiency with a VMware vsphere Private Cloud on Lenovo Servers and Lenovo Storage SAN S3200

Improving IT Operational Efficiency with a VMware vsphere Private Cloud on Lenovo Servers and Lenovo Storage SAN S3200 Improving IT Operational Efficiency with a VMware vsphere Private Cloud on Lenovo Servers and Lenovo Storage SAN S3200 Most organizations routinely utilize a server virtualization infrastructure to benefit

More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

What Is Microsoft Private Cloud Fast Track?

What Is Microsoft Private Cloud Fast Track? What Is Microsoft Private Cloud Fast Track? MICROSOFT PRIVATE CLOUD FAST TRACK is a reference architecture for building private clouds that combines Microsoft software, consolidated guidance, and validated

More information

Windows Server 2008 Hyper-V Backup and Replication on EMC CLARiiON Storage. Applied Technology

Windows Server 2008 Hyper-V Backup and Replication on EMC CLARiiON Storage. Applied Technology Windows Server 2008 Hyper-V Backup and Replication on EMC CLARiiON Storage Applied Technology Abstract This white paper provides an overview of the technologies that are used to perform backup and replication

More information

ADVANCED NETWORK CONFIGURATION GUIDE

ADVANCED NETWORK CONFIGURATION GUIDE White Paper ADVANCED NETWORK CONFIGURATION GUIDE CONTENTS Introduction 1 Terminology 1 VLAN configuration 2 NIC Bonding configuration 3 Jumbo frame configuration 4 Other I/O high availability options 4

More information

Optimized Storage Solution for Enterprise Scale Hyper-V Deployments

Optimized Storage Solution for Enterprise Scale Hyper-V Deployments Optimized Storage Solution for Enterprise Scale Hyper-V Deployments End-to-End Storage Solution Enabled by Sanbolic Melio FS and LaScala Software and EMC SAN Solutions Proof of Concept Published: March

More information

Microsoft System Center 2012 SP1 Virtual Machine Manager with Storwize family products. IBM Systems and Technology Group ISV Enablement January 2014

Microsoft System Center 2012 SP1 Virtual Machine Manager with Storwize family products. IBM Systems and Technology Group ISV Enablement January 2014 Microsoft System Center 2012 SP1 Virtual Machine Manager with Storwize family products IBM Systems and Technology Group ISV Enablement January 2014 Copyright IBM Corporation, 2014 Table of contents Abstract...

More information

Microsoft SMB File Sharing Best Practices Guide

Microsoft SMB File Sharing Best Practices Guide Technical White Paper Microsoft SMB File Sharing Best Practices Guide Tintri VMstore, Microsoft SMB 3.0 Protocol, and VMware 6.x Author: Neil Glick Version 1.0 06/15/2016 @tintri www.tintri.com Contents

More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

全 新 企 業 網 路 儲 存 應 用 THE STORAGE NETWORK MATTERS FOR EMC IP STORAGE PLATFORMS

全 新 企 業 網 路 儲 存 應 用 THE STORAGE NETWORK MATTERS FOR EMC IP STORAGE PLATFORMS 全 新 企 業 網 路 儲 存 應 用 THE STORAGE NETWORK MATTERS FOR EMC IP STORAGE PLATFORMS Enterprise External Storage Array Capacity Growth IDC s Storage Capacity Forecast = ~40% CAGR (2014/2017) Keep Driving Growth!

More information

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS Server virtualization offers tremendous benefits for enterprise IT organizations server

More information

Best Practices for Microsoft

Best Practices for Microsoft SCALABLE STORAGE FOR MISSION CRITICAL APPLICATIONS Best Practices for Microsoft Daniel Golic EMC Serbia Senior Technology Consultant [email protected] 1 The Private Cloud Why Now? IT infrastructure

More information

IOmark- VDI. Nimbus Data Gemini Test Report: VDI- 130906- a Test Report Date: 6, September 2013. www.iomark.org

IOmark- VDI. Nimbus Data Gemini Test Report: VDI- 130906- a Test Report Date: 6, September 2013. www.iomark.org IOmark- VDI Nimbus Data Gemini Test Report: VDI- 130906- a Test Copyright 2010-2013 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VDI, VDI- IOmark, and IOmark are trademarks of Evaluator

More information

Cloud Optimized Performance: I/O-Intensive Workloads Using Flash-Based Storage

Cloud Optimized Performance: I/O-Intensive Workloads Using Flash-Based Storage WHITE PAPER Cloud Optimized Performance: I/O-Intensive Workloads Using Flash-Based Storage Brocade continues to innovate by delivering the industry s first 16 Gbps switches for low latency and high transaction

More information

The Benefits of Virtualizing

The Benefits of Virtualizing T E C H N I C A L B R I E F The Benefits of Virtualizing Aciduisismodo Microsoft SQL Dolore Server Eolore in Dionseq Hitachi Storage Uatummy Environments Odolorem Vel Leveraging Microsoft Hyper-V By Heidi

More information

Windows Server 2008 R2 Hyper-V Server and Windows Server 8 Beta Hyper-V

Windows Server 2008 R2 Hyper-V Server and Windows Server 8 Beta Hyper-V Features Comparison: Hyper-V Server and Hyper-V February 2012 The information contained in this document relates to a pre-release product which may be substantially modified before it is commercially released.

More information

How To Build An Ip Storage Network For A Data Center

How To Build An Ip Storage Network For A Data Center 1 REDEFINING STORAGE CONNECTIVITY JACK RONDONI, VICE PRESIDENT STORAGE NETWORK, BROCADE 2 THE TRENDS DRIVING STORAGE Virtualization Continues to Drive Change Storage Will Continue To Grow SSD Changes Everything

More information

EMC VPLEX FAMILY. Continuous Availability and Data Mobility Within and Across Data Centers

EMC VPLEX FAMILY. Continuous Availability and Data Mobility Within and Across Data Centers EMC VPLEX FAMILY Continuous Availability and Data Mobility Within and Across Data Centers DELIVERING CONTINUOUS AVAILABILITY AND DATA MOBILITY FOR MISSION CRITICAL APPLICATIONS Storage infrastructure is

More information

Datasheet The New NetApp FAS3200 Series Enables Flash, Clustering to Improve IT Agility and Performance

Datasheet The New NetApp FAS3200 Series Enables Flash, Clustering to Improve IT Agility and Performance Datasheet The New NetApp FAS3200 Series Enables Flash, Clustering to Improve IT Agility and Performance DATA CENTER SOLUTIONS For More Information: (866) 787-3271 [email protected] KEY BENEFITS Designed

More information

Nimble Storage for VMware View VDI

Nimble Storage for VMware View VDI BEST PRACTICES GUIDE Nimble Storage for VMware View VDI N I M B L E B E S T P R A C T I C E S G U I D E : N I M B L E S T O R A G E F O R V M W A R E V I E W V D I 1 Overview Virtualization is an important

More information

Why Use 16Gb Fibre Channel with Windows Server 2012 Deployments

Why Use 16Gb Fibre Channel with Windows Server 2012 Deployments W h i t e p a p e r Why Use 16Gb Fibre Channel with Windows Server 2012 Deployments Introduction Windows Server 2012 Hyper-V Storage Networking Microsoft s Windows Server 2012 platform is designed for

More information

HBA Virtualization Technologies for Windows OS Environments

HBA Virtualization Technologies for Windows OS Environments HBA Virtualization Technologies for Windows OS Environments FC HBA Virtualization Keeping Pace with Virtualized Data Centers Executive Summary Today, Microsoft offers Virtual Server 2005 R2, a software

More information

EMC Data Domain Boost for Oracle Recovery Manager (RMAN)

EMC Data Domain Boost for Oracle Recovery Manager (RMAN) White Paper EMC Data Domain Boost for Oracle Recovery Manager (RMAN) Abstract EMC delivers Database Administrators (DBAs) complete control of Oracle backup, recovery, and offsite disaster recovery with

More information

ACCELERATING YOUR IT TRANSFORMATION WITH EMC NEXT-GENERATION UNIFIED STORAGE AND BACKUP

ACCELERATING YOUR IT TRANSFORMATION WITH EMC NEXT-GENERATION UNIFIED STORAGE AND BACKUP ACCELERATING YOUR IT TRANSFORMATION WITH EMC NEXT-GENERATION UNIFIED STORAGE AND BACKUP Virtualization, in particular VMware, has changed the way companies look at how they deploy not only their servers,

More information

Cisco and EMC Solutions for Application Acceleration and Branch Office Infrastructure Consolidation

Cisco and EMC Solutions for Application Acceleration and Branch Office Infrastructure Consolidation Solution Overview Cisco and EMC Solutions for Application Acceleration and Branch Office Infrastructure Consolidation IT organizations face challenges in consolidating costly and difficult-to-manage branch-office

More information

Deep Dive on SimpliVity s OmniStack A Technical Whitepaper

Deep Dive on SimpliVity s OmniStack A Technical Whitepaper Deep Dive on SimpliVity s OmniStack A Technical Whitepaper By Hans De Leenheer and Stephen Foskett August 2013 1 Introduction This paper is an in-depth look at OmniStack, the technology that powers SimpliVity

More information

Introducing. Markus Erlacher Technical Solution Professional Microsoft Switzerland

Introducing. Markus Erlacher Technical Solution Professional Microsoft Switzerland Introducing Markus Erlacher Technical Solution Professional Microsoft Switzerland Overarching Release Principles Strong emphasis on hardware, driver and application compatibility Goal to support Windows

More information

EMC RECOVERPOINT FAMILY

EMC RECOVERPOINT FAMILY EMC RECOVERPOINT FAMILY Cost-effective local and remote data protection and disaster recovery solution ESSENTIALS Maximize application data protection and disaster recovery Protect business applications

More information

Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011

Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011 Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011 Executive Summary Large enterprise Hyper-V deployments with a large number

More information

INCREASING EFFICIENCY WITH EASY AND COMPREHENSIVE STORAGE MANAGEMENT

INCREASING EFFICIENCY WITH EASY AND COMPREHENSIVE STORAGE MANAGEMENT INCREASING EFFICIENCY WITH EASY AND COMPREHENSIVE STORAGE MANAGEMENT UNPRECEDENTED OBSERVABILITY, COST-SAVING PERFORMANCE ACCELERATION, AND SUPERIOR DATA PROTECTION KEY FEATURES Unprecedented observability

More information

DVS Enterprise. Reference Architecture. VMware Horizon View Reference

DVS Enterprise. Reference Architecture. VMware Horizon View Reference DVS Enterprise Reference Architecture VMware Horizon View Reference THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED

More information

EMC DATA DOMAIN OPERATING SYSTEM

EMC DATA DOMAIN OPERATING SYSTEM ESSENTIALS HIGH-SPEED, SCALABLE DEDUPLICATION Up to 58.7 TB/hr performance Reduces protection storage requirements by 10 to 30x CPU-centric scalability DATA INVULNERABILITY ARCHITECTURE Inline write/read

More information

EMC SOLUTIONS TO OPTIMIZE EMR INFRASTRUCTURE FOR CERNER

EMC SOLUTIONS TO OPTIMIZE EMR INFRASTRUCTURE FOR CERNER EMC SOLUTIONS TO OPTIMIZE EMR INFRASTRUCTURE FOR CERNER ESSENTIALS Mitigate project risk with the proven leader, many of largest EHR sites run on EMC storage Reduce overall storage costs with automated

More information

IBM System Storage DS5020 Express

IBM System Storage DS5020 Express IBM DS5020 Express Manage growth, complexity, and risk with scalable, high-performance storage Highlights Mixed host interfaces support (Fibre Channel/iSCSI) enables SAN tiering Balanced performance well-suited

More information

Cisco Solution for EMC VSPEX Server Virtualization

Cisco Solution for EMC VSPEX Server Virtualization Reference Architecture Cisco Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 Virtual Machines Enabled by Cisco Unified Computing System, Cisco Nexus Switches, Microsoft Hyper-V, EMC

More information