EMC VSPEX with Brocade Networking Solution for PRIVATE CLOUD
Proven Infrastructure

EMC VSPEX with Brocade Networking Solution for PRIVATE CLOUD
Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines
Enabled by Brocade VCS Fabrics, EMC VNXe3200, and EMC Powered Backup

EMC VSPEX

Abstract

This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with Brocade VDX networking, Microsoft Hyper-V, EMC VNXe3200, and EMC Powered Backup for up to 125 virtual machines.

August 2014
© 2014 Brocade Communications Systems, Inc. All Rights Reserved.

ADX, AnyIO, Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, ICX, MLX, MyBrocade, OpenScript, VCS, VDX, and Vyatta are registered trademarks, and HyperEdge, The Effortless Network, and The On-Demand Data Center are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned may be trademarks of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.

Copyright © 2014 EMC Corporation. All rights reserved. Published in the USA. Published August 2014.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC Online Support website.

Part Number
Contents

Chapter 1  Executive Summary
    Introduction
    Target audience
    Document purpose
    Business needs

Chapter 2  Solution Overview
    Introduction
    Virtualization
    Compute
    Networking
    Storage
    EMC next-generation VNXe
    EMC Powered Backup

Chapter 3  Solution Technology Overview
    Overview
    Summary of key components
    Virtualization
        Overview
        Microsoft Hyper-V
        Virtual Fibre Channel ports
        Microsoft System Center Virtual Machine Manager
        High availability with Hyper-V Failover Clustering
        Hyper-V Replica
        Hyper-V snapshot
        Cluster-Aware Updating
        EMC Storage Integrator
    Compute
    Network
        Overview
        Brocade 6510 Fibre Channel switch for block-based storage
        Brocade VDX Ethernet Fabric switch for file-based storage
    Storage
        Overview
        EMC VNXe
        EMC VNXe Virtual Provisioning
        Windows Offloaded Data Transfer
        EMC PowerPath
        VNXe FAST Cache
        VNXe FAST VP
        VNXe file shares
        ROBO
    Backup and recovery
        Overview
        EMC Avamar deduplication
        EMC Data Domain deduplication storage systems
        EMC RecoverPoint
    Other technologies
        EMC XtremCache

Chapter 4  Solution Architecture Overview
    Overview
    Solution architecture
        Overview
        Logical architecture
        Key components
        Hardware resources
        Software resources
    Server configuration guidelines
        Overview
        Hyper-V memory virtualization
        Memory configuration guidelines
    Network configuration guidelines
        Overview
        VLAN
        Enabling jumbo frames (iSCSI or SMB only)
        Enabling link aggregation (SMB only)
        Brocade Virtual Link Aggregation Group (vLAG)
        Brocade Inter-Switch Link (ISL) Trunks
        Equal-Cost Multipath (ECMP)
        Pause Flow Control
    Storage configuration guidelines
        Overview
        Hyper-V storage virtualization for VSPEX
        VSPEX storage building blocks
        VSPEX Private Cloud validated maximums
    High availability and failover
        Overview
        Virtualization layer
        Compute layer
        Brocade Network layer
        Storage layer
    Validation test profile
        Profile characteristics
    EMC Powered Backup and configuration guidelines
    Sizing guidelines
    Reference workload
        Overview
        Defining the reference workload
    Applying the reference workload
        Overview
        Example 1: Custom-built application
        Example 2: Point-of-Sale system
        Example 3: Web server
        Example 4: Decision-support database
        Summary of examples
    Implementing the solution
        Overview
        Resource types
        CPU resources
        Memory resources
        Network resources
        Storage resources
        Implementation summary
    Quick assessment of customer environment
        Overview
        CPU requirements
        Memory requirements
        Storage performance requirements
        IOPS
        I/O size
        I/O latency
        Storage capacity requirements
        Determining equivalent reference virtual machines
        Fine-tuning hardware resources
        EMC VSPEX Sizing Tool

Chapter 5  VSPEX Configuration Guidelines
    Overview
    Pre-deployment tasks
        Overview
        Deployment prerequisites
    Customer configuration data
    Prepare, connect, and configure Brocade network switches
        Overview
        Prepare Brocade storage network infrastructure
        Complete network cabling
        Configure Brocade VDX 6740 switch (file storage)
            Step 1: Verify and apply Brocade VDX NOS licenses
            Step 2: Configure logical chassis VCS ID and RBridge IDs on the VDXs
            Step 3: Assign switch name
            Step 4: Brocade VCS Fabric ISL port configuration
            Step 5: Create required VLANs
            Step 6: Create vLAGs for Microsoft Hyper-V hosts
            Step 7: Create vLAGs for VNX ports
            Step 8: Connecting the VCS Fabric to an existing infrastructure through uplinks
            Step 9: Configure MTU and jumbo frames
            Step 10: Enable flow control support
            Step 11: Auto QoS for NAS
        Configure Brocade 6510 switch storage network (block storage)
            Providing power to the switch
            Configuring the 6510 switch
            Step 1: Initial switch configuration
            Step 2: FC switch licensing
            Step 3: FC zoning configuration
            Step 4: Switch management and monitoring
    Preparing and configuring storage array
        VNXe configuration for block protocols
        VNXe configuration for file protocols
        FAST VP configuration (optional)
        FAST Cache configuration (optional)
    Installing and configuring Hyper-V hosts
        Overview
        Installing Windows hosts
        Installing Hyper-V and configuring failover clustering
        Configuring Windows host networking
        Installing PowerPath on Windows servers
        Planning virtual machine memory allocations
    Installing and configuring SQL Server database
        Overview
        Creating a virtual machine for Microsoft SQL Server
        Installing Microsoft Windows on the virtual machine
        Installing SQL Server
        Configuring a SQL Server for SCVMM
    System Center Virtual Machine Manager server deployment
        Overview
        Creating a SCVMM host virtual machine
        Installing the SCVMM guest OS
        Installing the SCVMM server
        Installing the SCVMM Management Console
        Installing the SCVMM agent locally on a host
        Adding a Hyper-V cluster into SCVMM
        Adding file share storage to SCVMM (file variant only)
        Creating a virtual machine in SCVMM
        Performing partition alignment, and assigning File Allocation Unit Size
        Creating a template virtual machine
        Deploying virtual machines from the template virtual machine
    Summary

Chapter 6  Verifying the Solution
    Overview
    Post-installation checklist
    Deploying and testing a single virtual server
    Verifying the redundancy of the solution components
    Block and file environments

Chapter 7  System Monitoring
    Overview
    Key areas to monitor
        Performance baseline
        Servers
        Brocade Networking
        Storage
    VNXe resources monitoring guidelines
        Monitoring block storage resources
        Monitoring file storage resources
    Summary

Appendix A  Bill of Materials
    Bill of materials

Appendix B  Customer Configuration Data Sheet
    Customer configuration data sheet

Appendix C  Server Resources Component Worksheet
    Server resources component worksheet

Appendix D  References
    References
        EMC documentation
        Brocade documentation
        Other documentation

Appendix E  About VSPEX
    About VSPEX
Figures

Figure 1. Next-generation VNXe with multicore optimization
Figure 2. Active/active processors increase performance, resiliency, and efficiency
Figure 3. EMC Powered Backup solutions
Figure 4. VSPEX Private Cloud components
Figure 5. Compute layer flexibility
Figure 6. Example of highly available Brocade block-based storage network design
Figure 7. Brocade VDX with VCS Fabrics in a highly available file-based storage network design
Figure 8. Storage pool rebalance progress
Figure 9. Thin LUN space utilization
Figure 10. Examining storage pool space utilization
Figure 11. Logical architecture for block storage
Figure 12. Logical architecture for file storage
Figure 13. Hypervisor memory consumption
Figure 14. Required networks for block storage
Figure 15. Required networks for file storage
Figure 16. Hyper-V virtual disk types
Figure 17. Building block for 15 virtual servers
Figure 18. Building block for 125 virtual servers
Figure 19. Storage layout for 125 virtual machines using VNXe
Figure 20. Maximum scale levels and entry points of different arrays
Figure 21. High availability on the virtualization layer
Figure 22. Redundant power supplies
Figure 23. Brocade Network layer high availability (VNXe) block storage network variant
Figure 24. Brocade Network layer high availability (VNXe) file storage
Figure 25. VNXe series HA components
Figure 26. Resource pool flexibility
Figure 27. Required resource from the reference virtual machine pool
Figure 28. Aggregate resource requirements stage
Figure 29. Pool configuration stage
Figure 30. Aggregate resource requirements stage
Figure 31. Pool configuration stage
Figure 32. Customizing server resources
Figure 33. Sample Brocade network architecture: file storage
Figure 34. Sample Brocade network architecture: block storage
Figure 35. Port types
Figure 36. Port Groups of the VDX
Figure 37. Port Groups of the VDX 6740T and Brocade VDX 6740T-1G
Figure 38. Creating VLANs
Figure 39. Example VCS/VDX network topology with infrastructure connectivity
Figure 40. Configure NAS Server Address
Figure 41. Configure NAS Server type
Figure 42. FAST VP tab
Figure 43. Scheduled FAST VP relocation
Figure 44. FAST VP Relocation Schedule
Figure 45. Create FAST Cache
Figure 46. Advanced tab in the Create Storage Pool dialog box
Figure 47. Settings tab in the Storage Pool Properties dialog box
Figure 48. Storage Pool Alert settings
Figure 49. Storage Pool Snapshot settings
Figure 50. Storage Pools panel
Figure 51. LUN Properties dialog box
Figure 52. System panel
Figure 53. System Health panel
Figure 54. IOPS on the LUNs
Figure 55. IOPS on the drives
Figure 56. Latency on the LUNs
Figure 57. SP CPU utilization
Figure 58. VNXe file statistics
Figure 59. System Capacity panel
Figure 60. File Systems panel
Figure 61. File System Capacity panel
Figure 62. System Performance panel displaying file metrics
Tables

Table 1. VNXe customer benefits
Table 2. Solution hardware
Table 3. Solution software
Table 4. Hardware resources for compute layer
Table 5. Hardware resources for network
Table 6. Hardware resources for storage
Table 7. Number of disks required for different number of virtual machines
Table 8. Profile characteristics
Table 9. Virtual machine characteristics
Table 10. Blank worksheet row
Table 11. Reference virtual machine resources
Table 12. Example worksheet row
Table 13. Example applications stage
Table 14. Example applications stage
Table 15. Server resource component totals
Table 16. Deployment process overview
Table 17. Tasks for pre-deployment
Table 18. Deployment prerequisites checklist
Table 19. Tasks for switch and network configuration
Table 20. Brocade VDX 6740 configuration steps
Table 21. Brocade switch default settings
Table 22. Brocade 6510 FC switch configuration steps
Table 23. Brocade switch default settings
Table 24. Tasks for VNXe configuration for block protocols
Table 25. Storage allocation table for block
Table 26. Tasks for storage configuration for file protocols
Table 27. Storage allocation table for file
Table 28. Tasks for server installation
Table 29. Tasks for SQL Server database setup
Table 30. Tasks for SCVMM configuration
Table 31. Tasks for testing the installation
Table 32. Rules of thumb for drive performance
Table 33. Best practices for performance monitoring
Table 34. List of components used in the VSPEX solution for 125 virtual machines
Table 35. Common server information
Table 36. Hyper-V server information
Table 37. Array information
Table 38. Brocade Network infrastructure information
Table 39. VLAN information
Table 40. Service accounts
Table 41. Blank worksheet for determining server resources
Chapter 1  Executive Summary

This chapter presents the following topics:
Introduction
Target audience
Document purpose
Business needs
Introduction

The VSPEX Private Cloud for Microsoft Hyper-V with Brocade VDX networking solution provides a complete system architecture capable of supporting up to 125 virtual machines. VSPEX Proven Infrastructures are validated, modular architectures built with proven, superior technologies to create complete virtualization solutions, enabling you to make an informed decision at the hypervisor, compute, backup, networking, and storage layers. VSPEX helps to reduce virtualization planning and configuration burdens. When embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, expanded choices, greater efficiency, and lower risk.

This document is a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware that meet or exceed the stated minimums.

Target audience

The readers of this document should have the necessary training and background to install and configure Microsoft Hyper-V, Brocade VDX Ethernet Fabric or Connectrix-B Fibre Channel series switches, EMC VNX series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and readers should be familiar with these documents. Readers should also be familiar with the infrastructure and database security policies of the customer's environment.

Individuals focusing on selling and sizing a VSPEX private cloud solution for Microsoft Hyper-V infrastructure must pay particular attention to the first four chapters of this document. After the purchase, implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose

This proven infrastructure guide includes an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the system. The VSPEX private cloud architecture provides the customer with a modern system capable of hosting many virtual machines at a consistent
performance level. This solution runs on the Microsoft Hyper-V virtualization layer, backed by the highly available Brocade fabric network switch series and the VNX family of storage. The compute and network components, which are defined by the VSPEX partners, are laid out to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

The 125 virtual machine Hyper-V Private Cloud solution described in this document is based on the EMC VNXe3200 and on a defined reference workload. Since not every virtual machine has the same requirements, this document contains methods and guidance to adjust your system to be cost-effective when deployed. For larger environments, solutions for up to 1,000 virtual machines based on the EMC VNX series are described in the EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 1,000 Virtual Machines Proven Infrastructure Guide.

A private cloud architecture is a complex system offering. This document facilitates its setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component has been installed, validation tests and monitoring instructions ensure that your customer's system is running correctly. Following the instructions in this document ensures an efficient and expedited journey to the cloud.

Business needs

Business applications are moving into consolidated compute, network, and storage environments. EMC VSPEX private cloud solutions use Microsoft Hyper-V to reduce the complexity of configuring every component of a traditional deployment model. The complexity of integration management is reduced while maintaining the application design flexibility and implementation options. Administration is unified, while process separation can be adequately controlled and monitored. The business needs for the VSPEX private cloud solutions for Microsoft Hyper-V are:

- Providing an end-to-end virtualization solution to effectively utilize the capabilities of the unified infrastructure components
- Providing a VSPEX private cloud solution for Microsoft Hyper-V to efficiently virtualize up to 125 virtual machines for varied customer use cases
- Providing a reliable, flexible, and scalable reference design
Chapter 2  Solution Overview

This chapter presents the following topics:
Introduction
Virtualization
Compute
Networking
Storage
Introduction

The EMC VSPEX with Brocade Networking Solution for Private Cloud for Microsoft Hyper-V provides a complete system architecture capable of supporting up to 125 virtual machines with a redundant server and network topology and highly available storage. The core components that make up this particular solution are virtualization, compute, networking, storage, and EMC Powered Backup.

Virtualization

Microsoft Hyper-V is a key virtualization platform in the industry. For years, Hyper-V has provided flexibility and cost savings to end users by consolidating large, inefficient server farms into nimble, reliable cloud infrastructures. Features such as Live Migration, which enables a virtual machine to move between different servers with no disruption to the guest operating system, and Dynamic Optimization, which performs Live Migrations automatically to balance loads, make Hyper-V a solid business choice. With the release of Windows Server 2012 R2, a Microsoft virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual random access memory (RAM).

Compute

VSPEX provides the flexibility to design and implement the customer's choice of server components. The infrastructure must conform to the following attributes:

- Sufficient cores and memory to support the required number and types of virtual machines
- Sufficient network connections to enable redundant connectivity to the system switches
- Excess capacity to withstand a server failure and failover within the environment

Networking

Brocade VDX Ethernet Fabric and Fibre Channel Fabric switch technology enables the implementation of high-performance, efficient, and resilient networks validated with the VSPEX proven architectures. Brocade Ethernet and Fibre Channel fabrics provide an open, standards-based solution that unleashes the full potential of high-density server virtualization, private cloud architectures, and EMC VNX storage.
The Brocade VDX Ethernet Fabric networking solution provides the following attributes:

- Offers flexibility to deploy 1000BASE-T and upgrade to 10GBASE-T for higher bandwidth
- Delivers high performance and reduces network congestion with 10 Gigabit Ethernet (GbE) ports, low latency, and 24 MB deep buffers
- Improves capacity with the ability to create up to a 160 GbE uplink with Brocade ISL Trunking
- Manages an entire multitenant Brocade VCS fabric as a single switch with Brocade VCS Logical Chassis
- Provides efficiently load-balanced multipathing at Layers 1, 2, and 3, including multiple Layer 3 gateways
- Simplifies virtual machine (VM) mobility and management with automated, dynamic port profile configuration and migration
- Supports Software-Defined Networking (SDN) technologies within data, control, and management planes

The Brocade 6510 Fibre Channel Fabric switch is the purpose-built, data center-proven network infrastructure for storage, delivering unmatched reliability, simplicity, and 4/8/16 Gb/s performance. The Brocade 6510 Fibre Channel Fabric networking solution provides the following attributes:

- Provides exceptional price/performance value, combining flexibility, simplicity, and enterprise-class functionality in a 48-port switch
- Enables fast, easy, and cost-effective scaling from 24 to 48 ports using Ports on Demand (PoD) capabilities
- Simplifies management through Brocade Fabric Vision technology, reducing operational costs and optimizing application performance
- Simplifies deployment and supports high-performance fabrics by using Brocade ClearLink Diagnostic Ports (D_Ports) to identify optic and cable issues
- Simplifies and accelerates deployment with the Brocade EZSwitchSetup wizard and Dynamic Fabric Provisioning (DFP)
- Maximizes availability with redundant, hot-pluggable components and non-disruptive software upgrades
- Simplifies server connectivity by deploying as a full-fabric switch or a Brocade Access Gateway
Storage

The EMC VNXe storage series provides both file and block access with a broad feature set, which makes it an ideal choice for any private cloud implementation. VNXe storage includes the following components, sized for the stated reference architecture workload:

- I/O ports (for block and file): Provide host connectivity to the array, which supports CIFS/Server Message Block (SMB), Network File System (NFS), Fibre Channel (FC), and Internet Small Computer System Interface (iSCSI).
- Storage processors: The compute components of the storage array, used for all aspects of data moving into, out of, and between arrays. Unlike the VNX family, which requires external processing units known as Data Movers to provide file services, the VNXe contains integrated code that provides file services to hosts.
- Disk drives: Disk spindles and solid-state drives (SSDs) that contain the host or application data, and their enclosures.

The 125 virtual machine Hyper-V Private Cloud solution described in this document is based on the VNXe3200 storage array. The VNXe3200 can currently support a maximum of 50 drives.

The VNXe series supports a wide range of business-class features that are ideal for the private cloud environment, including:

- EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP)
- EMC FAST Cache
- Thin provisioning
- Snapshots or checkpoints
- File-level retention
- Quota management

EMC next-generation VNXe

Features and enhancements

EMC now offers customers even greater performance and choice than before with the inclusion of the next generation of VNXe Unified Storage into the VSPEX family of Proven Infrastructures. The next-generation VNXe, led by the VNXe3200, offers a hybrid, unified storage system for VSPEX customers who need to centralize and simplify storage when transforming their IT.

Customers who need to virtualize up to 125 virtual machines with VSPEX Private Cloud solutions will now see the benefits that the new Multicore (MCx) VNXe3200 brings. The new architecture distributes all data services
across all the system's cores. Cache management and back-end RAID management processes scale linearly and benefit greatly from the latest Intel multicore CPUs. Simply put, I/O operations in VSPEX run faster and more efficiently than ever before with the new VNXe3200.

The VNXe3200 is ushering in a profoundly new experience for small and medium-sized VSPEX customers as it delivers performance and scale at a lower price. The VNXe3200 is a significantly more powerful system than the previous VNXe series and ships with many enterprise-like features and capabilities, such as auto-tiering, file deduplication, and compression, which add to the simplicity, efficiency, and flexibility of the VSPEX Private Cloud solution.

EMC FAST Cache and FAST VP, features that have in the past been exclusive to the VNX, are now available to VSPEX customers with VNXe3200 storage. FAST Cache dynamically extends the storage system's existing read/write caching capacity to increase system-wide performance and lower the cost per virtual machine. FAST Cache uses high-performing flash drives that are positioned between the primary cache (DRAM-based) and the hard disk drives. This feature boosts the performance of highly transactional applications and virtual desktops by keeping hot data in the cache, so it is available when you need it.

VNXe3200 FAST Cache and FAST VP auto-tiering lower the total cost of ownership through policy-based movement of your data to the right storage type. Doing so maximizes the cost investment and speed benefit of SSDs across the system intelligently, while leveraging the capacity of less-costly spinning drives. This avoids over-purchasing and exhaustive manual configuration.

The EMC VNXe flash-optimized unified storage platform delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, the VNXe combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's virtualized application environments.

VNXe includes many features and enhancements designed and built upon the success of the next-generation VNX family. These features and enhancements include:

- More capacity with multicore optimization with Multicore Cache, Multicore RAID, and Multicore FAST Cache (MCx)
- Greater efficiency with a flash-optimized hybrid array
- Better protection by increasing application availability with active/active storage processors
- Easier administration and deployment by increasing productivity with a new Unisphere Management Suite
Flash-optimized hybrid array

VNXe is a flash-optimized hybrid array that provides automated tiering to deliver the best performance for your critical data, while intelligently moving less frequently accessed data to lower-cost disks.

In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. A flash-optimized VNXe takes full advantage of the low latency of flash to deliver cost-saving optimization and high-performance scalability. The EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and migrates the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance.

Data is typically used most frequently at the time it is created; therefore, new data is first stored on flash drives for the best performance. As that data ages and becomes less active over time, FAST VP moves the data from high-performance to high-capacity drives automatically, based on customer-defined policies. EMC has enhanced this functionality with four times better granularity and with new FAST VP solid-state disks (SSDs) based on enterprise multi-level cell (eMLC) technology to lower the cost per gigabyte. FAST Cache dynamically absorbs unpredicted spikes in system workloads. All VSPEX use cases benefit from the increased efficiency.

Note: This reference architecture does not make use of FAST Cache or FAST VP. Lab testing has demonstrated performance increases of approximately 10 to 20 percent, depending upon protocol, using the VSPEX workload.

VSPEX Proven Infrastructures deliver private cloud, end-user computing, and virtualized application solutions. With VNXe, customers can realize an even greater return on their investment. VNXe provides out-of-band, file-based deduplication that can dramatically lower the costs of the flash tier.

VNXe Intel MCx Code Path Optimization

The advent of flash technology has been a catalyst in totally changing the requirements of VNXe storage systems. EMC redesigned the midrange storage platform to efficiently optimize multicore CPUs to provide the highest-performing storage system at the lowest cost in the market.

MCx distributes all VNXe data services across all cores, as shown in Figure 1. The VNXe series with MCx has dramatically improved the file performance for transactional applications like databases or virtual machines over network-attached storage (NAS).
Figure 1. Next-generation VNXe with multicore optimization

Multicore Cache

The cache is the most valuable asset in the storage subsystem; its efficient use is key to the overall efficiency of the platform in handling variable and changing workloads. The cache engine has been modularized to take advantage of all the cores available in the system.

Multicore RAID

Another important part of the MCx redesign is the handling of I/O to the permanent back-end storage hard disk drives (HDDs) and SSDs. Greatly increased performance improvements in VNXe come from the modularization of the back-end data management processing, which enables MCx to seamlessly scale across all processors.

VNXe performance

Performance enhancements

VNXe storage, enabled with the MCx architecture, is optimized for FLASH 1st and provides unprecedented overall performance: it optimizes for transaction performance (cost per IOPS) and bandwidth performance (cost per GB/s) with low latency, and provides optimal capacity efficiency (cost per GB).

VNXe provides the following performance improvements:

- Up to four times more file transactions when compared with dual-controller arrays
- Increased file performance for transactional applications by up to three times, with a 60 percent better response time
- Up to four times more Oracle and Microsoft SQL Server OLTP transactions
- Up to six times more virtual machines
Active/active array storage processors

The new VNXe architecture provides active/active array storage processors, as shown in Figure 2, which eliminate application timeouts during path failover because both paths are actively serving I/O.

Figure 2. Active/active processors increase performance, resiliency, and efficiency

Load balancing is also improved, and applications can achieve up to a two-times improvement in performance. Active/active for block is ideal for applications that require the highest levels of availability and performance, but do not require tiering or efficiency services like compression or deduplication.

Virtualization Management

EMC Storage Integrator

EMC Storage Integrator (ESI) is targeted toward the Windows and application administrator. ESI is easy to use, delivers end-to-end monitoring, and is hypervisor agnostic. Administrators can provision in both virtual and physical environments for a Windows platform, and troubleshoot by viewing the topology of an application from the underlying hypervisor to the storage.

Microsoft Hyper-V

With Windows Server 2012 R2, Microsoft provides Hyper-V 3.0, an enhanced hypervisor for private cloud that can run on NAS protocols for simplified connectivity.

Offloaded Data Transfer

The Offloaded Data Transfer (ODX) feature of Windows Server 2012 R2 enables data transfers during copy operations to be offloaded to the storage array, freeing up host cycles. For example, using ODX for a live migration of a SQL Server virtual machine doubled performance, decreased migration time by 50 percent, reduced CPU load on the host server by 20 percent, and eliminated network traffic.
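ODX is enabled by default on Windows Server 2012 R2. As a quick sanity check on a Hyper-V host, the documented registry value below reports whether ODX has been turned off; an absent value or 0 means ODX is enabled, and 1 means it has been disabled. This is a minimal sketch, not part of the validated deployment steps:

```powershell
# Check whether ODX has been disabled on this host.
# 0 (or an absent value) means ODX is enabled; 1 means it has been disabled.
$odx = Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
    -Name "FilterSupportedFeaturesMode" -ErrorAction SilentlyContinue
if ($null -eq $odx -or $odx.FilterSupportedFeaturesMode -eq 0) {
    "ODX is enabled"
} else {
    "ODX is disabled"
}
```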
EMC Powered Backup

EMC Powered Backup solutions, EMC Avamar and EMC Data Domain, deliver the protection and confidence needed to accelerate the deployment of VSPEX Private Clouds. Optimized for virtual environments, EMC Powered Backup reduces backup times by 90 percent and increases recovery speeds by 30 times, even offering virtual machines instant access for worry-free protection. EMC backup appliances add another layer of assurance with end-to-end verification and self-healing to ensure successful recoveries.

Our solutions also deliver big savings. With industry-leading deduplication, you can reduce backup storage by 10 to 30 times, backup management time by 81 percent, and WAN bandwidth by 99 percent for efficient disaster recovery, delivering a seven-month payback period on average. You will be able to scale storage easily and efficiently as your environment grows.

Figure 3. EMC Powered Backup solutions

EMC Powered Backup solutions used in this VSPEX solution include the EMC Avamar deduplication software and system, and the EMC Data Domain deduplication storage system.
Chapter 3  Solution Technology Overview

This chapter presents the following topics:
Overview
Summary of key components
Virtualization
Compute
Network
Storage
Backup and recovery
Other technologies
Overview

This solution uses the VNXe array, Brocade network fabric switches, and Microsoft Hyper-V to provide storage and server hardware consolidation in a VSPEX Private Cloud. The new virtualized infrastructure is centrally managed to provide efficient deployment and management of a scalable number of virtual machines and associated shared storage. Figure 4 depicts the solution components.

Figure 4. VSPEX Private Cloud components

The following sections describe the components in detail.
Summary of key components

This section briefly describes the key components of this solution.

- Virtualization: The virtualization layer decouples the physical implementation of resources from the applications that use them. The application's view of the available resources is no longer directly tied to the hardware. This enables many key features in the private cloud concept.
- Compute: The compute layer provides memory and processing resources for the virtualization layer software, and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and enables the customer to implement the solution by using any server hardware that meets these requirements.
- Network: Brocade VDX Ethernet Fabric or Connectrix-B Fibre Channel Fabric switches with Brocade fabric networking technology connect the users of the private cloud, and the existing customer infrastructure, to the compute and storage resources of the VSPEX solution. The EMC VSPEX reference architecture with Brocade network fabric switches provides the required connectivity and scalability, and enables the customer to implement a cost-effective, resilient, and operationally efficient virtualization platform.
- Storage: The storage layer is critical for the implementation of the private cloud. With multiple hosts accessing shared data, many of the use cases defined in the private cloud can be implemented. The EMC VNXe storage used in this solution provides high-performance data storage while maintaining high availability.
- Backup and recovery: The backup and recovery components of the solution provide data protection when the data in the primary system is deleted, damaged, or unusable.

Solution architecture provides details on all the components that make up the reference architecture.
Virtualization

Overview

The virtualization layer is a key component of any server virtualization or private cloud solution. It decouples the application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the system to physically change without affecting the hosted applications. In a server virtualization or private cloud use case, it enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.

Microsoft Hyper-V

Microsoft Hyper-V is a Windows Server role that was first introduced in Windows Server 2008. Hyper-V virtualizes computer hardware resources, such as CPU, memory, storage, and networking. This transformation creates fully functional virtual machines that run their own operating systems and applications like physical computers.

Hyper-V works with Failover Clustering and Cluster Shared Volumes (CSVs) to provide high availability in a virtualized infrastructure. Live migration and live storage migration enable seamless movement of virtual machines or virtual machine files between Hyper-V servers or storage systems, transparently and with minimal performance impact.

Virtual Fibre Channel ports

Windows Server 2012 R2 provides virtual Fibre Channel (FC) ports within a Hyper-V guest operating system. The virtual FC port uses the standard N_Port ID Virtualization (NPIV) process to address the virtual machine WWNs within the Hyper-V host's physical host bus adapter (HBA). This provides virtual machines with direct access to external storage arrays over FC, enables clustering of guest operating systems over FC, and offers an important new storage option for the hosted servers in the virtual infrastructure. Virtual FC in Hyper-V guest operating systems also supports related features, such as virtual SANs, live migration, and multipath I/O (MPIO).

Prerequisites for virtual FC include:

- One or more installations of Windows Server 2012 R2 with the Hyper-V role
- One or more FC HBAs installed on the server, each with an appropriate HBA driver that supports virtual FC
- An NPIV-enabled SAN

Virtual machines using the virtual FC adapter must use Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 R2 as the guest operating system.
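As a hedged illustration of these prerequisites in practice, the Hyper-V PowerShell module can define a virtual SAN and attach a virtual FC adapter to a guest. The SAN name, VM name, and WWNs below are placeholders, not values from this solution:

```powershell
# Define a virtual SAN on the Hyper-V host. In practice the virtual SAN is
# bound to the host's physical, NPIV-capable HBA; the WWNs are placeholders.
New-VMSan -Name "VSPEX_FabricA" `
    -WorldWideNodeName "C003FF0000FFFF00" -WorldWidePortName "C003FF5778E50002"

# Add a virtual Fibre Channel adapter to a guest and connect it to that SAN.
Add-VMFibreChannelHba -VMName "SQL01" -SanName "VSPEX_FabricA"
```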
Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager (SCVMM) is a centralized management platform for the virtualized data center. SCVMM allows administrators to configure and manage the virtualized host, networking, and storage resources, and to create and deploy virtual machines and services to private clouds. SCVMM simplifies provisioning, management, and monitoring in the Hyper-V environment.

High availability with Hyper-V Failover Clustering

The Windows Server 2012 Failover Clustering feature provides high availability in Hyper-V. High availability is impacted by both planned and unplanned downtime, and Failover Clustering significantly increases the availability of virtual machines during both.

Configure Windows Server 2012 Failover Clustering on the Hyper-V hosts to monitor virtual machine health and migrate virtual machines between cluster nodes. The advantages of this configuration are:

- Enables migration of virtual machines to a different cluster node if the cluster node where they reside must be updated, changed, or rebooted.
- Allows other members of the Windows Failover Cluster to take ownership of the virtual machines if the cluster node where they reside suffers a failure or significant degradation.
- Minimizes downtime due to virtual machine failures. Windows Server Failover Clustering detects virtual machine failures and automatically takes steps to recover the failed virtual machine. This allows the virtual machine to be restarted on the same host server, or migrated to a different host server.

Hyper-V Replica

Hyper-V Replica was introduced in Windows Server 2012 to provide asynchronous virtual machine replication over the network from one Hyper-V host at a primary site to another Hyper-V host at a replica site. Hyper-V Replica protects business applications in the Hyper-V environment from downtime associated with an outage at a single site.

Hyper-V Replica tracks the write operations on the primary virtual machine and replicates the changes to the replica server over the network with HTTP and HTTPS. The amount of network bandwidth required is based on the transfer schedule and data change rate. If the primary Hyper-V host fails, you can manually fail over the production virtual machines to the Hyper-V hosts at the replica site. Manual failover brings the virtual machines back to a consistent point from which they can be accessed with minimal impact on the business. After recovery, the primary site can receive changes from the replica site. You can perform a planned failback to manually revert the virtual machines back to the Hyper-V host at the primary site. A PowerShell sketch of both capabilities follows.
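The minimal sketch below clusters a pair of hosts and enables replication for one guest; the host names, cluster IP address, and VM name are placeholders, and the replica host must already be configured to accept replication (for example, with Set-VMReplicationServer):

```powershell
# Form a failover cluster from two Hyper-V hosts and make a guest highly available.
New-Cluster -Name "VSPEX-HVC" -Node "HV01","HV02" -StaticAddress "192.168.10.50"
Add-ClusterVirtualMachineRole -VMName "SQL01"

# Enable Hyper-V Replica for the same guest, replicating to a host at the
# replica site over HTTP with Kerberos authentication, then seed the first copy.
Enable-VMReplication -VMName "SQL01" -ReplicaServerName "HV-DR01.contoso.local" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "SQL01"
```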
Hyper-V snapshot

A Hyper-V snapshot creates a consistent point-in-time view of a virtual machine. Snapshots can function as a source for backups or other use cases. Virtual machines do not have to be running to take a snapshot. Snapshots are completely transparent to the applications running on the virtual machine. The snapshot saves the point-in-time status of the virtual machine, and enables users to revert the virtual machine to a previous point in time if necessary.

Note: Snapshots require additional storage space. The amount of additional storage space depends on the frequency of data change on the virtual machine and the number of snapshots being retained.

Cluster-Aware Updating

Cluster-Aware Updating (CAU) was introduced in Windows Server 2012. It provides a way of updating cluster nodes with little or no disruption. CAU transparently performs the following tasks during the update process:

1. Puts one cluster node into maintenance mode and takes it offline (virtual machines are live-migrated to other cluster nodes).
2. Installs the updates.
3. Performs a restart if necessary.
4. Brings the node back online (migrated virtual machines are moved back to the original node).
5. Updates the next node in the cluster.

The node managing the update process is called the Orchestrator. The Orchestrator can work in two different modes:

- Self-updating mode: The Orchestrator runs on the cluster node being updated.
- Remote-updating mode: The Orchestrator runs on a standalone Windows operating system and remotely manages the cluster update.

CAU is integrated with Windows Server Update Services (WSUS). PowerShell allows automation of the CAU process, as the sketch below illustrates.
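For illustration only, both features can be driven from PowerShell; the VM name, checkpoint name, and cluster name are placeholders:

```powershell
# Take a point-in-time checkpoint (snapshot) of a guest, and revert to it later.
Checkpoint-VM -Name "SQL01" -SnapshotName "Pre-patch"
Restore-VMSnapshot -VMName "SQL01" -Name "Pre-patch" -Confirm:$false

# Run one remote-updating CAU pass against the cluster from a management host.
Invoke-CauRun -ClusterName "VSPEX-HVC" -MaxFailedNodes 0 -RequireAllNodesOnline -Force
```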
EMC Storage Integrator

EMC Storage Integrator (ESI) is an agentless, free plug-in that enables application-aware storage provisioning for Microsoft Windows Server applications, Hyper-V, VMware, and XenServer environments. Administrators can provision block and file storage for Microsoft Windows or Microsoft SharePoint sites by using wizards in ESI. ESI supports the following functions:

- Provisioning, formatting, and presenting drives to Windows servers
- Provisioning new cluster disks, and automatically adding them to the cluster
- Provisioning shared CIFS storage, and mounting it to Windows servers
- Provisioning SharePoint storage, sites, and databases in a single wizard

Compute

The choice of a server platform for a VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX documents the minimum requirements for the number of processor cores and the amount of RAM. The solution can be implemented with two or twenty servers, and still be considered the same VSPEX solution.

In the example shown in Figure 5, the compute layer requirements for a specific implementation are 25 processor cores and 200 GB of RAM. One customer might want to implement this by using white-box servers containing 16 processor cores and 64 GB of RAM, while another customer chooses a higher-end server with 20 processor cores and 144 GB of RAM. The resulting server counts are worked through after the sketch below.
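The arithmetic is simply the worst case of the CPU and RAM constraints. A minimal PowerShell sketch using the example's numbers (the server names are illustrative):

```powershell
# Compute-layer sizing sketch: a server count must satisfy both the core and
# the RAM requirement, so take the larger of the two quotients, rounded up.
$requiredCores = 25
$requiredRamGB = 200
$servers = @(
    @{ Name = "White-box";  Cores = 16; RamGB = 64 },
    @{ Name = "Higher-end"; Cores = 20; RamGB = 144 }
)
foreach ($s in $servers) {
    $byCpu = [math]::Ceiling($requiredCores / $s.Cores)
    $byRam = [math]::Ceiling($requiredRamGB / $s.RamGB)
    $count = [math]::Max($byCpu, $byRam)
    "{0}: {1} servers (plus one more for N+1 high availability)" -f $s.Name, $count
}
# Output: the white-box design needs 4 servers (RAM-bound); the higher-end design needs 2.
```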
Figure 5. Compute layer flexibility

The first customer needs four of the chosen servers, while the other customer needs two.

Note: To enable high availability at the compute layer, each customer needs one additional server to ensure that the system has enough capability to maintain business operations when a server fails.

Use the following best practices in the compute layer:

- Use several identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies, which may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.
- If you implement high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.
- Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This enables the implementation of minimal-downtime upgrades and tolerance for single-unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for VSPEX can be flexible to meet your specific needs. Ensure that there are sufficient processor cores and RAM per core to meet the needs of the target environment.

Network

Overview

The VSPEX Proven Infrastructure with Brocade networking solution provides the required redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. Brocade networking solutions provide options with Connectrix-B 6510 Fibre Channel switches for block storage and VDX 6740-T Ethernet Fabric switches for file storage connectivity between compute and storage. The Brocade network is designed in the VSPEX reference architecture for block and file-based storage traffic types to optimize throughput, manageability, application separation, high availability, and security.

The storage network solution is implemented with redundant network links for each host and the VNXe storage array. If a link is lost on any of the Brocade network infrastructure ports, the link fails over to another port, and all network traffic is distributed across the active links. Figure 6 and Figure 7 depict examples of this highly available Brocade storage network topology.

Brocade 6510 Fibre Channel switch for block-based storage

The Brocade 6510 with Gen 5 Fibre Channel technology simplifies the storage network infrastructure through innovative technologies and supports the VSPEX highly virtualized topology design. The Brocade validated network solution simplifies server connectivity by deploying as a full-fabric switch, and enables fast, easy, and cost-effective scaling from 24 to 48 ports with Ports on Demand (PoD). For block-based storage traffic, the Brocade 6510 Fibre Channel switch maximizes availability with a redundant architecture, hot-pluggable components, and non-disruptive upgrades.

For block storage, the EMC VNXe unified storage platform attaches to the highly available Brocade storage network with two ports per storage processor. If a link is lost on a storage processor front-end port, the link fails over to another port, and all storage network traffic is distributed across the active links. Figure 6 depicts an example of the Brocade network topology for block-based storage.
Figure 6. Example of highly available Brocade block-based storage network design

Brocade 6510 Fibre Channel switches provide high availability for the VSPEX SAN infrastructure, with active-active links for all traffic from the virtualized compute servers to the EMC VNXe storage arrays. The Brocade 6510 switch meets the demands of hyper-scale, private cloud VSPEX storage traffic environments with market-leading Gen 5 Fibre Channel technology and capability that supports the VSPEX virtualized architecture.

The failure of a link in a route causes the network to reroute any traffic that was using that particular link, as long as an alternate path is available. Brocade Fabric Shortest Path First (FSPF) is a highly efficient routing algorithm that reroutes around failed links in less than a second. ISL Trunking improves on this concept by helping to prevent the loss of the route: a link failure merely reduces the available bandwidth of the logical ISL trunk. In other words, a failure does not completely break the pipe, but simply makes the pipe narrower. As a result, data traffic is much less likely to be affected by link failures, and the bandwidth automatically increases when the link is repaired.
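As a hedged sketch of the Fabric OS workflow behind this design (the validated zoning steps are in Chapter 5), single-initiator zoning on the 6510 looks roughly like the following; the switch prompt, aliases, and WWPNs are placeholders:

```
FC_SW01:admin> alicreate "hv01_hba0", "10:00:00:00:c9:00:00:01"
FC_SW01:admin> alicreate "vnxe_spa_fc0", "50:06:01:60:00:00:00:01"
FC_SW01:admin> zonecreate "z_hv01_vnxe_spa", "hv01_hba0; vnxe_spa_fc0"
FC_SW01:admin> cfgcreate "vspex_cfg", "z_hv01_vnxe_spa"
FC_SW01:admin> cfgenable "vspex_cfg"
```

After the ISLs between switches are cabled, the trunkshow command can be used to verify that the logical ISL trunk groups described above have formed.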
Brocade VDX Ethernet Fabric switch for file-based storage

The Brocade VDX with VCS Fabrics helps simplify networking infrastructures through innovative technologies and the VSPEX infrastructure topology design. The Brocade validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types, improving throughput, manageability, application separation, high availability, and security for file storage traffic. Brocade VDX 6740 switches support this strategy by simplifying network architecture while increasing network performance and resiliency with Ethernet fabrics.

Brocade VDX with VCS Fabric technology supports active-active links for all traffic from the virtualized compute servers to the EMC VNXe storage arrays. In this validated solution for file storage, the EMC unified storage platform attaches to the highly available Brocade network by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX array, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port, and all network traffic is distributed across the active links. Figure 7 depicts an example of the Brocade network topology for file-based storage.
Figure 7. Brocade VDX with VCS Fabrics in a highly available file-based storage network design

The Brocade VDX 6740 Ethernet Fabric switches provide file-based connectivity at 10 GbE between the compute layer and VNX storage. The Brocade VDX with VCS Fabric technology helps simplify the networking infrastructure for the VSPEX file storage network topology design, and the Brocade validated network solution supports the segregated network traffic of the VSPEX reference architecture for SMB 3.0 file storage traffic. Brocade VDX switches enable a storage network with high availability and redundancy by using link aggregation to the EMC VNX storage array, as the configuration sketch below illustrates.
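The following is a hedged NOS CLI sketch of such a vLAG toward the array, assuming a two-node VCS fabric; the VLAN, port-channel, and interface numbers are placeholders, exact syntax varies by NOS release, and Chapter 5 gives the validated steps:

```
sw0# configure terminal
sw0(config)# interface Vlan 100
sw0(config-Vlan-100)# description SMB3-storage
sw0(config-Vlan-100)# exit
sw0(config)# interface Port-channel 10
sw0(config-Port-channel-10)# vlag ignore-split
sw0(config-Port-channel-10)# mtu 9216
sw0(config-Port-channel-10)# switchport
sw0(config-Port-channel-10)# switchport mode access
sw0(config-Port-channel-10)# switchport access vlan 100
sw0(config-Port-channel-10)# no shutdown
sw0(config-Port-channel-10)# exit
sw0(config)# interface TenGigabitEthernet 1/0/5
sw0(conf-if-te-1/0/5)# channel-group 10 mode active type standard
sw0(conf-if-te-1/0/5)# no shutdown
sw0(conf-if-te-1/0/5)# exit
sw0(config)# interface TenGigabitEthernet 2/0/5
sw0(conf-if-te-2/0/5)# channel-group 10 mode active type standard
sw0(conf-if-te-2/0/5)# no shutdown
```

Because the two member ports live on different RBridges (1/0/5 and 2/0/5) in the same VCS fabric, the port-channel forms a vLAG, so the array's LACP bond sees one logical switch even though its links land on two physical switches.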
Storage

Overview

The storage layer is a key component of any cloud infrastructure solution: it serves the data generated by applications and operating systems, and a well-designed storage layer increases storage efficiency and management flexibility while reducing total cost of ownership. In this VSPEX solution, EMC VNXe series arrays provide the features and performance to enable and enhance any virtualization environment.

EMC VNXe

The EMC VNXe family is optimized for virtual applications, and delivers industry-leading innovation and enterprise capabilities for file and block storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises. Intel Xeon processors power the VNXe series for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security. It is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises.

Table 1 shows the customer benefits provided by the VNXe series.

Table 1. VNXe customer benefits

Feature: Next-generation unified storage, optimized for virtualized applications
Benefit: Tight integration with Microsoft Windows and System Center allows for advanced array features and centralized management

Feature: Capacity optimization features including compression, deduplication, thin provisioning, and application-consistent copies
Benefit: Reduced storage costs, more efficient use of resources, and easier recovery of applications

Feature: High availability, designed to deliver five 9s availability
Benefit: Higher levels of uptime and reduced outage risk

Feature: Automated tiering with FAST VP and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously
Benefit: More efficient use of storage resources without complicated planning and configuration

Feature: Simplified management with EMC Unisphere, a single management interface for all NAS, SAN, and replication needs
Benefit: Reduced management overhead and fewer toolsets required to manage the environment
Different software suites and packs are also available for the VNXe series, providing multiple features for enhanced protection and performance.

Software suites

The following VNXe software suites are available:
FAST Suite: Automatically optimizes for the highest system performance and the lowest storage cost simultaneously.
Security and Compliance Suite: Keeps data safe from changes, deletions, and malicious activity.

EMC VNXe Virtual Provisioning

EMC VNXe Virtual Provisioning enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies reduce power and cooling requirements and capital expenditures.

Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes utilization by allocating storage only as it is needed. Thick LUNs provide high, predictable performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning.

Pools and pool LUNs are also the building blocks for advanced data services such as FAST VP, VNXe Snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and the User Capacity Threshold setting.

Virtual Provisioning allows you to expand the capacity of a storage pool from the Unisphere GUI after disks are physically attached to the system. VNXe systems can rebalance allocated data elements across all member drives to use new drives after the pool is expanded. The rebalance function starts automatically and runs in the background after an expand action. You can monitor the progress of a rebalance operation from the Jobs panel in Unisphere, as shown in Figure 8.
Figure 8. Storage pool rebalance progress

LUN expansion

Use pool LUN expansion to increase the capacity of existing LUNs as business needs grow. The VNXe series can expand a pool LUN without disrupting user access; the expansion takes a few clicks, and the expanded capacity is immediately available. However, you cannot expand a pool LUN if it is part of a data-protection or LUN-migration operation. For example, snapshot LUNs or migrating LUNs cannot be expanded. For more detailed information about pool LUN expansion, refer to Virtual Provisioning for the New VNX Series.

Alerting the user through the Capacity Threshold setting

Configure proactive alerts when using file systems or storage pools based on thin pools. Monitor these resources so that storage is available for provisioning when needed and capacity shortages are avoided. Figure 9 explains why provisioning with thin pools requires monitoring.
Figure 9. Thin LUN space utilization

Monitor the following values for thin pool utilization:
Total capacity is the total physical capacity available to all LUNs in the pool.
Total allocation is the total physical capacity currently assigned to all pool LUNs.
Subscribed capacity is the total host-reported capacity supported by the pool.
Over-subscribed capacity is the amount of user capacity configured for LUNs that exceeds the physical capacity in the pool.

Total allocation must never exceed the total capacity; if it nears that point, add storage to the pool proactively before reaching a hard limit (see the sketch below). Figure 10 shows the Storage Pool Properties dialog box in Unisphere, which displays parameters such as Available Space, Used Space, Subscription, Alert Threshold, and Total Space.

Figure 10. Examining storage pool space utilization

When storage pool capacity becomes exhausted, any requests for additional space allocation on thin-provisioned LUNs fail. Applications attempting to write data to these LUNs usually fail as well, and an outage is the likely result.
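The relationship between these values can be expressed with simple arithmetic. The following sketch uses hypothetical pool numbers, and the 70 percent threshold is an assumed example rather than a validated setting:

```powershell
# Hypothetical pool counters for illustration only.
$totalCapacityGB   = 10000   # physical capacity available to all LUNs in the pool
$totalAllocationGB = 8200    # physical capacity currently assigned to pool LUNs
$subscribedGB      = 14000   # total host-reported capacity supported by the pool

# Over-subscribed capacity is the user capacity beyond the physical capacity.
$oversubscribedGB = [math]::Max(0, $subscribedGB - $totalCapacityGB)
$percentFull      = [math]::Round(($totalAllocationGB / $totalCapacityGB) * 100, 1)

"Over-subscribed capacity: $oversubscribedGB GB"
"Pool allocation: $percentFull percent of total capacity"
if ($percentFull -ge 70) {
    "Warning: past the example threshold - expand the pool before allocations fail."
}
```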
To avoid this situation, monitor pool utilization and configure alerts for when thresholds are reached; set the Percentage Full Threshold to allow enough buffer to take remedial action before an outage occurs. This alert is active only if there are one or more thin LUNs in the pool, because thin LUNs are the only way to oversubscribe a pool. If the pool contains only thick LUNs, the alert is not active because there is no risk of running out of space due to oversubscription.

Windows Offloaded Data Transfer

Windows Offloaded Data Transfer (ODX) provides the ability to offload data transfers from the server to the storage arrays. This feature is enabled by default in Windows Server 2012, and VNXe arrays are compatible with Windows ODX on Windows Server 2012. ODX supports the following protocols:
iSCSI
Fibre Channel (FC)
FC over Ethernet (FCoE)
Server Message Block (SMB) 3.0

The following data-transfer operations currently support ODX:
Transferring large amounts of data via Hyper-V Manager, such as creating a fixed-size VHD, merging a snapshot, or converting VHDs
Copying files in File Explorer
Using the copy commands in Windows PowerShell
Using the copy commands in the Windows command prompt

Because ODX offloads the file transfer to the storage array, host CPU and network utilization are significantly reduced. ODX minimizes latencies and improves transfer speed by using the storage array for the data movement, which is especially beneficial for large files such as database or video files. Because ODX is enabled by default in Windows Server 2012, data transfers for ODX-supported file operations are automatically offloaded to the storage array; the process is transparent to users.

EMC PowerPath

EMC PowerPath is a host-based software package that provides automated data path management and load-balancing capabilities for heterogeneous server, network, and storage deployments in physical and virtual environments. It offers the following benefits for the VSPEX Proven Infrastructure:
Standardized data management across physical and virtual environments.
Automated multipathing policies and load balancing to provide predictable and consistent application availability and performance across physical and virtual environments.
Improved service-level agreements by eliminating application impact from I/O failures.
VNXe FAST Cache

VNXe FAST Cache enables flash drives to function as an expanded cache layer for the array. FAST Cache is an array-wide, nondisruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache, and subsequent reads and/or writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of highly active data to flash drives, dramatically improving response times for the active data and reducing the data hot spots that can occur within a LUN. The FAST Cache feature is an optional component of this solution.

VNXe FAST VP

VNXe FAST VP can automatically tier data across multiple types of drives to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing is part of a regularly scheduled maintenance operation.

VNXe file shares

In many environments, it is important to have a common location to store files accessed by many different individuals. This is implemented as CIFS or NFS file shares from a file server. VNXe storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency-improvement features.

ROBO

Organizations with remote offices and branch offices (ROBO) often prefer to locate data and applications close to the users in order to provide better performance and lower latency. In these environments, IT departments need to balance the benefits of local support with the need to maintain central control. Local systems and storage should be easy for local personnel to administer, but should also support remote management and flexible aggregation tools that minimize the demands on those local resources. With VSPEX, you can accelerate the deployment of applications at remote offices and branch offices. Customers can also leverage Unisphere Remote to consolidate the monitoring, system alerts, and reporting of hundreds of locations while maintaining simplicity of operation and unified storage functionality for local managers.
Backup and recovery

Overview

Backup and recovery, another important component of this VSPEX solution, provides data protection by backing up data files or volumes on a defined schedule, and restoring data from backup for recovery after a disaster.

EMC Powered Backup is a smart method of backup. It consists of best-of-class, integrated protection storage and software designed to meet backup and recovery objectives now and in the future. With EMC market-leading protection storage, deep data-source integration, and feature-rich data management services, you can deploy an open, modular protection storage architecture that allows you to scale while lowering cost and complexity.

EMC Avamar deduplication

EMC Avamar provides fast, efficient backup and recovery through a complete software and hardware solution. Equipped with integrated variable-length deduplication technology, Avamar facilitates fast, daily full backups for virtual environments, remote offices, enterprise applications, network-attached storage (NAS) servers, and desktops/laptops.

EMC Data Domain deduplication storage systems

EMC Data Domain deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication for backup and archive workloads.

EMC RecoverPoint

EMC RecoverPoint is an enterprise-scale solution that protects application data on heterogeneous SAN-attached servers and storage arrays. EMC RecoverPoint runs on a dedicated appliance (RPA) and combines industry-leading continuous data protection technology with bandwidth-efficient, no-data-loss replication, allowing it to protect data locally (continuous data protection, CDP), remotely (continuous remote replication, CRR), or both (concurrent local and remote replication, CLR).

RecoverPoint CDP replicates data within the same site or to a local bunker site some distance away, with the data transferred over FC. RecoverPoint CRR uses either FC or an existing IP network to send data snapshots to the remote site using techniques that preserve write order. In a CLR configuration, RecoverPoint replicates to both a local and a remote site simultaneously.

RecoverPoint uses lightweight splitting technology on the application server, in the fabric, or in the array to mirror application writes to the RecoverPoint cluster. RecoverPoint supports several types of write splitters:
Array-based
Intelligent fabric-based
Host-based
Other technologies

In addition to the required technical components of EMC VSPEX solutions, other items may provide additional value depending on the specific use case.

EMC XtremCache

EMC XtremCache is a server flash caching solution that reduces latency and increases throughput to improve application performance by using intelligent caching software and PCIe flash technology.

Server-side flash caching for maximum speed

XtremCache performs the following functions to improve system performance:
Caches the most frequently referenced data on the server-based PCIe card, putting the data closer to the application.
Automatically adapts to changing workloads by determining the most frequently referenced data and promoting it to the server flash card. This means that the hottest (most active) data automatically resides on the PCIe card in the server for faster access.
Offloads read traffic from the storage array, which frees processing power for other applications. While one application accelerates with XtremCache, array performance for other applications remains the same or is slightly enhanced.

Write-through caching to the array for total protection

XtremCache accelerates reads and protects data by using a write-through cache to the storage array to deliver persistent high availability, integrity, and disaster recovery.

Application agnostic

XtremCache is transparent to applications; there is no need to rewrite, retest, or recertify to deploy XtremCache in the environment.

Minimum impact on system resources

Unlike other caching solutions on the market, XtremCache does not require a significant amount of memory or CPU cycles, because all flash and wear-leveling management is done on the PCIe card without using server resources. Unlike other PCIe solutions, XtremCache imposes no significant overhead on server resources.

XtremCache creates an efficient and intelligent I/O path from the application to the datastore, which results in an infrastructure that is dynamically optimized for performance, intelligence, and protection in both physical and virtual environments.
XtremCache active/passive clustering support

The configuration of XtremCache clustering scripts ensures that stale data is never retrieved. The scripts use cluster-management events to trigger a mechanism that purges the cache. The XtremCache-enabled active/passive cluster ensures data integrity and accelerates application performance.

XtremCache performance considerations

XtremCache performance considerations include:
On a write request, XtremCache first writes to the array, then to the cache, and then completes the application I/O.
On a read request, XtremCache satisfies the request with cached data or, when the data is not present, retrieves the data from the array, writes it to the cache, and then returns it to the application. The trip to the array can be on the order of milliseconds; therefore, the array limits how fast the cache can work.
As the number of writes increases, XtremCache performance decreases.
XtremCache is most effective for workloads with a 70 percent or greater read/write ratio, with small, random I/O (8 KB is ideal); a rough latency sketch follows the note below. I/O larger than 128 KB is not cached in XtremCache 1.5.

Note: For more information, refer to the Introduction to EMC XtremCache white paper.
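To see why the read/write mix matters, consider a generic effective-latency calculation for a write-through cache. The figures below are illustrative assumptions, not measured XtremCache values, and the model assumes for simplicity that every read hits the cache:

```powershell
# Illustrative numbers only; actual latencies depend on the hardware.
$readFraction   = 0.7    # fraction of I/O that is reads (the recommended minimum)
$cacheLatencyMs = 0.2    # assumed PCIe flash read latency
$arrayLatencyMs = 6.0    # assumed array round-trip latency

# Reads are served from flash (best case); writes always pay the array trip first.
$effectiveMs = ($readFraction * $cacheLatencyMs) + ((1 - $readFraction) * $arrayLatencyMs)
"Effective average latency: {0:N2} ms" -f $effectiveMs
```

With these assumed values the average works out to about 1.94 ms, and it rises quickly as the write fraction grows, which matches the consideration above that performance decreases as writes increase.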
Chapter 4 Solution Architecture Overview

This chapter presents the following topics:
Overview
Solution architecture
Server configuration guidelines
Network configuration guidelines
Storage configuration guidelines
High availability and failover
Validation test profile
EMC Powered Backup and configuration guidelines
Sizing guidelines
Reference workload
Applying the reference workload
Overview

This chapter provides a comprehensive guide to the major architectural aspects of this solution. Server capacity is presented in generic terms as required minimums of CPU, memory, and network resources; the customer is free to select any server and networking hardware that meets or exceeds the stated minimums. The specified storage architecture, together with a system meeting the server and network requirements outlined here, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your private cloud deployment.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a number of virtual machines validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a predefined idea of a virtual machine. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.
Solution architecture

Overview

The VSPEX solution for Microsoft Hyper-V private cloud with VNXe validates the configuration for up to 125 virtual machines.

Note: VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This document describes the process in Applying the reference workload.

Logical architecture

The architecture diagrams in this section show the layout of the major components in this solution. Two types of storage, block-based and file-based, are shown in the following diagrams.

Figure 11 shows the infrastructure validated with block-based storage, where an 8 Gb FC or 10 Gb iSCSI SAN carries storage traffic and 10 GbE carries management and application traffic.

Figure 11. Logical architecture for block storage
Figure 12 characterizes the infrastructure validated with file-based storage, where 10 GbE carries storage traffic and all other traffic.

Figure 12. Logical architecture for file storage

Key components

The architectures include the following key components:

Microsoft Hyper-V: Provides a common virtualization layer to host a server environment. The specifics of the validated environment are listed in Table 2. Hyper-V provides a highly available infrastructure through features such as:
Live Migration: Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption.
Live Storage Migration: Provides live migration of virtual machine disk files within and across storage arrays, with no virtual machine downtime or service disruption.
Failover Clustering High Availability (HA): Detects and provides rapid recovery for a failed virtual machine in a cluster.
Dynamic Optimization (DO): Provides load balancing of computing capacity in a cluster, with support from SCVMM.

Microsoft System Center Virtual Machine Manager (SCVMM): This solution does not require SCVMM. However, if deployed, it simplifies provisioning, management, and monitoring of the Hyper-V environment.

Microsoft SQL Server 2012: SCVMM, if used, requires a SQL Server database instance to store configuration and monitoring details.

DNS Server: Use DNS services for the various solution components to perform name resolution. This solution uses the Microsoft DNS service running on Windows Server 2012 R2.

Active Directory Server: Various solution components require Active Directory (AD) services to function properly. The Microsoft AD service runs on Windows Server 2012 R2.

IP network: A standard Ethernet network carries all network traffic with redundant cabling and switching. A shared IP network carries user and management traffic.

Storage network: The storage network is an isolated network that provides hosts with access to the storage arrays. VSPEX offers different options for block-based and file-based storage.

Brocade storage network for block: This solution provides three options for block-based storage networks.

Fibre Channel (FC) is a set of standards that define protocols for high-speed serial data transfer. FC provides a standard data transport frame among servers and shared storage devices.
Connectrix-B 6510 Fibre Channel Switch: Provides fast and easy scaling from 24 to 48 Ports on Demand (PoD), and supports 2, 4, 8, or 16 Gbps for the VNXe series storage array. (Deployment of Connectrix-B 6510 FC switches is demonstrated in Chapter 5.)

Fibre Channel over Ethernet (FCoE) is a storage networking protocol that carries FC natively over Ethernet by encapsulating FC frames into Ethernet frames, allowing the encapsulated FC frames to run alongside traditional Internet Protocol (IP) traffic.
Brocade VDX 6740 Ethernet Fabric Switch: Provides efficient, easy-to-configure resiliency that scales from 24 to 64 Ports on Demand (PoD) at 10 GbE for FCoE-attached VNXe series arrays.
10 Gb Ethernet (iSCSI) enables the transport of SCSI blocks over a TCP/IP network. iSCSI works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network.
Brocade VDX 6740-T Ethernet Fabric Switch: Provides efficient, easy-to-configure resiliency that scales from 24 to 64 Ports on Demand (PoD) at 1 GbE or 10 GbE for iSCSI-attached VNXe series arrays.

Brocade storage network for file: With file-based storage, a private, non-routable 10 GbE subnet carries the storage traffic.
Brocade VDX 6740-T Ethernet Fabric Switch: Provides efficient, easy-to-configure resiliency that scales from 24 to 64 Ports on Demand (PoD) at 1 GbE or 10 GbE for file-attached VNXe series arrays. (Deployment of VDX Ethernet Fabric series switches is demonstrated in Chapter 5.)

VNXe storage array: The VSPEX private cloud configuration begins with the VNXe series storage arrays, including:
EMC VNXe3200 array: Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SMB 3.0) shares (for file) to Hyper-V hosts, for up to 125 virtual machines.

VNXe series storage arrays include the following components:
Storage processors (SPs) support block data with UltraFlex I/O technology that supports FC and iSCSI protocols. The SPs provide access for all external hosts and for the file side of the VNXe array.
The standby power supply (SPS) is 1U in size and provides enough power to each SP to ensure that any data in flight de-stages to the vault area in the event of a power failure. This ensures that no writes are lost; upon restart of the array, the pending writes are reconciled and made persistent.
Disk array enclosures (DAEs) house the drives used in the array.
Hardware resources

Table 2 lists the hardware used in this solution.

Table 2. Solution hardware

Microsoft Hyper-V servers:
CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core. For 125 virtual machines: 125 vCPUs, minimum of 32 physical CPU cores.
Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per Hyper-V host. For 125 virtual machines: minimum of 250 GB RAM, plus 2 GB for each physical server.
Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server.
Network (file): 4 x 10 GbE NICs per server.
Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement Microsoft Hyper-V HA and meet the listed minimums.

Brocade network infrastructure, minimum switching capacity:
Block: Two Brocade Connectrix-B 6510 Fibre Channel switches, 24 to 48 PoD; 2 x 8 or 16 Gbps ports per Hyper-V server for the storage network; 2 x 8 Gbps ports per SP for storage data.
File: Two Brocade VDX 6740-T Ethernet fabric switches, 24 to 64 PoD; 4 x 10 GbE ports per Hyper-V server; 2 x 10 GbE ports per Data Mover for data; 1 x 1 GbE port per Control Station for management.

EMC backup:
Avamar: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds White Paper.
Data Domain: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds White Paper.
EMC VNXe series storage array:

Block. Common: 1 x 1 GbE interface per SP for management; 2 front-end Fibre Channel ports per SP; system disks for the VNXe OE. For 125 virtual machines (EMC VNXe3200): 40 x 600 GB 10k rpm 2.5-inch Serial-Attached SCSI (SAS) drives; 2 x 200 GB flash drives (optional); 4 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare (optional).

File. Common: 2 x 10 GbE interfaces per storage processor; 1 x 1 GbE interface per SP for management; system disks for the VNXe OE. For 125 virtual machines (EMC VNXe3200): 40 x 600 GB 10k rpm 2.5-inch SAS drives; 2 x 200 GB flash drives (optional); 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare (optional).

Shared infrastructure: In most cases, a customer environment already has infrastructure services such as Active Directory and DNS configured; the setup of these services is beyond the scope of this document. If implemented without existing infrastructure, add the following: 2 physical servers; 16 GB RAM per server; 4 processor cores per server; 2 x 1 GbE ports per server.

Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

Note: The solution recommends a 10 Gb network, or an equivalent 1 Gb network infrastructure, as long as the underlying bandwidth and redundancy requirements are fulfilled.
Software resources

Table 3 lists the software used in this solution.

Table 3. Solution software

Microsoft Hyper-V / Microsoft Windows Server: Windows Server 2012 R2 Datacenter Edition (Datacenter Edition is necessary to support the number of virtual machines in this solution)
Microsoft System Center Virtual Machine Manager: Version 2012 R2
Microsoft SQL Server: Version 2012 Enterprise Edition. Note: Any supported database for SCVMM is acceptable.
EMC VNXe: EMC VNXe OE 8.0
EMC Storage Integrator (ESI): Check for the latest version
EMC PowerPath: Check for the latest version
Brocade network: Brocade FOS for block, on the 6510 FC series switch: Fabric OS v7.3. Brocade NOS for file, on the VDX 6740-T Ethernet fabric series switch: Network OS v5.0.0
Next-generation backup: EMC Avamar 6.1 SP1; EMC Data Domain OS 5.2
Virtual machines (used for validation, not required for deployment): Base operating system: Microsoft Windows Server 2012 R2 Datacenter Edition
Server configuration guidelines

Overview

When designing and ordering the compute or server layer of the VSPEX solution, several factors may impact the final purchase. From a virtualization perspective, if a system workload is well understood, features such as Dynamic Memory and Smart Paging can reduce the aggregate memory requirement. If the virtual machine pool does not have a high level of peak or concurrent usage, reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, increase the number of CPUs and the amount of memory purchased.

Current VSPEX sizing guidelines specify a virtual CPU core to physical CPU core ratio of 4:1 (for Ivy Bridge or newer processors, use a ratio of 8:1). This ratio is based on an average sampling of the CPU technologies available at the time of testing. As CPU technologies advance, OEM server vendors that are VSPEX partners may suggest different (normally higher) ratios. Follow the updated guidance supplied by your OEM server vendor.

Table 4 lists the hardware resources used for the compute layer.

Table 4. Hardware resources for the compute layer

Microsoft Hyper-V servers:
CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core. For 125 virtual machines: 125 vCPUs, minimum of 32 physical CPU cores.
Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per Hyper-V host. For 125 virtual machines: minimum of 250 GB RAM, plus 2 GB for each physical server.
Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server.
Network (file): 4 x 10 GbE NICs per server.

Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement Hyper-V HA and meet the listed minimums.
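The compute minimums in Table 4 follow directly from the per-virtual-machine figures. A quick sizing sketch, in which the host count is an assumed example:

```powershell
$virtualMachines = 125
$vCpuPerCore     = 4        # use 8 for Ivy Bridge or later processors
$ramPerVmGB      = 2
$hyperVHosts     = 4        # example host count; size yours for N+1 availability

# 125 vCPUs at a 4:1 ratio requires 32 physical cores.
$physicalCores = [math]::Ceiling($virtualMachines / $vCpuPerCore)

# 250 GB of virtual machine RAM plus the 2 GB reservation per host.
$totalRamGB = ($virtualMachines * $ramPerVmGB) + (2 * $hyperVHosts)

"Minimum physical cores: $physicalCores"
"Minimum total RAM: $totalRamGB GB"
```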
Hyper-V memory virtualization

Microsoft Hyper-V has a number of advanced features to maximize performance and overall resource utilization. The most important of these relate to memory management. This section describes some of these features and the items to consider when using them in a VSPEX environment.

In general, virtual machines on a single hypervisor consume memory as a pool of resources, as shown in Figure 13.

Figure 13. Hypervisor memory consumption

Understanding the technologies in this section builds on this basic concept.

Dynamic Memory

Dynamic Memory was introduced in Windows Server 2008 R2 SP1 to increase physical memory efficiency by treating memory as a shared resource and dynamically allocating it to virtual machines. The amount of memory used by each virtual machine is adjustable at any time. Dynamic Memory reclaims unused memory from idle virtual machines, which allows more virtual machines to run at any given time. In Windows Server 2012 R2, Dynamic Memory also enables administrators to dynamically increase the maximum memory available to running virtual machines.
Smart Paging

Even with Dynamic Memory, Hyper-V allows more virtual machines than the available physical memory can support. In most cases, there is a memory gap between minimum memory and startup memory. Smart Paging is a memory management technique that uses disk resources as a temporary memory replacement: it swaps out less-used memory to disk storage and swaps it back in when needed. Performance degradation is a potential drawback of Smart Paging. Hyper-V continues to use guest paging when the host memory is oversubscribed, because it is more efficient than Smart Paging.

Non-Uniform Memory Access

Non-Uniform Memory Access (NUMA) is a multi-node computer technology that enables a CPU to access remote-node memory. This type of memory access degrades performance, so Windows Server 2012 R2 employs processor affinity, which pins threads to a single CPU to avoid remote-node memory access. In previous versions of Windows, this feature was available only to the host. Windows Server 2012 R2 extends this functionality to virtual machines, which improves performance in symmetrical multiprocessor (SMP) environments.

Memory configuration guidelines

The memory configuration guidelines take into account Hyper-V memory overhead and the virtual machine memory settings.

Hyper-V memory overhead

Virtualized memory has some associated overhead, which includes the memory consumed by Hyper-V, the parent partition, and additional overhead for each virtual machine. Leave at least 2 GB of memory for the Hyper-V parent partition in this solution.

Virtual machine memory

In this solution, each virtual machine gets 2 GB of memory in fixed mode.
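As a sketch of how this is applied per virtual machine with the Hyper-V PowerShell module (the virtual machine name is a placeholder):

```powershell
# This solution assigns each virtual machine 2 GB of static (fixed) memory.
Set-VMMemory -VMName "RefVM01" -DynamicMemoryEnabled $false -StartupBytes 2GB

# Dynamic Memory alternative for well-understood workloads (illustrative values):
# Set-VMMemory -VMName "RefVM01" -DynamicMemoryEnabled $true `
#     -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
```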
Network configuration guidelines

Overview

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines outlined here consider jumbo frames, VLANs, and LACP on EMC unified storage. The client access network is for users of the system, or clients, to communicate with the infrastructure. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts. The Brocade storage network provides the communication between the compute layer and the storage layer. For detailed Brocade storage network resource requirements, refer to Table 5.

Table 5. Hardware resources for the network

Brocade network infrastructure, minimum switching capacity:
Block: Two Brocade 6510 Fibre Channel switches, 24 to 48 PoD; 2 x 10 GbE ports per Hyper-V server*; 1 x 1 GbE port per Control Station for management*; 2 ports per Hyper-V server for the storage network; 2 ports per SP for storage data.
File: Two Brocade VDX 6740-T Ethernet fabric switches, 24 to 64 PoD; 4 x 10 GbE ports per Hyper-V server; 1 x 1 GbE port per Control Station for management*; 2 x 10 GbE ports per Data Mover for data.

Note: The solution may use a 1 GbE network infrastructure as long as the underlying bandwidth and redundancy requirements are fulfilled.

VLAN

Isolate the network traffic so that the traffic between hosts and storage, the traffic between hosts and clients, and the management traffic all move over isolated networks. In some cases, physical isolation may be required for regulatory or policy compliance reasons, but in many cases logical isolation with VLANs is sufficient. This solution calls for a minimum of three VLANs (a host-side tagging sketch follows the list):
Client access
Storage (for iSCSI or SMB only)
Management
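On the Hyper-V hosts, the corresponding VLAN tagging can be applied to virtual network adapters as sketched below. The VLAN IDs and adapter names are examples, not mandated values:

```powershell
# Tag a virtual machine adapter for the client access VLAN.
Set-VMNetworkAdapterVlan -VMName "RefVM01" -Access -VlanId 100

# Tag host (management OS) virtual adapters for storage and management traffic.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Storage"    -Access -VlanId 200
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 300
```

The same VLAN IDs must be allowed on the corresponding Brocade switch ports for traffic to pass.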
Figure 14 depicts the VLANs and the network connectivity requirements for a block-based VNXe array.

Figure 14. Required networks for block storage
Figure 15 depicts the VLANs and the network connectivity requirements for a file-based VNXe array.

Figure 15. Required networks for file storage

The client access network is for users of the system, or clients, to communicate with the infrastructure. The storage network provides communication between the compute layer and the storage layer. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts.

Note: Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. Implement these additional networks if necessary.
Enabling jumbo frames (iSCSI or SMB only)

This solution recommends setting the MTU to 9,000 (jumbo frames) for efficient storage and virtual machine migration traffic; a host-side MTU sketch appears after the ISL trunking discussion below. Refer to the switch vendor guidelines to enable jumbo frames for storage and host ports on the switches.

Enabling link aggregation (SMB only)

A link aggregation resembles an Ethernet channel but uses the LACP IEEE 802.3ad standard, which supports link aggregations of two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on the VNXe, combining multiple Ethernet ports into a single virtual device. If a link is lost on one Ethernet port, traffic fails over to another port, and all network traffic is distributed across the active links.

Brocade Virtual Link Aggregation Group (vLAG)

Brocade Virtual Link Aggregation Groups (vLAGs) are used for the Microsoft Hyper-V hosts and customer infrastructure. In the case of the VNXe, a dynamic Link Aggregation Control Protocol (LACP) vLAG is not used with MC/S and iSCSI. While Brocade ISLs are used as interconnects between Brocade VDX switches within a Brocade VCS fabric, industry-standard LACP LAGs are supported for connecting to other network devices outside the Brocade VCS fabric. Typically, LACP LAGs can only be created using ports from a single physical switch to a second physical switch. In a Brocade VCS fabric, a vLAG can be created using ports from two Brocade VDX switches to a device to which both VDX switches are connected. This provides an additional degree of device-level redundancy while providing active-active, link-level load balancing.

Brocade Inter-Switch Link (ISL) Trunks

This solution uses Brocade Inter-Switch Link (ISL) Trunking within the Brocade VCS fabric to provide additional redundancy and load balancing between the iSCSI clients and iSCSI storage. Typically, multiple links between two switches are bundled together in a Link Aggregation Group (LAG) to provide redundancy and load balancing. Setting up a LAG requires lines of configuration on the switches and the selection of a hash-based load-balancing algorithm keyed on source-destination IP or MAC addresses. All flows with the same hash traverse the same link, regardless of the total number of links in the LAG. This can leave some links within a LAG, such as those carrying flows to a storage target, over-utilized and dropping packets, while other links in the LAG remain under-utilized.

Instead of LAG-based switch interconnects, Brocade VCS Ethernet fabrics automatically form ISL trunks when multiple connections are added between two Brocade VDX switches. Simply adding another cable increases bandwidth, providing linear scalability of switch-to-switch traffic with no additional switch configuration. In addition, ISL trunks use frame-by-frame load balancing, which evenly balances traffic across all members of the ISL trunk group.
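For the jumbo frame recommendation above, the host-side MTU can be raised on the storage adapters as in the sketch below. The adapter name is a placeholder, and the exact registry value (often 9014, which includes Ethernet header bytes) varies by NIC driver:

```powershell
# Enable jumbo frames on a host storage adapter (value depends on the driver).
Set-NetAdapterAdvancedProperty -Name "Storage-NIC1" `
    -RegistryKeyword "*JumboPacket" -RegistryValue "9014"

# Verify the setting took effect.
Get-NetAdapterAdvancedProperty -Name "Storage-NIC1" -RegistryKeyword "*JumboPacket"
```

The same MTU must be enabled end to end, on the switch ports and the array interfaces, or large frames will be fragmented or dropped.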
Equal-Cost Multipath (ECMP)

A standard link-state routing protocol running at Layer 2 determines whether there are equal-cost multipaths (ECMPs) between RBridges in an Ethernet fabric, and load balances the traffic to make use of all available ECMPs. If a neighbor switch is reachable via several interfaces with different bandwidths, all of them are treated as equal-cost paths. While it is possible to set the link cost based on the link speed, such an algorithm complicates the operation of the fabric. Simplicity is a key value of Brocade VCS Fabric technology, so the tested implementation does not consider interface bandwidth when selecting equal-cost paths. This is a key feature for expanding network capacity to keep ahead of customer bandwidth requirements.

Pause Flow Control

Brocade VDX series switches support the Pause Flow Control feature. IEEE 802.3x Ethernet pause and Ethernet Priority-based Flow Control (PFC) are used to prevent dropped frames by slowing traffic at the source end of a link. When a port on a switch or host is not ready to receive more traffic from the source, perhaps due to congestion, it sends pause frames to the source to pause the traffic flow. When the congestion clears, the port stops requesting pauses and traffic resumes without any frame drop. When Ethernet pause is enabled, pause frames are sent to the traffic source; similarly, when PFC is enabled, pause frames are sent to the source switch and there is no frame drop.
Storage configuration guidelines

Overview

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance. Hyper-V allows more than one method of using storage when hosting virtual machines. The tested solutions described below use different protocols, FC/iSCSI (for block) and CIFS (for file), and the storage layout described adheres to all current best practices. A customer or architect with the necessary training and background can make modifications based on their understanding of the system usage and load, if required. However, the building blocks described in this document ensure acceptable performance. The VSPEX storage building blocks section provides specific recommendations for customization.

Table 6 lists the hardware resources for storage.

Table 6. Hardware resources for storage

EMC VNXe series storage array:

Block. Common: 1 x 1 GbE interface per SP for management; 2 front-end Fibre Channel ports per SP; system disks for the VNXe OE. For 125 virtual machines (EMC VNXe3200): 40 x 600 GB 10k rpm 2.5-inch SAS drives; 2 x 200 GB flash drives (optional); 2 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare (optional).

File. Common: 2 x 10 GbE interfaces per SP; 1 x 1 GbE interface per SP for management; system disks for the VNXe OE. For 125 virtual machines (EMC VNXe3200): 40 x 600 GB 10k rpm 2.5-inch SAS drives; 2 x 200 GB flash drives (optional); 2 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare (optional).
Hyper-V storage virtualization for VSPEX

Windows Server 2012 R2 Hyper-V and Failover Clustering use Cluster Shared Volumes (CSV) v2 and the VHDX format to virtualize storage presented from an external shared storage system to host virtual machines. In Figure 16, the storage array presents either block-based LUNs (as CSVs) or a file-based CIFS share (as SMB shares) to the Windows hosts that run virtual machines.

Figure 16. Hyper-V virtual disk types

CIFS: Windows Server 2012 R2 supports using CIFS (SMB 3.0) file shares as shared storage for Hyper-V virtual machines.

CSV: A Cluster Shared Volume (CSV) is a shared disk containing an NTFS volume that is made accessible to all nodes of a Windows failover cluster. It can be deployed over any SCSI-based local or network storage. (A sketch of adding a CSV appears after this list.)

Pass Through: Windows Server 2012 also supports pass-through disks, which allow a virtual machine to access a physical disk mapped to the host that does not have a volume configured on it.
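As a sketch, once a disk has been added as available storage in the failover cluster, it can be promoted to a CSV with the FailoverClusters module. The disk name below is an example:

```powershell
# Promote an available cluster disk to a Cluster Shared Volume.
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# CSVs appear to every node under C:\ClusterStorage.
Get-ClusterSharedVolume | Select-Object Name, State
```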
SMB 3.0 (file-based storage only)

SMB is the file sharing protocol used by default in Windows. Windows Server 2012 R2 introduces a vast set of new features with the updated SMB 3.0 protocol. Key features available in Windows Server 2012 SMB 3.0 include:
SMB Transparent Failover
SMB Scale Out
SMB Multichannel
SMB Direct
SMB Encryption
VSS for SMB file shares
SMB Directory Leasing
SMB PowerShell

With these new features, SMB 3.0 offers richer capabilities that, when combined, provide organizations with a high-performance storage alternative to traditional Fibre Channel storage solutions, at a lower cost.

Note: For more details about SMB 3.0, refer to Chapter 3.

ODX

Offloaded Data Transfer (ODX) is a feature of the storage stack in Microsoft Windows Server 2012 R2 that uses your investment in external storage arrays to offload data transfers from the server to the storage arrays. When used with storage hardware that supports ODX, file copy operations are initiated by the host but performed by the storage device. ODX eliminates the data transfer between the storage and the Hyper-V hosts by using a token-based mechanism for reading and writing data within the storage array, reducing the load on your network and hosts.

Using ODX helps enable rapid cloning and migration of virtual machines. Because the file transfer is offloaded to the storage array, host resource usage, such as CPU and network, is significantly reduced. By maximizing the use of the storage array, ODX minimizes latencies and improves the transfer speed of large files, such as database or video files. When performing file operations that are supported by ODX, data transfers are automatically offloaded to the storage array and are transparent to users. ODX is enabled by default in Windows Server 2012 R2.
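The ODX state can be verified on a host from the documented FilterSupportedFeaturesMode registry value, as sketched below:

```powershell
# 0 = ODX enabled (the Windows Server 2012 R2 default); 1 = ODX disabled.
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" |
    Select-Object FilterSupportedFeaturesMode
```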
VHDX

Hyper-V in Windows Server 2012 R2 contains an update to the VHD format, called VHDX, which has a much larger capacity and built-in resiliency. The main features of the VHDX format are:
Support for virtual hard disk storage with a capacity of up to 64 TB.
Additional protection against data corruption during power failures, by logging updates to the VHDX metadata structures.
Optimal structure alignment of the virtual hard disk format to suit large-sector disks.

The VHDX format also has the following features:
Larger block sizes for dynamic and differencing disks, which enable the disks to better meet the needs of the workload.
A 4 KB logical-sector virtual disk that increases performance for applications and workloads designed for 4 KB sectors.
The ability to store custom metadata about the file that the user might want to record, such as the operating system version or applied updates.
Space reclamation features that can result in smaller file sizes and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware).

VSPEX storage building blocks

Sizing a storage system to meet virtual server IOPS requirements is a complicated process. When I/O reaches the storage array, several components serve that I/O, such as the SPs, back-end dynamic random access memory (DRAM) cache, FAST Cache or FAST VP (if used), and the disks. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications.

VSPEX uses a building block approach to reduce this complexity. A building block is a set of disk spindles that can support a certain number of virtual servers in the VSPEX architecture. Each building block combines several disk spindles to create a storage pool that supports the needs of the private cloud environment.

VSPEX solutions have been engineered to provide a variety of sizing configurations that afford flexibility when designing the solution. Customers can start by deploying smaller configurations and scale up as their needs grow. At the same time, customers can avoid over-purchasing by choosing a configuration that closely meets their needs. To accomplish this, VSPEX solutions can be deployed using one or both of the scale points below to obtain the ideal configuration while guaranteeing a given performance level.
Building block for 15 virtual servers

The first building block can contain up to 15 virtual servers, with five SAS drives in a storage pool, as shown in Figure 17.

Figure 17. Building block for 15 virtual servers

This is the smallest building block qualified for the VSPEX architecture. It can be expanded by adding five SAS drives and allowing the pool to restripe, adding support for 15 more virtual servers.

Building block for 125 virtual servers

The second building block can contain up to 125 virtual servers. It contains 40 SAS drives, as shown in Figure 18, which also shows the four drives required for the VNXe operating system. Together, these building blocks outline an approach to grow from 15 virtual machines in a pool to 125 virtual machines in a pool.

Figure 18. Building block for 125 virtual servers
Implement this building block with all of the resources in the pool initially, or expand the pool over time as the environment grows. Table 7 lists the SAS drive requirements in a pool for different numbers of virtual servers (a short sizing sketch appears at the end of this subsection).

Table 7. Number of disks required for different numbers of virtual machines

Virtual servers: SAS drives
15: 5
30: 10
45: 15
60: 20
75: 25
90: 30
105: 35
125: 40*

* Note: Due to increased efficiency with larger stripes, the building block with 40 SAS drives can support up to 125 virtual servers.

To grow the environment beyond 125 virtual servers, create another storage pool using the building block method described here.

VSPEX Private Cloud validated maximums

VSPEX Private Cloud configurations are validated on the VNXe3200 platform. Each platform has different capabilities in terms of processors, memory, and disks. For each array, there is a recommended maximum VSPEX private cloud configuration. In addition to the VSPEX private cloud building blocks, each storage array must contain the drives used for the VNXe Operating Environment (OE) and hot spare disks for the environment.

Note: Allocate at least one hot spare for every 30 disks of a given type and size.
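The sizing sketch referenced above encodes the building block rule, five SAS drives per 15 reference virtual machines, with the 40-drive block qualified for the full 125 (OE disks and hot spares are counted separately):

```powershell
# Building-block sizing sketch; assumes the 5-drives-per-15-VMs rule above.
$virtualServers = 95

if ($virtualServers -gt 105) {
    $drives = 40   # larger-stripe efficiency covers up to 125 virtual servers
} else {
    $drives = [math]::Ceiling($virtualServers / 15) * 5
}
"SAS drives required in the pool: $drives (excludes OE disks and hot spares)"
```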
VNXe3200

The VNXe3200 is validated for up to 125 virtual servers. Figure 19 shows a typical configuration.

Figure 19. Storage layout for 125 virtual machines using VNXe3200

This configuration uses the following storage layout:
Forty 600 GB SAS disks are allocated to a block-based storage pool for 125 virtual machines.
Two 600 GB SAS disks are configured as hot spares.
For block, allocate at least two LUNs to the Hyper-V failover cluster from a single storage pool to serve as CSVs.
For file, allocate at least two SMB shares to the Hyper-V failover cluster from a single storage pool for the virtual servers.
Optionally configure two 200 GB flash drives for FAST VP, and one 200 GB flash drive as a hot spare.
Optionally configure flash drives as FAST Cache (up to 200 GB) in the array. LUNs or storage pools holding virtual machines with higher-than-average I/O requirements can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite.

Using this configuration, the VNXe3200 can support 125 virtual servers as defined in the Reference workload section.

Conclusion

The scale levels listed in Figure 20 highlight the entry points and supported maximum values for the arrays in the VSPEX Private Cloud environment. The entry points represent optimal model demarcations in terms of the number of virtual machines within the environment. This helps you determine which VNXe array to choose based on your requirements. You can also configure any of the listed arrays with fewer virtual machines than the supported maximum by using the building block approach described earlier.
Figure 20. Maximum scale levels and entry points of different arrays
High availability and failover

Overview

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, business operations survive single-unit failures with little or no impact.

Virtualization layer

Configure high availability in the virtualization layer, and configure the hypervisor to automatically restart failed virtual machines. Figure 21 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 21. High availability on the virtualization layer

By implementing high availability on the virtualization layer, the infrastructure attempts to keep as many services running as possible, even during a hardware failure.

Compute layer

While the choice of servers to implement in the compute layer is flexible, use enterprise-class servers designed for the data center. This type of server has redundant power supplies, as shown in Figure 22. Connect these servers to separate power distribution units (PDUs) in accordance with your server vendor's best practices.

Figure 22. Redundant power supplies

To configure HA in the virtualization layer, configure the compute layer with enough resources to meet the needs of the environment even when a server failure occurs.
Brocade network layer

The advanced networking features of the VNX family, together with the Brocade VDX Ethernet fabric and Connectrix-B 6510 Fibre Channel switches, provide protection against network connection failures at the array. Each Hyper-V host has multiple connections to the user and storage networks to guard against link failures, as shown in Figure 23 and Figure 24. Spread these connections across multiple Brocade fabric switches to guard against component failure in the network.

Figure 23. Brocade network layer high availability (VNXe): block storage network variant

Figure 24. Brocade network layer high availability (VNXe): file storage

Ensure that there is no single point of failure (SPOF), so that the compute layer can access storage and communicate with users even if a component fails.
Storage layer

The VNXe series is designed for five-nines (99.999 percent) availability by using redundant components throughout the array. All of the array components are capable of continued operation in the event of a hardware failure. The RAID disk configuration on the array protects against data loss caused by individual disk failures, and the available hot spare drives can be dynamically allocated to replace a failing disk, as shown in Figure 25.

Figure 25. VNXe series HA components

EMC storage arrays support HA by default. When configured according to the directions in their installation guides, no single-unit failure results in data loss or unavailability.
Validation test profile

Profile characteristics

The VSPEX solution was validated with the environment profile described in Table 8.

Table 8. Profile characteristics

Profile characteristic | Value
Number of virtual machines | 125
Virtual machine OS | Windows Server 2012 R2 Datacenter Edition
Processors per virtual machine | 1
Number of virtual processors per physical CPU core | 4*
RAM per virtual machine | 2 GB
Average storage available for each virtual machine | 100 GB
Average IOPS per virtual machine | 25 IOPS
Number of LUNs or CIFS shares to store virtual machine disks | 6/10/16
Number of virtual machines per LUN or CIFS share | 62 or 63 per LUN or CIFS share
Disk and RAID type for LUNs or CIFS shares | RAID 5, 600 GB, 10k rpm, 2.5-inch SAS disks

*For Ivy Bridge or later processors, use 8 vCPUs per physical core.

Note: This solution was tested and validated with Windows Server 2012 R2 as the operating system for Hyper-V hosts and virtual machines; however, it also supports Windows Server 2008 R2 and Windows Server 2012. Hyper-V hosts on all supported versions of Windows Server use the same sizing and configuration.
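As a rough cross-check of this profile against the 40-drive pool described earlier, the following Python sketch estimates the back-end disk IOPS. The per-drive IOPS figure and the RAID 5 write penalty are common rules of thumb, not values taken from this guide.

# Estimate back-end IOPS for 125 reference VMs at a 2:1 read/write ratio.
VMS, IOPS_PER_VM = 125, 25
READ_FRACTION = 2 / 3               # 2:1 read/write ratio
SAS_10K_IOPS = 150                  # assumed per-drive capability (rule of thumb)
RAID5_WRITE_PENALTY = 4             # assumed back-end writes per host write

front_end = VMS * IOPS_PER_VM
back_end = front_end * READ_FRACTION + front_end * (1 - READ_FRACTION) * RAID5_WRITE_PENALTY
print(f"{front_end} front-end IOPS, ~{back_end:.0f} back-end IOPS, "
      f"~{back_end / SAS_10K_IOPS:.0f} drives at {SAS_10K_IOPS} IOPS each")

Under these assumptions the estimate comes to roughly 6,250 back-end IOPS and about 42 drives, in line with the 40-disk pool plus the optional FAST Cache flash drives.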
EMC Powered Backup and configuration guidelines

For complete EMC Powered Backup guidelines for this VSPEX Private Cloud solution, refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Sizing guidelines

The following sections define the reference workload used to size and implement the VSPEX architectures, explain how to correlate that reference workload to customer workloads, and describe how that correlation may change the end delivery from the server and network perspective.

Modify the storage definition by adding drives for greater capacity and performance, and by adding features such as FAST Cache and FAST VP. The disk layouts support the appropriate number of virtual machines at the defined performance level and typical operations such as snapshots. Decreasing the number of recommended drives, or stepping down to a lesser array type, can result in lower IOPS per virtual machine and a degraded user experience caused by higher response times.

Reference workload

Overview

When you move an existing server to a virtual infrastructure, you can gain efficiency by right-sizing the virtual hardware resources assigned to that system. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. In practice, each virtual machine has its own requirements that rarely fit a pre-defined idea of a virtual machine. In any discussion about virtual infrastructures, first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that considers every possible combination of workload characteristics.

Defining the reference workload

To simplify the discussion, this section presents a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can determine which reference architecture to choose. For the VSPEX solutions, the reference workload is a single virtual machine. Table 9 lists the characteristics of this virtual machine.
Table 9. Virtual machine characteristics

Characteristic | Value
Virtual machine operating system | Microsoft Windows Server 2012 R2 Datacenter Edition
Virtual processors per virtual machine | 1
RAM per virtual machine | 2 GB
Available storage capacity per virtual machine | 100 GB
I/O operations per second (IOPS) per virtual machine | 25
I/O pattern | Random
I/O read/write ratio | 2:1

This specification for a virtual machine does not represent any specific application. Rather, it represents a single common point of reference against which to measure other virtual machines.

Applying the reference workload

Overview

When you consider an existing server for movement into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system.

The solution creates a pool of resources sufficient to host a target number of reference virtual machines with the characteristics shown in Table 9. The customer virtual machines may not exactly match these specifications. In that case, define a single specific customer virtual machine as the equivalent of some number of reference virtual machines, and assume these virtual machines are in use in the pool. Continue to provision virtual machines from the resource pool until no resources remain.
Example 1: Custom-built application

A small custom-built application server must move into this virtual infrastructure. The physical hardware that supports the application is not fully utilized. A careful analysis reveals that the application can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges from 4 IOPS at idle to a peak of 15 IOPS when busy. The entire application consumes about 30 GB of local hard drive storage.

Based on these numbers, the resource pool needs the following resources:
- CPU of one reference virtual machine
- Memory of two reference virtual machines
- Storage of one reference virtual machine
- I/Os of one reference virtual machine

In this example, an appropriate virtual machine uses the resources of two reference virtual machines. If implemented on a VNXe3200 storage system, which can support up to 125 virtual machines, resources for 123 reference virtual machines remain.

Example 2: Point-of-Sale system

The database server for a customer's Point-of-Sale system must move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle. The requirements to virtualize this application are:
- CPUs of four reference virtual machines
- Memory of eight reference virtual machines
- Storage of two reference virtual machines
- I/Os of eight reference virtual machines

In this case, the appropriate virtual machine uses the resources of eight reference virtual machines. If implemented on a VNXe3200 storage system, which can support up to 125 virtual machines, resources for 117 reference virtual machines remain.

Example 3: Web server

The customer's web server must move into this virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle. The requirements to virtualize this application are:
- CPUs of two reference virtual machines
- Memory of four reference virtual machines
- Storage of one reference virtual machine
- I/Os of two reference virtual machines

In this case, the appropriate virtual machine uses the resources of four reference virtual machines. If implemented on a VNXe3200 storage system, which can support up to 125 virtual machines, resources for 121 reference virtual machines remain.

Example 4: Decision-support database

The database server for a customer's decision-support system must move into this virtual infrastructure. It is currently running on a physical system with 10 CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle. The requirements to virtualize this application are:
- CPUs of 10 reference virtual machines
- Memory of 32 reference virtual machines
- Storage of 52 reference virtual machines
- I/Os of 28 reference virtual machines

In this case, one virtual machine uses the resources of 52 reference virtual machines. If implemented on a VNXe3200 storage system, which can support up to 125 virtual machines, resources for 73 reference virtual machines remain.

Summary of examples

These four examples illustrate the flexibility of the resource pool model. In all four cases, the workloads reduce the amount of available resources in the pool. All four examples can be implemented on the same virtual infrastructure with an initial capacity for 125 reference virtual machines, leaving resources for 59 reference virtual machines in the resource pool, as shown in Figure 26.

Figure 26. Resource pool flexibility

In more advanced cases, there may be tradeoffs between memory and I/O or other relationships, where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex and are beyond the scope of this document. Examine the change in resource balance and determine the new level of requirements. Add these virtual machines to the infrastructure with the method described in the examples.
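The bookkeeping behind these four examples can be expressed in a few lines of Python; the following is a minimal sketch of the pool model, with application names used only as shorthand for the examples above.

# Provision the four example workloads from a 125-reference-VM pool.
pool = 125
workloads = {
    "custom-built application": 2,
    "Point-of-Sale database": 8,
    "web server": 4,
    "decision-support database": 52,
}
for name, rvms in workloads.items():
    pool -= rvms
    print(f"{name}: consumes {rvms} reference VMs, {pool} remain")
# The final line reports 59 remaining, matching Figure 26.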
Implementing the solution

Overview

This solution requires a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are general requirements that are independent of any particular implementation, except that the requirements grow linearly with the target level of scale. This section describes some considerations for implementing the requirements.

Resource types

The solution defines the hardware requirements in terms of these basic resources:
- CPU resources
- Memory resources
- Network resources
- Storage resources

This section describes the resource types, their use in the solution, and key implementation considerations in a customer environment.

CPU resources

The solution defines the number of CPU cores that are required, but not a specific type or configuration. New deployments should use recent revisions of common processor technologies, which are assumed to perform as well as, or better than, the systems used to validate the solution. In any running system, monitor the utilization of resources and adapt as needed.

The reference virtual machine and the required hardware resources in the solution assume four virtual CPUs for each physical processor core (a 4:1 ratio). (For Ivy Bridge or later processors, use 8 vCPUs per physical core.) This usually provides an appropriate level of resources for the hosted virtual machines; however, the ratio may not be appropriate in all use cases. Monitor CPU utilization at the hypervisor layer to determine whether more resources are required.

Memory resources

Each virtual server in the solution must have 2 GB of memory. In a virtual environment, it is common, because of budget constraints, to provision virtual machines with more memory than is installed on the physical hypervisor server. Memory over-commitment assumes that each virtual machine does not use all of its allocated memory. Oversubscribing memory usage to some degree can make business sense. The administrator is responsible for proactively monitoring the oversubscription rate, so that the bottleneck does not shift away from the server and burden the storage subsystem through page file swapping.
This solution is validated with statically assigned memory and no over-commitment of memory resources. If a real-world environment uses over-committed memory, consistently monitor the system memory utilization and the associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.

Network resources

The solution outlines the minimum needs of the system. If additional bandwidth is needed, add capability at both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports, and can add ports using EMC UltraFlex I/O modules.

For reference purposes, in the validated environment each virtual machine generates 25 IOPS with an average size of 8 KB. This means that each virtual machine generates at least 200 KB/s of traffic on the storage network. For an environment rated for 100 virtual machines, this comes out to a minimum of approximately 20 MB/s (about 25 MB/s for the full 125 virtual machines). This is well within the bounds of modern networks. However, this does not consider other operations. For example, additional bandwidth is needed for:
- User network traffic
- Virtual machine migration
- Administrative and management operations

The requirements for each network depend on how it will be used. It is not practical to provide precise numbers in this context. However, the network described in the solution should be sufficient to handle average workloads for the previously described use cases.

Regardless of the network traffic requirements, always have at least two physical network connections shared by each logical network, so that a single link failure does not affect the availability of the system. Design the network so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload.

Storage resources

The storage building blocks described in this solution contain layouts for the disks used in the system validation. Each layout balances the available storage capacity with the performance capability of the drives. Consider a few factors when examining storage sizing. Specifically, the array has a collection of disks assigned to a storage pool. From that storage pool, provision CIFS shares to the Windows cluster. Each layer has a specific configuration that is defined for the solution and documented in Chapter 5.
It is acceptable to:
- Replace drives with larger-capacity drives of the same type and performance characteristics, or with higher-performance drives of the same type and capacity.
- Change the placement of drives in the drive shelves to comply with updated or new drive shelf arrangements.
- Increase the scale using the building blocks with larger numbers of drives, up to the limit defined in the VSPEX Private Cloud validated maximums section.

Observe the following best practices:
- Use the latest best practices guidance from EMC regarding drive placement within the shelf. Refer to Applied Best Practices Guide: EMC VNX Unified Best Practices for Performance.
- When expanding the capability of a storage pool using the building blocks described in this document, use the same type and size of drive in the pool. Create a new pool to use different drive types and sizes. This prevents uneven performance across the pool.
- Configure at least one hot spare for every type and size of drive on the system.
- Configure at least one hot spare for every 30 drives of a given type.

In other cases, where there is a need to deviate from the proposed number and type of drives, or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system and conforms to EMC published best practices.

Implementation summary

The requirements in the reference architecture are what EMC considers the minimum set of resources to handle the workloads, based on the stated definition of a reference virtual machine. In any customer implementation, the load of a system varies over time as users interact with the system. If the customer virtual machines differ significantly from the reference definition, and vary in the same resource group, add more of that resource type to the system to compensate.
Quick assessment of customer environment

Overview

An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and assess the customer environment.

First, summarize the applications planned for migration into the VSPEX private cloud. For each application, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual machines required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each application, as listed in Table 10.

Table 10. Blank worksheet row

Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent reference virtual machines
Example application: Resource requirements | | | | | N/A
Example application: Equivalent reference virtual machines | | | | |

Fill out the resource requirements for the application. The row requires inputs on four different resources:
- CPU
- Memory
- IOPS
- Capacity
CPU requirements

Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between physical CPU cores and virtual CPU cores, regardless of the physical CPU utilization. In reality, consider whether the target application can effectively use all of the CPUs presented. Use a performance-monitoring tool, such as perfmon in Microsoft Windows, to examine the CPU utilization counter for each CPU. If all of the CPUs are utilized to a similar degree, implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs required.

In any operation that involves performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or the 95th percentile value of the resource requirements for planning purposes.

Memory requirements

Server memory plays a key role in ensuring application functionality and performance, and each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the current memory available to the system, and monitor the free memory by using a performance-monitoring tool, such as Microsoft Windows perfmon, to determine memory efficiency.

In any operation involving performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or the 95th percentile value of the resource requirements for planning purposes.

Storage performance requirements

IOPS

The storage performance requirements for an application are usually the least understood aspect of performance. Several components become important when discussing the I/O performance of a system:
- The number of requests coming in, or IOPS.
- The size of each request, or I/O size. For example, a request for 4 KB of data is easier and faster to process than a request for 4 MB of data.
- The average I/O response time, or I/O latency.

The reference virtual machine calls for 25 IOPS. To monitor this on an existing system, use a performance-monitoring tool such as Microsoft Windows perfmon. Perfmon provides several counters that can help. The most common are:
- Logical Disk\Disk Transfers/sec
- Logical Disk\Disk Reads/sec
- Logical Disk\Disk Writes/sec
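A short Python sketch of the 95th-percentile reduction recommended above follows. The CSV file name and column header are placeholders for a perfmon (or Get-Counter) export, not names defined by this solution; point them at your own data.

import csv

def percentile_95(values):
    # Nearest-rank 95th percentile over the collected samples.
    ordered = sorted(values)
    return ordered[int(round(0.95 * (len(ordered) - 1)))]

# "disk_transfers.csv" and its column header are hypothetical placeholders.
with open("disk_transfers.csv") as f:
    samples = [float(row["Disk Transfers/sec"]) for row in csv.DictReader(f)]

print(f"95th percentile: {percentile_95(samples):.1f} over {len(samples)} samples")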
Note: At the time of publication, Windows perfmon does not provide counters to expose IOPS and latency for CIFS-based VHDX storage. Monitor these areas from the VNXe array, as discussed in Chapter 7.

The reference virtual machine assumes a 2:1 read/write ratio. Use these counters to determine the total number of IOPS and the approximate ratio of reads to writes for the customer application.

I/O size

The I/O size is important because smaller I/O requests are faster and easier to process than large I/O requests. The reference virtual machine assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are even powers of 2, such as 4 KB, 8 KB, 16 KB, or 32 KB. However, because the performance counter reports a simple average, it is common to see values such as 11 KB or 15 KB instead of the actual I/O sizes.

The reference virtual machine assumes an 8 KB I/O size. If the average customer I/O size is less than 8 KB, use the observed IOPS number. However, if the average I/O size is significantly higher, apply a scaling factor to account for the larger I/O size. A safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if the application is using mostly 32 KB I/O requests, use a factor of four (32 KB / 8 KB = 4). If that application generates 100 IOPS at 32 KB, plan for 400 IOPS, since the reference virtual machine assumes 8 KB I/O sizes (see the sketch at the end of this section).

I/O latency

You can use the average I/O response time, or I/O latency, to measure how quickly the storage system processes I/O requests. The VSPEX solutions meet a target average I/O latency of 20 ms. The recommendations in this document allow the system to continue to meet that target; even so, monitor the system and reevaluate the resource pool utilization if needed. To monitor I/O latency, use the Logical Disk\Avg. Disk sec/Transfer counter in Microsoft Windows perfmon. If the I/O latency is continuously over the target, reevaluate the virtual machines in the environment to ensure that these machines do not use more resources than intended.

Storage capacity requirements

The storage capacity requirement for a running application is usually the easiest resource to quantify. Determine the disk space used, and add an appropriate factor to accommodate growth. For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, requires 48 GB. In addition, reserve space for regular maintenance patches and swap files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full.
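Before filling in the worksheet IOPS column, observed IOPS may need normalizing to the 8 KB reference size, as described under I/O size above. A minimal Python sketch of that scaling factor:

# Scale observed IOPS to the 8 KB reference I/O size. The factor applies
# only when the observed average I/O size is larger than the reference.
def reference_iops(observed_iops, avg_io_kb, reference_kb=8):
    factor = max(1.0, avg_io_kb / reference_kb)
    return observed_iops * factor

print(reference_iops(100, 32))   # 400.0 -- the 32 KB example in the text
print(reference_iops(200, 4))    # 200.0 -- smaller I/O keeps the raw count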
Determining equivalent reference virtual machines

With all of the resources defined, determine an appropriate value for the Equivalent reference virtual machines line by using the relationships in Table 11. Round all values up to the closest whole number.

Table 11. Reference virtual machine resources

Resource | Value for reference virtual machine | Relationship between requirements and equivalent reference virtual machines
CPU | 1 | Equivalent reference virtual machines = resource requirements
Memory | 2 | Equivalent reference virtual machines = (resource requirements)/2
IOPS | 25 | Equivalent reference virtual machines = (resource requirements)/25
Capacity | 100 | Equivalent reference virtual machines = (resource requirements)/100

For example, the Point-of-Sale system used in Example 2 requires four CPUs, 16 GB of memory, 200 IOPS, and 200 GB of storage. This translates to four reference virtual machines of CPU, eight reference virtual machines of memory, eight reference virtual machines of IOPS, and two reference virtual machines of capacity. Table 12 demonstrates how that machine fits into the worksheet row.

Table 12. Example worksheet row

Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent reference virtual machines
Example application: Resource requirements | 4 | 16 | 200 | 200 | N/A
Example application: Equivalent reference virtual machines | 4 | 8 | 8 | 2 | 8

Use the highest value in the row to fill in the Equivalent reference virtual machines column. As shown in Figure 27, the example requires eight reference virtual machines.
Figure 27. Required resources from the reference virtual machine pool

Implementation example: stage 1

A customer wants to build a virtual infrastructure to support one custom-built application, one Point-of-Sale system, and one web server. The customer computes the sum of the Equivalent reference virtual machines column on the right side of the worksheet, as listed in Table 13, to calculate the total number of reference virtual machines required. The table shows the result of the calculation, with each value rounded up to the nearest whole number.
Table 13. Example applications - stage 1

Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Reference virtual machines
Example application #1, Custom-built application: Resource requirements | 1 | 3 | 15 | 30 | N/A
Example application #1, Custom-built application: Equivalent reference virtual machines | 1 | 2 | 1 | 1 | 2
Example application #2, Point-of-Sale system: Resource requirements | 4 | 16 | 200 | 200 | N/A
Example application #2, Point-of-Sale system: Equivalent reference virtual machines | 4 | 8 | 8 | 2 | 8
Example application #3, Web server: Resource requirements | 2 | 8 | 50 | 25 | N/A
Example application #3, Web server: Equivalent reference virtual machines | 2 | 4 | 2 | 1 | 4
Total equivalent reference virtual machines | | | | | 14

This example requires 14 reference virtual machines. According to the sizing guidelines, one storage pool with 10 SAS drives and two or more flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with the VNXe3200 for up to 125 reference virtual machines.

Figure 28 shows that one reference virtual machine is available after implementing a VNXe3200 with 5 SAS drives and two flash drives.
Figure 28. Aggregate resource requirements - stage 1

Figure 29 shows the pool configuration in this example.

Figure 29. Pool configuration - stage 1
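The stage 1 worksheet total can be reproduced with a short Python helper that applies the Table 11 relationships: round up each resource conversion, then take the row maximum.

from math import ceil

def equivalent_reference_vms(vcpus, memory_gb, iops, capacity_gb):
    # Table 11 relationships; round up, then take the largest value.
    per_resource = {
        "cpu": ceil(vcpus / 1),
        "memory": ceil(memory_gb / 2),
        "iops": ceil(iops / 25),
        "capacity": ceil(capacity_gb / 100),
    }
    return max(per_resource.values())

# Stage 1 applications: (vCPUs, memory GB, IOPS, capacity GB)
stage1 = [(1, 3, 15, 30), (4, 16, 200, 200), (2, 8, 50, 25)]
total = sum(equivalent_reference_vms(*app) for app in stage1)
print(total)   # 14, matching Table 13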
Implementation example: stage 2

Next, the customer must add a decision-support database to this virtual infrastructure. Using the same strategy, calculate the number of equivalent reference virtual machines required, as shown in Table 14.

Table 14. Example applications - stage 2

Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Reference virtual machines
Example application #1, Custom-built application: Resource requirements | 1 | 3 | 15 | 30 | N/A
Example application #1, Custom-built application: Equivalent reference virtual machines | 1 | 2 | 1 | 1 | 2
Example application #2, Point-of-Sale system: Resource requirements | 4 | 16 | 200 | 200 | N/A
Example application #2, Point-of-Sale system: Equivalent reference virtual machines | 4 | 8 | 8 | 2 | 8
Example application #3, Web server: Resource requirements | 2 | 8 | 50 | 25 | N/A
Example application #3, Web server: Equivalent reference virtual machines | 2 | 4 | 2 | 1 | 4
Example application #4, Decision-support database: Resource requirements | 10 | 64 | 700 | 5,120 | N/A
Example application #4, Decision-support database: Equivalent reference virtual machines | 10 | 32 | 28 | 52 | 52
Total equivalent reference virtual machines | | | | | 66

This example requires 66 reference virtual machines. According to the sizing guidelines, one storage pool with 25 SAS drives and two or more flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with the VNXe3200 for up to 125 reference virtual machines.
Figure 30 shows that nine reference virtual machines are available after implementing a VNXe3200 with 25 SAS drives and two flash drives.

Figure 30. Aggregate resource requirements - stage 2

Figure 31 shows the pool configuration in this example.

Figure 31. Pool configuration - stage 2

Fine-tuning hardware resources

Usually, the process described in Determining equivalent reference virtual machines determines the recommended hardware size for servers and storage. However, in some cases there is a need to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this guide; however, you can perform additional customization at this point.

Storage resources

In some applications, there is a need to separate application data from other workloads. The storage layouts in the VSPEX architectures put all of the virtual machines in a single resource pool. To achieve workload separation, purchase additional disk drives for the application workload and add them to a dedicated pool.

With the method outlined in Determining equivalent reference virtual machines, it is easy to build a virtual infrastructure scaling from 15 reference virtual machines to 125 reference virtual machines with the building blocks described in VSPEX storage building blocks, while keeping in mind the recommended limits of each storage array documented in VSPEX Private Cloud validated maximums.
Server resources

For some workloads, the relationship between server needs and storage needs does not match what is outlined in the reference virtual machine. Size the server and storage layers separately in this scenario.

Figure 32. Customizing server resources

To do this, first total the resource requirements for the server components, as shown in Table 15. In the Server component totals line at the bottom of the worksheet, add up the server resource requirements from the applications in the table.

Note: When customizing resources in this way, confirm that storage sizing is still appropriate. The Storage component totals line at the bottom of Table 15 describes the required amount of storage.
Table 15. Server resource component totals

Application | Server resources: CPU (virtual CPUs) | Server resources: Memory (GB) | Storage resources: IOPS | Storage resources: Capacity (GB) | Reference virtual machines
Example application #1, Custom-built application: Resource requirements | 1 | 3 | 15 | 30 | N/A
Example application #1, Custom-built application: Equivalent reference virtual machines | 1 | 2 | 1 | 1 | 2
Example application #2, Point-of-Sale system: Resource requirements | 4 | 16 | 200 | 200 | N/A
Example application #2, Point-of-Sale system: Equivalent reference virtual machines | 4 | 8 | 8 | 2 | 8
Example application #3, Web server #1: Resource requirements | 2 | 8 | 50 | 25 | N/A
Example application #3, Web server #1: Equivalent reference virtual machines | 2 | 4 | 2 | 1 | 4
Example application #4, Decision Support System database #1: Resource requirements | 10 | 64 | 700 | 5,120 | N/A
Example application #4, Decision Support System database #1: Equivalent reference virtual machines | 10 | 32 | 28 | 52 | 52
Total equivalent reference virtual machines | | | | | 66
Server component totals | 17 | 91 | | | N/A
Storage component totals | | | 965 | 5,375 | N/A

Note: To get the server and storage component totals, calculate the sum of the Resource requirements row for each application, not the Equivalent reference virtual machines row.
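The component-totals arithmetic behind Table 15 can be checked with a few lines of Python, using the values from the four example applications:

from math import ceil

# Sum the raw resource requirements (not the equivalent reference VMs),
# then convert virtual CPUs to physical cores at the stated 4:1 ratio
# with no memory over-provisioning.
apps = [(1, 3), (4, 16), (2, 8), (10, 64)]    # (virtual CPUs, memory GB)
total_vcpus = sum(v for v, _ in apps)          # 17
total_memory_gb = sum(m for _, m in apps)      # 91
cores = ceil(total_vcpus / 4)                  # 5 physical cores
print(f"{total_vcpus} vCPUs -> {cores} cores, {total_memory_gb} GB memory")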
In this example, the target architecture requires 17 virtual CPUs and 91 GB of memory. With the stated assumption of four virtual CPUs per physical processor core, and no memory over-provisioning, this translates to 5 physical processor cores and 91 GB of memory. With these numbers, the solution can be implemented effectively with fewer server and storage resources.

Note: Keep high-availability requirements in mind when customizing the resource pool hardware.

Appendix C provides a blank server resource component totals worksheet.

EMC VSPEX Sizing Tool

To simplify the sizing of this solution, EMC has produced the VSPEX Sizing Tool. This tool uses the same sizing process described in the preceding sections, and also incorporates sizing for other VSPEX solutions. The VSPEX Sizing Tool enables you to input the resource requirements from the customer's answers in the qualification worksheet. After you complete the inputs, the tool generates a series of recommendations that let you validate your sizing assumptions, while providing platform configuration information that meets those requirements. The tool can be accessed at EMC VSPEX Sizing Tool.
Chapter 5 VSPEX Configuration Guidelines

This chapter presents the following topics:
- Overview
- Pre-deployment tasks
- Customer configuration data
- Prepare, connect, and configure Brocade network switches
- Configure Brocade VDX 6740 switch (file storage)
- Configure Brocade 6510 switch storage network (block storage)
- Providing power to the switch
- Configuring the 6510 switch
- Preparing and configuring the storage array
- Installing and configuring Hyper-V hosts
- Installing and configuring the SQL Server database
- System Center Virtual Machine Manager server deployment
- Summary
Overview

The deployment process consists of the main stages listed in Table 16. After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure. The table lists the main stages in the solution deployment process and includes references to sections that contain the relevant procedures.

Table 16. Deployment process overview

Stage | Description | Reference
1 | Verify prerequisites | Pre-deployment tasks
2 | Obtain the deployment tools | Deployment prerequisites
3 | Gather customer configuration data | Customer configuration data
4 | Rack and cable the components | Refer to the vendor documentation
5 | Configure the switches and networks, and connect to the customer network | Prepare, connect, and configure Brocade network switches
6 | Install and configure the VNXe | Preparing and configuring the storage array
7 | Configure virtual machine storage | Preparing and configuring the storage array
8 | Install and configure the servers | Installing and configuring Hyper-V hosts
9 | Set up SQL Server (used by SCVMM) | Installing and configuring the SQL Server database
10 | Install and configure SCVMM | System Center Virtual Machine Manager server deployment
Pre-deployment tasks

Overview

The pre-deployment tasks shown in Table 17 include procedures that are not directly related to environment installation and configuration, but that provide results needed at the time of installation. Examples of pre-deployment tasks are collecting hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required onsite.

Table 17. Tasks for pre-deployment

Task | Description | Reference
Gather documents | Gather the related documents listed in Appendix D. These documents provide detail on setup procedures and deployment best practices for the various components of the solution. | EMC documentation; Brocade documentation
Gather tools | Gather the required and optional tools for the deployment. Use Table 18 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process. | Table 18: Deployment prerequisites checklist
Gather data | Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the customer configuration data sheet for reference during the deployment process. | Appendix B
Deployment prerequisites

Table 18 lists the hardware, software, and licenses required to configure the solution. For additional information, refer to Table 3.

Table 18. Deployment prerequisites checklist

Requirement | Description | Reference

Hardware:
- Sufficient physical server capacity to host 125 virtual servers
- Windows Server 2012 R2 servers to host the virtual infrastructure servers. Note: The existing infrastructure may already meet this requirement.
- Brocade 6510 Fibre Channel switches (block-based storage network connectivity) or Brocade VDX 6740-T Ethernet fabric switches (file-based storage network connectivity). Reference: Table 2
- EMC VNXe3200 (125 virtual machines): multiprotocol storage array with the required disk layout

Software:
- SCVMM 2012 R2 installation media
- Microsoft Windows Server 2012 R2 installation media
- Brocade VDX AMPP and integration with the VMware vCenter Server
- Microsoft Windows Server 2012 R2 installation media (optional, for the virtual machine guest OS)
- Microsoft SQL Server 2012 R2 or newer installation media. Note: The existing infrastructure may already meet this requirement.

Licenses:
- Microsoft Windows Server 2012 R2 Standard Edition (or higher) license keys (optional)
- Microsoft Windows Server 2012 R2 Datacenter Edition license keys. Note: An existing Microsoft Key Management Server (KMS) may already meet this requirement.
- Microsoft SQL Server license key. Note: The existing infrastructure may already meet this requirement.
- SCVMM 2012 R2 license keys
- 40 GbE port upgrade licenses for Brocade VDX 6740-T switches with NOS v5.0

Customer configuration data

To reduce onsite time, assemble information such as IP addresses and hostnames during the planning process. Appendix B provides a table for maintaining a record of relevant customer information. Add, record, or modify information as needed during the deployment process. Additionally, complete the VNX File and Unified Worksheet, available on EMC Online Support, to record the most comprehensive array-specific information.
Prepare, connect, and configure Brocade network switches

Overview

This section lists the Brocade network infrastructure requirements to support this VSPEX architecture. Table 19 summarizes the tasks for switch and network configuration and provides references for further information.

Table 19. Tasks for switch and network configuration

Task | Description | Reference
Complete network cabling | Connect the switch interconnect ports, the VNX ports, the Hyper-V host ports, and the Windows server ports. | Configure Brocade 6510 switch storage network (block storage); Configure Brocade VDX 6740-T switch (file storage)
Configure the Brocade network | Configure the storage array and Windows host infrastructure networking. | Preparing and configuring the storage array; Installing and configuring Hyper-V hosts

Prepare the Brocade storage network infrastructure

The Brocade network switches deployed with the VSPEX solution provide redundant links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This Brocade storage network configuration provides both scalable bandwidth performance and redundancy. The Brocade network solution can be deployed alongside the other components of a newly deployed VSPEX solution, or as an upgrade for 1 GbE to 10 GbE transitions of an existing VSPEX compute and storage solution. This network solution has validated levels of performance and high availability; this section illustrates the network switching capacity listed in Figure 33.

Figure 33 and Figure 34 show sample redundant network connectivity with Brocade storage infrastructure for this solution. The diagrams illustrate the use of redundant switches and links to ensure that there are no single points of failure.
File-based storage network

Figure 33 shows a sample redundant Brocade VDX Ethernet fabric providing a 10 GbE network between compute and storage. The diagram illustrates the use of redundant switches with 10 GbE/40 GbE links to ensure that no single points of failure exist in the CIFS-based storage network connectivity.

Note: The Brocade VDX Ethernet fabric switch also supports converged networks for customers needing FCoE or iSCSI block-based storage networking.

Figure 33. Sample Brocade network architecture - file storage

Note: Ensure that there are adequate switch ports between the file-based storage array and the Hyper-V hosts, as well as ports to the existing customer infrastructure. Virtual machine networking and Hyper-V management are customer-facing networks; separate them if required.
Note: Use existing infrastructure that meets the requirements for the customer infrastructure and management networks. In this deployment, VLANs separate traffic: VLAN 30 for Live Migration traffic, VLAN 20 for storage traffic, and VLAN 10 for management. Refer to Step 5 in the deployment section for details.

Block-based storage network

Figure 34 shows a sample redundant Brocade 6510 Fibre Channel (FC) fabric switch infrastructure for the block-based storage network between the compute layer and the storage array. The diagram illustrates the use of redundant switches and links to ensure that no single points of failure exist in the network connectivity. Brocade 6510 FC switches with Gen 5 Fibre Channel technology simplify the storage network infrastructure through innovative technologies and support the VSPEX highly virtualized topology design. The Brocade 6510 FC switches are validated for the FC protocol option.

Note: The Brocade VDX Ethernet fabric switch supports converged networks for customers needing FCoE or iSCSI block-based storage networking as well.

Figure 34. Sample Brocade network architecture - block storage
Note: Ensure that there are adequate storage switch ports between the block-based storage array and the Hyper-V hosts.

Note: Use existing infrastructure that meets the requirements for the customer infrastructure and management networks.

Complete network cabling

Connect Brocade switch ports to all servers, storage arrays, and uplinks. Ensure that all solution servers, storage arrays, switch interconnects, and switch uplinks have redundant connections, and that the uplinks are connected to the existing customer network. Specifically, ensure the following:
- Brocade switch ports are connected to all servers, storage arrays, inter-switch links (ISLs), and uplinks.
- All servers and switch uplinks plug into separate switching infrastructures and have redundant connections.
- The connection to the existing customer network is complete.

Note: The Brocade switch installation guides provide instructions on racking, cabling, and powering; refer to them for details.

Note: At this point, the new equipment is being connected to the existing customer network. Be careful that unforeseen interactions do not cause service issues on the customer network.
Configure Brocade VDX 6740-T switch (file storage)

This section describes the Brocade VDX switch configuration procedure with Hyper-V compute and VNX storage. The Brocade VDX switches provide infrastructure connectivity between the Hyper-V servers, the existing customer network, and CIFS-attached VNX storage, as described in the following sections. In this deployment, it is assumed that this new equipment is being connected to the existing customer network and, potentially, to existing compute servers with either 1 GbE or 10 GbE attached NICs.

VSPEX with the Brocade VDX 6740-T Ethernet fabric (24/64-port) switches for 10 GbE-attached Hyper-V servers is enabled with VCS Fabric technology, which has the following salient features:
- It is an Ethernet fabric switched network. The Ethernet fabric uses an emerging standard called Transparent Interconnection of Lots of Links (TRILL) as the underlying technology.
- All switches automatically know about each other and all connected physical and logical devices.
- All paths in the fabric are available. Traffic is always distributed across equal-cost paths; traffic from the source to the destination can travel across multiple equal-cost paths.
- Traffic always travels across the shortest path in the fabric. If a single link fails, traffic is automatically switched to other available paths. If one of the links in Active Path #1 goes down, traffic is seamlessly switched across Active Path #2.
- Spanning Tree Protocol (STP) is not necessary, because the Ethernet fabric itself is loop-free and appears to connected servers, devices, and the rest of the network as a single logical switch.
- The fabric is self-forming. When two Brocade VCS Fabric mode-enabled switches are connected, the fabric is automatically created, and the switches discover the common fabric configuration.
- The fabric is masterless. No single switch stores configuration information or controls fabric operations. Any switch can fail or be removed without causing disruptive fabric downtime or delayed traffic.
- The fabric is aware of all members, devices, and virtual machines (VMs). If a VM moves from one Brocade VCS Fabric port to another in the same fabric, the port profile is automatically moved to the new port, leveraging the Brocade Automatic Migration of Port Profiles (AMPP) feature.
- All switches in an Ethernet fabric can be managed as if they were a single logical chassis. To the rest of the network, the fabric looks no different from any other Layer 2 switch (the Logical Chassis feature).

VCS is enabled by default on the Brocade VDX. Brocade VDX switches are available in both port-side exhaust and port-side intake configurations. Depending on your hot-aisle/cold-aisle considerations, choose the appropriate airflow model for your deployment. For more information, refer to the Brocade VDX 6740 Hardware Reference Manual listed in Appendix D.

The following procedure deploys the Brocade VDX 6740-T switches with VCS Fabric technology in the VSPEX Private Cloud solution for up to 125 virtual machines.

Table 20. Brocade VDX 6740 configuration steps

Step 1: Verify and apply Brocade VDX NOS licenses
Step 2: Configure the logical chassis VCS ID and RBridge IDs on the VDXs
Step 3: Assign the switch name
Step 4: Brocade VCS Fabric ISL port configuration
Step 5: Create the required VLANs
Step 6: Create vLAGs for the Microsoft Hyper-V hosts
Step 7: Create vLAGs for the VNX ports
Step 8: Connect the VCS fabric to the existing infrastructure through uplinks
Step 9: Configure MTU and jumbo frames
Step 10: Enable flow control support
Step 11: Auto QoS for NAS

Refer to Appendix D for related documents.
Step 1: Verify and apply Brocade VDX NOS licenses

Before starting the switch configuration, make sure you have the required licenses for the VDX 6740-T switches. With NOS version 5.0 or later, the Brocade VCS Fabric license is built into the code, so you only require port upgrade licenses, depending on the port density required in the setup. This deployment assumes that 48 x 10 GbE ports are activated on the base Brocade 6740s, and that one 40 GbE port upgrade license is applied on each switch, enabling two 40 GbE ports per box. These ports are used for the inter-switch links (ISLs) between the two VDXs.

A. Displaying the switch license ID

The switch license ID identifies the switch for which the license is valid. You need the switch license ID when you purchase and activate a license key. To display the switch license ID, enter the show license id command in privileged EXEC mode, as shown:

sw0# show license id
Rbridge-Id    License ID
===================================================
1             10:00:00:27:F8:BB:7E:85

B. Applying the licenses to the switches

Once you have the 40G port upgrade license strings generated from the Brocade licensing portal for both switches, apply them as shown:

sw0# license add licstr "*B Iflp5mb:NvYn,E4pLcOsVJfqrXDeeu9nMwqM2bQhqtf96TiqVORiWThxA:qsmQ8L3fIB0tJbTsSuRW,Sfl60zkfbeI2IQiEjHjZFgVb1HLbwLWd3l2JXaDtvcR8DxwiC:wfU#"
2014/03/07-03:40:27, [SEC-3051], 552,, INFO, sw0, The license key *B Iflp5mb:NvYn,E4pLcOsVJfqrXDeeu9nMwqM2bQhqtf96TiqVORiWThxA:qsmQ8L3fIB0tJbTsSuRW,Sfl60zkfbeI2IQiEjHjZFgVb1HLbwLWd3l2JXaDtvcR8DxwiC:wfU# is Added.
License Added [*B Iflp5mb:NvYn,E4pLcOsVJfqrXDeeu9npwqM2bQhqtf96TiqVORiWThxA:qsmQ8L3fIB0tJbTsSuRW,Sfl60zkfbeI2IQiEjHjZFgVb1HLbwLWd3l2JXaDtvcR8DxwiC:wfU#]
For license change to take effect, it may be necessary to enable ports...

As noted in the switch output, you may have to enable ports for the licenses to take effect. You can do this by issuing no shutdown on the interfaces you are using. The 40 GbE ports can also be used in breakout mode as four 10 GbE ports; refer to the Network OS Administration Guide, v5.0, for details on configuring them.
C. Displaying licenses on the switches

You can display the installed licenses with the show license command. The following example displays a Brocade VDX 6740-T licensed for the full port density of 48 ports and two 40 GbE QSFP ports. This configuration does not include FCoE features.

sw0# show license
rbridge-id: 1
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        10G Port Upgrade license
        Feature name:port_10g_upgrade
        License is valid
        Capacity: 24
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        40G Port Upgrade license
        Feature name:port_40g_upgrade
        License is valid
        Capacity: 2

Refer to the Network OS Software Licensing Guide v5.0, listed in Appendix D, for additional licensing information.

Step 2: Configure the logical chassis VCS ID and RBridge IDs on the VDXs

When VCS is deployed as a logical chassis, it can be managed through a single virtual IP, and configuration changes are automatically saved across all switches in the fabric. The RBridge ID is a unique identifier for an RBridge (a physical switch in a VCS fabric), and the VCS ID is a unique identifier for a VCS fabric. The factory default VCS ID is 1. All switches in a VCS fabric must have the same VCS ID; because the default is 1 on each VDX switch, it does not need to be changed in a one-cluster implementation. The RBridge ID is also set to 1 by default on each VDX switch, but if more than one switch is to be added to the fabric, each switch needs its own unique RBridge ID, as in this implementation.

In this deployment, VCS ID 1 is assigned on all VDXs, and the RBridge IDs are assigned as per the deployment topology in Figure 58. The following example shows the configuration for a logical chassis with RB21 as the principal; the other RBridges can be configured in a similar manner. The value range for the RBridge ID is 1-239; the value range for the VCS ID is 1-8192.

BRCD6740# vcs rbridge-id 21

In privileged EXEC mode, enter the vcs command with options to set the VCS ID and RBridge ID and to enable logical chassis mode for the switch. After you execute the command below, you are asked if you want to apply the default configuration and reboot the switch; answer yes.
sw0# vcs vcsid 1 rbridge-id 21 logical-chassis enable
This operation will perform a VCS cluster mode transition for this local node with new parameter settings.
This will change the configuration to default and reboot the switch. Do you want to continue? [y/n]: y

Note: To create a logical chassis cluster, perform the above steps on every VDX in the VCS fabric, changing only the RBridge ID each time, based on Figure 45.

Any global and local configuration changes made from this point are distributed automatically to all nodes in the logical chassis cluster. You can enter configuration mode for any VDX in the cluster from the cluster principal node by using its RBridge ID. Also note that, once in logical chassis mode, you can make configuration changes for the cluster RBridges only from the principal node; any attempt to make changes from a secondary node returns an error, as shown:

sw0(config)# int vlan 20
%Error: This operation is not supported from a secondary node
sw0(config)#

Optionally, the cluster can also be managed through a virtual IP (in logical chassis and fabric cluster modes only), which is tied to the principal node/switch in the cluster. The management interface of the principal switch can be accessed by means of this virtual IP address, as shown:

sw0(config)# vcs virtual ip address

In this example, the entire fabric can be managed with one virtual IP.

Note: For details on the logical chassis and the virtual cluster IP, refer to the Network OS Administration Guide, v5.0.
Step 3: Assign Switch Name

Every switch ships with the default host name sw0; change it for easier recognition and management. Use the switch-attributes command to set the host name, as shown:

sw0# configure terminal
sw0(config)# switch-attributes 21 host-name BRCD6740-RB21

After you have enabled Logical Chassis mode and assigned switch names on each node in the cluster, run the show vcs command to determine which node has been assigned as the cluster principal node. This node can be used to configure the entire VCS fabric. The arrow (>) denotes the cluster principal node. The asterisk (*) denotes the current logged-in node.

BRCD6740-RB21# show vcs
Config Mode      : Distributed
VCS Mode         : Logical Chassis
VCS ID           : 1
VCS GUID         : 34f262b4-e64f-4a18-a986-a767d389803e
Total Number of Nodes : 2
Rbridge-Id   WWN                         Management IP   VCS Status   Fabric Status   HostName
21          >10:00:00:27:F8:BB:94:18*                    Online       Online          BRCD6740-RB21
22           10:00:00:27:F8:BB:7E:85                     Online       Online          BRCD6740-RB22
<truncated output>

Step 4: Brocade VCS Fabric ISL Port Configuration

The VDX platform comes preconfigured with a default port configuration that enables ISLs and trunking for easy, automatic VCS fabric formation. However, for edge port devices the port configuration requires editing to accommodate specific connections.

The interface format is rbridge-id/slot/port number, for example 21/0/49.

The default port configuration for the 40 GbE ports can be seen with the show running-config command, as shown:

BRCD6740-RB21# show running-config interface FortyGigabitEthernet 21/0/49
interface FortyGigabitEthernet 21/0/49
 fabric isl enable
 fabric trunk enable
 no shutdown
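Because the fabric is a single logical chassis, host names for the remaining nodes can be assigned from the principal node by referencing their RBridge IDs. For example, for RBridge 22 (the host name follows this deployment's naming convention):

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# switch-attributes 22 host-name BRCD6740-RB22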
There are two types of ports in a VCS fabric: ISL ports and edge ports. ISL ports connect VCS fabric switches, whereas edge ports connect to end devices or to switches and routers that are not in VCS Fabric mode.

Figure 35. Port types

Configuring Fabric ISLs and Trunks

Brocade ISLs connect VDX switches in VCS mode. All ISL ports connected to the same neighbor VDX switch attempt to form a trunk. Trunk formation requires that all ports between the switches are set to the same speed and are part of the same port group. For redundancy, the recommendation is to have at least two trunks between two Brocade VDX switches; the actual number of trunks required may vary depending on customer I/O bandwidth and oversubscription-ratio requirements. The maximum number of ports allowed per trunk group is sixteen on the Brocade VDX 6740-T and eight on the Brocade VDX 6720.

In a deployment with Brocade VDX 6740s, either the 10 GbE or the 40 GbE ports can be used for ISLs. In this example configuration, we use the 40 GbE ports to form two ISLs, which guarantees frame-based load balancing across the ISLs.
The port groups for the VDX 6740 and VDX 6740T platforms are shown below.

Figure 36. Port groups of the VDX 6740 (Trunk Groups 1 through 4: 1/10 GbE SFP ports; Trunk Groups 3A and 4A: 40 GbE QSFP ports)

Figure 37. Port groups of the VDX 6740T and Brocade VDX 6740T-1G (Trunk Groups 1 through 4: 1/10 GbE Base-T ports; Trunk Groups 3A and 4A: 40 GbE QSFP ports)

Note: On the Brocade VDX 6740, ports in groups 3 and 3A, as well as port groups 4 and 4A, can be trunked together only when the 40 GbE QSFP ports are configured in breakout mode. On the Brocade VDX 6740T/6740T-1G models, this trunking is not allowed. For more information about Brocade trunking, refer to the Brocade Network OS Administrator's Guide, v5.0.

You can use the fabric isl enable, fabric trunk enable, no fabric isl enable, and no fabric trunk enable commands to toggle ports that are part of a trunked ISL, if needed. The following example shows the running configuration of an ISL port on RB21.

BRCD6740-RB21# show running-config interface FortyGigabitEthernet 21/0/49
interface FortyGigabitEthernet 21/0/49
 fabric isl enable
 fabric trunk enable
 no shutdown
!
You can also verify ISL configurations using the show fabric isl and show fabric trunk commands on RB21, as shown:

BRCD6740-RB21# show fabric isl
Rbridge-id: 21   #ISLs: 2

Src-Index  Src-Interface  Nbr-Index  Nbr-Interface  Nbr-WWN                  BW   Trunk  Nbr-Name
0          Fo 21/0/49     0          Fo 22/0/49     10:00:00:27:F8:BB:7E:85  40G  Yes    "BRCD6740-RB22"
2          Fo 21/0/51     2          Fo 22/0/51     10:00:00:27:F8:BB:7E:85  40G  Yes    "BRCD6740-RB22"

BRCD6740-RB21# show fabric trunk
Rbridge-id: 21

Trunk  Src    Source       Nbr    Nbr
Group  Index  Interface    Index  Interface    Nbr-WWN
       0      Fo 21/0/49   0      Fo 22/0/49   10:00:00:27:F8:BB:7E:85
       2      Fo 21/0/51   2      Fo 22/0/51   10:00:00:27:F8:BB:7E:85

Step 5: Create required VLANs

It is a best practice to separate network traffic into VLANs. The steps in this section provide guidelines for creating the required VLANs. This example deployment uses VLANs 10, 20, and 30, as shown:

VLAN Purpose   VLAN ID   VLAN Description
Storage        20        CIFS (storage) traffic
Cluster        30        Cluster live migration traffic
Management     10        Management traffic

To create a VLAN interface, perform the following steps from privileged EXEC mode.

1. Enter the configure terminal command to access global configuration mode.

BRCD6740-RB21# configure terminal
Entering configuration mode terminal
BRCD6740-RB21(config)#

2. Enter the interface Vlan command to assign the VLAN interface number.

BRCD6740-RB21(config)# interface Vlan 20
BRCD6740-RB21(config-Vlan-20)#
3. Create the other required VLANs (10 and 30) as described in the table above; a sketch follows this step. You can view the defined VLANs in the VCS cluster using the show vlan brief command.

Note: Once in Logical Chassis mode, you can create VLANs and make other configuration changes for the cluster RBridges only from the principal node; any attempt to make changes from a secondary node returns an error. In this deployment, RBridge 21 is the principal node, so all configuration changes must be run on that node.

Figure 38. Creating VLANs
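The remaining VLANs from the table can be created in one short session from the principal node and then verified. A minimal sketch follows; the description strings are illustrative labels only, and can be omitted if your NOS release does not accept a description under interface Vlan:

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Vlan 10
BRCD6740-RB21(config-Vlan-10)# description Management
BRCD6740-RB21(config-Vlan-10)# exit
BRCD6740-RB21(config)# interface Vlan 30
BRCD6740-RB21(config-Vlan-30)# description Cluster-LiveMigration
BRCD6740-RB21(config-Vlan-30)# exit
BRCD6740-RB21(config)# end
BRCD6740-RB21# show vlan brief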
Step 6: Create vLAGs for Microsoft Hyper-V hosts

1. Configure vLAG port-channel interfaces 44 and 55 on Brocade VDX 6740-RB21 (the principal node) for Hyper-V hosts A and B.

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Port-channel 44
BRCD6740-RB21(config-Port-channel-44)# mtu 9216
BRCD6740-RB21(config-Port-channel-44)# speed 10000
BRCD6740-RB21(config-Port-channel-44)# description Host_A-vLAG-44
BRCD6740-RB21(config-Port-channel-44)# switchport
BRCD6740-RB21(config-Port-channel-44)# switchport mode trunk
BRCD6740-RB21(config-Port-channel-44)# switchport trunk allowed vlan 20
BRCD6740-RB21(config-Port-channel-44)# no shutdown

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Port-channel 55
BRCD6740-RB21(config-Port-channel-55)# mtu 9216
BRCD6740-RB21(config-Port-channel-55)# speed 10000
BRCD6740-RB21(config-Port-channel-55)# description Host_B-vLAG-55
BRCD6740-RB21(config-Port-channel-55)# switchport
BRCD6740-RB21(config-Port-channel-55)# switchport mode trunk
BRCD6740-RB21(config-Port-channel-55)# switchport trunk allowed vlan 20
BRCD6740-RB21(config-Port-channel-55)# no shutdown

2. Configure interfaces TenGigabitEthernet 21/0/10 and 21/0/11 on Brocade VDX6740-RB21.

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface TenGigabitEthernet 21/0/10
BRCD6740-RB21(conf-if-te-21/0/10)# description Host_A-vLAG-44
BRCD6740-RB21(conf-if-te-21/0/10)# channel-group 44 mode active type standard
BRCD6740-RB21(conf-if-te-21/0/10)# lacp timeout long
BRCD6740-RB21(conf-if-te-21/0/10)# no shutdown

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface TenGigabitEthernet 21/0/11
BRCD6740-RB21(conf-if-te-21/0/11)# description Host_B-vLAG-55
BRCD6740-RB21(conf-if-te-21/0/11)# channel-group 55 mode active type standard
BRCD6740-RB21(conf-if-te-21/0/11)# lacp timeout long
BRCD6740-RB21(conf-if-te-21/0/11)# no shutdown

3. Repeat steps 1-2 to configure interfaces TenGigabitEthernet 22/0/10 and 22/0/11 on Brocade VDX6740-RB22 via the RB21 CLI, which is the principal node.

4. Validate vLAG port-channel interfaces 44 and 55 in the Brocade VCS cluster with RB21 and RB22 to Hyper-V hosts A and B.
BRCD6740-RB21# show interface Port-channel 44
Port-channel 44 is up, line protocol is up
Hardware is AGGREGATE, address is c.adee
    Current address is c.adee
Description: Host_A-vLAG-44
Interface index (ifindex) is
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual     : Mbit
Allowed Member Speed : Mbit
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 4d19h49m
Queueing strategy: fifo
Receive Statistics:
<truncated output>

BRCD6740-RB21# show interface Port-channel 55
Port-channel 55 is up, line protocol is up
Hardware is AGGREGATE, address is c.adec
    Current address is c.adec
Description: Host_B-vLAG-55
Interface index (ifindex) is
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual     : Mbit
Allowed Member Speed : Mbit
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 4d19h49m
Queueing strategy: fifo
Receive Statistics:
<truncated output>

5. Validate interfaces TenGigabitEthernet 21/0/10 and 21/0/11 on Brocade VDX RB21, as shown.

BRCD6740-RB21# show interface TenGigabitEthernet 21/0/10
TenGigabitEthernet 21/0/10 is up, line protocol is up
Hardware is Ethernet, address is c.adb6
    Current address is c.adb6
Pluggable media present
Description: Host_A-vLAG-44
Interface index (ifindex) is
MTU 9216 bytes
LineSpeed Actual     : Mbit, Duplex: Full
LineSpeed Configured : Auto, Duplex: Full
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 5d23h36m
Queueing strategy: fifo
Receive Statistics:
... <truncated output>

BRCD6740-RB21# show interface TenGigabitEthernet 21/0/11
TenGigabitEthernet 21/0/11 is up, line protocol is up
Hardware is Ethernet, address is c.adb8
    Current address is c.adb8
Pluggable media present
Description: Host_B-vLAG-55
Interface index (ifindex) is
MTU 9216 bytes
LineSpeed Actual     : Mbit, Duplex: Full
LineSpeed Configured : Auto, Duplex: Full
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 5d23h36m
Queueing strategy: fifo
Receive Statistics:
... <truncated output>

6. Repeat the validation in step 5 for interfaces TenGigabitEthernet 22/0/10 and 22/0/11 on Brocade BRCD6740-RB22 as well.
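In addition to the per-interface views above, an aggregate view of all port-channels is often convenient. The following sketch assumes the show port-channel command forms available in NOS 5.0 (verify the exact keywords on your release); neither command changes any configuration:

BRCD6740-RB21# show port-channel summary
BRCD6740-RB21# show port-channel 44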
Step 7: Create vLAGs for VNX ports

EMC's VNX5400/5600/5800 storage arrays support LACP-based dynamic LAGs, so to provide link- and node-level redundancy, dynamic LACP-based vLAGs can be configured on the Brocade VDX 6740-T switches.

Note: In some port-channel configurations, depending on the storage ports (1 GbE or 10 GbE), the speed on the port-channel might need to be set manually on the VDX 6740, as shown:

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Port-channel 33
BRCD6740-RB21(config-Port-channel-33)# speed
[1000,10000,40000] (1000):
BRCD6740-RB21(config-Port-channel-33)#

To configure dynamic vLAGs on each Brocade VDX 6740-T switch interface, use the following steps:

1. Configure port-channel interface 33 on BRCD6740-RB21 for the VNX (enabled for storage VLAN 20).

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Port-channel 33
BRCD6740-RB21(config-Port-channel-33)# mtu 9216
BRCD6740-RB21(config-Port-channel-33)# description VNX-vLAG-33
BRCD6740-RB21(config-Port-channel-33)# switchport
BRCD6740-RB21(config-Port-channel-33)# switchport mode trunk
BRCD6740-RB21(config-Port-channel-33)# switchport trunk allowed vlan 20

2. Configure interfaces TenGigabitEthernet 21/0/23 and 21/0/24 on BRCD6740-RB21 for port-channel 33 and LACP.

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface TenGigabitEthernet 21/0/23
BRCD6740-RB21(conf-if-te-21/0/23)# description VNX-SPA-fxg-1-0
BRCD6740-RB21(conf-if-te-21/0/23)# channel-group 33 mode active type standard
BRCD6740-RB21(conf-if-te-21/0/23)# lacp timeout long
BRCD6740-RB21(conf-if-te-21/0/23)# no shutdown

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface TenGigabitEthernet 21/0/24
BRCD6740-RB21(conf-if-te-21/0/24)# description VNX-SPA-fxg-1-1
BRCD6740-RB21(conf-if-te-21/0/24)# channel-group 33 mode active type standard
BRCD6740-RB21(conf-if-te-21/0/24)# lacp timeout long
BRCD6740-RB21(conf-if-te-21/0/24)# no shutdown

3. Repeat steps 1-2 from the Logical Chassis principal node (RBridge 21) to configure port-channel 33 on RBridge 22's interfaces 22/0/23 and 22/0/24 (also enabled for storage VLAN 20), which connect to SPB on the VNX.
4. Validate the vLAG port-channel interface on BRCD6740-RB21 and BRCD6740-RB22 to the VNX.

BRCD6740-RB21# show interface Port-channel 33
Port-channel 33 is up, line protocol is up
Hardware is AGGREGATE, address is c.adee
    Current address is c.adee
Description: VNX-vLAG-33
Interface index (ifindex) is
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual     : Mbit
Allowed Member Speed : Mbit

5. Validate interfaces TenGigabitEthernet 21/0/23-24 on BRCD6740-RB21 and interfaces TenGigabitEthernet 22/0/23-24 on BRCD6740-RB22.

BRCD6740-RB21# show interface TenGigabitEthernet 21/0/23
TenGigabitEthernet 21/0/23 is up, line protocol is up (connected)
Hardware is Ethernet, address is c.adb6
    Current address is c.adu6
Description: VNX-SPA-fxg-1-0
Interface index (ifindex) is
MTU 9216 bytes
LineSpeed : Mbit, Duplex: Full
Flowcontrol rx: on, tx: on
... <truncated output>

BRCD6740-RB21# show interface TenGigabitEthernet 21/0/24
TenGigabitEthernet 21/0/24 is up, line protocol is up (connected)
Hardware is Ethernet, address is c.adb7
    Current address is c.ade6
Description: VNX-SPA-fxg-1-1
Interface index (ifindex) is
MTU 9216 bytes
LineSpeed : Mbit, Duplex: Full
Flowcontrol rx: on, tx: on
... <truncated output>

BRCD6740-RB21# show interface TenGigabitEthernet 22/0/23
TenGigabitEthernet 22/0/23 is up, line protocol is up (connected)
Hardware is Ethernet, address is c.adb8
    Current address is c.adp6
Description: VNX-SPB-fxg-2-0
Interface index (ifindex) is
MTU 9216 bytes
LineSpeed : Mbit, Duplex: Full
Flowcontrol rx: on, tx: on
... <truncated output>
BRCD6740-RB21# show interface TenGigabitEthernet 22/0/24
TenGigabitEthernet 22/0/24 is up, line protocol is up (connected)
Hardware is Ethernet, address is c.adb9
    Current address is c.adh6
Description: VNX-SPB-fxg-2-1
Interface index (ifindex) is
MTU 9216 bytes
LineSpeed : Mbit, Duplex: Full
Flowcontrol rx: on, tx: on
... <truncated output>

Step 8: Connecting the VCS Fabric to an existing infrastructure through uplinks

Brocade VDX 6740 switches can be uplinked so that the fabric is accessible from the customer's existing network infrastructure. On VDX 6740 platforms, either 40 GbE or 10 GbE ports can be used for this purpose; this example deployment uses 10 GbE ports. Configure the uplink to match the customer's network, depending on whether it carries tagged or untagged traffic. The following example can be used as a guideline for connecting the VCS fabric to an existing infrastructure network.

Figure 39. Example VCS/VDX network topology with infrastructure connectivity
Creating virtual link aggregation groups (vLAGs) to the infrastructure network

Create vLAGs from each RBridge to the infrastructure switches, which in turn provide access to resources in the core network. This example illustrates the configuration for port-channel 4 on RB21 and RB22.

1. Create port-channel 4, which connects to the infrastructure switches that interface to the core. This example uses port-channel 4 on Grp1, RB21.

BRCD6740-RB21(config)# interface port-channel 4
BRCD6740-RB21(config-Port-channel-4)# switchport
BRCD6740-RB21(config-Port-channel-4)# switchport mode trunk
BRCD6740-RB21(config-Port-channel-4)# switchport trunk allowed vlan all
BRCD6740-RB21(config-Port-channel-4)# no shutdown

2. Use the channel-group command to configure the member interfaces of port-channel 4.

BRCD6740-RB21(config)# in te 21/0/5
BRCD6740-RB21(conf-if-te-21/0/5)# channel-group 4 mode active type standard
BRCD6740-RB21(conf-if-te-21/0/5)# in te 21/0/6
BRCD6740-RB21(conf-if-te-21/0/6)# channel-group 4 mode active type standard

3. Repeat steps 1-2 from the Logical Chassis principal node (RBridge 21) to configure port-channel 4 on RBridge 22's interfaces 22/0/5 and 22/0/6.

4. Use the do show port-chan command to confirm that the vLAG comes up and is configured correctly.

Note: The LAG must be configured on the MLX MCT pair as well before the vLAG can become operational.

BRCD6740-RB21(config-Port-channel-4)# do show port-chan 4
LACP Aggregator: Po 4 (vLAG)
Aggregator type: Standard
Ignore-split is enabled
Member rbridges:
  rbridge-id: 21 (2)
  rbridge-id: 22 (2)
Admin Key: Oper Key 0004
Partner System ID - 0x0001,01-80-c
Partner Oper Key
Member ports on rbridge-id 21:
  Link: Te 21/0/5 (0x F) sync: 1 *
  Link: Te 21/0/6 (0x ) sync: 1
<truncated output>
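For reference, the matching dynamic LAG on an upstream Brocade MLX router is configured with NetIron-style commands along the following lines. This is a sketch only: the LAG name, ID, and port numbers are illustrative assumptions, and the MCT client configuration itself is covered in the MLX documentation rather than here:

MLX-1(config)# lag "vcs-uplink" dynamic id 4
MLX-1(config-lag-vcs-uplink)# ports ethernet 2/1 ethernet 2/2
MLX-1(config-lag-vcs-uplink)# primary-port 2/1
MLX-1(config-lag-vcs-uplink)# deploy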
Step 9: Configure MTU and Jumbo Frames

Brocade VDX series switches support the transport of jumbo frames. This solution recommends an MTU of 9216 (jumbo frames) for efficient NAS storage and migration traffic. Jumbo frames are enabled by default on Brocade ISL trunks. However, to accommodate end-to-end jumbo frame support on the network for edge systems, this feature can be enabled under the vLAG interface. Note that for end-to-end flow control, jumbo frames also need to be enabled on the host servers and the storage, with a matching MTU size.

Configuring MTU

Note: This must be performed on all RBridges where a given port-channel interface is located. In this example, interface port-channel 44 spans RBridge 21 and RBridge 22, so the configuration is applied from both RBridge 21 and RBridge 22.

Example of enabling jumbo frame support on the applicable VDX interfaces:

BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Port-channel 44
BRCD6740-RB21(config-Port-channel-44)# mtu
(<NUMBER: >) (9216): 9216

Step 10: Enable Flow Control Support

Ethernet flow control is used to prevent dropped frames by slowing traffic at the source end of a link. When a port on a switch or host is not ready to receive more traffic from the source, perhaps due to congestion, it sends pause frames to the source to pause the traffic flow. When the congestion clears, the port stops requesting the source to pause traffic, and traffic resumes without frame drops. It is recommended to enable flow control on the vLAG interfaces toward the VNX on the VDX 6740s, as shown.

Enable QoS flow control for both tx and rx on the vLAG interfaces on RB21 and RB22 going to the VNX:

BRCD6740-RB21# conf t
BRCD6740-RB21(config)# interface Port-channel 33
BRCD6740-RB21(config-Port-channel-33)# qos flowcontrol tx on rx on

Step 11: Auto QoS for NAS

The Auto QoS feature introduced in NOS v5.0 automatically classifies traffic based on either a source or a destination IPv4 address. Once the traffic is identified, it is assigned to a separate priority queue. This allows a minimum bandwidth guarantee to be provided to the queue, so that the identified traffic is less affected by network congestion than other traffic.
Note: Although this feature was created primarily to benefit network-attached storage devices, and the commands in the following sections use the term NAS, there is no strict requirement that these nodes be actual NAS devices; Auto QoS prioritizes the traffic for any set of specified IP addresses.

There are four steps to enabling and configuring Auto QoS for NAS:

1. Enable Auto QoS.
2. Set the Auto QoS CoS value.
3. Set the Auto QoS DSCP value.
4. Specify the NAS server IP addresses.

For detailed instructions on setting up this feature, refer to the Network OS Administration Guide, v5.0.

Configure Brocade 6510 switch storage network (block storage)

This section describes the procedure for deploying the Brocade 6510 Fibre Channel (FC) switches in the VSPEX Private Cloud solution for up to 125 virtual machines for block storage. The Brocade 6510 FC switches provide infrastructure connectivity between the Hyper-V servers and the attached VNX storage of the VSPEX solution. At the point of deployment, compute nodes are connected to the FC storage network with 4, 8, or 16 Gb/s FC HBAs.

The Brocade 6510 is a 1U, 48-port auto-sensing Fibre Channel switch designed to meet large-scale enterprise requirements and equally suited to small and medium-sized workgroups. The Brocade 6510 offers the following:

- Provides flexibility, simplicity, and enterprise-class functionality in a 48-port switch for virtualized data centers and private cloud architectures
- Enables fast, easy, and cost-effective scaling from 24 to 48 ports using Ports on Demand (PoD) capabilities
- Simplifies deployment with the Brocade EZSwitchSetup wizard
- Accelerates deployment and troubleshooting time with Dynamic Fabric Provisioning (DFP), critical monitoring, and advanced diagnostic features
- Maximizes availability with redundant, hot-pluggable components and non-disruptive software upgrades
- Simplifies server connectivity and SAN scalability by offering dual functionality as either a full-fabric SAN switch or an NPIV-enabled Brocade Access Gateway
In addition, it is important to consider the airflow direction of the switches. Brocade 6510 FC switches are available in both port-side exhaust and port-side intake configurations; choose the appropriate airflow for your hot-aisle/cold-aisle design. For more information, refer to the Brocade 6510 Hardware Reference Manual listed in Appendix D.

Providing power to the switch

Connect both power cords to both power supplies. The two connections should be on separate power circuits to protect against AC failure.

Configuring the 6510 switch

Once the switch is powered on, the EZSwitchSetup CD can guide you through basic configuration. If you choose not to use the EZSwitchSetup CD, follow the basic configuration instructions in the rest of this section.

All Brocade Fibre Channel switches ship with the factory defaults listed in Table 21.

Table 21. Brocade switch default settings

Setting                        Factory default
Management IP address
Subnet mask
Gateway address
admin/user password            password
Domain ID                      1
Switch management interfaces   CLI, Web Tools, Connectrix Manager
The following is the procedure for deploying the Brocade 6510 FC switches in the VSPEX Private Cloud solution for up to 125 virtual machines.

Table 22. Brocade 6510 FC switch configuration steps

Step
Step 1: Initial switch configuration
Step 2: Fibre Channel switch licensing
Step 3: Zoning configuration
Step 4: Switch management and monitoring

Refer to Appendix D for related documents.

Step 1: Initial Switch Configuration

Configure HyperTerminal

1. Connect the serial cable to the serial port on the switch and to an RS-232 serial port on the workstation.
2. Open a terminal emulator application (such as HyperTerminal on a PC) and configure the application as listed in Table 23.

Table 23. Serial port settings

Parameter         Value
Bits per second   9600
Data bits         8
Parity            None
Stop bits         1
Flow control      None

Configure IP Address for Management Interface

Switch IP address

You can configure the Brocade 6510 with a static IP address, or you can use a DHCP (Dynamic Host Configuration Protocol) server to set the IP address of the switch. DHCP is enabled by default. The Brocade 6510 supports both IPv4 and IPv6.

Using DHCP to set the IP address

When using DHCP, the Brocade 6510 obtains its IP address, subnet mask, and default gateway address from the DHCP server. The DHCP client can only connect to a DHCP server that is on the same subnet as the switch. If your DHCP server is not on the same subnet as the Brocade 6510, use a static IP address.
Setting a static IP address

1. Log in to the switch using the default password, which is password.
2. Use the ipaddrset command to set the Ethernet IP address. If you are using an IPv4 address, enter the IP address in dotted-decimal notation as prompted. As you enter a value and press Enter, the next prompt appears. For instance, the Ethernet IP address prompt appears first; when you enter a new IP address and press Enter, or simply press Enter to accept the existing value, the Ethernet subnet mask prompt appears. In addition to the Ethernet IP address itself, you can set the Ethernet subnet mask, the gateway IP address, and whether to obtain the IP address through Dynamic Host Configuration Protocol (DHCP). The address values are omitted from the following example.

SW6510:admin> ipaddrset
Ethernet IP Address [ ]:
Ethernet Subnetmask [ ]:
Gateway IP Address [ ]:
DHCP [Off]: off

If you are using an IPv6 address, enter the network information in colon-separated notation as a standalone command.

SW6510:admin> ipaddrset -ipv6 --add 1080::8:800:200C:417A/64
IP address is being changed...
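After setting the address, you can confirm the management interface settings with the ipaddrshow command, which echoes the Ethernet IP address, subnet mask, gateway address, and DHCP state (site-specific values are again omitted here):

SW6510:admin> ipaddrshow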
Configure Domain ID and Fabric Parameters

BRCD-FC-6510:FID128:admin> switchdisable
BRCD-FC-6510:FID128:admin> configure
Configure...
  Fabric parameters (yes, y, no, n): [no] y
    Domain: (1..239) [1] 10
    WWN Based persistent PID (yes, y, no, n): [no]
    Allow XISL Use (yes, y, no, n): [no]
    R_A_TOV: ( ) [10000]
    E_D_TOV: ( ) [2000]
    WAN_TOV: ( ) [0]
    MAX_HOPS: (7..19) [7]
    Data field size: ( ) [2112]
    Sequence Level Switching: (0..1) [0]
    Disable Device Probing: (0..1) [0]
    Suppress Class F Traffic: (0..1) [0]
    Per-frame Route Priority: (0..1) [0]
    Long Distance Fabric: (0..1) [0]
    BB credit: (1..27) [16]
    Disable FID Check (yes, y, no, n): [no]
    Insistent Domain ID Mode (yes, y, no, n): [no] yes
    Disable Default PortName (yes, y, no, n): [no]
    Edge Hold Time (0 = Low(80ms), 1 = Medium(220ms), 2 = High(500ms)): (0..2) [1]
  Virtual Channel parameters (yes, y, no, n): [no]
  F-Port login parameters (yes, y, no, n): [no]
  Zoning Operation parameters (yes, y, no, n): [no]
  RSCN Transmission Mode (yes, y, no, n): [no]
  Arbitrated Loop parameters (yes, y, no, n): [no]
  System services (yes, y, no, n): [no]
  Portlog events enable (yes, y, no, n): [no]
  ssl attributes (yes, y, no, n): [no]
  rpcd attributes (yes, y, no, n): [no]
  webtools attributes (yes, y, no, n): [no]

Since Insistent Domain ID Mode is enabled, ensure that switches in the fabric do not have duplicate domain IDs configured; otherwise, the switch may segment if the insistent domain ID is not obtained when the fabric reconfigures.

BRCD-FC-6510:FID128:admin> switchenable

Set Switch Name

SW6510:FID128:admin> switchname BRCD-FC-6510
Committing configuration...
Done.
Verify Domain ID and Switch Name

BRCD-FC-6510:FID128:admin> switchshow
switchName:    BRCD-FC-6510
switchType:
switchState:   Online
switchMode:    Native
switchRole:    Principal
switchDomain:  10
switchId:      fffc0a
switchWwn:     10:00:00:27:f8:61:80:8a
zoning:        OFF
switchBeacon:  OFF
FC Router:     OFF
Allow XISL Use: OFF
LS Attributes: [FID: 128, Base Switch: No, Default Switch: Yes, Address Mode 0]

Date and Time Setting

The Brocade 6510 maintains the current date and time inside a battery-backed real-time clock (RTC) circuit. Date and time are used for logging events. Switch operation does not depend on the date and time; a Brocade 6510 with an incorrect date and time value still functions properly. However, because the date and time are used for logging, error detection, and troubleshooting, you should set them correctly. The time zone, date, and clock server can be configured on all Brocade switches.

Time Zone

You can set the time zone for the switch by name. You can also set country, city, or time zone parameters.

BRCD-FC-6510:FID128:admin> tstimezone
Time Zone : US/Pacific
BRCD-FC-6510:FID128:admin> tstimezone US/Central
BRCD-FC-6510:FID128:admin> tstimezone
Time Zone : US/Central

Setting the date

Enter the date command using the following syntax (double quotation marks required):

Syntax: date "mmddhhmmyy"

switch:admin> date
Fri Sep 29 17:01:48 UTC 2007
switch:admin> date " "
Thu Sep 27 12:30:00 UTC 2007
switch:admin>
Synchronizing local time using NTP

switch:admin> tsclockserver
LOCL
switch:admin> tsclockserver " "
switch:admin> tsclockserver
switch:admin>

ntp1 is the IP address or DNS name of the first NTP server, which the switch must be able to access. The value ntp2 is the name of the second NTP server and is optional. The entire operand <ntp1;ntp2> is optional; by default, this value is LOCL, which uses the local clock of the principal or primary switch as the clock server.

Verify Switch Component Status

BRCD-FC-6510:FID128:admin> switchstatusshow
Switch Health Report                    Report time: 08/14/ :19:56 PM
Switch Name:    BRCD-FC-6510
IP address:
SwitchState:    HEALTHY
Duration:       218:52

Power supplies monitor    HEALTHY
Temperatures monitor      HEALTHY
Fans monitor              HEALTHY
Flash monitor             HEALTHY
Marginal ports monitor    HEALTHY
Faulty ports monitor      HEALTHY
Missing SFPs monitor      HEALTHY
Error ports monitor       HEALTHY

Fabric Watch is not licensed
Detailed port information is not included

BRCD-FC-6510:FID128:admin>
Step 2: FC Switch Licensing

Verify and/or Install Licenses

Brocade Gen 5 Fibre Channel switches come with the basic licenses required for FC operation preinstalled. The Brocade 6510 provides 48 ports in a single (1U) height switch, enabling the creation of very dense fabrics in a relatively small space. The Brocade 6510 also offers Ports on Demand (POD) licensing: base models of the switch contain 24 ports, and up to two additional 12-port POD licenses can be purchased.

1. Use licenseshow to record the installed license information, if applicable.
2. If a POD license needs to be installed on the switch, you need a transaction key (from the license purchase paper pack) and the switch WWN (from the wwn or switchshow command output).
3. Use licenseadd <key> to add the license to the switch.

Obtaining New License Keys

To obtain POD license keys, contact [email protected].

Step 3: FC Zoning Configuration

Visit the Brocade.com website to locate related documentation for your product and related resources. You can download additional publications supporting your Brocade 6510 switch and get up-to-the-minute information at MyBrocade.com under product downloads. Specifically, refer to the FOS 7.x Administrator's Guide for additional zoning concerns and best practices.

Zone Objects

A zone object is any device in a zone, such as:

- Physical port number or port index on the switch
- Node World Wide Name (N-WWN)
- Port World Wide Name (P-WWN)

Zone Schemes

You can establish a zone by identifying zone objects using one or more of the following zoning schemes:

- Domain, index: all members are specified by a Domain ID and port number pair, a domain and index number pair, or aliases.
- World Wide Name (WWN): all members are specified only by WWN or aliases of the WWN. They can be the node or port version of the WWN.
- Mixed zoning: a zone containing members specified by a combination of domain/port, domain/index, and WWN.
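A typical POD license installation session therefore looks like the following sketch; the quoted string is a placeholder for the actual license key generated from your transaction key and switch WWN, not a real key:

BRCD-FC-6510:FID128:admin> licenseadd "<license-key>"
BRCD-FC-6510:FID128:admin> licenseshow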
Configuration of Zones

The following are recommendations for zoning:

- Use nsshow to list the WWNs of the host and storage (initiator and target); record the port WWNs.
- Create an alias for each device using alicreate "alias", "WWN".
- Create the zone using zonecreate "zonename", "WWN/alias".
- Create the zone configuration using cfgcreate "cfgname", "zonename".
- Save the zone configuration using cfgsave.
- Enable the zone configuration using cfgenable "cfgname".

Use the following zoning steps to configure the Fabric B switch.

BRCD-FC-6510:FID128:admin> nsshow
{
 Type Pid    COS     PortName                 NodeName                 TTL(sec)
 N    0a0500;    3;10:00:00:05:33:64:d6:35;20:00:00:05:33:64:d6:35; na
    FC4s: FCP
    PortSymb: [30] "Brocade "
    Fabric Port Name: 20:05:00:27:f8:61:80:8a
    Permanent Port Name: 10:00:00:05:33:64:d6:35
    Port Index: 5
    Share Area: No
    Device Shared in Other AD: No
    Redirect: No
    Partial: No
 N    0a0a00;    3;50:06:01:6c:36:60:07:c3;50:06:01:60:b6:60:07:c3; na
    FC4s: FCP
    PortSymb: [27] "CLARiiON::::SPB10::FC::::::"
    NodeSymb: [25] "CLARiiON::::SPB::FC::::::"
    Fabric Port Name: 20:05:00:05:1e:02:93:75
    Permanent Port Name: 50:06:01:6c:36:60:07:c3
    Port Index: 10
    Share Area: No
    Device Shared in Other AD: No
    Redirect: No
    Partial: No
Create zone alias

A zone alias is a name assigned to a logical group of ports or WWNs. By creating an alias, you can assign a familiar name to a device or group multiple devices into a single name.

Enter the alicreate command using the following syntax:

Syntax: alicreate "aliasname", "member[; member...]"

SW6510:FID128:admin> alicreate
error: Usage: alicreate "arg1", "arg2"
SW6510:FID128:admin> alicreate "ESX_Host_HBA1_P0","10:00:00:05:33:64:d6:35"
SW6510:FID128:admin> alicreate "VNX_SPA_P0","50:06:01:60:b6:60:07:c3"

Create Zone

Fabric OS allows you to create zones to manage devices.

Note: The zonecreate command adds to the new zone all zone member aliases in the zone database that match the aliasname_pattern.

Enter the zonecreate command using either of the following syntaxes:

Syntax: zonecreate "zonename", "member[; member...]"
Syntax: zonecreate "zonename", "aliasname_pattern*[;members]"

SW6510:FID128:admin> zonecreate
error: Usage: zonecreate "arg1", "arg2"
SW6510:FID128:admin> zonecreate "ESX_Host_A","ESX_Host_HBA1_P0;VNX_SPA_P0"

Creating a zone configuration

If you create or make changes to a zone, you must enable the configuration for the changes to take effect. Enter the cfgcreate command using the following syntax:

Syntax: cfgcreate "cfgname", "member[; member...]"

The cfgsave command ends the current zoning transaction and commits the buffer to nonvolatile memory.

SW6510:FID128:admin> cfgcreate
error: Usage: cfgcreate "arg1", "arg2"
SW6510:FID128:admin> cfgcreate "vspex", "ESX_Host_A"

Save and enable the configuration

SW6510:FID128:admin> cfgsave
You are about to save the Defined zoning configuration. This action will only save the changes on Defined configuration. Any changes made on the Effective configuration will not take effect until it is re-enabled. Until the Effective configuration is re-enabled, merging new switches into the fabric is not recommended and may cause unpredictable results with the potential of mismatched Effective Zoning configurations.
Do you want to save the Defined zoning configuration only? (yes, y, no, n): [no] y
Updating flash ...

SW6510:FID128:admin> cfgenable "vspex"
You are about to enable a new zoning configuration. This action will replace the old zoning configuration with the current configuration selected. If the update includes changes to one or more traffic isolation zones, the update may result in localized disruption to traffic on ports associated with the traffic isolation zone changes.
Do you want to enable 'vspex' configuration (yes, y, no, n): [no] y
zone config "vspex" is in effect
Updating flash ...

Verify Zone Configuration

SW6510:FID128:admin> cfgshow
Defined configuration:
 cfg:   vspex   ESX_Host_A
 zone:  ESX_Host_A      ESX_Host_HBA1_P0; VNX_SPA_P0
 alias: ESX_Host_HBA1_P0        10:00:00:05:33:64:d6:35
 alias: VNX_SPA_P0      50:06:01:60:b6:60:07:c3

Effective configuration:
 cfg:   vspex
 zone:  ESX_Host_A      10:00:00:05:33:64:d6:35
                        50:06:01:60:b6:60:07:c3

SW6510:FID128:admin> cfgactvshow
Effective configuration:
 cfg:   vspex
 zone:  ESX_Host_A      10:00:00:05:33:64:d6:35
                        50:06:01:60:b6:60:07:c3
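Additional hosts follow the same pattern. As a sketch, assuming a second host whose HBA port WWN is represented by a placeholder below, a new zone can be created, appended to the existing configuration with cfgadd, and then re-enabled:

SW6510:FID128:admin> alicreate "Host_B_HBA1_P0","<host-B-port-WWN>"
SW6510:FID128:admin> zonecreate "Host_B","Host_B_HBA1_P0;VNX_SPA_P0"
SW6510:FID128:admin> cfgadd "vspex", "Host_B"
SW6510:FID128:admin> cfgsave
SW6510:FID128:admin> cfgenable "vspex"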
Step 4: Switch Management and Monitoring

The following commands can be used to manage and monitor Brocade Fibre Channel switches in a production environment.

Switch management:
- switchshow

Switch monitoring:
1. porterrshow
2. portperfshow
3. portshow
4. errshow
5. errdump
6. sfpshow
7. fanshow
8. psshow
9. sensorshow
10. firmwareshow
11. fosconfig --show
12. memshow
13. portcfgshow
14. supportsave (to collect switch logs)
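For example, a quick health pass might combine a few of these read-only commands; port 5 below is simply the host-facing port from the earlier nsshow output and is used here for illustration:

BRCD-FC-6510:FID128:admin> porterrshow
BRCD-FC-6510:FID128:admin> sfpshow 5
BRCD-FC-6510:FID128:admin> firmwareshow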
Preparing and configuring the storage array

Implementation instructions and best practices may vary depending on the storage network protocol selected for the solution. Each case includes the following steps:

1. Configure the VNXe.
2. Provision storage to the hosts.
3. Optionally configure FAST VP.
4. Optionally configure FAST Cache.

The sections below cover the options for each step separately, depending on whether one of the block protocols (FC, iSCSI) or the file protocol (CIFS) is selected. For FC or iSCSI, refer to the instructions marked for block protocols. For CIFS, refer to the instructions marked for file protocols.

VNXe configuration for block protocols

This section describes how to configure the VNXe storage array for host access using block protocols such as FC or iSCSI. In this solution, the VNXe provides data storage for Windows hosts.

Table 24. Tasks for VNXe configuration for block protocols

Task                                      Description                                                       Reference
Preparing the VNXe                        Physically install the VNXe hardware using the procedures in     EMC VNXe3200 Unified Installation Guide
                                          the product documentation.
Setting up the initial VNXe               Configure the IP addresses and other key parameters on the       Unisphere System Getting Started Guide
configuration                             VNXe.
Provisioning storage for Hyper-V hosts    Create the storage areas required for the solution.              Your vendor's switch configuration guide

Preparing the VNXe

The VNXe3200 Unified Installation Guide provides instructions to assemble, rack, cable, and power up the VNXe. There are no setup steps specific to this solution.
Setting up the initial VNXe configuration

After the initial VNXe setup, configure key information about the existing environment so that the storage array can communicate with the other devices in the environment. Configure the following common items in accordance with your IT data center policies and existing infrastructure information:

- DNS
- NTP
- Storage network interfaces

For data connections using the FC protocol: ensure that one or more servers are connected to the VNXe storage system, either directly or through qualified FC switches. Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions.

For data connections using the iSCSI protocol: connect one or more servers to the VNXe storage system, either directly or through qualified IP switches. Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions.

Additionally, configure the following items in accordance with your IT data center policies and existing infrastructure information:

1. Set up a storage network IP address: Logically isolate the storage network from the other networks in the solution, as described in Chapter 3. This ensures that other network traffic does not impact traffic between the hosts and the storage.

2. Enable jumbo frames on the VNXe iSCSI ports: Use jumbo frames for iSCSI networks to permit greater network bandwidth. Apply the MTU size specified below across all the network interfaces in the environment:
   a. In Unisphere, select Settings > Network > More Configuration > Port Settings.
   b. Select the appropriate iSCSI network interface.
   c. On the right panel, set the MTU size to 9,000.
   d. Click Apply to apply the changes.

The reference documents listed in Table 24 on page 138 provide more information on how to configure the VNXe platform. The Storage configuration guidelines section provides more information on the disk layout.
Provisioning storage for Hyper-V hosts

This section describes provisioning block storage for Hyper-V hosts. To provision file storage, refer to VNXe configuration for file protocols.

Complete the following steps in Unisphere to configure LUNs on the VNXe array to store virtual servers:

1. Create the number of storage pools required for the environment based on the sizing information in Chapter 4. This example uses the array maximums recommended in Chapter 4.
   a. Log in to Unisphere.
   b. Select the array for this solution.
   c. Select Storage > Storage Configuration > Storage Pools.
   d. Click List View.
   e. Click Create.

   Note: The pool does not use system drives for additional storage.

Table 25. Storage allocation table for block

Configuration          Number of pools   10K SAS drives per pool   LUNs per pool   LUN size (TB)
125 virtual machines
Total:                 x 7 TB LUNs

Note: Each virtual machine occupies 102 GB in this solution, with 100 GB for the OS and user space, and a 2 GB swap file.

2. Create the hot spare disks at this point. Refer to the appropriate VNXe installation guide for additional information. Figure 19 depicts the target storage layout for 125 virtual machines.

3. Use the pools created in step 1 to provision thin LUNs:
   a. Select Storage > LUNs.
   b. Click Create.
   c. Select Create a LUN.
   d. Specify the LUN Name.
   e. Select the pool created in step 1. Always create two thin LUNs in one physical storage pool. User Capacity depends on the specific number of virtual machines; refer to Table 25 for more information.
   f. Configure an appropriate Snapshot Schedule.
   g. Configure appropriate Host Access for each host.
   h. Review the Summary of the LUN configuration and click Finish to create the LUNs.

VNXe configuration for file protocols

This section and Table 26 describe file storage provisioning tasks for Hyper-V hosts.

Table 26. Tasks for storage configuration for file protocols

Task                                    Description
Prepare the VNXe                        Physically install the VNXe hardware with the procedures in the product documentation.
Set up the initial VNXe configuration   Configure the IP addresses and other key parameters on the VNXe.
Create a network interface              Configure the IP address and network interface information for the CIFS server.
Create a CIFS server                    Create the CIFS server instance to publish the storage.
Create a storage pool for file          Create the block pool structure and LUNs to contain the file system.
Create the file systems                 Establish the SMB shared file system.

References: VNXe3200 Unified Installation Guide; Unisphere System Getting Started Guide; your vendor's switch configuration guide

Preparing the VNXe

The VNXe3200 Unified Installation Guide provides instructions to assemble, rack, cable, and power up the VNXe. There are no setup steps specific to this solution.

Setting up the initial VNXe configuration

After the initial VNXe setup, configure key information about the existing environment so that the storage array can communicate with the other devices in the environment. Ensure that one or more servers connect to the VNXe storage system, either directly or through qualified IP switches.

Configure the following common items in accordance with your IT data center policies and existing infrastructure information:

- DNS
- NTP
- Storage network interfaces
- Storage network IP address
- CIFS services and Active Directory domain membership

Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions.

Enabling jumbo frames on the VNXe storage network interfaces

Use jumbo frames for storage networks to permit greater network bandwidth. Apply the MTU size specified below across all the network interfaces in the environment. Complete the following steps to enable jumbo frames:

1. In Unisphere, select Settings > More Configuration > Port Settings.
2. Select the appropriate network interface from the I/O modules panel.
3. On the right panel, set the MTU size to 9,000.
4. Click Apply to apply the changes.

Creating link aggregation on the VNXe storage network interfaces

Link aggregation provides network redundancy on the VNXe3200 system. Complete the following steps to create a network interface link aggregation:

1. Log in to the VNXe.
2. Select a network interface from the I/O Modules panel.
3. On the right panel, select Aggregate with another network interface.
4. Click the Create Aggregation button.
5. Click Yes to apply the changes.

The reference documents listed in Table 24 provide more information on how to configure the VNXe platform. The
Server configuration guidelines section provides more information on the disk layout.

Creating a CIFS server

A network interface maps to a CIFS server. CIFS servers provide access to file shares over the network.

Complete the following steps to create a network interface:

1. Log in to the VNXe.
2. Click Settings > NAS Servers.
3. Click Create.

From the Create NAS Server wizard, complete the following steps:

1. Specify the Server Name.
2. Select the Storage Pool that will provide the file share.
3. Type an IP Address for the interface.
4. Type a Server Name for the interface.
5. Type the Subnet Mask for the interface.
6. Click Show Advanced.
7. Select a storage processor that will support the file share.
8. Set the Ethernet Port to the link-aggregated interface created in Creating link aggregation on the VNXe storage network interfaces.
9. If required, specify the VLAN ID.
10. Click Next.
Figure 40. Configure NAS Server Address

11. Select Windows Shares (CIFS).
12. Specify the appropriate information for either a standalone server or one joined to Active Directory.
13. Type in the DNS/NIS information if required.
14. Review the NAS Server Summary and click Finish to complete the wizard.
Figure 41. Configure NAS Server type

Provisioning storage for Windows hosts

This section describes provisioning block storage for Windows hosts. To provision file storage, refer to VNXe configuration for file protocols.

Complete the following steps in Unisphere to configure LUNs on the VNXe array to store virtual servers:

1. Create the number of storage pools required for the environment based on the sizing information in Chapter 4. This example uses the array maximums recommended in Chapter 4.
   a. Log in to Unisphere.
   b. Select Storage > Storage Configuration > Storage Pools.
   c. Click List View.
   d. Click Create.

   Note: The pool does not use system drives for additional storage.
Table 27. Storage allocation table for file

Configuration         Number of pools   10K SAS drives per pool   File systems per pool   File system size (TB)
125 virtual servers
Total:                x 7 TB LUNs

Creating file systems

To create an SMB file share, complete the following tasks:

1. Create a storage pool and a network interface.
2. Create a file system.

The VNXe requires a storage pool and a NAS server to create a file system. If no storage pools or interfaces exist, follow the steps in Provisioning storage for Windows hosts and Creating a CIFS server to create them. Create two thin file systems from the storage pool; refer to Table 27 for details on the number of file systems.

Complete the following steps to create VNXe file systems for SMB file shares:

1. Log in to Unisphere.
2. Select Storage > File Systems.
3. Click Create. The File System Creation wizard appears.
4. Select a NAS server.
5. Specify the file system name.
6. Specify the storage pool and size. Size depends on the specific number of virtual machines; refer to Table 27 for more information.
7. Specify the share name of the file system.
8. Configure host access for each host.
9. Select an appropriate snapshot schedule.
10. Review the File System Creation Summary and click Finish to complete the wizard.
FAST VP configuration (optional)

This optional procedure applies to both file and block storage implementations. Complete the following steps to configure FAST VP.

Assign two flash drives in the storage pool:

1. Select Storage > Storage Configuration > Storage Pools.
2. Select the pool created when provisioning file or block storage and click Details.
3. Click FAST VP. You can see the amount of data already relocated, or still to be relocated, in each tier. You can either click Start Data Relocation to start relocation manually, or go to FAST VP Settings for further configuration. Figure 42 shows the FAST VP tab.

Figure 42. FAST VP tab

Note: The Tier Status area shows FAST VP information specific to the selected pool.

4. In FAST VP Settings, click General, select Enable Scheduled Relocations to enable scheduled relocations, and select an appropriate Data Relocation Rate, as shown in Figure 43.
Figure 43. Scheduled FAST VP relocation

Use the dialog box to control the Data Relocation Rate. The default rate is Medium, so as not to significantly affect host I/O.

5. Click Schedule, and select appropriate days and times for scheduled relocation. Figure 44 shows an example of a FAST VP relocation schedule.

Figure 44. FAST VP relocation schedule
Note: FAST VP is an automated tool that provides the ability to create a relocation schedule. Schedule the relocations during off-hours to minimize any potential performance impact.

FAST Cache configuration (optional)

Optionally, configure FAST Cache on the storage pools for this solution by completing the following steps.

Note: FAST Cache is an optional component of this solution that can provide improved performance, as outlined earlier in this guide.

1. Configure flash drives as FAST Cache:
   a. Select Storage > Storage Configuration > FAST Cache to configure FAST Cache.
   b. Click Create to start the configuration wizard. The wizard shows whether the system is licensed to use the FAST Cache feature and has eligible flash disks.
   c. Click Next. The wizard shows the number of disks and the RAID type.
   d. Click Finish to complete the configuration.

Figure 45 shows the steps to create FAST Cache.
Figure 45. Create FAST Cache

Note: If a sufficient number of flash drives is not available, the Next button is greyed out.

2. Enable FAST Cache on the storage pool. FAST Cache for a LUN is configured at the storage pool level: all the LUNs created in a given storage pool have FAST Cache either enabled or disabled together. Configure FAST Cache for a pool in the Create Storage Pool wizard, as shown in Figure 46. After FAST Cache is installed on the VNXe series, it is enabled by default at storage pool creation.
Figure 46. Advanced tab in the Create Storage Pool dialog box

If a storage pool was created before FAST Cache was installed, use Settings in the Storage Pool Detail dialog box to configure FAST Cache, as shown in Figure 47.
Figure 47. Settings tab in the Storage Pool Properties dialog box

Note: The VNXe FAST Cache feature does not produce an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours, during which the performance of the array steadily improves.

Installing and configuring Hyper-V hosts

Overview

This section provides the requirements for the installation and configuration of the Windows hosts and infrastructure servers to support the architecture. Table 28 describes the required tasks.

Table 28. Tasks for server installation

Task                                         Description
Installing Windows hosts                     Install Windows Server 2012 R2 on the physical servers for the solution.
Installing Hyper-V and configuring           1. Add the Hyper-V server role.
Failover Clustering                          2. Add the Failover Clustering feature.
                                             3. Create and configure the Hyper-V cluster.
Task | Description | Reference
Configuring Windows host networking | Configure Windows host networking, including NIC teaming and the virtual switch network. |
Installing PowerPath on Windows servers | Install and configure PowerPath to manage multipathing for VNXe LUNs. | PowerPath and PowerPath/VE for Windows Installation and Administration Guide
Planning virtual machine memory allocations | Ensure that Windows Hyper-V guest memory management features are configured properly for the environment. |
Installing Windows hosts
Follow Microsoft best practices to install Windows Server 2012 R2 and the Hyper-V role on the physical servers for this solution.
Installing Hyper-V and configuring failover clustering
To install and configure Failover Clustering, complete the following steps:
1. Install and patch Windows Server 2012 R2 on each Windows host.
2. Configure the Hyper-V role and the Failover Clustering feature.
3. Install the HBA drivers, or configure iSCSI initiators, on each Windows host. For details, refer to the EMC Host Connectivity Guide for Windows.
Table 28 on page 152 provides the steps and references to accomplish the configuration tasks.
Configuring Windows host networking
To ensure performance and availability, the following network interface cards (NICs) are required:
- At least one NIC for virtual machine networking and management (can be separated by network or VLAN if necessary)
- At least two 10 GbE NICs for the storage network
- At least one NIC for Live Migration
Note: Enable jumbo frames for NICs that transfer iSCSI or SMB data. Set the MTU to 9,000. Consult the NIC configuration guide for instructions.
Installing PowerPath on Windows servers
Install PowerPath on Windows servers to improve the performance and capabilities of the VNXe storage array. For detailed installation steps, refer to the PowerPath and PowerPath/VE for Windows Installation and Administration Guide. A scripted sketch of the role, cluster, and network configuration steps follows.
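The host preparation steps above can be scripted with the built-in Windows Server 2012 R2 PowerShell modules. The following is a minimal sketch, not a prescribed procedure: the host names, cluster name and IP address, and adapter names (NIC3, NIC4, Storage-A) are illustrative assumptions, and the exact jumbo-frame registry value depends on the NIC driver.

# Install the Hyper-V role and Failover Clustering feature (run on each host)
Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

# Validate the configuration and create the cluster (run once, from any node)
Test-Cluster -Node "HVHOST01","HVHOST02"
New-Cluster -Name "HVCLUSTER01" -Node "HVHOST01","HVHOST02" -StaticAddress "192.168.10.50"

# Team two NICs for virtual machine and management traffic
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Bind a Hyper-V virtual switch to the team
New-VMSwitch -Name "VM-Network" -NetAdapterName "VMTeam" -AllowManagementOS $true

# Enable jumbo frames on a storage NIC (the driver may expect 9000 or 9014)
Set-NetAdapterAdvancedProperty -Name "Storage-A" -RegistryKeyword "*JumboPacket" -RegistryValue 9014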
Planning virtual machine memory allocations
Server capacity serves two purposes in the solution:
- Supports the new virtualized server infrastructure
- Supports the required infrastructure services, such as authentication and authorization, DNS, and databases
For information on minimum infrastructure service hosting requirements, refer to Appendix A. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required.
Memory configuration
Take care to properly size and configure the server memory for this solution. This section provides an overview of memory management in a Hyper-V environment.
Memory virtualization techniques enable the hypervisor to abstract physical host memory, through features such as Dynamic Memory, to provide resource isolation across multiple virtual machines and avoid resource exhaustion. With advanced processors (such as Intel processors with EPT support), this abstraction takes place within the CPU. Otherwise, it occurs within the hypervisor itself.
Multiple techniques are available within the hypervisor to maximize the use of system resources such as memory. Do not substantially overcommit resources, because this can lead to poor system performance. The exact implications of memory overcommitment in a real-world environment are difficult to predict; performance degradation due to resource exhaustion increases with the amount of memory overcommitted.
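Dynamic Memory settings can be applied per virtual machine with the Hyper-V PowerShell module. The following is a minimal sketch; the virtual machine name and the memory bounds are illustrative assumptions (this solution sizes each reference virtual machine at 2 GB of RAM).

# Enable Dynamic Memory on a virtual machine (values are illustrative)
Set-VMMemory -VMName "RefVM01" -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 4GB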
Installing and configuring the SQL Server database
Overview
Most customers use a management tool to provision and manage their server virtualization solution, even though one is not required. The management tool requires a database back end; SCVMM uses SQL Server 2012 as the database platform. This section describes how to set up and configure a SQL Server database for the solution. Table 29 lists the detailed setup tasks.
Table 29. Tasks for SQL Server database setup
Task | Description
Creating a virtual machine for Microsoft SQL Server | Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements.
Installing Microsoft Windows on the virtual machine | Install Microsoft Windows Server 2012 R2 Datacenter Edition on the virtual machine.
Installing Microsoft SQL Server | Install Microsoft SQL Server on the designated virtual machine.
Configuring a SQL Server for SCVMM | Configure a remote SQL Server instance for SCVMM.
Creating a virtual machine for Microsoft SQL Server
Create the virtual machine with enough computing resources on one of the Windows servers designated for infrastructure virtual machines. Use the storage designated for the shared infrastructure.
Note: The customer environment may already contain a SQL Server for this role. In that case, refer to the section Configuring a SQL Server for SCVMM.
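The SQL Server virtual machine can be created from Hyper-V Manager or from PowerShell. A minimal sketch follows; the virtual machine name, memory size, vCPU count, VHDX path, and switch name are illustrative assumptions, not values prescribed by this solution.

# Create a virtual machine to host SQL Server (values are illustrative)
New-VM -Name "SQL01" -Generation 1 -MemoryStartupBytes 8GB `
    -NewVHDPath "C:\ClusterStorage\Volume1\SQL01\SQL01.vhdx" -NewVHDSizeBytes 100GB `
    -SwitchName "VM-Network"
Set-VMProcessor -VMName "SQL01" -Count 4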
Installing Microsoft Windows on the virtual machine
The SQL Server service must run on Microsoft Windows. Install the required Windows version on the virtual machine, and select the appropriate network, time, and authentication settings.
Installing SQL Server
Use the SQL Server installation media to install SQL Server on the virtual machine. The Microsoft TechNet website provides information on how to install SQL Server. One of the installable components in the SQL Server installer is SQL Server Management Studio (SSMS). Install this component directly on the SQL Server machine and on an administrator console. To change the default path for storing data files, perform the following steps:
1. In SSMS, right-click the server object and select Properties. The server Properties window appears.
2. Change the default data and log directories for new databases created on the server.
Configuring a SQL Server for SCVMM
To use SCVMM in this solution, configure the SQL Server for remote connections. The requirements and steps to configure it correctly are available in the article Configuring a Remote Instance of SQL Server for VMM. Refer to the list of documents in Appendix D for more information.
Note: Do not use the Microsoft SQL Server Express-based database option for this solution. Create individual login accounts for each service that accesses a database on the SQL Server.
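Remote connectivity to the SQL Server instance typically also requires the SQL Server TCP port to be reachable from the SCVMM server. A minimal sketch follows, assuming the default instance on TCP port 1433; the rule name is illustrative, and the port depends on the instance configuration.

# Allow inbound connections to the default SQL Server instance (port assumed)
New-NetFirewallRule -DisplayName "SQL Server (TCP 1433)" -Direction Inbound `
    -Protocol TCP -LocalPort 1433 -Action Allow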
System Center Virtual Machine Manager server deployment
Overview
This section provides information on how to configure SCVMM. Complete the tasks in Table 30.
Table 30. Tasks for SCVMM configuration
Task | Description | Reference
Creating the SCVMM host virtual machine | Create a virtual machine for the SCVMM server. |
Installing the SCVMM guest OS | Install Windows Server 2012 R2 Datacenter Edition on the SCVMM host virtual machine. |
Installing the SCVMM server | Install an SCVMM server. | How to Install a VMM Management Server
Installing the SCVMM Management Console | Install an SCVMM Management Console. | How to Install the VMM Console
Installing the SCVMM agent locally on the hosts | Install an SCVMM agent locally on the hosts that SCVMM manages. | Installing a VMM Agent Locally on a Host
Adding a Hyper-V cluster into SCVMM | Add the Hyper-V cluster into SCVMM. | Adding and Managing Hyper-V Hosts and Scale-Out File Servers in VMM
Adding file share storage in SCVMM (file variant only) | Add SMB file share storage to a Hyper-V cluster in SCVMM. | How to Assign SMB 3.0 File Shares to Hyper-V Hosts and Clusters in VMM
Creating a virtual machine in SCVMM | Create a virtual machine in SCVMM and install the guest operating system. | Creating and Deploying Virtual Machines in VMM
Performing partition alignment and assigning file allocation unit size | Use Diskpart.exe to perform partition alignment, assign drive letters, and set the file allocation unit size of the virtual machine's disk drives (see the sketch after this table). | Disk Partition Alignment Best Practices for SQL Server
Creating a template virtual machine | Create a template virtual machine from the existing virtual machine. Create the hardware profile and guest operating system profile at this time. | How to Create a Virtual Machine Template
Deploying virtual machines from the template virtual machine | Deploy the virtual machines from the template virtual machine. | How to Create and Deploy a Virtual Machine from a Template
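As noted in Table 30, the partition alignment task can also be performed with the Windows storage cmdlets when the guest runs a recent operating system; on older guests, use Diskpart.exe as described in the referenced article. A minimal sketch follows, assuming the data disk appears as disk 1 inside the guest; the recommended values from this guide are a 1,024 KB offset and an 8 KB allocation unit.

# Align the data partition at 1,024 KB and format with an 8 KB allocation unit
Initialize-Disk -Number 1 -PartitionStyle GPT
Get-Disk -Number 1 |
    New-Partition -UseMaximumSize -Alignment 1048576 -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 8192 -Confirm:$false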
Creating an SCVMM host virtual machine
To deploy the SCVMM server as a virtual machine on a Hyper-V server that is installed as part of this solution, connect directly to an infrastructure Hyper-V server by using Hyper-V Manager. Create the virtual machine with the customer guest OS configuration by using an infrastructure server datastore presented from the storage array. The memory and processor requirements for the SCVMM server depend on the number of Hyper-V hosts and virtual machines that SCVMM must manage.
Installing the SCVMM guest OS
Install the guest OS on the SCVMM host virtual machine. Install the required Windows Server version on the virtual machine and select the appropriate network, time, and authentication settings.
Installing the SCVMM server
Set up the VMM database and the default library server, and then install the SCVMM server. Refer to the Microsoft TechNet Library topic Installing the VMM Server to install the SCVMM server.
Installing the SCVMM Management Console
The SCVMM Management Console is a client tool used to manage the SCVMM server. Install the VMM Management Console on the same computer as the VMM server. Refer to the Microsoft TechNet Library topic Installing the VMM Administrator Console to install the SCVMM Management Console.
Installing the SCVMM agent locally on a host
If the hosts must be managed on a perimeter network, install a VMM agent locally on each host before adding it to VMM. Optionally, install a VMM agent locally on a host in a domain before adding the host to VMM. Refer to the Microsoft TechNet Library topic Installing a VMM Agent Locally to install a VMM agent locally on a host.
Adding a Hyper-V cluster into SCVMM
Add the deployed Microsoft Hyper-V cluster to SCVMM, which then manages the cluster. Refer to the Microsoft TechNet Library topic Adding and Managing Hyper-V Hosts and Scale-Out File Servers in VMM to add the Hyper-V cluster.
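Adding the cluster can also be done from the VMM command shell. The following is a minimal sketch using the VMM 2012 R2 PowerShell cmdlets; the Run As account name (HyperV-Admins) and the cluster FQDN (hvcluster01.contoso.local) are illustrative assumptions.

# Add the deployed Hyper-V cluster to VMM under the default host group
$creds = Get-SCRunAsAccount -Name "HyperV-Admins"
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"
Add-SCVMHostCluster -Name "hvcluster01.contoso.local" -Credential $creds -VMHostGroup $hostGroup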
Adding file share storage to SCVMM (file variant only)
To add file share storage to SCVMM, complete the following steps:
1. Open the VMs and Services workspace.
2. In the VMs and Services pane, right-click the Hyper-V cluster name.
3. Click Properties.
4. In the Properties window, click File Share Storage.
5. Click Add, and then add the file share storage to SCVMM.
Creating a virtual machine in SCVMM
Create a virtual machine in SCVMM to use as a virtual machine template. Install the virtual machine, install the software, and then change the Windows and application settings. Refer to the Microsoft TechNet Library topic How to Create and Deploy a Virtual Machine from a Blank Virtual Hard Disk to create a virtual machine.
Performing partition alignment and assigning file allocation unit size
Perform disk partition alignment on virtual machines whose operating system is earlier than Windows Server 2008. It is recommended to align the disk drive with an offset of 1,024 KB and to format the disk drive with a file allocation unit (cluster) size of 8 KB. Refer to the Microsoft TechNet Library topic Disk Partition Alignment Best Practices for SQL Server to perform partition alignment, assign drive letters, and assign the file allocation unit size using Diskpart.exe.
Creating a template virtual machine
Converting a virtual machine into a template destroys the source virtual machine, so back up the virtual machine before template creation. Create a hardware profile and a guest operating system profile when creating the template, and use these profiles to deploy the virtual machines. Refer to the Microsoft TechNet Library topic How to Create a Virtual Machine Template.
Deploying virtual machines from the template virtual machine
The deployment wizard enables you to save the PowerShell scripts and reuse them to deploy other virtual machines with the same configuration. Refer to the Microsoft TechNet Library topic How to Deploy a Virtual Machine.
Summary
This chapter presented the required steps to deploy and configure the various aspects of the VSPEX solution, including the physical and logical components. At this point, the VSPEX solution is fully functional.
161 Chapter 6 Verifying the Solution This chapter presents the following topics: Overview Post-installing checklist Deploying and testing a single virtual server Verifying the redundancy of the solution components Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup 161
Overview
This chapter provides a list of items to review after configuring the solution. The goal of this chapter is to verify the configuration and functionality of specific aspects of the solution, and to ensure that the configuration meets core availability requirements. Complete the tasks listed in Table 31.
Table 31. Tasks for testing the installation
Task | Description | Reference
Post-installing checklist | Verify that sufficient virtual ports exist on each Hyper-V host virtual switch. | Hyper-V: How many network cards do I need?
Post-installing checklist | Verify that each Hyper-V host has access to the required Cluster Shared Volumes/CIFS shares and VLANs. | Using a VNXe System with Microsoft Windows Hyper-V
Post-installing checklist | Verify that the Live Migration interfaces are configured correctly on all Hyper-V hosts. | Virtual Machine Live Migration Overview
Deploying and testing a single virtual server | Deploy a single virtual machine by using the System Center Virtual Machine Manager (SCVMM) interface. | Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager
Verifying redundancy of the solution components | Reboot each storage processor in turn, and ensure that storage connectivity is maintained. | N/A
Verifying redundancy of the solution components | Disable each of the redundant switches in turn and verify that Hyper-V host, virtual machine, and storage array connectivity remains intact. | Vendor documentation
Verifying redundancy of the solution components | On a Hyper-V host that contains at least one virtual machine, restart the host and verify that the virtual machine can successfully migrate to an alternate host. | Creating a Hyper-V Host Cluster in VMM Overview
Post-installing checklist
The following configuration items are critical to the functionality of the solution. On each Windows Server, verify the following items prior to deployment into production:
- The VLAN for virtual machine networking is configured correctly.
- The storage networking is configured correctly.
- Each server can access the required Cluster Shared Volumes/Hyper-V SMB shares.
- A network interface is configured correctly for Live Migration.
Deploying and testing a single virtual server
Deploy a virtual machine to verify that the solution functions as expected. Verify that the virtual machine is joined to the applicable domain, has access to the expected networks, and that it is possible to log in to it.
Verifying the redundancy of the solution components
To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failures. The steps apply to both block and file environments.
Block and file environments
Complete the following steps to restart each VNXe storage processor in turn and verify that connectivity to Hyper-V datastores is maintained throughout each restart:
1. Log in to SP A with administrator credentials.
2. Restart SP A by using the command svc_shutdown -r.
3. During the restart cycle, check for the presence of datastores on the Windows Server Hyper-V hosts.
4. When the cycle completes, log in to SP B and restart SP B by using the same command.
5. On the host side, enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
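The host-side portion of step 5 can be exercised with the Failover Clustering PowerShell module. A minimal sketch follows; the node and virtual machine names are illustrative assumptions.

# Drain roles from a node (maintenance mode) and watch them move
Suspend-ClusterNode -Name "HVHOST01" -Drain
Get-ClusterNode | Select-Object Name, State

# Live migrate a specific virtual machine to an alternate host
Move-ClusterVirtualMachineRole -Name "VM01" -Node "HVHOST02" -MigrationType Live

# Return the node to service
Resume-ClusterNode -Name "HVHOST01" -Failback Immediate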
165 Chapter 7 System Monitoring This chapter presents the following topics: Overview 166 Key areas to monitor 166 VNXe resources monitoring guidelines 169 Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup 165
Overview
System monitoring of the VSPEX environment is the same as monitoring any core IT system; it is a relevant and core component of administration. The monitoring levels involved in a highly virtualized infrastructure such as a VSPEX environment are somewhat more complex than in a purely physical infrastructure, because the interactions and interrelationships between the various components can be subtle and nuanced. However, those who are experienced in administering physical environments should be familiar with the key concepts and focus areas. The key differentiators are monitoring at scale and the ability to monitor end-to-end systems and data flows.
The following business requirements drive the need for proactive, consistent monitoring of the environment:
- Stable, predictable performance
- Sizing and capacity needs
- Availability and accessibility
- Elasticity: the dynamic addition, subtraction, and modification of workloads
- Data protection
If self-service provisioning is enabled in the environment, the ability to monitor the system is even more critical, because clients can generate virtual machines and workloads dynamically, which can adversely affect the entire system. This chapter provides the basic knowledge necessary to monitor the key components of a VSPEX Proven Infrastructure environment. Additional resources are included at the end of this chapter.
Key areas to monitor
Because VSPEX Proven Infrastructures comprise end-to-end solutions, system monitoring includes three discrete, but highly interrelated, areas:
- Servers, including virtual machines and clusters
- Networking
- Storage
This chapter focuses primarily on monitoring key components of the storage infrastructure, the VNXe array, but briefly describes other components as well.
Performance baseline
When a workload is added to a VSPEX deployment, server, storage, and networking resources are consumed. As additional workloads are added, modified, or removed, resource availability and, more importantly, capabilities change, which impacts all other workloads running on the platform. Customers should fully understand their workload characteristics on all key components prior to deploying them on a VSPEX platform; this is a requirement to correctly size resource utilization against the defined reference virtual machine.
Deploy the first workload, and then measure the end-to-end resource consumption along with platform performance. This removes the guesswork from sizing activities and ensures that the initial assumptions were valid. As additional workloads are deployed, reevaluate resource consumption and performance levels to determine the cumulative load and the impact on existing virtual machines and their application workloads. Adjust resource allocation accordingly to ensure that any oversubscription does not negatively impact overall system performance. Run these baselines consistently to ensure that the platform as a whole, and the virtual machines themselves, operate as expected. The following components make up the critical areas that affect overall system performance.
Servers
The key resources to monitor from a server perspective include:
- Processors
- Memory
- Disk (local, NAS, and SAN)
- Networking
Monitor these areas at both the physical host level (the hypervisor host level) and the virtual level (from within the guest virtual machine). Depending on your operating system, there are tools available to monitor and capture this data. For example, if your VSPEX deployment uses Windows servers as the hypervisor, you can use Windows perfmon to monitor and log these metrics. Follow your vendor's guidance to determine performance thresholds for specific deployment scenarios, which can vary greatly depending on the application. Detailed information about this tool is available from the Microsoft TechNet Library topic Using Performance Monitor. Keep in mind that each VSPEX Proven Infrastructure provides a guaranteed level of performance based on the number of reference virtual machines deployed and their defined workload.
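As a starting point for a host-side baseline with the perfmon counters mentioned above, the following PowerShell sketch samples hypervisor CPU and available host memory every five seconds for one minute. The counter paths are standard Hyper-V and Memory counters; the sampling interval and count are illustrative.

# Sample hypervisor CPU and available host memory (12 samples, 5-second interval)
Get-Counter -Counter `
    '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time', `
    '\Memory\Available MBytes' `
    -SampleInterval 5 -MaxSamples 12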
Brocade networking
Ensure that there is adequate bandwidth for networking communications. This includes monitoring network loads at the server and virtual machine level, at the fabric (switch) level, and, if network file or block protocols such as NFS, CIFS/SMB, and iSCSI are implemented, at the storage level. At the server and virtual machine level, the monitoring tools mentioned previously provide sufficient metrics to analyze flows into and out of the servers and guests. Key items to track include aggregate throughput or bandwidth, latency, and IOPS. Capture additional data from network card or HBA utilities.
From the fabric perspective, tools that monitor the switching infrastructure vary by vendor. Key items to monitor include port utilization, aggregate fabric utilization, processor utilization, queue depths, and inter-switch link (ISL) utilization. Networked storage protocols are discussed in the following section.
Storage
Monitoring the storage aspect of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. Fortunately, the tools provided with the VNXe storage arrays provide an easy yet powerful way to gain insight into how the underlying storage components are operating. For both block and file protocols, there are several key areas to focus on, including:
- Capacity
- IOPS
- Latency
- SP utilization: CPU and memory
- Fabric/network interfaces: throughput in, throughput out
Additional considerations (though primarily from a tuning perspective) include:
- I/O size
- Workload characteristics
- Cache utilization
These factors are outside the scope of this document; however, storage tuning is an essential component of performance optimization. EMC offers additional guidance on the subject in the EMC VNX Unified Best Practices for Performance Applied Best Practices Guide, available through EMC Online Support.
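To complement the fabric-level counters described above, host-side NIC statistics on the Hyper-V servers can be spot-checked directly from PowerShell. A minimal sketch follows; note that the reported properties are cumulative byte counters, so take two snapshots and difference them to estimate throughput.

# Snapshot cumulative NIC counters on a Hyper-V host
Get-NetAdapterStatistics |
    Select-Object Name, ReceivedBytes, SentBytes |
    Format-Table -AutoSize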
VNXe resources monitoring guidelines
Monitor the VNXe with the Unisphere GUI, which is accessible by opening an HTTPS session to the SP IP address. The VNXe series is a unified storage platform that provides both block storage and file storage access through a single entity. Monitoring is divided into two parts:
- Monitoring block storage resources
- Monitoring file storage resources
Monitoring block storage resources
This section explains how to use Unisphere to monitor block storage resource usage, which includes capacity, IOPS, and latency.
Capacity
In Unisphere, two panels display capacity information. These panels provide a quick assessment of the overall free space available within the configured LUNs and underlying storage pools. For block, sufficient free storage should remain in the configured pools to allow for anticipated growth and activities such as snapshot creation. It is essential to maintain a free buffer, especially for thin LUNs, because out-of-space conditions usually lead to undesirable behaviors on affected host systems. As such, configure threshold alerts to warn storage administrators when capacity use rises above 80 percent. In that case, auto-expansion may need to be adjusted or additional space allocated to the pool. If LUN utilization is high, reclaim space or allocate additional space.
To set capacity threshold alerts for a specific pool, complete the following steps:
1. Select that pool and click the Details button.
2. In Storage Pool Utilization, choose a value for the Alert Threshold of this pool, as shown in Figure 48.
Used Space, Available Space, and Subscription are key metrics to examine.
Figure 48. Storage Pool Alert settings
You can find additional settings relevant to space management under the Settings tab as shown in Figure 49. Snapshot Auto-Delete settings should be enabled if this feature is in use.
Figure 49. Storage Pool Snapshot settings
To drill down into capacity for block, complete the following steps:
1. In Unisphere, select the VNXe system to examine.
2. Select Storage > Storage Configuration > Storage Pools. This opens the Storage Pools panel.
3. Examine the columns titled Percent Used, Available Space, and Subscription, as shown in Figure 50.
Figure 50. Storage Pools panel
Monitor capacity at the storage pool and LUN levels:
1. Click Storage and select LUNs. This opens the LUNs panel.
2. Select a LUN to examine and click Details. This displays the detailed LUN information, as shown in Figure 51.
3. Verify the LUN capacity details in the dialog box. LUN Size is the total virtual capacity available to the LUN; all of this capacity may not be available if the pool is oversubscribed. Allocated capacity is the total physical capacity currently used by the LUN.
Figure 51. LUN Properties dialog box
Examine capacity alerts, along with all other system events, by clicking the Alerts hot link at the lower left of the display. Alerts can also be accessed by clicking System, and then selecting System Alerts, as shown in Figure 52.
Figure 52. System panel
There are also several new features to help administrators monitor VNXe performance, capacity, and health, including the interactive System Health panel, which provides detailed component information simply by clicking a component, as shown in Figure 53.
Figure 53. System Health panel
IOPS
The effects of an I/O workload serviced by an improperly configured storage system, or one whose resources are exhausted, can be felt system-wide. Monitoring the IOPS that the storage array services includes looking at metrics from the host ports in the SPs, along with requests serviced by the back-end disks. VSPEX solutions are carefully sized to deliver a certain performance level for a particular workload level. Ensure that IOPS do not exceed design parameters.
Statistical reporting for IOPS (along with other key metrics) can be examined by opening the System panel: select VNXe > System > System Performance. Monitor the statistics online, or offline by using the Unisphere Analyzer, which requires a license.
Another metric to examine is Total Bandwidth (MB/s). An 8 Gbps front-end SP port can process 800 MB per second. The average bandwidth must not exceed 80 percent of the link bandwidth under normal operating conditions.
IOPS delivered to the LUNs are often higher than the IOPS issued by the hosts. This is particularly true with thin LUNs, because there is additional metadata associated with managing the I/O streams. Unisphere Analyzer shows the IOPS on each LUN, as shown in Figure 54.
Figure 54. IOPS on the LUNs
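As a worked example of the 80 percent guideline, using this solution's two 8 Gbps front-end FC ports per storage processor: 2 ports x 800 MB/s x 0.8 = 1,280 MB/s of sustained bandwidth per SP before the guideline is exceeded. These figures follow directly from the port rating above; the usable ceiling in a given deployment depends on the actual port count and protocol in use.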
Certain RAID levels also impart write penalties that create additional back-end IOPS. Examine the IOPS delivered to (and serviced from) the underlying physical disks, which can also be viewed in the Unisphere Analyzer, as shown in Figure 55. Table 32 shows the rules of thumb for drive performance.
Table 32. Rules of thumb for drive performance
Drive type | IOPS
15k rpm SAS drives | 180 IOPS
10k rpm SAS drives | 120 IOPS
NL-SAS drives | 80 IOPS
Figure 55. IOPS on the drives
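As a worked example using Table 32 and this solution's bill of materials: the 40 x 600 GB 10k rpm SAS drives can service roughly 40 x 120 = 4,800 back-end IOPS. Assuming the commonly cited RAID 5 small-write penalty of 4 and a two-thirds read workload (both assumptions for illustration only), about 2,400 front-end IOPS would generate 2,400 x 0.67 + 2,400 x 0.33 x 4, or approximately 4,780 back-end IOPS, close to that ceiling.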
Latency
Latency is the by-product of delays in processing I/O requests. This section focuses on monitoring storage latency, specifically block-level I/O. Using procedures similar to those in the previous section, view the latency at the LUN level, as shown in Figure 56. (Notice that the LUN filter has been applied.)
Figure 56. Latency on the LUNs
Latency can be introduced anywhere along the I/O stream, from the application layer, through the transport, and out to the final storage devices. Determining the precise causes of excessive latency requires a methodical approach.
Excessive latency in an FC network is uncommon. Unless there is a defective component such as an HBA or cable, delays introduced in the network fabric layer are normally the result of misconfigured switching fabrics. An overburdened storage array typically causes latency within an FC environment. Focus primarily on the LUNs and the underlying disk pool's ability to service I/O requests. Requests that cannot be serviced are queued, which introduces latency.
The same paradigm applies to Ethernet-based protocols such as iSCSI. However, additional factors come into play because these storage protocols use Ethernet as the underlying transport. Isolating the network traffic (either physically or logically) for storage is a best practice, preferably with some implementation of Quality of Service (QoS) in a shared/converged fabric.
If network problems are not introducing excessive latency, examine the storage array. In addition to overburdened disks, excessive SP utilization can also introduce latency. SP utilization levels
greater than 80 percent indicate a potential problem. Background processes such as deduplication, auto-expansion/restriping, data tiering movement, and snapshots all compete for SP resources. Monitor these processes to ensure that they do not cause SP resource exhaustion. Possible mitigation techniques include staggering background jobs, scheduling tiering to occur during off-hours, adding more physical resources, or rebalancing the I/O workloads. Growth may also mandate moving to more powerful or additional hardware.
For SP metrics, examine the data under the System Performance tab of the Unisphere Analyzer, as shown in Figure 57. Review metrics such as Average CPU Utilization % (shown), Average Disk Response Time, and Average Disk Queue Length.
Figure 57. SP CPU Utilization
High values for any of these metrics indicate that the storage array is under stress and likely requires mitigation. Table 33 shows the best practices recommended by EMC.
Table 33. Best practices for performance monitoring (threshold values for Utilization (%), Response Time (ms), and Queue Length)
Monitoring file storage resources
File-based protocols such as NFS and CIFS/SMB involve additional management processes beyond those for block storage. Unlike VNX systems, the VNXe3200 features integrated file services and does not require Data Movers to provide that functionality. On the VNXe3200, the storage processors intercept file protocol requests from the client side and convert the requests to the appropriate SCSI block semantics on the array side. The additional protocols and translation introduce additional load and monitoring requirements, such as SP network link utilization, memory utilization, and SP processor utilization.
To examine file metrics in the System Performance panel, select the appropriate metric to monitor. In this example, Total Network Bandwidth is selected, as shown in Figure 58. Usage levels in excess of 80 percent indicate potential performance concerns and likely require mitigation through SP reconfiguration, additional physical resources such as additional network ports, or analysis of the current network topology.
Figure 58. VNXe file statistics
179 Capacity The System Capacity panel provides a quick analysis of overall space utilization, as shown in Figure 59. Figure 59. System Capacity panel Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup 179
To monitor capacity at the pool and file system level:
1. Select VNXe > Storage > File Systems. The File Systems panel appears, as shown in Figure 60.
Figure 60. File Systems panel
2. Select a file system to examine and click Details, and then select Capacity, which displays detailed file system information, as shown in Figure 61.
3. Similar to the Capacity tab for block, examine key metrics such as File System Size, Thin status, Used, Free, Allocated space, and Pool Size Used.
181 Figure 61. File System Capacity panel Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup 181
IOPS
In addition to block storage IOPS, Unisphere also provides the ability to monitor file system IOPS. Select VNXe > System > System Performance, and then select Total File System Throughput/IOPS, as shown in Figure 62.
Figure 62. System Performance panel displaying file metrics
Summary
Consistent and thorough monitoring of the VSPEX Proven Infrastructure is a best practice. Having baseline performance data helps to identify problems, while monitoring key system metrics helps to ensure that the system functions optimally and within designed parameters. The monitoring process can extend through integration with automation and orchestration tools from key partners, such as Microsoft with its System Center suite of products.
183 Appendix A Bill of Materials This appendix presents the following topic: Bill of materials Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup 183
Bill of materials
Table 34 lists the hardware used in this solution.
Note: EMC recommends that you use a 10 GbE network, or an equivalent 1 GbE network infrastructure, for these solutions, as long as the underlying requirements around bandwidth and redundancy are fulfilled.
Table 34. List of components used in the VSPEX solution for 125 virtual machines
Component | Solution for 125 virtual machines
Windows servers:
- CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core; 125 vCPUs; minimum of 32 physical cores
- Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per Hyper-V host; minimum of 250 GB RAM, plus 2 GB per host
- Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server
- Network (file): 4 x 10 GbE NICs per server
Note: To implement Microsoft Hyper-V HA functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.
Brocade storage network infrastructure, minimum switching capacity:
- Block: 2 Brocade 6510 Fibre Channel switches; 2 FC ports per Windows server for the storage network; 2 FC ports per SP for storage data; 1 x 1 GbE port per 6510 for the management fabric
- File: 2 Brocade VDX 6740-T Ethernet fabric switches; 4 x 10 GbE ports per Windows server; 1 x 1 GbE port per Data Mover for data; 1 x 1 GbE port per Control Station for management
EMC Powered Backup:
- Avamar, Data Domain: refer to Data Protection for EMC VSPEX Proven Infrastructure
Component | Solution for 125 virtual machines
EMC VNXe series storage array, block:
- VNXe3200
- 1 x 1 GbE interface per SP for management
- 2 x 8 Gb FC interfaces per storage processor (FC)
- 2 x 10 GbE interfaces per storage processor (iSCSI)
- 40 x 600 GB 10k rpm 2.5-inch SAS drives
- 2 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares
EMC VNXe series storage array, file:
- VNXe3200
- 2 x 10 GbE interfaces per storage processor (CIFS/SMB)
- 1 x 1 GbE interface per SP for management
- 40 x 600 GB 10k rpm 2.5-inch SAS drives
- 2 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares
Shared infrastructure:
In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document. If this solution is implemented without existing infrastructure, a minimum number of additional servers is required:
- 2 physical servers
- 16 GB RAM per server
- 4 processor cores per server
- 2 x 1 GbE ports per server
Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.
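The CPU and memory minimums in Table 34 follow directly from the per-virtual-machine figures: 125 virtual machines x 1 vCPU = 125 vCPUs, and at 4 vCPUs per physical core, 125 / 4 = 31.25, rounded up to 32 physical cores. For memory, 125 virtual machines x 2 GB = 250 GB of RAM, plus the 2 GB reservation per Hyper-V host.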
187 Appendix B Customer Configuration Data Sheet This appendix presents the following topic: Customer configuration data sheet Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup 187
Customer configuration data sheet
Before you start the configuration, gather customer-specific network and host configuration information. The following tables provide information on assembling the required network and host address, numbering, and naming information. This worksheet can also be used as a leave-behind document for future reference. The VNXe File and Unified Worksheets should be cross-referenced to confirm customer information.
Table 35. Common server information (Server name | Purpose | Primary IP)
Rows: Domain Controller; DNS Primary; DNS Secondary; DHCP; NTP; SMTP; SNMP; System Center Virtual Machine Manager; SQL Server
Table 36. Hyper-V server information (Server name | Purpose | Primary IP | Private Net (storage) addresses)
Rows: Hyper-V Host 1; Hyper-V Host 2; and so on for each host
Table 37. Array information
Fields: Array name; Admin account; Management IP; Storage pool name; Datastore name
Block fields: FC WWPN; FCoE WWPN; iSCSI IQN; iSCSI port IP
File fields: CIFS server IP
Table 38. Brocade network infrastructure information (Name | Purpose | IP | Subnet mask | Default gateway)
Rows: 6740-T Ethernet Switch 1; 6740-T Ethernet Switch 2; 6510 FC Switch Fabric A; 6510 FC Switch Fabric B
Table 39. VLAN information (Name | Network Purpose | VLAN ID | Allowed Subnets)
Rows: Virtual machine networking and management; iSCSI storage network (block); CIFS storage network (file); Live Migration (optional); Public (client access)
190 Customer Configuration Data Sheet Table 40. Service accounts Account Purpose Windows Server administrator Array administrator SCVMM administrator SQL Server administrator Password (optional, secure appropriately) 190 Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup
191 Appendix C Server Resources Component Worksheet This appendix presents the following topic: Server resources component worksheet Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup 191
Server resources component worksheet
Table 41. Blank worksheet for determining server resources
Columns: Application | Server resources: CPU (virtual CPUs), Memory (GB) | Storage resources: IOPS, Capacity (GB) | Reference virtual machines
For each application, complete two rows: Resource requirements (Reference virtual machines column: N/A) and Equivalent reference virtual machines.
Summary rows: Total equivalent reference virtual machines; Server customization; Server component totals (Storage and Reference columns: N/A); Storage customization; Storage component totals (N/A); Storage component equivalent reference virtual machines (N/A); Total equivalent reference virtual machines - storage
193 Appendix D References This appendix presents the following topic: References Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup 193
References
EMC documentation
The following documents, available on EMC Online Support, provide additional and relevant information. If you do not have access to a document, contact your EMC representative.
- EMC Storage Integrator (ESI) 2.1 for Windows Suite
- EMC VNX Virtual Provisioning Applied Technology
- VNX FAST Cache: A Detailed Review
- Introduction to EMC XtremCache
- VNXe3200 Unified Installation Guide
- Using EMC VNX Storage with Microsoft Windows Hyper-V
- EMC VNX Unified Best Practices for Performance Applied Best Practices Guide
- EMC Host Connectivity Guide for Windows
- EMC VNX Series: Introduction to SMB 3.0 Support
- Configuring and Managing CIFS on VNX
Brocade documentation
Brocade VDX switch and VCS fabric documentation can be found as stated below:
- Brocade VDX 6740/6740T/6740T-1G Switch Data Sheet: uct_data_sheets/vdx-6740-ds.pdf
- Brocade VDX 6740 Hardware Reference Manual: B_VDX/VDX6740_VDX6740T_HardwareManual.pdf
- Brocade Network OS (NOS) guides: switch/index.page
Note: At the time of release of this document, the NOS 5.0 documentation was not publicly available. The NOS documentation referenced was the latest available at the time of publication.
- Network OS Administrator's Guide, Supporting Network OS v4.1.1: nuals/nos_411_ag_04/index.html
- Network OS Command Reference, Supporting Network OS v4.1.0: nuals/nos_410_cli/wwhelp/wwhimpl/js/html/wwhelp.htm
- Brocade Network OS (NOS) Software Licensing Guide v4.1.0: B_VDX/NOS_LicensingGuide_v410.pdf
The Brocade Network Operating System (NOS) release notes are also available online.
Brocade 65xx switch and Fabric OS (FOS) fabric documentation can be found as stated below:
- Brocade 6510 Switch Data Sheet: uct_data_sheets/6510-switch-ds.pdf
- Brocade 6510 Hardware Reference Manual: B_SAN/B6510_HardwareManual.pdf
196 References Brocade Fabric OS (FOS) Guides Fabric OS Administrator s Guide Supporting Network OS v nuals/fos_730_admin/index.html Fabric OS Command Reference Supporting Network OS v nuals/fos_730_cli/wwhelp/wwhimpl/js/html/wwhelp.htm#href=title.f abric_os.html Brocade 6510 QuickStart Guide B_SAN/B6510_QuickStartGuide.pdf SAN Fabric Administration Best Practices Guide des/san-admin-best-practices-bp.pdf The Brocade Fabric Operating System (FOS) Release notes can be found at Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup
197 References Other documentation The following documents, located on the Microsoft website, provide additional and relevant information: Installing the VMM Server Adding and Managing Hyper-V Hosts and Scale-Out File Servers in VMM How to Create a Virtual Machine Template Configuring a Remote Instance of SQL Server for VMM Installing Virtual Machine Manager Installing the VMM Administrator Console Installing a VMM Agent Locally on a Host Adding Hyper-V Hosts and Host Clusters, and Scale-Out File Servers to VMM How to Create a Virtual Machine with a Blank Virtual Hard Disk How to Deploy a Virtual Machine Install and Deploy Windows Server 2012 R2 and Windows Server 2012 Use Cluster Shared Volumes in a Failover Cluster Hardware and Software Requirements for Installing SQL Server 2014 Install SQL Server 2014 How to Install a VMM Management Server Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup 197
199 Appendix E About VSPEX This appendix presents the following topic: About VSPEX Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines Enabled by Brocade Network Fabrics, EMC VNXe3200, and EMC Powered Backup 199
About VSPEX
EMC has joined forces with the industry-leading providers of IT infrastructure to create a complete virtualization solution that accelerates deployment of cloud infrastructure. Built with best-of-breed technologies, VSPEX enables faster deployment, more simplicity, greater choice, higher efficiency, and lower risk. Validation by EMC ensures predictable performance and enables customers to select technology that uses their existing IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a proven infrastructure for customers looking to gain the simplicity that is characteristic of truly converged infrastructures, while at the same time gaining more choice in individual solution components.
VSPEX solutions are proven by EMC, and packaged and sold exclusively by EMC channel partners. VSPEX provides channel partners with more opportunity, faster sales cycles, and end-to-end enablement. By working even more closely together, EMC and its channel partners can now deliver infrastructure that accelerates the journey to the cloud for even more customers.
EMC VSPEX Brocade Networking Solution for PRIVATE CLOUD
Proven Infrastructure EMC VSPEX Brocade Networking Solution for PRIVATE CLOUD Microsoft Windows Server 2012 with Hyper-V for up to 1,000 Virtual Machines Enabled by Brocade VCS Fabrics, EMC VNX and EMC
How To Build An Ec Vnx Private Cloud For A Hypervisor On A Server With A Hyperconverged Network (Vmx)
EMC VSPEX PRIVATE CLOUD Microsoft Windows Server 2012 R2 with Hyper-V for up to 1,000 Virtual Machines Enabled by EMC VNX Series and EMC Powered Backup EMC VSPEX Abstract This document describes the EMC
EMC VSPEX PRIVATE CLOUD
EMC VSPEX PRIVATE CLOUD VMware vsphere 5.5 for up to 125 Virtual Machines Enabled by Microsoft Windows Server 2012 R2, EMC VNXe3200, and EMC Powered Backup EMC VSPEX Abstract This document describes the
How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine
Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest
EMC VSPEX PRIVATE CLOUD
Proven Infrastructure Guide EMC VSPEX PRIVATE CLOUD VMware vsphere 5.5 for up to 1,000 Virtual Machines Enabled by Microsoft Windows Server 2012 R2, EMC VNX Series, and EMC Powered Backup EMC VSPEX Abstract
EMC VSPEX END-USER COMPUTING
VSPEX EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7 and Microsoft Hyper-V Server 2012 for up to 2,000 Virtual Desktops Enabled by EMC Next-Generation VNX and EMC Backup EMC VSPEX Abstract This document
Brocade Solution for EMC VSPEX Server Virtualization
Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 & 100 Virtual Machines Enabled by Microsoft Hyper-V, Brocade ICX series switch,
Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V. Reference Architecture
Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published February, 2011 EMC believes the information
Brocade One Data Center Cloud-Optimized Networks
POSITION PAPER Brocade One Data Center Cloud-Optimized Networks Brocade s vision, captured in the Brocade One strategy, is a smooth transition to a world where information and applications reside anywhere
IMPLEMENTING VIRTUALIZED AND CLOUD INFRASTRUCTURES NOT AS EASY AS IT SHOULD BE
EMC AND BROCADE - PROVEN, HIGH PERFORMANCE SOLUTIONS FOR YOUR BUSINESS TO ACCELERATE YOUR JOURNEY TO THE CLOUD Understand How EMC VSPEX with Brocade Can Help You Transform IT IMPLEMENTING VIRTUALIZED AND
EMC VSPEX END-USER COMPUTING
IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING VMware Horizon 6.0 with View and VMware vsphere for up to 2,000 Virtual Desktops Enabled by EMC VNX and EMC Data Protection EMC VSPEX Abstract This describes
High Availability with Windows Server 2012 Release Candidate
High Availability with Windows Server 2012 Release Candidate Windows Server 2012 Release Candidate (RC) delivers innovative new capabilities that enable you to build dynamic storage and availability solutions
MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION
Reference Architecture Guide MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION EMC VNX, EMC VMAX, EMC ViPR, and EMC VPLEX Microsoft Windows Hyper-V, Microsoft Windows Azure Pack, and Microsoft System
DEDICATED NETWORKS FOR IP STORAGE
DEDICATED NETWORKS FOR IP STORAGE ABSTRACT This white paper examines EMC and VMware best practices for deploying dedicated IP storage networks in medium to large-scale data centers. In addition, it explores
EMC VNX-F ALL FLASH ARRAY
EMC VNX-F ALL FLASH ARRAY Purpose-built for price, density & speed ESSENTIALS Incredible scale & density with up to 172 TB usable flash capacity in 6U @ 28.63 TB/U Consistent high performance up to 400K
EMC VNX FAMILY. Next-generation unified storage, optimized for virtualized applications ESSENTIALS. VNX Family