EMC VSPEX PRIVATE CLOUD: MICROSOFT HYPER-V AND EMC SCALEIO
Abstract

This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with Microsoft Hyper-V and EMC ScaleIO technology. June 2015
Copyright 2015 EMC Corporation. All rights reserved. Published in the USA. Published June 2015. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. EMC VSPEX Private Cloud: Proven Infrastructure Guide. Part Number H
Contents

Chapter 1 Executive Summary 7
  Introduction 8
  Target audience 9
  Document purpose 9
  Business needs 9

Chapter 2 Solution Architecture Overview 11
  Overview
  Solution architecture
    High-level architecture
    Logical architecture
  Key components
  Virtualization layer
    Overview
    Configuration guidelines
    High-availability and failover
  Compute layer
    Overview
    Configuration guidelines
    High-availability and failover
  Network layer
    Overview
    Configuration guidelines
    High-availability and failover
  Storage layer
    Overview
    Configuration guidelines
    High-availability and failover

Chapter 3 Sizing the Environment 29
  Overview
  Reference workload
  Scalability
  VSPEX building blocks
    Building block approach
    Validated building block
    Customizing the building block
  Configuration sizing guidelines
    Introduction to the Customer configuration worksheet
    Using the customer sizing worksheet
    Calculating the building block requirement
    Fine-tuning hardware resources
  Summary

Chapter 4 VSPEX Solution Implementation 40
  Overview
  Network implementation
    Preparing the network switches
    Configuring the infrastructure network
    Configuring the VLANs
    Completing the network cabling
  Installing and configuring the Microsoft Hyper-V hosts
  Installing and configuring Microsoft SQL Server databases
  Deploying the System Center Virtual Machine Manager server
  Preparing and configuring the storage
    Prepare the ScaleIO nodes
    Preparing the installation worksheet
    Installing the ScaleIO components
    Creating and mapping volumes
    Installing the GUI
  Provisioning a virtual machine
  Summary

Chapter 5 Verifying the Solution 57
  Overview
  Post-install checklist
  Deploying and testing a single virtual server
  Verifying the redundancy of the solution components

Chapter 6 System Monitoring 60
  Overview
  Key areas to monitor
  Performance baseline
    Servers
    Networking
    ScaleIO layer

Appendix A Reference Documentation 63
  EMC documentation
  Other documentation

Appendix B Customer Configuration Worksheet 65
  Customer configuration worksheet
  Printing the worksheet

Appendix C Customer Sizing Worksheet 69
  Customer sizing worksheet for Private Cloud

Figures
  Figure 1. VSPEX Proven Infrastructures 8
  Figure 2. VSPEX private cloud components
  Figure 3. Logical architecture for the solution
  Figure 4. High availability at the virtualization layer
  Figure 5. Redundant power supplies
  Figure 6. Required networks for ScaleIO
  Figure 7. Network layer high availability
  Figure 8. Protection domains
  Figure 9. ScaleIO active GUI
  Figure 10. ScaleIO enterprise features
  Figure 11. Hyper-V virtual disk types
  Figure 12. Automatic rebalancing when disks are added
  Figure 13. Automatic rebalancing when disks or nodes are removed
  Figure 14. Determine the maximum number of virtual machines that a building block can support
  Figure 15. Required resource from the reference virtual machine pool
  Figure 16. Disk format partition option
  Figure 17. Installation Manager Home page
  Figure 18. Manage installation packages
  Figure 19. Upload installation packages
  Figure 20. Upload CSV file
  Figure 21. Installation configuration
  Figure 22. Monitor page
  Figure 23. Completed Install Operation

Tables
  Table 1. Solution architecture configuration
  Table 2. Recommended 10 Gb switched Ethernet network layer
  Table 3. VSPEX Private Cloud workload
  Table 4. Building block node configuration
  Table 5. Maximum number of virtual machines per node, limited by disk capacity 32
  Table 6. Maximum number of virtual machines per node, limited by disk performance
  Table 7. Redefined building block node configuration example
  Table 8. Node sizing example
  Table 9. Customer sizing worksheet example
  Table 10. Reference virtual machine resources
  Table 11. Example worksheet row
  Table 12. Node scaling example
  Table 13. Server resource component totals
  Table 14. Deployment process overview
  Table 15. Tasks for switch and network configuration
  Table 16. Tasks for server installation
  Table 17. Tasks for SQL Server database setup
  Table 18. Tasks for SCVMM configuration
  Table 19. Set up and configure a ScaleIO environment
  Table 20. CSV installation spreadsheet
  Table 21. add_volume command parameters
  Table 22. map_volume_to_sdc command parameters
  Table 23. Tasks for testing the installation
  Table 24. Common server information
  Table 25. Hyper-V server information
  Table 26. ScaleIO information
  Table 27. Network infrastructure information
  Table 28. VLAN information
  Table 29. Service accounts
  Table 30. Customer sizing worksheet
Chapter 1 Executive Summary

This chapter presents the following topics: Introduction, Target audience, Document purpose, Business needs.
Chapter 1: Executive Summary

Introduction

EMC VSPEX Proven Infrastructures are optimized for virtualizing business-critical applications. VSPEX provides modular solutions built with technologies that enable faster deployment, greater simplicity, greater choice, higher efficiency, and lower risk. Figure 1 shows the modular, virtualized infrastructures validated by EMC and delivered by EMC VSPEX partners. Partners can choose the virtualization, server, and network technologies that best fit a customer's environment, while the server's local disks with elastic EMC ScaleIO software provide the storage.

Figure 1. VSPEX Proven Infrastructures

This guide covers the technical aspects of the VSPEX Private Cloud for Microsoft Hyper-V with EMC ScaleIO solution. It describes the solution architecture and key components, and explains how to design, size, and deploy the solution to meet the customer's needs.
Target audience

Readers of this guide must have the necessary training and background to install and configure a VSPEX solution based on the Hyper-V hypervisor, ScaleIO, and the associated infrastructure, as required by this implementation. External references are provided where applicable, and readers should be familiar with these documents. Readers should also be familiar with the infrastructure and database security policies of the customer installation.

Partners selling and sizing a VSPEX Private Cloud with ScaleIO infrastructure should focus on the first five chapters of this guide. After purchase, implementers of the solution should focus on the implementation guidelines in Chapter 4, the solution validation in Chapter 5, and the monitoring guidelines in Chapter 6.

Document purpose

This guide includes an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the system. The VSPEX Private Cloud architecture provides customers with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on a Microsoft Hyper-V virtualization layer, with EMC ScaleIO software running on top of the Hyper-V hypervisor. The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

This guide details server capacity minimums for CPU, memory, and network interfaces. The customer can select any server and networking hardware that meets or exceeds the stated minimums. The solution described in this guide is based on the capacity of the cluster server and on a defined reference workload. Because not every virtual machine has the same requirements, this guide includes methods and guidance to adjust the system to be cost effective as deployed.
A private cloud architecture is a complex system offering. This guide provides prerequisite software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After you install the last component, the validation tests and monitoring instructions ensure that your system is running properly.

Business needs

EMC builds VSPEX solutions with proven technologies to create complete virtualization solutions that allow you to make informed decisions for the hypervisor, server, and networking layers. Business applications are moving into consolidated compute, network, and storage environments. This solution reduces the complexity of configuring every component
of a traditional deployment model. The solution simplifies integration management while maintaining application design and implementation options. It also provides unified administration while still enabling adequate control and monitoring of process separation. The business benefits of the architectures include:

- An end-to-end virtualization solution to effectively use the capabilities of the unified infrastructure components
- Efficient virtualization of virtual machines for varied customer use cases
- A reliable, flexible, and scalable reference design
Chapter 2 Solution Architecture Overview

This chapter presents the following topics: Overview, Solution architecture, Key components, Virtualization layer, Compute layer, Network layer, Storage layer.
Overview

This chapter provides a comprehensive guide to the major aspects of this solution. It generically presents the minimum server capacity required for CPU, memory, and network resources. You can select server and networking hardware that meets or exceeds the stated minimums. EMC has validated the specified ScaleIO architecture, and the fulfillment of server and network requirements, to provide high levels of performance while delivering a highly available architecture for your private cloud deployment.

Solution architecture

High-level architecture

EMC has designed and proven this solution to provide virtualization, server, network, and storage resources that enable customers to deploy a small-scale architecture and scale as their business needs change. Figure 2 shows the high-level architecture of the validated solution.

Figure 2. VSPEX private cloud components

The solution uses ScaleIO software and Hyper-V to provide the storage and virtualization platforms for an environment of Microsoft Windows Server 2012 virtual machines provisioned by the Hyper-V platform. To provide predictable performance for end-user computing solutions, the storage system must be able to handle the peak I/O load from the clients while keeping
response time to a minimum. In this solution, we used ScaleIO software to build a high-performance, scalable storage system from the servers' local disks.

Logical architecture

Figure 3 shows the logical architecture of this solution.

Figure 3. Logical architecture for the solution

Table 1 lists the solution configuration components.

Table 1. Solution architecture configuration

- Microsoft Hyper-V: Hyper-V provides a common virtualization layer to host the server environment. Hyper-V provides a highly available infrastructure through features such as Live Migration, Failover Clustering, and High Availability (HA).
- Microsoft System Center Virtual Machine Manager (SCVMM): SCVMM is not required for this solution. However, if deployed, SCVMM (or its corresponding functionality in Microsoft System Center Essentials) simplifies provisioning, management, and monitoring of the Hyper-V environment.
- EMC ScaleIO: ScaleIO software provides a storage layer to host and store virtual machines.
- Microsoft SQL Server: SCVMM requires an SQL Server database instance to store configuration and monitoring details.
- Active Directory server: Active Directory services are required for the various solution components to function properly. We used the Microsoft Active Directory Service running on a Windows Server 2012 R2 server for this purpose.
- DHCP server: The Dynamic Host Configuration Protocol (DHCP) server centrally manages the IP address scheme for the virtual machines. This service is hosted on the same virtual machine as the domain controller and Domain Name System (DNS) server. The Microsoft DHCP Service running on a Windows Server 2012 R2 server is used for this purpose.
- DNS server: DNS services are required for the various solution components to perform name resolution. The Microsoft DNS Service running on a Windows Server 2012 R2 server is used for this purpose.
- IP networks: A standard Ethernet network with redundant cabling and switching carries all network traffic. A shared network carries user and management traffic, while a private, non-routable subnet carries virtual SAN (vSAN) storage traffic.

Key components

The key components of this solution include:

- Virtualization layer: Decouples the physical implementation of resources from the applications that use them, so that the application's view of the available resources is no longer directly tied to the hardware. This enables many key features required by the private cloud.
- Compute layer: Provides memory and processing resources for the virtualization layer software and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and implements the solution by using any server hardware that meets these requirements.
- Network layer: Connects users of the private cloud to the resources in the cloud, and connects the storage layer to the compute layer.
The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables you to implement the solution by using any network hardware that meets these requirements.

- Storage layer: Provides storage to implement the private cloud. ScaleIO implements a pure block storage layout with converged nodes to support compute and storage. With multiple hosts accessing shared data through ScaleIO components, ScaleIO provides high-performance data storage while maintaining high availability.
Virtualization layer

Overview

Hyper-V performs the hypervisor-based virtualization role for Microsoft Windows Server and provides the virtualization platform for this solution. Hyper-V live migration and live storage migration enable seamless movement of virtual machines or virtual machine files between Hyper-V servers or storage systems, transparently and with minimal performance impact.

Hyper-V works with Windows Server 2012 Failover Clustering and Cluster Shared Volumes (CSVs) to provide high availability in a virtualized infrastructure, significantly increasing the availability of virtual machines during planned and unplanned downtime. Configure Failover Clustering on the Hyper-V host to monitor virtual machine health and to migrate virtual machines between cluster nodes.

Hyper-V Replica provides asynchronous replication of virtual machines between two Hyper-V hosts at separate sites. Hyper-V replicas protect business applications in the Hyper-V environment from downtime associated with an outage at a single site.

Hyper-V snapshots provide consistent point-in-time views of a virtual machine and enable users to revert the virtual machine to a previous point in time if necessary. Snapshots function as the source for backups, test and development activities, and other use cases.

Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager (SCVMM) is a centralized management platform that enables datacenter administrators to configure and manage virtualized host, networking, and storage resources, and to create and deploy virtual machines and services to private clouds. SCVMM simplifies provisioning, management, and monitoring in the Hyper-V environment.

Windows Server Cluster-Aware Updating

Windows Cluster-Aware Updating (CAU) enables updating of cluster nodes with little or no loss of availability.
CAU is integrated with Windows Server Update Services (WSUS) and can be automated using PowerShell.

Configuration guidelines

Hyper-V has several advanced features that help maximize performance and overall resource utilization. The most important features relate to memory management. This section describes some of these features, and the items to consider when using them in a VSPEX environment.

Dynamic Memory and Smart Paging

Dynamic Memory increases physical memory efficiency by treating memory as a shared resource, dynamically allocating it to virtual machines, and reclaiming unused memory from idle virtual machines. Administrators can dynamically adjust the amount of memory used by each virtual machine at any time. With Dynamic Memory, Hyper-V allows more virtual machines than the available physical memory can support. This introduces the risk that there might not be
sufficient physical memory available to restart a virtual machine if required. Smart Paging is a memory management technique that uses disk resources as temporary memory when more memory is required to restart a virtual machine.

Non-Uniform Memory Access

Non-Uniform Memory Access (NUMA) is a multinode technology that enables a CPU to access remote-node memory. Because this type of memory access degrades performance, Windows Server 2012 uses processor affinity, which pins threads to a single CPU, to avoid remote-node memory access. This feature is available to the host and to the virtual machines, where it provides improved performance in symmetrical multiprocessor (SMP) environments.

Hyper-V memory overhead

Virtualized memory has some associated overhead, including the memory consumed by the Hyper-V parent partition and additional overhead for each virtual machine. Leave at least 2 GB of memory for the Hyper-V parent partition in this solution.

Virtual machine memory

Each virtual machine in this solution is assigned 2 GB of memory in fixed mode.

High-availability and failover

Configure high availability in the virtualization layer, and enable the hypervisor to restart failed virtual machines automatically. Figure 4 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 4. High availability at the virtualization layer

Implementing high availability at the virtualization layer ensures that, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible.

Compute layer

Overview

The choice of a server platform for a VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors.
For these reasons, VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX defines the minimum requirements for the number of processor cores and the amount of RAM. ScaleIO components are designed to work with a minimum of three server nodes. The physical server node, running Hyper-V, can host workloads other than the ScaleIO virtual machine.
Configuration guidelines

When designing and ordering the compute layer of this VSPEX solution, several factors can affect the final purchase. If you understand the system workload well, you can use virtualization features such as memory ballooning and transparent page sharing to reduce the aggregate memory requirement. You can reduce the number of virtual CPUs (vCPUs) if the virtual machine pool does not have a high level of peak or concurrent usage. Conversely, if the deployed applications are highly computational, you might need to increase the number of CPUs and the amount of memory.

Apply the following best practices in the compute layer:

- Use identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies that may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.
- When implementing high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Note: To enable high availability for the compute layer, each customer needs one additional server to ensure that the system has enough capacity to maintain business operations when a server fails.

- Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single server failures. This enables the implementation of minimal-downtime upgrades and tolerance for single unit failures.

Within the boundaries of these recommendations and best practices, the VSPEX compute layer can be flexible enough to meet your specific needs. Ensure that there are sufficient processor cores and RAM per core to meet the needs of the target environment.
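The sizing rules above can be combined into a quick arithmetic check: reserve 2 GB per node for the Hyper-V parent partition, allocate 2 GB of fixed memory per virtual machine (both figures from the virtualization layer section), add one spare server for high availability, and never drop below the three-node ScaleIO minimum. The sketch below is illustrative only, not a sizing tool; the function names and the 128 GB example node are assumptions, and a real sizing exercise must also account for CPU, network, and disk limits as described in Chapter 3.

```python
import math

def vms_per_node(node_ram_gb, parent_reserve_gb=2, vm_ram_gb=2):
    """VMs one node can host on memory alone: reserve memory for the
    Hyper-V parent partition, then divide by the fixed per-VM allocation."""
    return max((node_ram_gb - parent_reserve_gb) // vm_ram_gb, 0)

def nodes_required(total_vms, node_ram_gb, min_nodes=3):
    """Nodes needed for the workload, plus one spare so the cluster can
    absorb a single server failure, never below the ScaleIO minimum."""
    base = math.ceil(total_vms / vms_per_node(node_ram_gb))
    return max(base + 1, min_nodes)

# Example: 100 virtual machines on hypothetical 128 GB nodes.
print(vms_per_node(128))         # 63 VMs per node by memory
print(nodes_required(100, 128))  # 3 nodes (2 for load + 1 spare, at the minimum of 3)
```

The binding constraint in practice may be disk capacity or disk performance rather than memory; Tables 5 and 6 cover those limits.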
High-availability and failover

While the choice of servers to implement in the compute layer is flexible, use enterprise-class servers designed for the datacenter. This type of server has redundant power supplies, as shown in Figure 5. Connect these servers to separate power distribution units (PDUs) in accordance with your server vendor's best practices.
Figure 5. Redundant power supplies

To configure high availability in the virtualization layer, configure the compute layer with enough resources to meet the needs of the environment, even with a server failure, as shown in Figure 4.

Network layer

Overview

The infrastructure network requires redundant network links for each Hyper-V host. This configuration provides both redundancy and additional network bandwidth. It is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Configuration guidelines

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines consider virtual LANs (VLANs) and the ScaleIO network layer.

ScaleIO network

The ScaleIO network creates a Redundant Array of Independent Nodes (RAIN) topology between the server nodes, distributing data so that the loss of a single node does not affect data availability. This topology requires ScaleIO nodes to send data to other nodes to maintain consistency. A high-speed, low-latency IP network is required for this to work correctly.

We created the test environment with redundant 10 Gb Ethernet networks. (In this guide, "we" refers to the EMC Solutions engineering team that validated the solution.) The network was not heavily used during testing at small scale points. For that reason, at small points of scale, you can implement the solution using 1 Gb networks. However, EMC recommends a 10 GbE IP network designed for high availability, as shown in Table 2.
Table 2. Recommended 10 Gb switched Ethernet network layer

Nodes / 10 Gb switched Ethernet / 1 Gb switched Ethernet
Up to 7 nodes: Recommended / Possible
More than 7 nodes: Recommended / Not recommended

VLANs

Isolate network traffic so that management traffic, traffic between hosts and storage, and traffic between hosts and clients all move over isolated networks. Physical isolation might be required in some cases for regulatory or policy compliance reasons. Logical isolation with VLANs is sufficient in many cases. EMC recommends separating the network into two types for security and increased efficiency:

- A management network, used to connect to and manage the ScaleIO environment. This network is generally connected to the client management network. Because this network has less I/O traffic, EMC recommends a 1 GbE network.
- An internal data network, used for communication between the ScaleIO components. This is generally a 10 GbE network.

In this solution, we used one VLAN for client access and one VLAN for management. Figure 6 depicts the VLANs and the network connectivity requirements for a ScaleIO environment.

Figure 6. Required networks for ScaleIO
You can use the client access network to communicate with the ScaleIO infrastructure. The network provides communication between each ScaleIO node. Administrators use the management network as a dedicated way to access the management connections on the ScaleIO software components, network switches, and hosts.

Note: Some best practices call for additional network isolation for cluster traffic, virtualization layer communication, and other features. Implement these additional networks if necessary.

High-availability and failover

Each Windows host has multiple connections to user and Ethernet networks to guard against link failures, as shown in Figure 7. Spread these connections across multiple Ethernet switches to guard against component failure in the network.

Figure 7. Network layer high availability

Storage layer

Overview

ScaleIO is a software-only solution that uses the hosts' existing local disks and LAN to realize a vSAN that has all the benefits of external storage at a fraction of the cost and complexity. ScaleIO turns local internal storage into shared block storage that is comparable to, or better than, far more expensive external shared block storage.

The lightweight ScaleIO software components are installed on the application hosts and communicate over a standard LAN to handle the application I/O requests sent to ScaleIO block volumes. An extremely efficient decentralized block I/O flow, combined with a distributed, sliced volume layout, results in a massively parallel I/O system that can scale to hundreds and thousands of nodes.
ScaleIO is designed and implemented with enterprise-grade resilience as an essential attribute. Furthermore, the software features efficient distributed auto-healing processes that overcome media and node failures without requiring administrator involvement. Dynamic and elastic, ScaleIO enables administrators to add or remove nodes and capacity on the fly. The software immediately responds to the changes, rebalancing the storage distribution and achieving a layout that optimally suits the new configuration.

Architecture

Software components

The ScaleIO Data Client (SDC) is a lightweight device driver situated in each host whose applications or file system requires access to the ScaleIO virtual SAN block devices. The SDC exposes block devices representing the ScaleIO volumes that are currently mapped to that host. The ScaleIO Data Server (SDS) is a lightweight software component within each host that contributes local storage to the central ScaleIO vSAN.

Convergence of storage and compute

The ScaleIO software components, which have a negligible impact on the applications running in the hosts, are carefully designed and implemented to consume the minimum computing resources required for operation. ScaleIO converges the storage and application layers: the hosts that run applications can also be used to realize shared storage, yielding a wall-to-wall, single layer of hosts. Because the same hosts run applications and provide storage for the vSAN, an SDC and an SDS are typically both installed in each of the participating hosts.

Pure block storage implementation

ScaleIO implements a pure block storage layout. Its entire architecture and data path are optimized for block storage access needs. For example, when an application submits a read I/O request to its SDC, the SDC instantly deduces which SDS is responsible for the specified volume address and then interacts directly with the relevant SDS.
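The lookup the SDC performs can be illustrated with a toy model. This sketch is not ScaleIO code: the class names, the chunk size, and the round-robin chunk placement are assumptions made for illustration (a real layout also places mirror copies and uses far finer-grained distribution). It shows the two properties just described: the client deduces the owning SDS locally, and each read travels to exactly one SDS.

```python
class SDS:
    """Storage server: contributes local disks and serves chunk reads."""
    def __init__(self, name):
        self.name = name
        self.chunks = {}            # chunk index -> data
        self.requests_served = 0

    def read(self, chunk_index):
        self.requests_served += 1
        return self.chunks[chunk_index]

class SDC:
    """Client driver: maps a volume offset straight to the owning SDS."""
    CHUNK_SIZE = 1024               # assumed chunk size for the sketch

    def __init__(self, cluster):
        self.cluster = cluster      # list of SDSs; chunks scattered round-robin

    def read(self, offset):
        chunk = offset // self.CHUNK_SIZE            # deduce the owner locally...
        owner = self.cluster[chunk % len(self.cluster)]
        return owner.read(chunk)                     # ...and issue a single request

# Three SDS nodes; a volume's chunks are scattered across all of them.
cluster = [SDS(f"sds-{i}") for i in range(3)]
for chunk in range(6):
    cluster[chunk % 3].chunks[chunk] = f"data-{chunk}"

sdc = SDC(cluster)
print(sdc.read(4500))                        # data-4 (offset 4500 falls in chunk 4)
print([s.requests_served for s in cluster])  # [0, 1, 0]: one request, one owner
```

Because no metadata server sits between the SDC and the SDSs, every additional node adds another independent channel for I/O, which is the basis of the linear scaling described below.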
The SDS reads the data (by issuing a single read I/O request to its local storage, or by fetching the data from the cache in a cache-hit scenario) and returns the result to the SDC. The SDC provides the read data to the application. This flow is simple, consuming as few resources as necessary: the data moves over the network exactly once, and a single I/O request is sent to the SDS storage. The write I/O flow is similarly simple and efficient. Unlike block storage systems that run on top of a file system, or object storage that runs on top of a local file system, ScaleIO offers optimal I/O efficiency.

Massively parallel, scale-out I/O architecture

ScaleIO can scale to a large number of nodes, thus breaking the traditional scalability barrier of block storage. Because the SDCs propagate the I/O requests directly to the pertinent SDSs, there is no central point through which the requests move, and a potential bottleneck is avoided. This decentralized data flow is crucial to the linearly scalable performance of ScaleIO. Therefore, a large ScaleIO configuration results in a massively parallel system. The more servers or disks the system has, the
greater the number of parallel channels that will be available for I/O traffic, and the higher the aggregated I/O bandwidth and IOPS will be.

Mix-and-match nodes

The vast majority of traditional scale-out systems are based on a symmetric brick architecture. Unfortunately, datacenters cannot standardize on exactly the same bricks for a prolonged period, because hardware configurations and capabilities change over time. Therefore, such symmetric scale-out architectures are bound to run in small islands. ScaleIO was designed from the ground up to support a mix of new and old nodes with dissimilar configurations.

Hardware agnostic

ScaleIO is platform agnostic and works with existing underlying hardware resources. Besides its compatibility with various types of disks, networks, and hosts, it can take advantage of the write buffer of existing local RAID controller cards and can also run on servers that do not have a local RAID controller card. For the local storage of an SDS, you can use internal disks, directly attached external disks, virtual disks exposed by an internal RAID controller, partitions within such disks, and more. Partitions can be useful to combine system boot partitions with ScaleIO capacity on the same raw disks. If the system already has a large, mostly unused partition, ScaleIO does not require repartitioning of the disk, because the SDS can use a file within that partition as its storage space.

Volume mapping and volume sharing

The volumes that ScaleIO exposes to the application clients can be mapped to one or more clients running in different hosts. Mapping can be changed dynamically if necessary. In other words, ScaleIO volumes can be used by applications that expect shared-everything block access and by applications that expect shared-nothing or shared-nothing-with-failover access.

Clustered, striped volume layout

A ScaleIO volume is a block device that is exposed to one or more hosts.
It is the equivalent of a logical unit in the SCSI world. ScaleIO breaks each volume into a large number of data chunks, which are scattered across the SDS cluster's nodes and disks in a fully balanced manner. This layout practically eliminates hot spots across the cluster and allows the overall I/O performance of the system to scale through the addition of nodes or disks. Furthermore, this layout enables a single application that is accessing a single volume to use the full IOPS of all the cluster's disks. This flexible, dynamic allocation of shared performance resources is one of the major advantages of converged scale-out storage.

Software-only but as resilient as a hardware array

Traditional storage systems typically combine system software with commodity hardware, comparable to application servers' hardware, to provide enterprise-grade resilience. With its contemporary architecture, ScaleIO provides similar enterprise-grade, no-compromise resilience by running the storage software directly on the application servers. Designed for extensive fault tolerance and high availability, ScaleIO handles all types of failures, including failures of media, connectivity, and nodes, software interruptions, and more. No single point of failure
can interrupt the ScaleIO I/O service. In many cases, ScaleIO can overcome multiple points of failure as well.

Managing clusters of nodes

Many storage cluster designs use tightly coupled techniques that might be adequate for a small number of nodes but begin to break when the cluster grows larger than a few dozen nodes. The loosely coupled clustering management schemes of ScaleIO provide exceptionally reliable yet lightweight failure and failover handling in both small and large clusters. Most clustering environments assume exclusive ownership of the cluster nodes and might even physically fence or shut down malfunctioning nodes. ScaleIO, in contrast, runs on shared application hosts. The ScaleIO clustering algorithms are designed to work efficiently and reliably without interfering with the applications with which ScaleIO coexists. ScaleIO never disconnects or invokes Intelligent Platform Management Interface shutdowns of malfunctioning nodes, because they might still be running healthy applications.

Protection domains

As shown in Figure 8, you can divide a large ScaleIO storage pool into multiple protection domains, each of which contains a set of SDSs. ScaleIO volumes are assigned to specific protection domains. Protection domains are useful for mitigating the risk of a dual point of failure in a two-copy scheme or a triple point of failure in a three-copy scheme.

Figure 8. Protection domains

For example, if two SDSs that are in different protection domains fail simultaneously, no data becomes unavailable. Just as incumbent storage systems can overcome a large number of simultaneous disk failures as long as they do not occur within the same shelf, ScaleIO can overcome a large number of simultaneous disk or node failures as long as they do not occur within the same protection domain.
Management and monitoring

ScaleIO provides several tools to manage and monitor the system, including a command line interface (CLI), an active GUI, and representational state transfer (REST) management application program interface (API) commands. The CLI enables
administrators to have direct platform access to perform backend configuration actions and obtain monitoring information. The active GUI, shown in Figure 9, provides system dashboards for capacity, throughput, and bandwidth statistics, access to system alerts, and the ability to provision backend devices. The REST management API allows users to execute the same management and monitoring commands available with the CLI through a next-generation, cloud-based interface.

Figure 9. ScaleIO active GUI

Interoperability

ScaleIO is integrated with Hyper-V and OpenStack to provide customers with greater flexibility in deploying ScaleIO in existing environments. The OpenStack integration (Cinder support) allows customers to use commodity hardware with ScaleIO, providing a software-defined block volume solution in an OpenStack environment. Additionally, ScaleIO software can be packaged with EMC ViPR for management and orchestration functions and with EMC ViPR SRM for additional monitoring and reporting capabilities.

Enterprise features

Whether you are a service provider delivering hosted infrastructure as a service or your IT department delivers infrastructure as a service to functional units within your organization, ScaleIO offers a set of features that gives you complete control over performance, capacity, and data location. For both private cloud datacenters and service providers, these features enhance system control and manageability, ensuring that quality of service is met. With ScaleIO, you can limit the amount of performance (IOPS or bandwidth) that selected customers can consume. The limiter allows you to impose and regulate resource distribution to prevent application hogging scenarios. You can apply data masking to provide added security for sensitive customer data. ScaleIO offers instantaneous, writeable snapshots for data backups.
For improved read performance, dynamic random-access memory (DRAM) caching enables you to improve read access by using SDS server RAM. Fault sets (groups of SDSs that are likely to go down together) can be defined to ensure that data mirroring occurs outside the group, improving business continuity. You can create volumes with thin provisioning, providing on-demand storage as well as faster setup and startup times. Finally, tight integrations with other EMC products are available. You can use ScaleIO in conjunction with EMC XtremCache for flash cache auto-tiering to further accelerate application performance. Figure 10 shows the ScaleIO enterprise features.

Figure 10. ScaleIO enterprise features

ScaleIO 1.32

ScaleIO 1.32 includes the following new features and functionality:

- Release of the ScaleIO Free and Frictionless download, a free download of ScaleIO for non-production environments with no time, function, or capacity limits
- Support for VMware ESX 6.0 (VMware certified)
- Support for SUSE Linux Enterprise Server (SLES) 12
- Support for IBM Spectrum Scale (General Parallel File System (GPFS)) over ScaleIO for Linux environments (Red Hat Enterprise Linux (RHEL) and SLES)
- Additional flexibility during the configuration process
Configuration guidelines

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance. Microsoft Hyper-V supports more than one method of storage when hosting virtual machines. The ScaleIO solution is based on block protocols, and the ScaleIO layer described in this section uses all current best practices. A customer or architect with the necessary training and background can make modifications based on their understanding of the system's usage and load if required. However, the building blocks described in Chapter 3 ensure acceptable performance.

Hyper-V storage virtualization

Windows Server 2012 Hyper-V and Failover Clustering use Cluster Shared Volumes (CSV) v2 and VHDX features to virtualize storage presented from an external shared storage system to the host virtual machines. In Figure 11, the ScaleIO volumes present block-based LUNs (as CSVs) to the Windows hosts to host the virtual machines.

Figure 11. Hyper-V virtual disk types

CSV

A CSV is a shared disk containing a New Technology File System (NTFS) volume that is accessible to all nodes of a Windows Failover Cluster. The CSV can be deployed over any SCSI-based local or network storage.

Pass-through disks

Windows Server 2012 also supports pass-through disks, which enable a virtual machine to access a physical disk mapped to a host that does not have a volume configured on it.

VHDX

Hyper-V in Windows Server 2012 contains an update to the virtual hard disk (VHD) format called VHDX, which has much greater capacity and built-in resiliency. The main features of the VHDX format are:

- Support for virtual hard disk storage capacity of up to 64 TB
- Additional protection against data corruption during power failures by logging updates to the VHDX metadata structures
- Optimal structure alignment of the virtual hard disk format to suit large sector disks

The VHDX format also has the following features:

- Larger block size for dynamic and differential disks, which enables the disks to better meet the needs of the workload
- A 4 KB logical-sector virtual disk that enables increased performance when used by applications and workloads that are designed for 4 KB sectors
- The ability to store custom file metadata that the user might want to record, such as the operating system version or applied updates
- Space reclamation features that can result in smaller file sizes and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware)

High-availability and failover

Redundancy scheme and rebuild process

ScaleIO uses a mirroring scheme to protect data against disk and node failures. The ScaleIO architecture supports a distributed two-copy scheme. If an SDS node or SDS disk fails, applications can continue to access ScaleIO volumes; their data is still available through the remaining mirrors. ScaleIO immediately starts a seamless rebuild process to create another mirror for the data chunks that were lost in the failure. During the rebuild process, ScaleIO copies those data chunks to free areas across the SDS cluster, so it is not necessary to add any capacity to the system. The surviving SDS cluster nodes carry out the rebuild process by using the aggregated disk and network bandwidth of the cluster. The process is fast and minimizes both exposure time and application performance degradation. After the rebuild, all the data is fully mirrored and healthy again. If a failed node rejoins the cluster before the rebuild process is completed, ScaleIO dynamically uses data from the rejoined node to further minimize the exposure time and the use of resources.
This capability is important for overcoming short outages efficiently.

Elasticity and rebalancing

Unlike many other systems, a ScaleIO cluster is extremely elastic. Administrators can add and remove capacity and nodes on the fly during I/O operations. When a cluster is expanded with new capacity (such as new SDSs or new disks added to existing SDSs), ScaleIO immediately rebalances the storage by seamlessly migrating data chunks from the existing SDSs to the new SDSs or disks. This migration does not affect the applications, which continue to access the data stored in the migrating chunks. By the end of the rebalancing process, all the ScaleIO volumes are spread across all the SDSs and disks, including the newly added ones, in an optimally balanced manner, as shown in Figure 12. Thus, adding SDSs or disks not only increases the available capacity but also increases the performance of the applications as they access their volumes.
Figure 12. Automatic rebalancing when disks are added

When an administrator decreases capacity (for example, by removing SDSs or removing disks from SDSs), ScaleIO performs a seamless migration that rebalances the data across the remaining SDSs and disks in the cluster, as shown in Figure 13.

Figure 13. Automatic rebalancing when disks or nodes are removed

Notes:

- In all types of rebalancing, ScaleIO migrates the least amount of data possible.
- ScaleIO is sufficiently flexible to accept new requests to add or remove capacity while still rebalancing previous capacity additions and removals.
- To maintain data availability, remove only one node at a time.
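As an illustration only, the minimal-migration idea described above can be sketched as a greedy loop that moves chunks from the fullest disk to the emptiest one. This is a toy model, not EMC's actual algorithm, and the `rebalance` helper is hypothetical:

```python
# Toy sketch of minimal-migration rebalancing (illustrative only; not
# the actual ScaleIO algorithm). Chunks move one at a time from the
# fullest disk to the emptiest until the layout is balanced.
def rebalance(chunks_per_disk):
    """Balance chunk counts in place; return the number of chunk moves."""
    moves = 0
    while max(chunks_per_disk.values()) - min(chunks_per_disk.values()) > 1:
        src = max(chunks_per_disk, key=chunks_per_disk.get)
        dst = min(chunks_per_disk, key=chunks_per_disk.get)
        chunks_per_disk[src] -= 1
        chunks_per_disk[dst] += 1
        moves += 1
    return moves

layout = {"disk1": 12, "disk2": 12, "disk3": 12, "disk4": 0}  # disk4 just added
print(rebalance(layout), layout)  # 9 {'disk1': 9, 'disk2': 9, 'disk3': 9, 'disk4': 9}
```

Note that only nine of the 36 chunks migrate: exactly the surplus above the new per-disk average, which mirrors the "least amount of data possible" behavior described in the notes above.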
Chapter 3 Sizing the Environment

This chapter presents the following topics:

- Overview
- Reference workload
- Scalability
- VSPEX building blocks
- Configuration sizing guidelines
Overview

This chapter presents the following information:

- How to design and size the VSPEX Private Cloud for Microsoft Hyper-V with EMC ScaleIO solution to meet the customer's needs
- How to design the nodes for the ScaleIO environment and specify the number of nodes
- Results from the solution testing and validation showing how variations in node size and number affect the maximum number of supported servers

The virtual machines used in the sizing calculations correspond to the definition of the reference workload (reference virtual machine) for the VSPEX Private Cloud.

Reference workload

When you move an existing server to a virtual infrastructure, you can gain efficiency by right-sizing the virtual hardware resources assigned to that system. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. In practice, each virtual machine has its own requirements that rarely fit a pre-defined specification. To simplify sizing the solution, VSPEX defines a reference workload, which represents a unit of measure for quantifying the resources in the solution reference architecture. By comparing the customer's actual usage to this reference workload, you can determine how to size the solution. For VSPEX Private Cloud solutions, the reference workload is defined as a single virtual machine with the characteristics shown in Table 3.

Table 3.
VSPEX Private Cloud workload

Parameter                                | Value
Virtual machine OS                       | Windows Server 2012 R2
Virtual CPUs                             | 1
Virtual CPUs per physical core (maximum) | 4
Memory per virtual machine               | 2 GB
IOPS per virtual machine                 | 25
I/O pattern                              | Fully random (skew = 0.5)
I/O read percentage                      | 67%
Virtual machine storage capacity         | 100 GB

This solution uses the VSPEX Private Cloud reference virtual machine for sizing the customer environment in the same way that the reference virtual machine is used in VSPEX Private Cloud solutions for the EMC VNX platform. For further information, refer
to EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 1000 Virtual Machines.

Scalability

ScaleIO is designed to scale from three to thousands of nodes. Unlike most traditional storage systems, as the number of servers grows, so do capacity, throughput, and IOPS. Performance scales linearly with the growth of the deployment. Whenever additional storage and compute resources (such as servers and drives) are needed, you can add them modularly. Storage and compute resources grow together so that the balance between them is maintained.

VSPEX building blocks

Building block approach

Sizing the system to meet the virtual server application requirements is a complicated process. When applications generate I/O, several components serve that I/O, for example, server CPU, server dynamic random access memory (DRAM) cache, and disks. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications. VSPEX uses a building block approach to reduce complexity. A building block consists of one server node that is configured and validated to support a certain number of virtual servers in the VSPEX architecture. Each building block node combines several local disk spindles to contribute a shared ScaleIO volume to support the needs of the private cloud environment. The SDS and the SDC are both installed on each building block node to contribute the local disk to the ScaleIO storage pool and expose ScaleIO shared block volumes to run the virtual machines.

Validated building block

The configuration of the validated reference building block includes the memory size and the number of physical CPU cores and disk spindles shown in Table 4. This configuration provides a flexible solution for VSPEX sizing.

Table 4.
Building block node configuration

Physical CPU cores | Memory (GB) | SAS drives (10k rpm) | SAS capacity (GB)

The building block configuration contains six SAS disks per node. The validated solution models these drives at 600 GB each. Solution testing revealed that drive capacity, rather than drive performance, limits the node configuration for a VSPEX Private Cloud and the number of reference virtual machines that a building block can support. The reference building block memory can support 31 reference virtual machines, but the reference building block disk capacity can support only 12 virtual machines, as shown in Table 5. Customizing the building block provides information about how to customize the building block configuration.
Customizing the building block

The reference building block provides a starting point for planning a virtual infrastructure. You can customize the building block node to meet specific customer needs. Table 4 defines the CPU, memory, and disk configuration for the validated reference building block. This VSPEX solution provides additional options for the building block node configuration. Users can redefine the building block with different configurations. The number of virtual machines that the building block can support changes when the building block configuration is redefined. You must consider CPU capability, memory capability, disk capacity, and IOPS to calculate the number of virtual machines that the new building block can support.

CPU capability

For VSPEX systems, EMC recommends a maximum of four virtual CPUs for each physical core in a virtual machine environment. For example, a server node with 16 physical cores can support up to 64 virtual machines.

Memory capability

When sizing the memory for a server node, you must consider both the ScaleIO virtual machine and the hypervisor. ScaleIO reserves 2 GB of RAM for the hypervisor. EMC recommends that you do not use memory overcommit in this solution.

Note: ScaleIO 1.3 introduces a new RAM cache feature, which uses the SDS RAM. By default, the SDS RAM cache size is 128 MB.

Disk capacity

ScaleIO uses a RAIN topology to ensure data availability. In general, the capacity available is a function of the capacity per node (formatted capacity) and the number of nodes available. Assuming N nodes and C TB of capacity per server, the storage available, S, is:

S = (N - 1) * C / 2

This formula accounts for two copies of the data and the ability to survive a single node failure. The values in Table 5 assume that the CPU and memory resources of each node are sufficient to support the virtual machines. Each node contains six disks.

Table 5.
Maximum number of virtual machines per node, limited by disk capacity

Disk capacity (GB) | Disks per node
IOPS

The primary method for adding IOPS capability to a node, without considering cache technologies, is to increase either the number of disk units or the speed of those units. Table 6 shows the number of virtual machines supported with four, six, eight, or ten SAS drives per node.

Table 6. Maximum number of virtual machines per node, limited by disk performance

10k rpm SAS drives | Number of virtual machines

Note: The values in Table 6 assume that the CPU and memory resources of each node are sufficient to support the virtual machines. The capacity of each disk is 600 GB.

Determining the maximum number of virtual machines supported

After the entire configuration is defined for a customized building block node, calculate the number of virtual machines that each component can support to determine the number of virtual machines that the building block node can support. For example, consider the redefined building block configuration in Table 7.

Table 7. Redefined building block node configuration example

Physical CPU cores | Memory (GB) | 10k rpm SAS drives
20                 | 192         | 1500 GB

As a result, the calculations in Table 8 are applied, giving a new supported virtual machine count for this node.

Table 8. Node sizing example

Physical attribute        | VMs supported | Calculation
CPU cores: 20             | 80            | 20 cores * 4 VMs per core = 80 VMs
RAM: 192 GB               | 95            | (192 GB total RAM - 2 GB hypervisor reserve) / 2 = 95
Storage capacity: 1500 GB | 50            | See Table 5.
Storage performance         | 75            | See Table 6.

Therefore, the total number of virtual machines that this building block node can support is 50. The total number is always the minimum number supported by the individual configuration components, as shown in Figure 14: 80 virtual machines supported by CPU, 95 by memory, 75 by disk IOPS, and 50 by disk capacity.

Figure 14. Determining the maximum number of virtual machines that a building block can support

Configuration sizing guidelines

Introduction to the customer configuration worksheet

To choose the appropriate reference architecture for a customer environment, determine the resource requirements of the environment and then translate these requirements to an equivalent number of reference virtual machines with the characteristics defined in Table 3. This section describes how to use the Customer configuration worksheet to simplify the sizing calculations, and additional factors you should take into consideration when deciding which architecture to deploy.

Using the customer sizing worksheet

The Customer sizing worksheet for Private Cloud helps you to assess the customer environment and calculate the sizing requirements of the environment. Table 9 shows a completed worksheet for a sample customer environment. For each application, the worksheet records a Resource requirements row and an Equivalent reference virtual machines row.

Table 9. Customer sizing worksheet example

Application | CPU (vCPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent reference virtual machines
Example 1: Custom-built application
Example 2: Point-of-sale system
Example 3: Web
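The sizing arithmetic above can be sketched in a few lines. This is an illustrative sketch, not a tool shipped with the solution: the helper names and the 4:1 vCPU default are assumptions drawn from the text, and the capacity function implements the S = (N - 1) * C / 2 formula from the Disk capacity subsection:

```python
# Sketch of the sizing rules in this chapter (helper names are illustrative).
def usable_capacity_tb(nodes, capacity_per_node_tb):
    """Two-copy usable capacity: S = (N - 1) * C / 2, leaving headroom
    to rebuild after a single node failure."""
    return (nodes - 1) * capacity_per_node_tb / 2

def vms_per_node(cores, ram_gb, vms_by_iops, vms_by_capacity,
                 vcpus_per_core=4, hypervisor_ram_gb=2, ram_per_vm_gb=2):
    """Supported VM count per node: the scarcest resource wins."""
    by_cpu = cores * vcpus_per_core
    by_ram = (ram_gb - hypervisor_ram_gb) // ram_per_vm_gb
    return min(by_cpu, by_ram, vms_by_iops, vms_by_capacity)

# Redefined node example (Tables 7 and 8): 20 cores, 192 GB RAM,
# 75 VMs by disk IOPS, 50 VMs by disk capacity
print(vms_per_node(20, 192, 75, 50))  # 50
print(usable_capacity_tb(5, 4.0))     # 8.0
```

With the example inputs, CPU allows 80 VMs and memory allows 95, but disk capacity caps the node at 50, matching the minimum rule illustrated in Figure 14.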
server

Total equivalent reference virtual machines: 14

To complete the worksheet:

1. Identify the applications planned for migration into the VSPEX private cloud environment.
2. For each application, determine the compute resource requirements in terms of vCPUs, memory (GB), disk performance (IOPS), and disk capacity.
3. For each resource type, determine the equivalent reference virtual machine requirements, that is, the number of reference virtual machines required to meet the specified resource requirements.
4. Determine the total number of reference virtual machines needed from the resource pool for the customer environment.

Determining the resource requirements

CPU

The reference virtual machine outlined in Table 3 assumes that most virtual machine applications are optimized for a single CPU. If one application requires a virtual machine with multiple virtual CPUs, modify the proposed virtual machine count to account for the additional resources.

Memory

Memory plays a key role in ensuring application functionality and performance. Each group of virtual machines will have different targets for the available memory that is considered acceptable. As with the CPU calculation, if one application requires additional memory resources, adjust the planned virtual machine count to accommodate the additional resource requirements. For example, if there are 30 virtual machines, but each needs 4 GB of memory instead of the 2 GB that the reference virtual machine provides, plan for 60 reference virtual machines.

IOPS

The storage performance requirements for virtual machines are usually the least understood aspect of performance. The reference virtual machine uses a workload generated by an industry-recognized tool to run a wide variety of office productivity applications that should be representative of the majority of virtual machine implementations.
Storage capacity

The storage capacity requirement for a virtual machine can vary widely depending on the type of provisioning, the types of applications in use, and specific customer policies.
Determining the equivalent reference virtual machines

With all of the resources defined, determine the number of equivalent reference virtual machines by using the relationships listed in Table 10. Round all values to the closest whole number.

Table 10. Reference virtual machine resources

Resource | Value for reference virtual machine | Relationship between requirements and equivalent reference virtual machines
CPU      | 1   | Equivalent reference virtual machines = resource requirements
Memory   | 2   | Equivalent reference virtual machines = resource requirements / 2
IOPS     | 25  | Equivalent reference virtual machines = resource requirements / 25
Capacity | 100 | Equivalent reference virtual machines = resource requirements / 100

For instance, Example 2 in Table 9 requires four CPUs, 16 GB of memory, 200 IOPS, and 200 GB of storage. This translates to four reference virtual machines for CPU, eight for memory, eight for IOPS, and two for capacity, as shown in Table 11.

Table 11. Example worksheet row

Application: Example 2: Point-of-sale system | CPU (vCPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent reference virtual machines
Resource requirements                        | 4           | 16          | 200  | 200           |
Equivalent reference virtual machines        | 4           | 8           | 8    | 2             | 8

The number of equivalent reference virtual machines for an application equals the maximum required for an individual resource. For example, the number of equivalent reference virtual machines for the example in Table 11 is eight, because that number will meet all the resource requirements: vCPU, memory, IOPS, and capacity, as shown in Figure 15.
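The Table 10 conversion can be sketched as follows. The function and dictionary names are illustrative, not part of the worksheet, and rounding up is used here as a conservative reading of the worksheet's rounding step:

```python
import math

# Sketch of the Table 10 relationships (names are illustrative).
# Each requirement is divided by the reference VM's value for that
# resource; the application needs the maximum across the four results.
REFERENCE_VM = {"vcpus": 1, "memory_gb": 2, "iops": 25, "capacity_gb": 100}

def equivalent_reference_vms(requirements):
    per_resource = {k: math.ceil(requirements[k] / REFERENCE_VM[k])
                    for k in REFERENCE_VM}
    return max(per_resource.values()), per_resource

# Example 2 (point-of-sale system): 4 vCPUs, 16 GB, 200 IOPS, 200 GB
total, detail = equivalent_reference_vms(
    {"vcpus": 4, "memory_gb": 16, "iops": 200, "capacity_gb": 200})
print(total)   # 8
print(detail)  # {'vcpus': 4, 'memory_gb': 8, 'iops': 8, 'capacity_gb': 2}
```

The result reproduces the worked example: eight equivalent reference virtual machines, driven by the memory and IOPS requirements.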
Figure 15. Required resource from the reference virtual machine pool

Determining the total reference virtual machines

After completing the worksheet for each application that the customer wants to migrate into the virtual infrastructure, compute the total number of reference virtual machines required in the resource pool by calculating the sum of the total reference virtual machines for all applications. In the example in Table 9, the total is 14 virtual machines.

Calculating the building block requirement

A building block defines a discrete server node size. For example, the validated reference building block node in Table 4 supports 12 reference virtual machines. The total reference virtual machine requirement calculated using the customer sizing worksheet indicates which reference architecture would be adequate for a customer's requirements. For example, if a customer requires 30 reference virtual machines of capability, four of the validated building block nodes provide sufficient resources for current needs and room for growth; this calculation includes one node reserved for high availability. Table 12 shows the example of scaling for the baseline (defined in Table 4) and redefined (defined in Table 7) building block node configurations.

Table 12. Node scaling example

Node number | Maximum number of virtual machines on baseline building block | Maximum number of virtual machines on redefined building block
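The node-count example above can be sketched as follows, assuming the validated building block of 12 reference virtual machines per node and one spare node reserved for high availability (the helper name is hypothetical):

```python
import math

# Sketch of the building block calculation: nodes for the workload plus
# one spare reserved for high availability (12 reference VMs per
# validated node, per Table 4).
def nodes_required(total_reference_vms, vms_per_node=12, ha_spare=1):
    return math.ceil(total_reference_vms / vms_per_node) + ha_spare

print(nodes_required(30))  # 4 (3 nodes for the workload + 1 HA spare)
print(nodes_required(14))  # 3 (the 14 reference VMs from the Table 9 example)
```

For 30 reference virtual machines this yields the four nodes cited in the text: three to carry the workload and one held in reserve.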
Fine-tuning hardware resources

In most cases, the customer sizing worksheet suggests a reference architecture adequate for the customer's needs. In other cases, you might want to customize the hardware resources further. A complete description of the system architecture is beyond the scope of this guide.

Storage resources

In some applications, there is a need to separate some storage workloads from others. The configuration for the reference architectures places all of the virtual machines in a single ScaleIO volume. With ScaleIO's scale-out storage architecture, all disks contribute to this volume and data is distributed across all disks in the volume. The ScaleIO software tunes the storage resource automatically.

Server resources

For the server resources in the solution, it is possible to customize the hardware resources more effectively. To do this, first summarize the resource requirements for the server components, as shown in Table 13. In the Server resource component totals line at the bottom of the worksheet, total the server resource requirements for the applications.

Note: Calculate the sum of the Resource requirements row for each application, not the Equivalent reference virtual machines row.

Table 13. Server resource component totals

Application | CPU (vCPUs) | Memory (GB) | IOPS | Capacity (GB) | Reference virtual machines
Example 1: Custom-built application
Example 2: Point-of-sale system
Example 3: Web server
Example 4: Decision support database
Total equivalent reference virtual machines: 66
Server resource component totals

In the example, the target architecture requires 17 virtual CPUs and 155 GB of memory.
Assuming four virtual machines per physical processor core and no
requirement for memory over-provisioning, the architecture requires five physical processor cores and 155 GB of memory. With these numbers, the solution can be implemented effectively with fewer server and storage resources.

Notes:

- When customizing resources in this way, confirm that storage sizing is still appropriate.
- Consider high-availability requirements when customizing the resource pool hardware.

Summary

The requirements stated in the solution are what EMC considers the minimum set of resources to handle the workloads based on the stated definition of a reference virtual server. In any customer implementation, the load of a system varies over time as users interact with the system. If the customer virtual servers differ significantly from the reference definition and vary within the same resource group, you might need to add more of that resource to the system.
Chapter 4 VSPEX Solution Implementation

This chapter presents the following topics:

- Overview
- Network implementation
- Installing and configuring the Microsoft Hyper-V hosts
- Installing and configuring Microsoft SQL Server databases
- Deploying the System Center Virtual Machine Manager server
- Preparing and configuring the storage
- Provisioning a virtual machine
- Summary
Overview

This chapter presents the required steps to deploy and configure the various aspects of the VSPEX solution using the ScaleIO software bundle, which includes both the physical and logical components. Table 14 lists the main stages in the solution deployment process. The table also includes references to the sections that describe the related procedures. After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure.

Table 14. Deployment process overview

Stage | Description | Reference
1 | Verify the deployment prerequisites. |
2 | Obtain the deployment tools. |
3 | Gather customer configuration data. |
4 | Rack and cable the components. | Refer to the vendor documentation.
5 | Configure the solution networks and connect to the customer network. | Network implementation
6 | Configure virtual machine storage. | How to Deploy a Virtual Machine
7 | Install and configure the servers. | Installing and configuring the Microsoft Hyper-V hosts
8 | Set up Microsoft SQL Server (used by SCVMM). | Installing and configuring Microsoft SQL Server databases
9 | Install and configure SCVMM. | Deploying the System Center Virtual Machine Manager server
10 | Configure the ScaleIO environment. | Preparing and configuring the storage

Network implementation

This section describes the requirements for preparing the network infrastructure needed to support this solution. Table 15 summarizes the tasks to be completed and provides references for further information.

Table 15. Tasks for switch and network configuration

Task | Description | Reference
Prepare network switches | Prepare the network switching. | Preparing the network switches
  Task: Configure the infrastructure network. Description: Configure the infrastructure networking. Reference: Installing and configuring the Microsoft Hyper-V hosts.
  Task: Configure the VLANs. Description: Configure private and public VLANs as required. Reference: Configuring the VLANs; refer to the switch configuration guide for your vendor.
  Task: Complete the network cabling. Description: Connect the network interconnect ports; connect the Hyper-V server ports. Reference: Completing the network cabling.

Preparing the network switches

For validated levels of performance and high availability, this solution requires the switching capacity listed in Table 13. There is no need for new hardware if the existing infrastructure meets the requirements.

Configuring the infrastructure network

The infrastructure network requires redundant network links for each Windows host, for the network switch interconnect ports, and for the network switch uplink ports. This configuration provides both redundancy and additional network bandwidth, and is required regardless of whether the network infrastructure already exists or is being deployed with other components of the solution. Ensure that there are adequate network switch ports for the Windows hosts.

Configuring the VLANs

EMC recommends that you configure the Windows hosts with three VLANs:
  Client access network: virtual machine networking (customer-facing networks, which can be separated if needed)
  Storage network: ScaleIO data networking (private network)
  Management network: live migration networking (private network)

Completing the network cabling

Ensure that all solution servers, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure that there is a complete connection to the existing customer network.

Note: At this point, the new equipment is connected to the existing customer network.
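The redundancy rule in the cabling guidance above (every network gets at least two links spread across separate switching infrastructures) can be sanity-checked with a small script. This is an illustrative sketch only, not part of the solution; the host, switch, and NIC names are hypothetical.

```python
# Illustrative check: each required network on a host must have redundant
# links landing on at least two separate switches. Names are hypothetical.
REQUIRED_NETWORKS = {"client_access", "storage", "management"}

def is_redundant(links):
    """Redundant = at least two links spread across at least two switches."""
    return len(links) >= 2 and len({switch for switch, _nic in links}) >= 2

def missing_redundancy(host_links):
    """Return the required networks that are not redundantly cabled."""
    return {net for net in REQUIRED_NETWORKS
            if not is_redundant(host_links.get(net, []))}

hyperv_host1 = {
    "client_access": [("switch-a", "nic1"), ("switch-b", "nic2")],
    "storage":       [("switch-a", "nic3"), ("switch-b", "nic4")],
    "management":    [("switch-a", "nic5")],  # single link: flagged below
}
```

Running `missing_redundancy(hyperv_host1)` flags the management network, which has only one link in this example.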
Ensure that unexpected interactions do not cause service issues on the customer network.

Installing and configuring the Microsoft Hyper-V hosts

This section provides information about installing and configuring the Windows hosts and infrastructure servers required to support the architecture. Table 16 outlines the tasks to be completed.
Table 16. Tasks for server installation
  Task: Install the Windows hosts. Description: Install Windows Server 2012 R2 on the physical servers for the solution. Reference: technet.microsoft.com.
  Task: Install Hyper-V and configure Failover Clustering. Description: Add the Hyper-V server role, add the Failover Clustering feature, and create and configure the Hyper-V cluster. Reference: technet.microsoft.com.
  Task: Configure Windows Hyper-V networking. Description: Configure Windows host networking, including network interface card (NIC) teaming and the virtual switch network. Reference: technet.microsoft.com.
  Task: Plan the virtual machine memory allocations. Description: Ensure that Windows Hyper-V guest memory-management features are configured properly for the environment. Reference: technet.microsoft.com.

Installing and configuring Microsoft SQL Server databases

Overview

Although it is not required, most customers use a management tool to provision and manage their server virtualization solution. The management tool requires a database back end. SCVMM uses SQL Server 2012 as the database platform.

Note: Do not use Microsoft SQL Server Express edition for this solution.

Table 17 lists the tasks for installing and configuring a SQL Server database for the solution. The subsequent sections describe these tasks.

Table 17. Tasks for SQL Server database setup
  Task: Create a virtual machine for SQL Server. Description: Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements.
  Task: Install Microsoft Windows on the virtual machine. Description: Install Microsoft Windows Server 2012 R2 on the virtual machine created to host SQL Server.
  Task: Install Microsoft SQL Server. Description: Install Microsoft SQL Server on the designated virtual machine.
  Task: Configure SQL Server for SCVMM. Description: Configure a remote SQL Server instance for SCVMM.

Deploying the System Center Virtual Machine Manager server

Overview

This section provides information about configuring SCVMM for the solution. Table 18 outlines the tasks to be completed.

Table 18. Tasks for SCVMM configuration
  Task: Create the SCVMM host virtual machine. Description: Create a virtual machine for the SCVMM server. Reference: Create a virtual machine.
  Task: Install the SCVMM guest OS. Description: Install Windows Server 2012 R2 Datacenter Edition on the SCVMM host virtual machine. Reference: Install the guest operating system.
  Task: Install the SCVMM server. Description: Install an SCVMM server. References: How to Install a VMM Management Server; Installing the VMM Server.
  Task: Install the SCVMM Admin Console. Description: Install an SCVMM Admin Console. References: How to Install the VMM Console; Installing the VMM Administrator Console.
  Task: Install the SCVMM agent locally on the hosts. Description: Install an SCVMM agent locally on the hosts that SCVMM manages. Reference: Installing a VMM Agent Locally on a Host.
  Task: Add the Hyper-V cluster to SCVMM. Description: Add the Hyper-V cluster to SCVMM. Reference: How to Add a Host Cluster to VMM.
  Task: Create a virtual machine in SCVMM. Description: Create a virtual machine in SCVMM. References: Creating and Deploying Virtual Machines in VMM; How to Create a Virtual Machine with a Blank Virtual Hard Disk.
  Task: Perform partition alignment. Description: Use diskpart.exe to perform partition alignment, assign drive letters, and set the file allocation unit size of the virtual machine's disk drive. Reference: Disk Partition Alignment Best Practices for SQL Server.
  Task: Create a template virtual machine. Description: Create a template virtual machine from the existing virtual machine. Create the hardware profile and guest OS profile at this time. References: How to Create a Virtual Machine Template; How to Create a Template from a Virtual Machine.
  Task: Deploy virtual machines from the template virtual machine. Description: Deploy the virtual machines from the template virtual machine. References: How to Create and Deploy a Virtual Machine from a Template; How to Deploy a Virtual Machine.

Preparing and configuring the storage

This section describes how to install and configure ScaleIO on physical nodes in a Windows Hyper-V environment. Table 19 outlines the tasks to be completed.

Table 19. Tasks for setting up and configuring a ScaleIO environment
  Task: Prepare the ScaleIO nodes. Description: Install the prerequisites and configure the disks on the ScaleIO nodes.
  Task: Prepare the installation spreadsheet. Description: Populate the ScaleIO installation spreadsheet with the configuration and topology information for the ScaleIO environment and save it as a comma-separated values (CSV) file.
  Task: Install the ScaleIO components. Description: Set up the Installation Manager server; install and configure the ScaleIO components. Reference: EMC ScaleIO User Guide.
  Task: Create and map volumes. Description: Create volumes with the required capacity via the CLI. Map the volumes to the specific SDCs for the application. Reference: EMC ScaleIO User Guide.
  Task: Create the CSV disk. Description: Scan the ScaleIO LUN from the Windows hosts and convert the disks to the Cluster Shared Volumes (CSV) file system. Reference: Use Cluster Shared Volumes in a Failover Cluster.
  Task: Install the GUI. Description: Install the ScaleIO GUI to manage the system. Reference: EMC ScaleIO User Guide.

Prepare the ScaleIO nodes

Before installing the ScaleIO components, install the Python modules and configure the disks on the ScaleIO nodes, as described below.

To install the Python modules on the ScaleIO MDM nodes:
1. Download PythonModulesInstall.exe from the EMC Online Support site.
2. Run PythonModulesInstall.exe on each MDM node.

To configure the disks on the ScaleIO nodes:
1. Open Computer Management > Disk Management. Select a disk to be used for ScaleIO storage and bring it online.
2. Right-click the disk and select New Simple Volume to start the wizard.
3. Assign a drive letter to the disk.
4. On the Format Partition page, shown in Figure 16, select Do not format this volume.

Figure 16. Disk format partition option

5. Repeat steps 2-4 for all the disks to be used for ScaleIO storage.
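The disk-preparation steps above use the Disk Management GUI; the same work can be scripted. The sketch below generates a plausible diskpart script that brings each data disk online and creates a partition with a drive letter while deliberately issuing no `format` command, so the volume stays raw for ScaleIO. The disk numbers and drive letters are examples, and the exact diskpart syntax should be verified against Microsoft documentation before use.

```python
# Hypothetical helper mirroring steps 1-4 above as a diskpart script.
def scaleio_diskpart_script(disks):
    """disks: iterable of (disk_number, drive_letter) pairs."""
    lines = []
    for number, letter in disks:
        lines += [
            f"select disk {number}",
            "online disk",                    # step 1: bring the disk online
            "attributes disk clear readonly",
            "create partition primary",       # steps 2-3: new simple volume
            f"assign letter={letter}",
            # step 4: no "format" command - the volume is left unformatted
        ]
    return "\n".join(lines)

script = scaleio_diskpart_script([(1, "E"), (2, "F")])
```

Save the output to a text file and run it with `diskpart /s <file>` on each node, repeating for all disks to be used for ScaleIO storage.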
Preparing the installation worksheet

ScaleIO Installation Manager uses a CSV file to install and configure the ScaleIO components. The CSV file contains the configuration and topology information that Installation Manager uses to set up and configure all ScaleIO nodes.

Notes:
  Use a combination of a CSV file and Installation Manager to add servers after the initial installation.
  Use the CSV file to remove installed components.

To create the installation CSV file, populate a spreadsheet with all of the required configuration information and save the spreadsheet in CSV format. Installation Manager prompts you to upload the installation CSV file during installation. If you have not created the CSV file, you can download one of the following spreadsheet templates during installation and use it to create the CSV file:
  Complete: Contains all available fields, both required and optional.
  Minimal: Contains only the mandatory fields. Installation Manager assigns default values to the optional fields when you use this spreadsheet template.

Table 20 describes both the mandatory and optional fields.

Table 20. CSV installation spreadsheet
  Domain: If using a domain user, the name of the domain.
  Username: The name of the domain user.
  IP: IP address of the physical node. (Required)
  Password: Root password. (Required)
  Operating System: The server's OS: Windows. (Required)
  Is MDM/TB: Primary, Secondary, TB, or blank. (Required)
  MDM Mgmt IP: The IP for the management-only network.
  Is SDS: Yes or No, depending on whether SDS should be installed on the node.
  SDS Name: The name for the SDS node.
  SDS All IPs: The SDS IP addresses to be used for communication among all ScaleIO nodes. Comma-separated, no spaces.
  SDS-SDS Only IPs: The SDS IP addresses to be used for communication among ScaleIO SDS nodes only. Comma-separated, no spaces.
  SDS-SDC Only IPs: The SDS IP addresses to be used for communication among ScaleIO SDS and SDC nodes only.
  Comma-separated, no spaces.
  Protection Domain: The Protection Domain to which to assign this SDS. (Required)
  Fault Set: The Fault Set to which to assign this SDS.
  SDS Device List: The devices to add to the SDS. Comma-separated, no spaces. (Required)
  SDS Pool List: The Storage Pool to which to assign this SDS. (Required)
  Optimize IOPS: Optimize SDS parameters when using fast devices, such as SSDs. Yes or No.
  Is SDC: Yes or No, depending on whether SDC should be installed on the node.

Installing the ScaleIO components

You can use the ScaleIO CLI or ScaleIO Installation Manager to install and configure the ScaleIO components. This section describes the installation procedure using Installation Manager via the web client. For information on using the CLI to install the ScaleIO components, refer to the EMC ScaleIO User Guide.

To install and configure the ScaleIO components using Installation Manager:
1. Prepare the Installation Manager server.
2. Log in to the Installation Manager server.
3. Upload the installation packages.
4. Upload the installation CSV file.
5. Configure credentials, syslog, and Call Home.
6. Complete the install and configuration phases.

Preparing the Installation Manager server

To prepare the Installation Manager server:
1. Copy the appropriate gateway MSI file to the Installation Manager server:
   32-bit: EMC-ScaleIO-gateway-1.32-xxx.x-x86.msi
   64-bit: EMC-ScaleIO-gateway-1.32-xxx.x-x64.msi
2. Run the MSI file.
3. Enter a new IM_PASSWORD for accessing Installation Manager.

Logging in to the Installation Manager server

To log in to Installation Manager:
1. Log in to https://<IM_Server_URL>, where <IM_Server_URL> is the URL of the server on which you installed the Installation Manager package.
2. Accept the certificate warning.
3. Type the default username (admin) and the password, and click Login. The Home page appears.

Figure 17. Installation Manager Home page

Uploading the installation packages

To upload the installation packages:
1. Click Packages in Installation Manager. You might need to reauthenticate with the login credentials. The Manage Installation Packages page appears, as shown in Figure 18.

Figure 18. Manage installation packages

2. Browse to the location of the ScaleIO packages, select the files, and click Open.
3. Click Upload. The uploaded installation packages appear in the file table, as shown in Figure 19.
Figure 19. Upload installation packages

4. Click Proceed to Install to proceed to the Install page.

Uploading the installation CSV file

If you have not created the CSV file, use the Minimal or Complete option to download a template and create the CSV file at this time.

To upload the installation CSV file:
1. Under Upload Installation CSV, shown in Figure 20, browse to the location of the installation CSV file, select the file, and click Open.
2. Click Upload Installation CSV.

Figure 20. Upload CSV file

When the CSV file is uploaded successfully, Installation Manager displays the installation configuration for review, as shown in Figure 21.
Figure 21. Installation configuration

Configuring credentials, syslog, and Call Home

Note: If you do not configure syslog reporting and the Call Home feature during installation, you can configure them later using the CLI.

To complete the installation configuration:
1. Type the MDM password and confirm it. The MDM password is used to log in to the MDM server. The password must meet the following criteria:
   Between six and 31 characters
   Includes characters from at least three of the following groups: [a-z], [A-Z], [0-9], special characters
   No blank spaces
2. Enter the Lightweight Installation Agent (LIA) password and confirm it. The LIA password is used to authenticate communication between Installation Manager and the LIA. The password must meet the same criteria as the MDM password.
3. To configure syslog reporting, select the option that configures the MDM to send syslog events, and specify the following parameters:
   Syslog Server: Host name of the syslog server to which the messages are to be sent.
   Port: Port of the syslog server (default: 1468).
   Syslog Facility: Facility level (default: Local0).
4. To configure Call Home, select Configure call home, and specify the following parameters:
   SMTP Server: SMTP server that will send the Call Home messages.
   SMTP Credentials: SMTP credentials, if required.
   MDM Credentials: MDM credentials for a new user, with a monitor role, created for the purpose of Call Home functions.
   From: Sender address.
   To: Destination address.
   Customer name: Name of the customer.
   Severity: Minimum event severity for Call Home messages.
5. Review the configuration information.

Completing the install and configuration phases

Installation Manager's install process performs three phases: upload, install, and configure. Start each phase by clicking the start phase option on the Monitor page.

1. Click Start Installation.
2. Click Monitor to follow the progress of the current phase. Figure 22 shows the status of the upload phase during the installation process for this solution.

Figure 22. Monitor page

3. When the upload phase is complete, click Start install phase to continue to the install phase.
4. When all install commands are completed, click Start configure phase to continue to the configure phase.

Note: If you get an error message during the install process, you can abort or retry the install.

5. When all processes are finished, the message shown in Figure 23 appears.
Figure 23. Completed install operation

6. Click Mark operation completed. At this stage, the ScaleIO components are installed and running.

Creating and mapping volumes

SDCs expose volumes as local storage devices to the application servers. This section describes how to create volumes and map them to SDCs via the CLI:
  Use the add_volume command to create the volumes.
  Use the map_volume_to_sdc command to map the volumes to specific SDCs.
  Use the drv_cfg --rescan command to scan for the most up-to-date status on a particular SDC node.

CLI basics

The CLI is the main management tool of the ScaleIO system. You use CLI commands to configure, maintain, and monitor the system. The CLI is part of the MDM component and is located in the following path in a Windows environment:

C:\Program Files\emc\scaleio\MDM\bin

All CLI commands use the following format in a Windows environment:

scli [--mdm_ip <IP>] <command>

The --mdm_ip parameter indicates the MDM that receives and executes the command. In a non-clustered environment, use the MDM IP. In a clustered environment, use the IP addresses of the primary and secondary MDMs, as follows:

scli --mdm_ip <primary_MDM_IP>,<secondary_MDM_IP> <command>

If the command is run from the primary MDM, you can omit the --mdm_ip switch.

Notes:
  The order of the parameters and the command is not significant.
  CLI commands are lowercase and case-sensitive.
  All parameters are preceded by --.

For a list of all ScaleIO CLI commands, refer to the EMC ScaleIO User Guide.

Creating volumes

Command: add_volume

Syntax:
scli --add_volume (--protection_domain_id <ID> | --protection_domain_name <NAME>) [--storage_pool_id <ID> | --storage_pool_name <NAME>] --size_gb <SIZE> [--volume_name <NAME>] [Options] [Obfuscation Options] [Use RAM Read Cache Options]

Description

Use this command to create a volume when the requested capacity is available. Before the system can start allocating volumes, it requires at least three SDS nodes and a combined system capacity that exceeds 200 GB. The created volume cannot be used until it is mapped to at least one SDC.

Parameters

Table 21 describes the parameters of the add_volume command.

Table 21. add_volume command parameters
  --protection_domain_id <ID>: Protection Domain ID
  --protection_domain_name <NAME>: Protection Domain name
  --storage_pool_id <ID>: Storage Pool ID
  --storage_pool_name <NAME>: Storage Pool name
  --size_gb <SIZE>: Volume size, in GB (basic allocation granularity is 8 GB)
  --volume_name <NAME>: Name to be associated with the added volume

Options (choose one):
  --thin_provisioned: The specified volume is thin provisioned
  --thick_provisioned: The specified volume is thick provisioned (default)

Obfuscation options (choose one):
  --use_obfuscation: Enable data obfuscation for this volume (default)
  --dont_use_obfuscation: Disable data obfuscation for this volume. This overrides the global obfuscation default.

Use RAM Read Cache options (choose one):
  --use_rmcache: Use RAM Read Cache for devices in the Storage Pool
  --dont_use_rmcache: Do not use RAM Read Cache for devices in the Storage Pool
  <blank>: Use the default (use_rmcache)

Example

scli --mdm_ip <MDM_IP> --add_volume --size_gb <SIZE> --volume_name vol_1 --protection_domain_name rack_
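The preconditions in the description above (at least three SDS nodes, more than 200 GB combined capacity, 8 GB allocation granularity) are easy to check before issuing add_volume. The sketch below is illustrative only; the guide does not state whether scli rounds a non-multiple size up or rejects it, so this sketch assumes rounding up.

```python
# Sketch of the add_volume preconditions and sizing rule stated above.
GRANULARITY_GB = 8

def allocated_size_gb(requested_gb):
    """Round a requested size up to the next 8 GB multiple (assumption)."""
    return -(-requested_gb // GRANULARITY_GB) * GRANULARITY_GB

def can_allocate(sds_nodes, combined_capacity_gb):
    """At least three SDS nodes and more than 200 GB combined capacity."""
    return sds_nodes >= 3 and combined_capacity_gb > 200
```

For example, a 10 GB request occupies a full 16 GB of allocated capacity under this reading, so sizing plans should budget in 8 GB steps.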
Mapping a volume to an SDC

Command: map_volume_to_sdc

Syntax:

scli --map_volume_to_sdc (--volume_id <ID> | --volume_name <NAME>) (--sdc_id <ID> | --sdc_name <NAME> | --sdc_ip <IP>) [--allow_multi_map]

Description

This command exposes the volume to the specified SDC, effectively creating a block device on the SDC.

Parameters

Table 22 describes the parameters of the map_volume_to_sdc command.

Table 22. map_volume_to_sdc command parameters
  --volume_id <ID>: Volume ID
  --volume_name <NAME>: Volume name
  --sdc_id <ID>: SDC ID
  --sdc_name <NAME>: SDC name
  --sdc_ip <IP>: SDC IP address
  --allow_multi_map: Allow this volume to be mapped to more than one SDC

Example

scli --mdm_ip <MDM_IP> --map_volume_to_sdc --volume_name vol_1 --sdc_ip <SDC_IP>

Detecting new volumes

Command: drv_cfg --rescan

Syntax:

/opt/emc/scaleio/sdc/bin/drv_cfg --rescan

Description

Volumes are always exposed to the OS as devices. ScaleIO periodically scans the system to detect new volumes. You can initiate a scan to get the most up-to-date status on a particular SDC node. This command is not a CLI command, but rather an executable that is run on the specific SDC.
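When scripting several of the scli calls shown above, a small command builder keeps the `scli [--mdm_ip <IP>] <command>` format and the `--parameter value` convention consistent. This is a hypothetical convenience wrapper, not part of ScaleIO; it only assembles the command string, and running it requires a real MDM.

```python
# Hypothetical wrapper around the CLI format shown under "CLI basics":
# scli [--mdm_ip <IP>] <command>, with every parameter preceded by "--".
def build_scli(command, mdm_ip=None, **params):
    parts = ["scli"]
    if mdm_ip is not None:          # may be omitted when run on the primary MDM
        parts += ["--mdm_ip", mdm_ip]
    parts.append(f"--{command}")
    for name, value in params.items():
        parts += [f"--{name}", str(value)]
    return " ".join(parts)

cmd = build_scli("map_volume_to_sdc", mdm_ip="<MDM_IP>",
                 volume_name="vol_1", sdc_ip="<SDC_IP>")
```

The resulting string matches the mapping example above and can be passed to a process runner on the MDM node.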
Installing the GUI

You can install the ScaleIO GUI on a Windows or Linux workstation. To install the GUI, run the command for the operating system that you use:
  Windows: EMC-ScaleIO-gui-1.32-xxx.x.msi
  RHEL: rpm -U scaleio-gui-1.32-xxx.x.noarch.rpm
  Debian: sudo dpkg -i scaleio-gui-1.32-xxx.x.deb

Provisioning a virtual machine

To provision virtual machines in SCVMM:
1. Create a virtual machine in SCVMM to use as a virtual machine template:
   a. Install the virtual machine.
   b. Install the software.
   c. Change the Windows and application settings.
   Refer to the Microsoft TechNet Library topic How to Deploy a Virtual Machine for details.
2. Perform disk partition alignment on virtual machines that run operating systems earlier than Windows Server 2008. Align the disk drive with an offset of 1,024 KB, and format the disk drive with a file allocation unit (cluster) size of 8 KB. Use diskpart.exe to perform the partition alignment, assign drive letters, and set the file allocation unit size. Refer to the Microsoft TechNet Library topic Disk Partition Alignment Best Practices for SQL Server for details.
3. Convert the virtual machine into a template. Create a customization specification when creating the template. Refer to the Microsoft TechNet Library topic How to Create a Template from a Virtual Machine for details.
4. Deploy virtual machines from the template virtual machine and the customization specification. Refer to the Microsoft TechNet Library topic How to Deploy a Virtual Machine for details.

Summary

After performing these steps, the VSPEX solution is fully functional.
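The alignment rule in step 2 of the provisioning procedure above reduces to simple arithmetic: a partition is aligned when its starting byte offset is an exact multiple of the boundary. The sketch below illustrates this with the 1,024 KB offset recommended above; the legacy 63-sector (31.5 KB) offset used by older Windows releases is an assumed example of a misaligned layout.

```python
# Step 2 arithmetic, sketched: alignment means the partition's starting
# byte offset divides evenly by the 1,024 KB boundary.
ALIGN_KB = 1024     # recommended partition offset above
CLUSTER_KB = 8      # recommended file allocation unit size above

def is_aligned(offset_bytes, boundary_kb=ALIGN_KB):
    return offset_bytes % (boundary_kb * 1024) == 0
```

For instance, an offset of 1,048,576 bytes (1,024 KB) is aligned, while the legacy 32,256-byte offset is not, which is why diskpart is used to set the offset explicitly on older guests.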
Chapter 5    Verifying the Solution

This chapter presents the following topics:

Overview
Post-install checklist
Deploying and testing a single virtual server
Verifying the redundancy of the solution components
Overview

After you configure the solution, complete the tasks in Table 23 to verify the configuration and functionality of specific aspects of the solution and to ensure that the configuration supports the core availability requirements.

Table 23. Tasks for testing the installation

Task: Post-install checks
Description:
  Verify that sufficient virtual ports exist on each Hyper-V host virtual switch.
  Verify that the VLAN for virtual machine networking is configured correctly on each Hyper-V host.
  Verify that each Hyper-V host has access to the required Cluster Shared Volumes.
  Verify that ScaleIO networking is configured correctly.
  Verify that the live migration interfaces are configured correctly on all Hyper-V hosts.
References: Hyper-V: How many network cards do I need?; Network Recommendations for a Hyper-V Cluster in Windows Server 2012; Hyper-V: Using Hyper-V and Failover Clustering; EMC ScaleIO User Guide; Virtual Machine Live Migration Overview; Post-install checklist

Task: Deploy and test a single virtual server
Description: Deploy a single virtual machine to verify that the solution functions as expected.
References: Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager; Deploying and testing a single virtual server

Task: Verify the redundancy of the solution components
Description:
  Verify data protection of the ScaleIO system.
  Verify the redundancy of the network switches (refer to the vendor documentation).
  On a Hyper-V host that contains at least one virtual machine, verify that the virtual machine can successfully migrate to an alternate host.
References: EMC ScaleIO User Guide; Creating a Hyper-V Host Cluster in VMM; Verifying the redundancy of the solution components
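The verification tasks in Table 23 lend themselves to a simple pass/fail harness that records each check and reports anything blocking production. This is an illustrative sketch only; the check names below paraphrase the table and are not an EMC tool.

```python
# Illustrative harness: record each verification check as a boolean and
# report the failures. Check names are hypothetical paraphrases of Table 23.
def failed_checks(results):
    return [name for name, passed in results.items() if not passed]

results = {
    "VM networking VLAN configured on each Hyper-V host": True,
    "ScaleIO networking configured": True,
    "Cluster Shared Volumes reachable from every host": False,  # example failure
    "Live migration interfaces configured": True,
}
```

Running `failed_checks(results)` on this example surfaces the unreachable Cluster Shared Volumes before the solution is handed over.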
Post-install checklist

The following configuration items are critical to the functionality of the solution. On each Windows server, verify the following items before deploying to production:
  The VLAN for virtual machine networking is configured correctly.
  ScaleIO networking is configured correctly.
  The server can access the required Cluster Shared Volumes.
  A network interface is configured correctly for live migration.

Deploying and testing a single virtual server

To verify that the solution functions as expected, deploy a single virtual machine from the SCVMM interface. Verify that the virtual machine joins the applicable domain, has access to the expected networks, and that it is possible to log in to it.

Verifying the redundancy of the solution components

To ensure that the various components of the solution maintain availability requirements, test the following maintenance and hardware failure scenarios:
  Verify the data protection of the ScaleIO system, as follows:
    a. Power off one ScaleIO node.
    b. Verify that ScaleIO LUN connectivity is maintained.
    c. Verify that the data rebuild process is running properly.
  Disable each of the redundant switches in turn and verify that the Hyper-V host virtual machine remains intact.
  On a Hyper-V host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.
Chapter 6    System Monitoring

This chapter presents the following topics:

Overview
Key areas to monitor
Overview

System monitoring of a VSPEX environment is no different from monitoring any core IT system; it is a relevant and essential component of administration. The monitoring levels involved in a highly virtualized infrastructure, such as a VSPEX environment, are more complex than in a purely physical infrastructure, because the interactions and interrelationships between the various components can be subtle and nuanced. However, those experienced in administering virtualized environments should be familiar with the key concepts and focus areas. The key differentiators are monitoring at scale and the ability to monitor end-to-end systems and workflows.

Several business needs require proactive, consistent monitoring of the environment:
  Stable, predictable performance
  Sizing and capacity needs
  Availability and accessibility
  Elasticity: the dynamic addition, removal, and modification of workloads
  Data protection

If self-service provisioning is enabled in the environment, the ability to monitor the system is even more critical, because clients can generate virtual machines and workloads dynamically, which can adversely affect the entire system.

This chapter provides the basic knowledge necessary to monitor the key components of a VSPEX Proven Infrastructure environment.

Key areas to monitor

VSPEX Proven Infrastructures provide end-to-end solutions and require system monitoring of three discrete, but highly interrelated, areas. The following components comprise the critical areas that affect overall system performance:
  Servers (both virtual machines and clusters)
  Networking
  ScaleIO layer

This chapter focuses primarily on monitoring key components of the ScaleIO infrastructure, but briefly describes the other components as well.

Performance baseline

When a workload is added to a VSPEX deployment, server and networking resources are consumed.
As more workloads are added, modified, or removed, resource availability and capabilities change, which impacts all other workloads running on the platform. Customers should fully understand the workload characteristics on all key components before deploying them on a VSPEX platform; this is required to correctly size resource utilization against the defined reference virtual machine.

Deploy the first workload, and then measure the end-to-end resource consumption and the platform performance. This removes the guesswork from sizing activities and
ensures that initial assumptions were valid. As more workloads are deployed, re-evaluate resource consumption and performance levels to determine the cumulative load and the impact on existing virtual machines and their application workloads. Adjust resource allocation accordingly to ensure that any oversubscription does not negatively affect overall system performance. Run these assessments consistently to ensure that the platform as a whole, and the virtual machines themselves, operate as expected.

Servers

The key server resources to monitor include:
  Processors
  Memory
  Local disk
  Networking

Monitor these areas both at the physical host (hypervisor) level and at the virtual level (from within the guest virtual machine). For a VSPEX deployment with Microsoft Hyper-V, you can use Windows perfmon to monitor and log these metrics. Follow your vendors' guidance to determine performance thresholds for specific deployment scenarios, which can vary greatly depending on the application. For detailed information about perfmon, refer to the Microsoft TechNet Library topic Using Performance Monitor. Keep in mind that each VSPEX Proven Infrastructure provides a guaranteed level of performance based on the number of reference virtual machines deployed and their defined workload.

Networking

Ensure that there is adequate bandwidth for networking communications, and monitor network loads at the server and virtual machine level. Windows perfmon provides sufficient metrics to analyze flows into and out of the servers and guests. Key items to track include aggregate throughput or bandwidth, latencies, and IOPS size. Capture additional data from network card or HBA utilities.

ScaleIO layer

Monitoring the ScaleIO layer of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. The ScaleIO GUI enables you to review the overall status of the system, drill down to the component level, and monitor the components.
The various screens display different views and data that are beneficial to the storage administrator. The key screens to focus on include:
  Dashboard screen
  Protection Domains screen
  Protection Domain Servers screen
  Storage Pools screen

The ScaleIO GUI provides an easy-to-use yet powerful means to gain insight into how the underlying ScaleIO components are operating. The EMC ScaleIO User Guide, available on EMC Online Support, provides detailed information on using the GUI to monitor the ScaleIO layer.
Appendix A    Reference Documentation

This appendix presents the following topics:

EMC documentation
Other documentation
EMC documentation

The following documents, available on EMC Online Support, provide additional and relevant information. If you do not have access to a document, contact your EMC representative.
  EMC Host Connectivity Guide for Windows
  EMC ScaleIO User Guide

Other documentation

The following documents, located on the Microsoft website, provide additional and relevant information:
  Adding Hyper-V Hosts and Host Clusters, and Scale-Out File Servers to VMM
  Configuring a Remote Instance of SQL Server for VMM
  Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager (video)
  Hardware and Software Requirements for Installing SQL Server 2012
  Hyper-V: How many network cards do I need?
  How to Add a Host Cluster to VMM
  How to Create a Virtual Machine Template
  How to Create a Virtual Machine with a Blank Virtual Hard Disk
  How to Deploy a Virtual Machine
  How to Install a VMM Management Server
  Hyper-V: Using Hyper-V and Failover Clustering
  Install SQL Server 2012
  Installing a VMM Agent Locally on a Host
  Installing the VMM Administrator Console
  Installing the VMM Server
  Installing Virtual Machine Manager
  Install and Deploy Windows Server 2012 R2 and Windows Server 2012
  Use Cluster Shared Volumes in a Failover Cluster
  Virtual Machine Live Migration Overview
Appendix B Customer Configuration Worksheet

This appendix presents the following topic:

- Customer configuration worksheet
Customer configuration worksheet

Before you start the configuration, gather some customer-specific network and host configuration information. The following tables provide essential information for assembling the required network and for recording host address, numbering, and naming details. This worksheet can also be used as a leave-behind document for future reference.

To confirm the customer information, cross-reference it with the relevant array configuration worksheet: the VNX Block Configuration Worksheet or the VNX Installation Assistant for File/Unified Worksheet.

Table 24. Common server information

Server name | Purpose                               | Primary IP address
            | Domain Controller                     |
            | DNS Primary                           |
            | DNS Secondary                         |
            | DHCP                                  |
            | NTP                                   |
            | SMTP                                  |
            | SNMP                                  |
            | System Center Virtual Machine Manager |
            | SQL Server                            |

Table 25. Hyper-V server information

Server name | Purpose        | Primary IP address | Private net (storage) addresses
            | Hyper-V Host 1 |                    |
            | Hyper-V Host 2 |                    |

Table 26. ScaleIO information

Field             | Value
Array name        |
Admin account     |
Management IP     |
Storage pool name |
Datastore name    |

Table 27. Network infrastructure information

Name | Purpose           | IP address | Subnet mask | Default gateway
     | Ethernet switch 1 |            |             |
     | Ethernet switch 2 |            |             |

Table 28. VLAN information

Name | Network purpose       | VLAN ID | Allowed subnets
     | Client access network |         |
     | Storage network       |         |
     | Management network    |         |

Table 29. Service accounts

Account | Purpose                            | Password (optional; secure appropriately)
        | Windows Server administrator       |
        | Installation Manager administrator |
        | SCVMM administrator                |
        | SQL Server administrator           |

Printing the worksheet

A standalone copy of the customer configuration worksheet is attached to this document in Microsoft Office Word format. To view and print the worksheet:

1. In Adobe Reader, open the Attachments panel in one of these ways:
   - Select View > Show/Hide > Navigation Panes > Attachments.
   - Click the Attachments icon, as shown in Figure 24.

Figure 24. Opening attachments in a PDF file
2. Under Attachments, double-click the worksheet file.
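Worksheet entries like these are easy to mistype, so a quick programmatic sanity check before deployment can catch problems early. The sketch below is a hedged illustration, not part of the VSPEX procedure: the addresses and helper names are hypothetical. It verifies that a host's default gateway sits inside the subnet implied by its IP address and mask, and that a VLAN ID is a valid 802.1Q value.

```python
# Sanity checks for customer configuration worksheet entries.
# Addresses below are hypothetical examples, not recommended values.
import ipaddress

def gateway_in_subnet(ip, subnet_mask, gateway):
    """True if the gateway falls inside the subnet implied by ip/mask."""
    network = ipaddress.ip_network(f"{ip}/{subnet_mask}", strict=False)
    return ipaddress.ip_address(gateway) in network

def valid_vlan_id(vlan_id):
    """802.1Q VLAN IDs range from 1 to 4094 (0 and 4095 are reserved)."""
    return 1 <= vlan_id <= 4094

print(gateway_in_subnet("192.168.10.20", "255.255.255.0", "192.168.10.1"))  # True
print(valid_vlan_id(4095))  # False
```

Running such checks against Tables 27 and 28 before racking and cabling keeps configuration errors out of the deployment window.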
Appendix C Customer Sizing Worksheet

This appendix presents the following topic:

- Customer sizing worksheet for Private Cloud
Customer sizing worksheet for Private Cloud

Before selecting a reference architecture on which to base a customer solution, use the customer sizing worksheet to gather information about the customer's business requirements and to calculate the required resources. Table 30 shows a blank worksheet. A standalone copy of the worksheet is attached to this document in Microsoft Office Word format.

Table 30. Customer sizing worksheet

Application                                 | CPU (vCPUs) | Memory (GB) | IOPS | Capacity (GB) | Reference virtual machines
Resource requirements                       |             |             |      |               |
Equivalent reference virtual machines       |             |             |      |               |
Resource requirements                       |             |             |      |               |
Equivalent reference virtual machines       |             |             |      |               |
Resource requirements                       |             |             |      |               |
Equivalent reference virtual machines       |             |             |      |               |
Total equivalent reference virtual machines |             |             |      |               |

To view and print the worksheet attachment:

1. In Adobe Reader, open the Attachments panel in one of these ways:
   - Select View > Show/Hide > Navigation Panes > Attachments.
   - Click the Attachments icon, as shown in Figure 25.

Figure 25. Opening attachments in a PDF file

2. Under Attachments, double-click the worksheet file.
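The "Equivalent reference virtual machines" rows in Table 30 come from comparing each application's requirements against the solution's reference workload. The sketch below illustrates that arithmetic under assumed reference-VM characteristics (1 vCPU, 2 GB memory, 25 IOPS, 100 GB capacity); these values are illustrative only, so substitute the reference workload this solution actually defines.

```python
# Illustrative sizing arithmetic for Table 30. The REFERENCE_VM values
# are assumptions for this sketch; use the reference workload defined
# for this solution instead.
import math

REFERENCE_VM = {"vcpus": 1, "memory_gb": 2, "iops": 25, "capacity_gb": 100}

def equivalent_reference_vms(vcpus, memory_gb, iops, capacity_gb):
    """An application's equivalent count is set by its most demanding
    resource: round each per-resource ratio up, then keep the maximum."""
    ratios = (vcpus / REFERENCE_VM["vcpus"],
              memory_gb / REFERENCE_VM["memory_gb"],
              iops / REFERENCE_VM["iops"],
              capacity_gb / REFERENCE_VM["capacity_gb"])
    return max(math.ceil(r) for r in ratios)

# Example: 4 vCPUs, 16 GB, 200 IOPS, 200 GB -> max(4, 8, 8, 2) = 8
print(equivalent_reference_vms(4, 16, 200, 200))  # 8
```

Summing the per-application results gives the "Total equivalent reference virtual machines" row, which is then compared against the building blocks described in Chapter 3.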