
ABSTRACT

This Solution Guide provides an introduction to the concepts and architectural options available within the Federation Enterprise Hybrid Cloud solution. It should be used as an aid to deciding on the most suitable configuration for the initial deployment of a Federation Enterprise Hybrid Cloud solution.

September 2015

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA. Published September 2015. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC², EMC, Avamar, Data Domain, Data Protection Advisor, Enginuity, GeoSynchrony, Hybrid Cloud, PowerPath/VE, RecoverPoint, SMI-S Provider, Solutions Enabler, VMAX, Syncplicity, Unisphere, ViPR, EMC ViPR Storage Resource Management, Virtual Storage Integrator, VNX, VPLEX, VPLEX Geo, VPLEX Metro, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

Federation Enterprise Hybrid Cloud 3.1 Concepts and Architecture Solution Guide

Part Number H

Contents

Chapter 1 Executive Summary: Federation solutions · Document purpose · Audience · Essential reading · Solution purpose · Business challenge · Technology solution · Terminology

Chapter 2 Cloud Management Platform Options: Overview · Management terminology and hierarchy · Cloud management platform resources · Management platform resource types · Cloud management platform models · Distributed management model · Collapsed management model · Resource pool considerations · Network quality of service considerations · Deciding on the management model

Chapter 3 Single-Site/Single vcenter Topology: Overview · When to use the single-site topology · Architecture · Single-site networking considerations · NSX controller placement · Single-site storage considerations · Storage design · Storage consumption · Storage provisioning · ViPR virtual pools · Recovery of cloud management platform · Single-site topology · Backup of single-site/single vcenter enterprise hybrid cloud · Single-site/single vcenter topology backup

Chapter 4 Dual-Site/Single vcenter Topology: Overview · When to use the dual-site/single vcenter topology · Standard dual-site/single vcenter topology · Continuous availability dual-site/single vcenter topology · Continuous availability network considerations · NSX Controller placement · Stretched networks and network technology choice · Data Center Interconnect · VPLEX Witness · VPLEX topologies · Deciding on VPLEX topology · Uniform host access configuration with VPLEX host Cross-Connect · Non-uniform host access configuration without VPLEX Cross-Connect · Site affinity for virtual machines · Continuous availability storage considerations · ViPR virtual arrays · ViPR virtual pools · ViPR and VPLEX consistency groups interaction · Virtual Pool Collapser function · Storage provisioning · Recovery of cloud management platform · Standard dual-site/single vcenter topology · CA dual-site/single vcenter topology · Backup in dual-site/single vcenter enterprise hybrid cloud · Dual-site/single vcenter topology backup

Chapter 5 Dual-Site/Dual vcenter Topology: Overview · When to use the dual-site/dual vcenter topology · Standard dual-site/dual vcenter topology · Disaster recovery dual-site/dual vcenter topology · Disaster recovery network considerations · Physical network design · Requirements based on the management model · Network Controller placement · IP mobility between the primary and recovery sites · Security design · vcenter Site Recovery Manager considerations · Overview · RecoverPoint and ViPR Storage Replication Adapters · Site mappings · Disaster recovery support for Automation Pod vapps · Protection groups · Recovery plans · Collapsed management model · vrealize Automation considerations · Configuring primary and recovery site endpoints · Configuring the infrastructure for disaster recovery services · Configuring application blueprints for disaster recovery · Disaster recovery storage considerations · ViPR managed Workload Pod storage · Additional storage at each site · ViPR virtual arrays · ViPR virtual pools · RecoverPoint journal considerations · Storage provisioning · Recovery of cloud management platform · Standard dual-site/dual vcenter topology · DR dual-site/dual vcenter topology · Best practices · Naming conventions · NSX logical networks · Backup in dual-site/dual vcenter topology · DR dual-site/dual vcenter topology backup

Chapter 6 Network Topologies: Overview · Supported networking technologies · Physical connectivity · Logical network topologies · Network layouts

Chapter 7 Data Protection: Overview · Concepts · Scalable backup architecture · Avamar replication pairs · VMware vcenter folder structure and backup service level relationship · Avamar pair to vsphere cluster association · Avamar designations · Avamar proxy server configuration · Avamar administratively full · Policy-based replication · Replication control · Standard Avamar configuration · Architecture · Scenarios for use · Characteristics · Distribution examples · Redundant Avamar/single vcenter configuration · Architecture · Scenarios for use · vcenter folder assignments · Characteristics · Distribution examples · Redundant Avamar/dual vcenter configuration · Architecture · Scenarios for use · vcenter folder assignments · Characteristics · Distribution examples

Chapter 8 Solution Rules and Permitted Configurations: Overview · Architectural assumptions · Assumption and justifications · vcenter Single Sign-On · Single sign-on domains · First vcenter Server Single Sign-On instance in each domain · Subsequent vcenter Single Sign-On instances in each domain · VMware vrealize tenants and business groups · vrealize tenant design · vrealize tenant best practice · vrealize business group design · vrealize business best practice · EMC ViPR tenants and projects · ViPR tenants · ViPR projects · ViPR consistency groups · General storage considerations · vsphere datastore clusters · VMware vcenter endpoints · Single-site/single vcenter and dual-site/single vcenter topologies · Dual-site/dual vcenter topologies · Permitted topology configurations · Combining topologies · Management model and topology combinations · Permitted topology upgrade paths · Single site to continuous availability upgrade · Single-site to disaster recovery upgrade · Bulk import of virtual machines · Importing from non-Federation Enterprise Hybrid Cloud environments · DR dual-site/dual vcenter topology restrictions · Multimachine blueprints · vrealize Automation · Failover granularity · RecoverPoint cluster limitations · RecoverPoint licensing · VMware Site Recovery Manager limitations · Implied Federation Enterprise Hybrid Cloud storage maximums · Storage support · Network support · NSX security support · Resource sharing · Resource isolation · Resource sharing · Application tenant integration · Data protection considerations · Supported Avamar platforms · Scale out limits · Software resources · Federation Enterprise Hybrid Cloud software resources · Sizing guidance · Federation Enterprise Hybrid Cloud sizing

Chapter 9 Conclusion: Conclusion

Chapter 10 References: EMC documentation

Figures

Figure 1. Cloud management terminology and hierarchy
Figure 2. Cloud management platform component layout
Figure 3. Cloud management - distributed vsphere cluster model
Figure 4. Cloud management - collapsed vsphere cluster model
Figure 5. Federation Enterprise Hybrid Cloud single-site architecture
Figure 6. Storage service offerings for the hybrid cloud
Figure 7. Blueprint storage configuration in vrealize Automation
Figure 8. Federation Enterprise Hybrid Cloud standard dual-site/single vcenter architecture
Figure 9. Federation Enterprise Hybrid Cloud CA dual-site/single vcenter architecture
Figure 10. Continuous availability data center interconnect example
Figure 11. High-level deployment of EMC VPLEX Witness
Figure 12. Deployment model with VPLEX host Cross-Connect
Figure 13. VPLEX storage views with host Cross-Connect
Figure 14. Datastore paths in a VPLEX with host Cross Connect configuration
Figure 15. VPLEX architecture without VPLEX Cross-Connect
Figure 16. VPLEX Storage views without VPLEX Cross-Connect
Figure 17. vsphere Datastore Storage paths without VPLEX Cross-Connect
Figure 18. Sample view of Site Affinity DRS Group and Rule configuration
Figure 19. Deploying virtual machines with site affinity
Figure 20. Interactions between local and VPLEX distributed pools
Figure 21. Virtual Pool Collapser example
Figure 22. Federation Enterprise Hybrid Cloud standard dual-site/dual vcenter architecture
Figure 23. Federation Enterprise Hybrid Cloud DR dual-site/dual vcenter architecture
Figure 24. NEI Pods from the cloud vcenter Server instances on Site A and Site B
Figure 25. Logical switches on Site A
Figure 26. Logical switches on Site B
Figure 27. DLR interfaces on Site A and Site B
Figure 28. Route redistribution policy on Site A and Site B
Figure 29. Security groups on the primary and recovery sites
Figure 30. Security group on the recovery site
Figure 31. ViPR/EMC RecoverPoint protected virtual pool
Figure 32. Network layout 1
Figure 33. Network layout 2
Figure 34. Network layout 3
Figure 35. Standard Avamar configuration architecture
Figure 36. Redundant Avamar/single vcenter configuration
Figure 37. Redundant Avamar/dual vcenter configuration
Figure 38. SSO domain and vcenter SSO instance relationships
Figure 39. Software-defined data center tenant design and endpoints

Tables

Table 1. Terminology
Table 2. Collapsed management model: Resource groups configuration
Table 3. Suggested network QoS settings
Table 4. Network layout 1 descriptions
Table 5. Network layout 2 descriptions
Table 6. Network layout 3 descriptions
Table 7. SRM protection maximums
Table 8. SRM protection maximums
Table 9. Implied Federation Enterprise Hybrid Cloud storage maximums

This chapter presents the following topics:

Federation solutions
Document purpose
Audience
Essential reading
Solution purpose
Business challenge
Technology solution
Terminology

EMC II, Pivotal, RSA, VCE, Virtustream, and VMware form a unique Federation of strategically aligned businesses that are free to execute individually or together. The Federation businesses collaborate to research, develop, and validate superior, integrated solutions and deliver a seamless experience to their collective customers. The Federation provides customer solutions and choice for the software-defined enterprise and the emerging third platform of mobile, cloud, big data, and social networking.

The Federation Enterprise Hybrid Cloud 3.1 solution is a completely virtualized data center, fully automated by software. The solution starts with a foundation that delivers IT as a service (ITaaS), with options for high availability, backup and recovery, and disaster recovery (DR). It also provides a framework and foundation for add-on modules, such as database as a service (DaaS), platform as a service (PaaS), and cloud brokering.

This Solution Guide provides an introduction to the concepts and architectural options available within the Federation Enterprise Hybrid Cloud solution. It should be used as an aid to deciding on the most suitable configuration for the initial deployment of a Federation Enterprise Hybrid Cloud solution.

This Solution Guide is intended for executives, managers, architects, cloud administrators, and technical administrators of IT environments who want to implement a hybrid cloud IaaS platform. Readers should be familiar with the VMware vrealize Suite, storage technologies, general IT functions and requirements, and how a hybrid cloud infrastructure accommodates these technologies and requirements.

The Federation Enterprise Hybrid Cloud 3.1: Foundation Infrastructure Reference Architecture Guide describes the reference architecture of a Federation Enterprise Hybrid Cloud solution. The guide introduces the features and functionality of the solution, the solution architecture and key components, and the validated hardware and software environments. The following guides provide further information about various aspects of the Federation Enterprise Hybrid Cloud solution:

Federation Enterprise Hybrid Cloud 3.1: Hyperconverged Infrastructure Reference Architecture Guide
Federation Enterprise Hybrid Cloud 3.1: Operations Solution Guide
Federation Enterprise Hybrid Cloud 3.1: Security Management Solution Guide
Federation Enterprise Hybrid Cloud 3.1: Hadoop Applications Solution Guide

The Federation Enterprise Hybrid Cloud solution enables customers to build an enterprise-class, scalable, multitenant infrastructure that provides: Complete management of the infrastructure service lifecycle On-demand access to and control of network bandwidth, servers, storage, and security

12 Provisioning, monitoring, protection, and management of the infrastructure services by the line of business users, without IT administrator involvement Provisioning from application blueprints with associated infrastructure resources by line-of-business application owners without IT administrator involvement Provisioning of backup, continuous availability (CA), and DR services as part of the cloud service provisioning process Maximum asset use While many organizations have successfully introduced virtualization as a core technology within their data center, the benefits of virtualization have largely been restricted to the IT infrastructure owners. End users and business units within customer organizations have not experienced many of the benefits of virtualization, such as increased agility, mobility, and control. Transforming from the traditional IT model to a cloud-operating model involves overcoming the challenges of legacy infrastructure and processes, such as: Inefficiency and inflexibility Slow, reactive responses to customer requests Inadequate visibility into the cost of the requested infrastructure Limited choice of availability and protection services The difficulty in overcoming these challenges has given rise to public cloud providers who have built technology and business models catering to the requirements of end-user agility and control. Many organizations are under pressure to provide similar service levels within the secure and compliant confines of the on-premises data center. As a result, IT departments need to create cost-effective alternatives to public cloud services, alternatives that do not compromise enterprise features such as data protection, DR, and guaranteed service levels. This Federation Enterprise Hybrid Cloud solution integrates the best of EMC and VMware products and services, and empowers IT organizations to accelerate implementation and adoption of a hybrid cloud infrastructure, while still enabling customer choice for the compute and networking infrastructure within the data center. The solution caters to customers who want to preserve their investment and make better use of their existing infrastructure and to those who want to build out new infrastructures dedicated to a hybrid cloud. This solution takes advantage of the strong integration between EMC technologies and the VMware vrealize Suite. The solution, developed by EMC and VMware product and services teams includes EMC scalable storage arrays, integrated EMC and VMware monitoring, and data protection suites to provide the foundation for enabling cloud services within the customer environment. The Federation Enterprise Hybrid Cloud solution offers several key benefits to customers: Rapid implementation: The solution can be designed and implemented in as little as 28 days, in a validated, tested, and repeatable way. This increases the time-to-value while simultaneously reducing risk. Supported solution: Implementing Federation Enterprise Hybrid Cloud through EMC also results in a solution that is supported by EMC and further reduces risk associated with the ongoing operations of your hybrid cloud. 12

Defined upgrade path: Customers implementing the Federation Enterprise Hybrid Cloud receive upgrade guidance based on the testing and validation completed by the Federation engineering teams. This upgrade guidance enables customers, partners, and EMC services teams to perform upgrades faster, and with reduced risk. Validated and tested integration: Extensive testing and validation has been conducted by solutions engineering teams, resulting in simplified use, management, and operation.

The EMC Federation: EMC II, Pivotal, RSA, VCE, Virtustream, and VMware form a unique Federation of strategically aligned businesses; each can operate individually or together. The Federation provides customer solutions and choice for the software-defined enterprise and the emerging 3rd platform of mobile, cloud, big data and social, transformed by billions of users and millions of apps.

Table 1 lists the terminology used in this guide.

Table 1. Terminology

ACL: Access control list
AIA: Authority Information Access
API: Application programming interface
Blueprint: A blueprint is a specification for a virtual, cloud, or physical machine and is published as a catalog item in the common service catalog
Business group: A managed object that associates users with a specific set of catalog services and infrastructure resources
CBT: Changed Block Tracking
CDP: CRL Distribution Point
CRL: Certificate Revocation List
CSR: Certificate Signing Request
DHCP: Dynamic Host Configuration Protocol
Fabric group: A collection of virtualization compute resources and cloud endpoints managed by one or more fabric administrators
FQDN: Fully qualified domain name
HSM: Hardware security module
IaaS: Infrastructure as a service
IIS: Internet Information Services
LAG: Link aggregation that bundles multiple physical Ethernet links between two or more devices into a single logical link; can also be used to aggregate available bandwidth, depending on the protocol used
LDAP: Lightweight Directory Access Protocol
LDAPS: LDAP over SSL
MCCLI: Management Console Command Line Interface
PEM: Privacy Enhanced Electronic Mail
PKI: Public key infrastructure
PVLAN: Private virtual LAN
SSL: Secure Sockets Layer
TACACS: Terminal Access Controller Access Control System
vrealize Automation blueprint: A specification for a virtual, cloud, or physical machine that is published as a catalog item in the vrealize Automation service catalog
VDC: Virtual device context
vds: Virtual distributed switch
VLAN: Virtual local area network
VMDK: Virtual machine disk
VRF: Virtual routing and forwarding
VSI: Virtual Storage Integrator
VXLAN: Virtual Extensible LAN

This chapter presents the following topics:

Overview
Cloud management platform resources
Cloud management platform models

16 The cloud management platform supports the entire management infrastructure for this solution. This management infrastructure is divided into three pods, which consist of one or more VMware vsphere ESXi clusters or vsphere resource groups, depending on the model deployed. Each pod performs a solution-specific function. This chapter describes the components of the management platform and the models available for use. After reading it, you should be able to decide on the model that suits your environment. Management terminology and hierarchy To understand how the management platform is constructed, it is important to know how a number of terms are used throughout this guide. Figure 1 shows the relationship between platform, pod, and cluster as used in the Federation Enterprise Hybrid Cloud. Figure 1. Cloud management terminology and hierarchy The following distinctions exist in terms of the scope of each term: Cloud Management Platform (CMP) is an umbrella term intended to represent the entire management environment. Management Pod is used to represent an area of functionality within the management platform. What this area represents in terms of resources differs depending on the management models discussed in Cloud management platform models. Technology Cluster is used in the context of the individual technologies. While it may refer to ESXi clusters, it could also refer to VPLEX clusters, EMC RecoverPoint clusters, and so on. The use of vsphere ESXi clusters with VMware vsphere High Availability (vsphere HA) provides general virtual machine protection across the management platform. Further levels of availability can be provided by using nested clustering between the virtual machines themselves, such as Windows Failover Clustering, PostgreSQL clustering, load balancer clustering, or farms of machines that work together natively in an N+1 architecture, to provide a resilient architecture. Workload Pods Workload Pods are configured and assigned to fabric groups in VMware vrealize Automation. Available resources are used to host virtual machines deployed by business groups in the Federation Enterprise Hybrid Cloud environment. All business groups can share the available vsphere ESXi cluster resources. EMC ViPR service requests are initiated from the vrealize Automation catalog to provision Workload Pod storage. Note: Workload Pods were previously termed resource pods in Enterprise Hybrid Cloud and earlier. 16

17 Management platform resource types The management platform for the Federation Enterprise Hybrid Cloud solution requires three sets of resources: Core Pod Network Edge Infrastructure (NEI) Pod Automation Pod Figure 2 shows how the components of the management stack are distributed among the management pods. Figure 2. Cloud management platform component layout Core Pod The Core Pod provides the base set of resources to establish the Federation Enterprise Hybrid Cloud solution services. It consists of: External VMware vcenter Server (optional): This vcenter instance hosts only the Core Pod components and hardware. It is required when using the Distributed management model and may already exist, depending on customer resources. Cloud VMware vcenter Server: This vcenter instance is used to manage the NEI and Automation components and hardware. If using the Collapsed management model it also hosts the Core Pod components and hardware. vrealize Automation uses this vcenter Server as its endpoint from which the appropriate vsphere ESXi clusters are reserved for use by vrealize Automation business groups. 17

18 Microsoft SQL Server: Hosts SQL Server databases used by the Cloud vcenter Server and VMware Update Manager. It also hosts the VMware vcenter Site Recovery Manager database in a DR dual-site/dual vcenter topology. Note: Figure 2 includes separate SQL Server virtual machines for the External and Cloud vcenter SQL Server databases. This provides maximum resilience. Placing both vcenter databases on the same SQL Server virtual machine in the Core Pod is also supported. The vrealize IaaS SQL Server database must be on its own SQL Server instance in the Automation Pod. VMware NSX /VMware vcloud Networking and Security Manager : Used to deploy and manage the Workload Pod and the management infrastructure virtual networks. EMC SMI-S Provider: Management infrastructure required for EMC ViPR deployment. The hardware hosting the Core Pod is not under cloud management, but the virtual machines it hosts provide the critical services for the cloud. All of the virtual machines on the Core Pod are deployed on non-vipr storage. The virtual machines can use existing SAN connected storage or any high-availability storage in the customer environment. The Federation Enterprise Hybrid Cloud supports Fibre Channel (FC), iscsi, and NFS storage from EMC VNX storage systems for the Core Pod storage. Though not mandatory, FC connectivity is strongly recommended. All storage should be RAID protected and all vsphere ESXi servers should be configured with EMC PowerPath/VE for automatic path management and load balancing. Network Edge Infrastructure (NEI) Pod The NEI Pod is used to host all of the north/south vcloud networking components and the security Edge components of the virtualized network. When NSX is used, it also hosts the NSX Controller appliances. This pod provides the convergence point for the physical and virtual networks. Like the Core Pod, storage for this pod should be RAID protected and the Federation recommends Fibre Channel connections. vsphere ESXi hosts should run EMC PowerPath /VE for automatic path management and load balancing. Automation Pod The Automation Pod hosts all the virtual machines used for automating and managing the cloud infrastructure, except for services installed in the Core Pod. The Automation Pod supports the services responsible for functions such as the user portal, automated provisioning, monitoring, and metering. The Automation Pod is managed by the Cloud vcenter Server instance; however it is dedicated to automation and management services. Therefore, the resources from this pod are not exposed to vrealize Automation business groups. The Automation Pod does not share networks or storage resources with the workload clusters. Storage provisioning for the Automation Pod follows the same guidelines as the NEI Pod. 18

19 The distributed management model and collapsed management model are described in the following sections. Distributed management model The distributed management model uses two separate vcenter instances and each management pod has its own distinct vsphere ESXi cluster. Therefore, it requires a minimum of seven hosts. A higher level, External vcenter Server instance manages all server and virtual machine components for the Core Pod. While this vcenter instance can also be located on the cloud management platform Core Pod, it can also be located on a separate system for further levels of high availability. The second Cloud vcenter Server instance located on the cloud management platform manages the NEI, Automation, and Workload Pods supporting the various business groups within the enterprise. This server acts as the vsphere end-point for vrealize Automation. Figure 3 shows the default configuration of two vcenters where the first vcenter supports the Core Pod and the second vcenter supports the remaining cloud management pods and tenant resources. Figure 3. Cloud management - distributed vsphere cluster model The distributed management model: Enables Core Pod functionality and resources to be provided by a pre-existing vsphere instance within your environment. Provides the highest level of resource separation (that is, host level) between the Core, Automation, and NEI Pods. Places the NEI Pod ESXi cluster as the single intersection point between the physical and virtual networks configured within the solution, which eliminates the need to have critical networking components compete for resources as the solution scales and the demands of other areas of the cloud management platform increase. Enhances the resilience of the solution because a separate vcenter server and SQL Server instance host the core cloud components. 19

Collapsed management model

The collapsed management model uses a single vcenter server to host all Core, Automation, and NEI Pod components and the Workload Pods. Each management pod is implemented as an individual vsphere resource group, which ensures that each pod receives the correct proportion of compute and network resources. It requires a minimum of three physical hosts. Figure 4 shows an example of how the vsphere configuration might look with a collapsed management model. Figure 4. Cloud management - collapsed vsphere cluster model

The collapsed management model:
Provides the smallest overall management footprint for any given cloud size to be deployed.
Allows resource allocation between pods to be reconfigured with minimal effort.
Allows high-availability overhead to be reduced by using a single cluster, but does not alter the CPU, RAM, or storage required to manage the solution.

Resource pool considerations

Given that a single vsphere cluster is used in the collapsed management model, a vsphere resource group is required for each Management Pod in order to ensure sufficient resources are reserved for each function. Use the guidelines in Table 2 as the starting point for balancing these resources appropriately.

Table 2. Collapsed management model: Resource groups configuration

Resource | Core | NEI | Auto
CPU      | 20%  | 20% | 60%
RAM      | 20%  | 5%  | 75%

Note: These figures are initial guidelines and should be monitored in each environment and fine-tuned accordingly. The percentages can be implemented, as shares, in whatever scale is required, provided the percentage of shares assigned to each resource pool corresponds to the ratio of percentages in Table 2.
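The guide leaves the absolute share scale open, so the following Python sketch (an addition for illustration, not part of the original document) shows one way to turn the Table 2 percentages into custom share values that preserve the documented ratios. The 10,000-share total and the pod names are arbitrary assumptions; the resulting values would still be applied to the Core, NEI, and Automation resource pools with whatever tooling is in use, for example PowerCLI or pyvmomi.

# Minimal sketch: convert the Table 2 percentages into vsphere share values.
# The 10,000-share total is an arbitrary scale; only the ratios matter.

TOTAL_SHARES = 10_000

# Percentages from Table 2 (initial guidelines; tune per environment).
POD_ALLOCATIONS = {
    "Core": {"cpu_pct": 20, "ram_pct": 20},
    "NEI":  {"cpu_pct": 20, "ram_pct": 5},
    "Auto": {"cpu_pct": 60, "ram_pct": 75},
}

def shares_for(pct: int, total: int = TOTAL_SHARES) -> int:
    """Scale a percentage into a custom share value."""
    return total * pct // 100

def build_share_plan(allocations: dict) -> dict:
    """Return per-pod CPU/RAM share values preserving the Table 2 ratios."""
    return {
        pod: {
            "cpu_shares": shares_for(vals["cpu_pct"]),
            "ram_shares": shares_for(vals["ram_pct"]),
        }
        for pod, vals in allocations.items()
    }

if __name__ == "__main__":
    for pod, shares in build_share_plan(POD_ALLOCATIONS).items():
        print(f"{pod}: CPU shares {shares['cpu_shares']}, RAM shares {shares['ram_shares']}")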

Network quality of service considerations

When operating in a collapsed management model, it may be necessary to configure network quality of service (QoS) to ensure that each function has a guaranteed minimum level of bandwidth available. Table 3 shows the suggested initial QoS settings. These can be subsequently fine-tuned as appropriate to the environment.

Note: These values are suggestions based on the logical network Layout 1 in Chapter 6. As this layout is only a sample, you should collapse or divide these allocations according to the network topology you want to implement.

Table 3. Suggested network QoS settings

Name                       | VLAN         | DVS Shares | DVS % Min | QoS COS
vmk_esxi_mgmt              |              |            |           | 2
vmk_nfs                    |              |            |           | 4
vmk_iscsi                  |              |            |           | 4
vmk_vmotion                |              |            |           | 1
DPG_Core                   |              |            |           | 2
DPG_NEI                    |              |            |           | 2
DPG_Automation             |              |            |           | 2
DPG_Tenant_Uplink          |              |            |           | 0
VXLAN_Transport            | 900          | *          | *         | *
Avamar_Target (Optional)   | 1000         | **         | **        | **
DPG_AV_Proxies (Optional)  |              |            |           | 0
ESG_DLR_Transit            | Virtual Wire |            |           | 0
Workload                   | Virtual Wire |            |           | 0

*This is a VXLAN_Transport VLAN. The shares are associated with the virtual wire networks that use the transport VLAN.
**Physical network only. No shares required.

Deciding on the management model

Use the following key criteria to decide which management model is most suited for your environment:

Reasons to select the distributed model
You want to use the existing infrastructure to provide the resources that will host the Core Pod.
You require the highest level of resource separation (that is, host level) between the Core, Automation, and NEI Pods.
You want to minimize the intersection points for north/south traffic to just the hosts that serve as the NEI Pod.
You want to maximize the resilience of the solution by using a separate vcenter server and SQL Server instance to host the core cloud components.

Reasons to select the collapsed model
You want to deploy the smallest management footprint for any given cloud size.
You want resource allocation between pods to be reconfigurable with minimal effort.

This chapter presents the following topics:

Overview
Single-site networking considerations
Single-site storage considerations
Recovery of cloud management platform
Backup of single-site/single vcenter enterprise hybrid cloud

This chapter describes networking and storage considerations for a single-site/single vcenter topology in the Federation Enterprise Hybrid Cloud solution.

When to use the single-site topology

The single-site/single vcenter Federation Enterprise Hybrid Cloud topology should be used when restart or recovery of the cloud to another data center is not required. It can also be used as the base deployment on top of which you may layer the dual-site/single vcenter or dual-site/dual vcenter topology at a later time.

Architecture

Figure 5 shows the single-site/single vcenter architecture for the Federation Enterprise Hybrid Cloud solution, including the required sets of resources separated by pod. Figure 5. Federation Enterprise Hybrid Cloud single-site architecture

NSX controller placement

In a single-site topology, all NSX Controller components reside in the NEI Pod. NSX best practice recommends that each controller be placed on a separate physical host. When NSX is the chosen networking technology, this solution uses three NSX controllers. When using the Federation Enterprise Hybrid Cloud Sizing tool, appropriate consideration should be given to the choice of server specification for the NEI Pod to ensure efficient use of hardware resources, because a three-server minimum will be enforced. VMware Anti-Affinity Rules should be used to ensure that the NSX controllers reside on different hosts in optimum conditions.
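As a rough illustration of the anti-affinity recommendation above, the following pyvmomi sketch creates a non-mandatory DRS anti-affinity rule for the three NSX Controller virtual machines. It is not part of the original guide: the vcenter address, credentials, cluster name, and virtual machine names are placeholder assumptions, and the rule is created as a "should" rule so that vsphere HA can still restart a controller on a shared host if the cluster is degraded.

# Minimal pyvmomi sketch (not from the original guide): create a DRS
# anti-affinity rule that keeps the three NSX Controller VMs on separate
# hosts in the NEI Pod cluster. Host, credentials, cluster and VM names
# are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "cloud-vcenter.example.local"      # assumed vcenter FQDN
CLUSTER_NAME = "NEI-Pod-Cluster"             # assumed NEI Pod cluster name
CONTROLLER_VMS = ["NSX_Controller_1", "NSX_Controller_2", "NSX_Controller_3"]

def find_by_name(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

def main():
    ctx = ssl._create_unverified_context()   # lab convenience only
    si = SmartConnect(host=VCENTER, user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        cluster = find_by_name(content, vim.ClusterComputeResource, CLUSTER_NAME)
        vms = [find_by_name(content, vim.VirtualMachine, n) for n in CONTROLLER_VMS]

        rule = vim.cluster.AntiAffinityRuleSpec(
            name="nsx-controllers-separate-hosts",
            enabled=True,
            mandatory=False,   # "should" rule, so HA can still restart controllers
            vm=vms)
        spec = vim.cluster.ConfigSpecEx(
            rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
        task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
        print("Submitted rule creation task:", task)
    finally:
        Disconnect(si)

if __name__ == "__main__":
    main()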

Storage design

This Federation Enterprise Hybrid Cloud solution presents storage in the form of storage service offerings that greatly simplify virtual storage provisioning. The storage service offerings are based on ViPR virtual pools, which are tailored to meet the performance requirements of general IT systems and applications. Multiple storage system virtual pools, consisting of different disk types, are configured and brought under ViPR management. ViPR presents the storage to the enterprise hybrid cloud as virtual storage pools, abstracting the underlying storage details and enabling provisioning tasks to be aligned with the application's class of service. Each ViPR virtual pool representing a storage service offering can be supported or backed by multiple storage pools of identical performance and capacity. This storage service offering concept is summarized in Figure 6. Figure 6. Storage service offerings for the hybrid cloud

Note: The storage service offerings in Figure 6 are suggestions only. Storage service offerings can be configured and named as appropriate to reflect their functional use.

The storage service examples in Figure 6 suggest the following configurations:
All Flash: Can be provided by either EMC XtremIO or VNX as all-flash storage.
Tiered: Provides VNX or VMAX block or file-based VMFS or NFS storage devices and is supported by multiple storage pools using EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP) and EMC Fully Automated Storage Tiering (FAST) Cache.
Single Tier: Provides EMC VNX block- or file-based VMFS or NFS storage and is supported by multiple storage pools using a single storage type, NL-SAS in this example.

We suggest these storage service offerings only to highlight what is possible in a Federation Enterprise Hybrid Cloud environment. The full list of supported platforms includes:
EMC VMAX
EMC VNX
EMC XtremIO
EMC ScaleIO
EMC VPLEX
EMC RecoverPoint
Isilon (Workload use only)

As a result, many other storage service offerings can be configured to suit business and application needs, as appropriate.

Note: The Federation recommends that you follow the best practice guidelines when deploying any of the supported platform technologies. The Federation Enterprise Hybrid Cloud does not require any variation from these best practices.

Storage consumption

vrealize Automation provides the framework to associate one or more classes of storage with each line of business so that they can be consumed through the service catalog. Initially, physical storage pools are configured on the storage system and made available to ViPR, where they are configured into their respective virtual pools. At provisioning time, LUNs or file systems are configured from these virtual pools and presented to vsphere as VMFS or NFS datastores. The storage is then discovered by vrealize Automation and made available for assignment to business groups within the enterprise. This storage service offering approach greatly simplifies the process of storage administration. Instead of users having to configure the placement of individual virtual machine disks (VMDKs) on different disk types such as serial-attached SCSI (SAS) and FC, they simply select the appropriate storage service level required for their business need. Virtual disks provisioned on FAST VP storage benefit from intelligent data placement. While frequently accessed data is placed on disks with the highest level of service, less frequently used data is migrated to disks reflecting that service level. When configuring virtual machine storage, a business group administrator can configure blueprints to deploy virtual machines onto any of the available storage service levels. In the example in Figure 7, a virtual machine can be deployed with a blueprint including a SQL Server database, to a storage service offering named Prod-2, which was designed with the performance requirements of such an application in mind. Figure 7. Blueprint storage configuration in vrealize Automation

The devices for this SQL Server database machine have different performance requirements, but rather than assigning different disk types to each individual drive, each virtual disk can be configured on the Prod-2 storage service offering. This allows the underlying FAST technology to handle the best location for each individual block of data across the tiers.

The vrealize Automation storage reservation policy ensures that the VMDKs are deployed to the appropriate storage. The storage presented to vrealize Automation can be shared and consumed across the various business groups using the capacity and reservation policy framework in vrealize Automation.

Storage provisioning

Storage is provisioned to the Workload vsphere clusters in the environment using the Provision Cloud Storage catalog item, which can provision VNX, VMAX, XtremIO, ScaleIO, and VPLEX Local storage to single-site topology workload clusters. The workflow interacts with both ViPR and vrealize Automation to create the storage, present it to the chosen vsphere cluster, and add the new volume to the relevant vrealize Storage Reservation Policy. vsphere clusters are made eligible for storage provisioning by tagging them with vrealize Automation custom properties that define them as Unprotected clusters, that is, clusters that are not involved in any form of inter-site replication relationship. This tagging is done during the installation and preparation of vsphere clusters for use by the Federation Enterprise Hybrid Cloud using the Unprotected Cluster Onboarding workflows provided as part of the Federation Enterprise Hybrid Cloud self-service catalog.

Note: Virtual machines on the cluster may still be configured to use backup as a service, as shown in Chapter 7.

As local-only vsphere clusters can also exist in continuous availability and DR topologies, this process ensures that only the correct type of storage is presented to the single-site vsphere clusters and no misplacement of virtual machines intended for inter-site protection occurs. A sketch of this provisioning flow appears at the end of this section.

ViPR virtual pools

For block-based provisioning, ViPR virtual arrays should not contain more than one protocol. For Federation Enterprise Hybrid Cloud, this means that ScaleIO storage and FC block storage must be provided via separate virtual arrays.

Single-site topology

Recovery of the management platform does not apply to a single-site topology, because there is no target site to recover to.

Single-site/single vcenter topology backup

The primary option for backup in a single-site/single vcenter topology is the Standard Avamar configuration, though the Redundant Avamar/single vcenter configuration may also be used to provide additional resilience. Both options are described in Chapter 7.
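To make the Storage provisioning flow described above more concrete, the following Python sketch outlines the same sequence of steps. It is illustrative only and is not the actual Provision Cloud Storage workflow: the helper methods, the property name used for the Unprotected tag, and the object names are all hypothetical stand-ins for the real ViPR and vrealize Automation interfaces.

# Illustrative sketch only (not the actual Provision Cloud Storage workflow):
# the high-level steps the guide describes, expressed as Python. The helper
# functions are hypothetical stand-ins for ViPR and vrealize Automation API
# calls; names, parameters, and the "Unprotected" tag check are assumptions.

def cluster_is_unprotected(vra, cluster_name: str) -> bool:
    """Check the vrealize Automation custom property that marks a cluster as
    Unprotected (not part of any inter-site replication relationship)."""
    props = vra.get_compute_resource_properties(cluster_name)    # hypothetical call
    return props.get("EHC.Cluster.Protection") == "Unprotected"  # assumed property name

def provision_cloud_storage(vipr, vra, cluster_name, virtual_pool, size_gb, policy):
    """Mirror the documented flow: create a ViPR volume, export it to the
    chosen vsphere cluster as a datastore, then register the datastore with
    the relevant vrealize Storage Reservation Policy."""
    if not cluster_is_unprotected(vra, cluster_name):
        raise ValueError(f"{cluster_name} is not tagged for single-site (Unprotected) storage")

    volume = vipr.create_volume(virtual_pool=virtual_pool, size_gb=size_gb)  # hypothetical
    datastore = vipr.export_as_vmfs_datastore(volume, cluster=cluster_name)  # hypothetical
    vra.add_datastore_to_reservation_policy(datastore, policy)               # hypothetical
    return datastore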

This chapter presents the following topics:

Overview
Standard dual-site/single vcenter topology
Continuous availability dual-site/single vcenter topology
Continuous availability network considerations
VPLEX Witness
VPLEX topologies
Continuous availability storage considerations
Recovery of cloud management platform
Backup in dual-site/single vcenter enterprise hybrid cloud

This chapter describes networking and storage considerations for a dual-site/single vcenter topology in the Federation Enterprise Hybrid Cloud solution.

When to use the dual-site/single vcenter topology

The dual-site/single vcenter Federation Enterprise Hybrid Cloud topology may be used when restart of the cloud to another data center is required. It should only be used in either of the following two scenarios:

Two sites are present that require management via a single vcenter instance and a single Federation Enterprise Hybrid Cloud management platform/portal. This model has no additional storage considerations beyond the single-site/single vcenter model because each site has completely independent storage. This model employs a second NEI Pod on the second site to ensure north/south network traffic egresses the second site in the most efficient manner. The local NEI Pod will host the Edge gateway services for its respective site.

Note: In this case, the scope of the term site is at the user's discretion. It could be taken to mean separate individual geographical locations, or could also mean independent islands of infrastructure in the same geographical location, such as independent VCE Vblock platforms.

Continuous availability is required. This topology also requires that:
EMC VPLEX storage is available.
Stretched Layer 2 VLANs are permitted or the networking technology chosen supports VXLANs.
The latency between the two physical data center locations is less than 10 ms.

The standard dual-site/single vcenter Federation Enterprise Hybrid Cloud topology controls two sites, each with independent islands of infrastructure, using a single vcenter instance and Federation Enterprise Hybrid Cloud management stack/portal. This architecture provides a mechanism to extend an existing Federation Enterprise Hybrid Cloud by adding additional independent infrastructure resources to an existing cloud, when resilience of the management platform itself is not required. Figure 8 shows the architecture used for this topology option.

30 Figure 8. Federation Enterprise Hybrid Cloud standard dual-site/single vcenter architecture The continuous availability (CA) dual-site/single vcenter Federation Enterprise Hybrid Cloud topology is an extension of the standard dual-site/single vcenter model that stretches the infrastructure across both sites, using VMware vsphere Metro Storage Clusters (vmsc), vsphere HA, and VPLEX in Metro configuration. This topology enables multi-site resilience across two sites with automatic restart of both the management platform and workload virtual machines on the surviving site. Figure 9 shows the architecture used for this topology option. 30

31 Figure 9. Federation Enterprise Hybrid Cloud CA dual-site/single vcenter architecture NSX Controller placement In a CA dual-site/single vcenter topology, all NSX Controller components reside in the NEI Pod. NSX best practice recommends that each controller be placed on separate physical hosts. When NSX is the chosen networking technology, this solution uses a minimum of three NSX controllers. When using the Federation Enterprise Hybrid Cloud Sizing tool, appropriate consideration should be given to the choice of server specification for the NEI Pod to ensure efficient use of hardware resources, given that a three-server minimum will be enforced. 31

Two out of the three NSX controllers must be operating for the NSX control plane to be available. The following recommendations are made regarding controller placement:
VMware Anti-Affinity Rules should be used to ensure that the NSX controllers reside on different hosts in optimum conditions.
Two NSX controllers should be placed on one site and one NSX controller placed on the other site. This should be achieved using a combination of host DRS groups, virtual machine DRS groups, and virtual machine DRS rules, set so that the virtual machines in each virtual machine DRS group should run on the relevant host DRS group.
VMware HA should be configured to enable NSX controller restart.

Stretched networks and network technology choice

If vsphere Distributed Switches are used in the CA dual-site/single vcenter topology, then all networks must be backed by stretched Layer 2 VLANs. Use of NSX for vsphere or vcloud Networking and Security enables you to use VXLANs backed by a Layer 3 DCI.

Data Center Interconnect

Data centers that are connected together over a metro link can use either Layer 2 bridged VLAN connectivity or Layer 3 routed IP connectivity. Both Data Center Interconnect (DCI) options have advantages and disadvantages. However, new standards and technologies, such as Virtual Extensible LAN (VXLAN), address most of the disadvantages.

Traditional disadvantages of Layer 2 DCI

The risks related to Layer 2 extensions between data centers mirror some of the limitations faced in traditional Ethernet broadcast domains. The limiting factor is the scalability of a single broadcast domain. A large number of hosts and virtual machines within a broadcast domain, all of which contend for shared network resources, can result in broadcast storms. The results of broadcast storms are always to the detriment of network availability, adversely affecting application delivery and ultimately leading to a poor user experience. This can affect productivity. As the CA architecture is stretched across both data centers, a broadcast storm could cause disruption in both the primary and secondary data centers. Multiple Layer 2 interconnects create additional challenges for stretched networks. If unknown broadcast frames are not controlled, loops in the Layer 2 extension can form. This can also cause potential disruption across both data centers, resulting in network downtime and loss of productivity. If used, the Spanning Tree Protocol (STP) needs to be run and carefully managed to control loops across the primary and secondary site interconnecting links. Loop avoidance and broadcast suppression mechanisms are available to the IT professional, but must be carefully configured and managed.

Traditional advantages of Layer 2 DCI

The greatest advantage of Layer 2 DCI is the IP address mobility of physical and virtual machines across both data centers. This simplifies DR in the event of a failure in the primary data center.

Note: Layer 2 connectivity is often necessary for applications where heartbeats and clustering techniques are used across multiple hosts. In some cases, technologies might not be able to span Layer 3 boundaries.

33 Traditional disadvantages of Layer 3 DCI If an infrastructure failure occurs at the primary site, a machine migrated to the secondary data center must be reconfigured to use an alternate IP addressing scheme. This can be more time consuming and error prone than having a high-availability deployment across a single Layer 2 domain. Inter-site machine clustering may not be supported over a Layer 3 boundary, which can be either multicast or broadcast based. Traditional advantages of Layer 3 DCI Layer 3 DCI does not use extended broadcast domains or require the use of STP. Therefore, there is greater stability of the production and services networks across both primary and secondary data centers. Note: The data center interconnect physical link is subject to the availability of the local telecommunications service provider and the business requirement of the enterprise. Continuous availability DCI networking solution The network topology used in the CA for Federation Enterprise Hybrid Cloud solution incorporates the advantages of both Layer 2 and Layer 3 DCI topologies. Layer 2 requirements such as resource and management traffic are handled by the VXLAN implementation enabled by NSX. This offers the advantage of IP mobility across both sites by placing the resource and management traffic on spanned VXLAN segments. It also eliminates the complexity of STP and performance degradation that large broadcast domains can introduce. VXLANs can expand the number of Layer 2 domains or segments beyond the 802.1q limit of 4,096 VLANs to a theoretical limit of 16 million. VXLANs can also extend the Layer 2 environment over Layer 3 boundaries. An underlying Layer 3 data center interconnect runs a dynamic route distribution protocol with rapid convergence characteristics such as Open Shortest Path First (OSPF). OSPF routing metrics route the ingress traffic to the primary data center. If the primary data center is unavailable, the OSPF algorithm automatically converges routes to the secondary data center. This is an important advantage compared to using a traditional Layer 2 DCI and Layer 3 DCI solution in isolation. Note: NSX also supports Border Gateway Protocol (BGP) and Intermediate System to Intermediate System (IS-IS) route distribution protocols. In a collapsed management model, all clusters are part of the same vcenter instance and therefore can all be configured to use the security and protection features offered by the same NSX Manager instance. If this is not a requirement for Core Pod, then a stretched Layer 2 network may also be used. In a distributed management model, two vcenter instances are used. Given the 1:1 relationship between a vcenter instance and NSX Manager, a second NSX manager instance would be required if the Core Pod is to use the security and protection NSX provisioned networks. Given the small number of virtual machines present in the external vcenter, it may be appropriate to consider a stretched Layer 2 VLAN for this network if the second NSX manager instance is deemed unnecessary. Figure 10 shows one possible scenario where two data centers are connected using both a Layer 2 and a routed Layer 3 IP link and how the Core, NEI, Automation, and Workload segments could be provisioned. 33

34 Figure 10. Continuous availability data center interconnect example In this scenario, the following properties are true: vsphere ESXi stretched clusters are utilized to host the Core, Automation, NEI, and Workload virtual machines. This, with vsphere HA, enables virtual machines to be automatically restarted on the secondary site, if the primary site fails. The Core Pod virtual machines are connected to a stretched VLAN. This prevents the need for a second NSX manager machine. The NSX controllers (NEI Pod) are connected to the same stretched VLAN as the Core Pod virtual machines. The Automation Pod virtual machines are connected to an NSX logical network, backed by VXLAN and available across both sites. The Workload Pod virtual machines are connected to a NSX logical network, backed by VXLAN and available across both sites. VXLAN encapsulated traffic must be able to travel between vsphere ESXi hosts at both sites. One or more NSX Edge Services Gateways (ESGs) are deployed at each site to control traffic flow between the virtual and physical network environments. Note: NSX supports three modes of replication for VXLAN traffic unicast, multicast and hybrid. Unicast mode enables VXLAN traffic to be carried across Layer 3 boundaries without assistance from the underlying physical network. If vcns is used instead of NSX, then the physical network must support multicast. 34

vsphere HA, in combination with VPLEX and VPLEX Witness, enables the cloud-management platform virtual machines to restore the cloud-management service on the secondary site in the event of a total loss of the primary data center. In this scenario, the virtual machines automatically move to and operate from vsphere ESXi nodes residing in the secondary data center.

Edge Services Gateway considerations

All workload virtual machines should use NSX logical networks connected to a Distributed Logical Router (DLR). The DLR can provide the same default gateway to a virtual machine, whether it is running at the primary or secondary site. DLRs should be connected to at least one ESG at each site, and a dynamic route distribution protocol (such as OSPF and others supported by NSX) should be used to direct traffic flow. We recommend that you use both NSX High Availability and vsphere High Availability, in conjunction with host DRS groups, virtual machine DRS groups, and virtual machine DRS affinity rules, to ensure that DLR virtual machines run on the correct site in optimum conditions. This solution has all the advantages of traditional Layer 2 and Layer 3 solutions. It provides increased flexibility and scalability by implementing VXLANs, and benefits from increased stability by not extending large broadcast domains across the VPLEX Metro.

VPLEX Witness

VPLEX Witness is an optional component deployed in customer environments where the regular preference rule sets are insufficient to provide seamless zero or near-zero recovery time objective (RTO) storage availability in the event of site disasters or VPLEX cluster and inter-cluster failures. Without VPLEX Witness, all distributed volumes rely on configured rule sets to identify the preferred cluster in the event of a cluster partition or cluster/site failure. However, if the preferred cluster fails (for example, as a result of a disaster event), VPLEX is unable to automatically enable the surviving cluster to continue I/O operations to the affected distributed volumes. VPLEX Witness is designed to overcome this. The VPLEX Witness server is deployed as a virtual appliance running on a customer's vsphere ESXi host that is deployed in a failure domain separate from both of the VPLEX clusters. The third fault domain must have power and IP isolation from both the Site A and Site B fault domains, which host the VPLEX Metro Clusters. This eliminates the possibility of a single fault affecting both the cluster and VPLEX Witness. VPLEX Witness connects to both VPLEX clusters over the management IP network. By reconciling its own observations with the information reported periodically by the clusters, VPLEX Witness enables the clusters to distinguish between inter-cluster network partition failures and cluster failures, and to automatically resume I/O operations in these situations. Figure 11 shows an example of a high-level deployment of VPLEX Witness and how it can augment an existing static preference solution. The VPLEX Witness server resides in a fault domain separate from the VPLEX clusters on Site A and Site B.

Figure 11. High-level deployment of EMC VPLEX Witness

VMware classifies stretched VPLEX Metro cluster configurations into the following categories:

Uniform host access configuration with VPLEX host Cross-Connect: vsphere ESXi hosts in a distributed vsphere cluster have a connection to the local VPLEX system and paths to the remote VPLEX system. The remote paths presented to the vsphere ESXi hosts are stretched across distance.

Non-uniform host access configuration without VPLEX host Cross-Connect: vsphere ESXi hosts in a distributed vsphere cluster have a connection only to the local VPLEX system.

Deciding on VPLEX topology

Use the following guidelines to help you decide which topology suits your environment (a short sketch that encodes these rules appears a little further below):

Uniform (Cross-Connect) is typically used where:
Inter-site latency is less than 5 ms.
Stretched SAN configurations are possible.

Non-Uniform (without Cross-Connect) is typically used where:
Inter-site latency is between 5 ms and 10 ms.
Stretched SAN configurations are not possible.

Uniform host access configuration with VPLEX host Cross-Connect

EMC GeoSynchrony supports the concept of a VPLEX Metro cluster with Cross-Connect. This configuration provides a perfect platform for a uniform vsphere stretched-cluster deployment. VPLEX with host Cross-Connect is designed for deployment in a metropolitan-type topology with latency that does not exceed 5 ms round-trip time (RTT). vsphere ESXi hosts can access a distributed volume on the local VPLEX cluster and on the remote cluster in the event of a failure. When this configuration is used with VPLEX Witness, vsphere ESXi hosts are able to survive through multiple types of failure scenarios. For example, in the event of a VPLEX cluster or back-end storage array failure, the vsphere ESXi hosts can still access the second VPLEX cluster with no disruption in service.
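The following short Python sketch, which is not part of the original guide, simply encodes the "Deciding on VPLEX topology" guidelines above so the thresholds are explicit: uniform host access below 5 ms round-trip latency where a stretched SAN is possible, non-uniform host access up to 10 ms otherwise. Treat it as a planning aid under those stated assumptions rather than a support statement.

# Planning-aid sketch: encode the VPLEX topology guidelines from this section.
# Thresholds (5 ms / 10 ms RTT) and the stretched-SAN condition come from the
# text above; anything outside those bounds needs a design review.

def recommend_vplex_topology(rtt_ms: float, stretched_san_possible: bool) -> str:
    """Return the suggested VPLEX host access configuration."""
    if rtt_ms < 5 and stretched_san_possible:
        return "Uniform host access (VPLEX host Cross-Connect)"
    if rtt_ms <= 10:
        return "Non-uniform host access (no Cross-Connect)"
    return "Neither: inter-site latency exceeds the documented 10 ms limit"

if __name__ == "__main__":
    print(recommend_vplex_topology(3.2, stretched_san_possible=True))   # Uniform
    print(recommend_vplex_topology(7.5, stretched_san_possible=False))  # Non-uniform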

37 In the unlikely event that the preferred site fails, VPLEX Witness intervenes and ensures that access to the surviving cluster is automatically maintained. In this case, vsphere HA automatically restarts all affected virtual machines. Figure 12 shows that all ESXi hosts are connected to the VPLEX clusters at both sites. This can be achieved in a number of ways: Merge switch fabrics by using Inter-Switch Link (ISL) technology used to connect local and remote SANs. Connect directly to the remote data center fabric without merging the SANs. Figure 12. Deployment model with VPLEX host Cross-Connect This type of deployment is designed to provide the highest possible availability for a Federation Enterprise Hybrid Cloud environment. It can withstand multiple failure scenarios including switch, VPLEX, and back-end storage at a single site with no disruption in service. For reasons of performance and availability, the Federation recommends that separate host bus adapters be used for connecting to local and remote switch fabrics. Note: VPLEX host Cross-Connect is configured at the host layer only and does not imply any cross connection of the back-end storage. The back-end storage arrays remain locally connected to their respective VPLEX clusters. From the host perspective, in the uniform deployment model with VPLEX host Cross- Connect, the vsphere ESXi hosts are zoned to both the local and the remote VPLEX clusters. Figure 13 displays the VPLEX storage views for a host named DRM-ESXi088, physically located in Site A of our environment. Here the initiators for the host are registered and added to both storage views with the distributed device being presented from both VPLEX clusters. 37

Figure 13. VPLEX storage views with host Cross-Connect

This configuration is transparent to the vSphere ESXi host. The remote distributed volume is presented as an additional set of paths. Figure 14 shows the eight available paths presented to host DRM-ESXi088 for access to the VPLEX distributed volume hosting the datastore named CC-Shared-M3. The serial numbers of the arrays are different because four of the paths are presented from the first VPLEX cluster and the remaining four are presented from the second.

Figure 14. Datastore paths in a VPLEX with host Cross-Connect configuration

PowerPath/VE autostandby mode
Neither the host nor the native multipathing software can, by itself, distinguish between local and remote paths. This poses a potential performance impact if remote paths are used for I/O in normal operations, because of the cross-connect latency penalty.
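Before looking at how PowerPath/VE addresses this, it can be useful to see the paths as the host itself reports them. The following is a minimal, hypothetical sketch using pyVmomi (any vSphere API client would do) that lists each device and the state of its paths on a host; the vCenter address, credentials, and hostname are placeholders, and PowerPath's own remote CLI (for example, rpowermt display) remains the authoritative view of path modes.

```python
# Hypothetical sketch: list the multipath state for every device on an ESXi host,
# for example to confirm which of the cross-connect paths are carrying I/O.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()          # lab only; use proper certificates in production
si = SmartConnect(host="cloud-vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName="drm-esxi088.example.local",
                                         vmSearch=False)

storage = host.configManager.storageSystem
for lun in storage.storageDeviceInfo.multipathInfo.lun:
    print(lun.id)
    for path in lun.path:
        # 'state' reports active/standby/disabled/dead as seen by the host
        print("  %-60s %s" % (path.name, path.state))

Disconnect(si)
```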

PowerPath/VE provides the concept of autostandby mode, which automatically identifies all remote paths and sets them to standby (asb:prox is the proximity-based autostandby algorithm). This feature ensures that only the most efficient paths are used at any given time.

PowerPath/VE groups paths internally by VPLEX cluster. The VPLEX cluster with the lowest minimum path latency is designated as the local/preferred VPLEX cluster, while the other VPLEX cluster within the VPLEX Metro system is designated as the remote/non-preferred cluster. A path associated with the local/preferred VPLEX cluster is put in active mode, while a path associated with the remote/non-preferred VPLEX cluster is put in autostandby mode. This forces all I/O during normal operations to be directed towards the local VPLEX cluster. If a failure occurs where the paths to the local VPLEX cluster are lost, PowerPath/VE activates the standby paths and the host remains up and running on the local site, while accessing storage on the remote site.

Non-uniform host access configuration without VPLEX Cross-Connect
The non-uniform host configuration can be used for a Federation Enterprise Hybrid Cloud deployment if greater distances are required. The supported latency of this configuration requires that the round-trip time be within 10 ms to comply with VMware HA requirements. Without the Cross-Connect deployment, vSphere ESXi hosts at each site have connectivity only to that site's VPLEX cluster.

Figure 15 shows that hosts located at each site have connections only to their respective VPLEX cluster. The VPLEX clusters have a link between them to support the VPLEX Metro configuration, and the VPLEX Witness is located in a third failure domain.

Figure 15. VPLEX architecture without VPLEX Cross-Connect

The major benefit of this deployment option is that greater distances can be achieved in order to protect the infrastructure. With the EMC VPLEX AccessAnywhere™ feature, the non-uniform deployment offers the business another highly resilient option that can withstand various types of failures, including front-end and back-end single path failure, single switch failure, and single back-end array failure.

Figure 16 shows the storage views from VPLEX cluster 1 and cluster 2. In the example non-uniform deployment, hosts DRM-ESXi077 and DRM-ESXi099 represent hosts located in different data centers. They are visible in their site-specific VPLEX cluster's storage view.

With AccessAnywhere, the hosts have simultaneous write access to the same distributed device, but only via the VPLEX cluster on the same site.

Figure 16. VPLEX storage views without VPLEX Cross-Connect

Figure 17 shows the path details for one of the hosts in a stretched cluster that has access to the datastores hosted on the VPLEX distributed device. The World Wide Name (WWN) in the Target column shows that all paths to that distributed device belong to the same VPLEX cluster. PowerPath/VE has also been installed on all of the hosts in the cluster, and it has automatically set the VPLEX volume to the adaptive failover mode. The autostandby feature is not used in this case because all the paths to the device are local.

Figure 17. vSphere datastore storage paths without VPLEX Cross-Connect

With vSphere HA, the virtual machines are also protected against major outages, such as network partitioning of the VPLEX WAN link or an entire site failure. To prevent any unnecessary downtime, the Federation recommends that the virtual machines reside on the site that would win ownership of the VPLEX distributed volume in the event of such a partitioning.

Site affinity for virtual machines
The solution uses VMware host Distributed Resource Scheduler (DRS) groups to subdivide the vSphere ESXi hosts in each workload and management cluster into groupings of hosts corresponding to their respective sites. It does this by defining two VMware host DRS groups in the format SiteName_Hosts, where the site names of both sites are defined during the installation of the Federation Enterprise Hybrid Cloud foundation package.

VMware virtual machine DRS groups are also created, in the format SiteName_VMs, during the preparation of the ESXi cluster for continuous availability. Storage reservation policies (SRPs) created by the Federation Enterprise Hybrid Cloud storage-as-a-service workflows are automatically named to indicate the preferred site on which that storage type runs.

Note: In this case, the preferred setting means that, in the event of a failure that results in the VPLEX units being unable to communicate, this site is the one that continues to provide read/write access to the storage.

During deployment of a virtual machine through the vRealize portal, the user is asked to choose from a list of storage reservation policies. Federation Enterprise Hybrid Cloud custom workflows use this information to place the virtual machine on a vSphere ESXi cluster with access to the required storage type and to place the virtual machine into the appropriate virtual machine DRS group. Virtual machine-to-host DRS rules are then used to bind virtual machines to the preferred site by configuring the SiteName_VMs virtual machine DRS group with a "should run on" setting against the respective SiteName_Hosts host DRS group. This ensures that virtual machines run on the required site, while allowing them the flexibility of failing over if the infrastructure on that site becomes unavailable.

Figure 18 shows how the virtual machine DRS groups and affinity rules might look in a sample configuration.

Figure 18. Sample view of site affinity DRS group and rule configuration

Note: The values SiteA and SiteB shown in both Figure 18 and Figure 19 can and should be replaced with meaningful site names in a production environment. They must correlate with the site name values entered during the Federation Enterprise Hybrid Cloud Foundation package initialization for site affinity to work correctly.
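For illustration only, the following minimal pyVmomi sketch shows what creating one site's host DRS group, virtual machine DRS group, and "should run on" rule could look like through the vSphere API. This is not the Federation Enterprise Hybrid Cloud workflow itself; the connection details, inventory path, and the way hosts and virtual machines are selected are placeholders.

```python
# Hypothetical sketch: create SiteA host/VM DRS groups and a "should run on" rule.
# This mirrors the behaviour described above but is not the Federation workflow itself.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="cloud-vcenter.example.local",
                  user="administrator@vsphere.local", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# 'cluster' is the CA-enabled workload cluster; host and VM selection is a placeholder.
cluster = content.searchIndex.FindByInventoryPath(
    "Datacenter/host/Workload-Cluster-01")       # hypothetical inventory path
site_a_hosts = [h for h in cluster.host if h.name.startswith("sitea-")]
site_a_vms = [vm for vm in cluster.resourcePool.vm if vm.name == "VM1"]

host_group = vim.cluster.HostGroup(name="SiteA_Hosts", host=site_a_hosts)
vm_group = vim.cluster.VmGroup(name="SiteA_VMs", vm=site_a_vms)

# mandatory=False makes this a preferential "should run on" rule, so vSphere HA can
# still restart the VMs on Site B hosts if Site A becomes unavailable.
rule = vim.cluster.VmHostRuleInfo(name="SiteA_VMs_should_run_on_SiteA_Hosts",
                                  vmGroupName="SiteA_VMs",
                                  affineHostGroupName="SiteA_Hosts",
                                  enabled=True, mandatory=False)

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[vim.cluster.GroupSpec(operation="add", info=host_group),
               vim.cluster.GroupSpec(operation="add", info=vm_group)],
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])

task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
# ... wait for the task to complete, then disconnect
Disconnect(si)
```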

Figure 19 shows a simple example of two scenarios where virtual machines are deployed to a vMSC and how the logic operates to place those virtual machines on their preferred sites.

Figure 19. Deploying virtual machines with site affinity

Scenario 1: Deploy VM1 with affinity to Site A
This scenario describes deploying a virtual machine (VM1) with affinity to Site A:
1. During virtual machine deployment, the user chooses a storage reservation policy named SiteA_Preferred_CA_Enabled.
2. This storage reservation policy choice filters the cluster choice to only those clusters with that reservation policy. In this case, cluster 1.
3. Based on the selected storage reservation policy, Federation Enterprise Hybrid Cloud workflows programmatically determine that Site A is the preferred location, and therefore locate the virtual machine DRS affinity group corresponding to Site A, namely SiteA_VMs.
4. The expected result is:
   a. VM1 is deployed into SiteA_VMs, residing on host CL1-H1 or CL1-H2.
   b. VM1 is deployed onto a datastore from the SiteA_Preferred_CA_Enabled storage reservation policy, for example: VPLEX_Distributed_LUN_SiteA_Preferred_01 or VPLEX_Distributed_LUN_SiteA_Preferred_02.

Scenario 2: Deploy VM2 with affinity to Site B
This scenario describes deploying a virtual machine (VM2) with affinity to Site B:
1. During virtual machine deployment, the user chooses a storage reservation policy named SiteB_Preferred_CA_Enabled.
2. This storage reservation policy choice filters the cluster choice to only those clusters with that reservation policy. In this case, cluster 1.

3. Based on the selected storage reservation policy, Federation Enterprise Hybrid Cloud workflows programmatically determine that Site B is the preferred location, and therefore locate the virtual machine DRS affinity group corresponding to Site B, namely SiteB_VMs.
4. The expected result is:
   a. VM2 is deployed into SiteB_VMs, meaning it resides on host CL1-H3 or CL1-H4.
   b. VM2 is deployed onto a datastore from the SiteB_Preferred_CA_Enabled storage reservation policy, for example: VPLEX_Distributed_LUN_SiteB_Preferred_01 or VPLEX_Distributed_LUN_SiteB_Preferred_02.

ViPR virtual arrays
There must be at least one virtual array for each site. By configuring the virtual arrays in this way, ViPR can discover the VPLEX and storage topology. You should carefully plan and perform this step because it is not possible to change the configuration after resources have been provisioned without first disruptively removing the provisioned volumes.

ViPR virtual pools
ViPR virtual pools for block storage offer two options under High Availability: VPLEX local and VPLEX distributed. When you specify local high availability for a virtual pool, the ViPR storage provisioning services create VPLEX local virtual volumes. If you specify VPLEX distributed high availability for a virtual pool, the ViPR storage provisioning services create VPLEX distributed virtual volumes.

To configure a VPLEX distributed virtual storage pool through ViPR:
1. Ensure a virtual array exists for both sites, with the relevant physical arrays associated with those virtual arrays. Each VPLEX cluster must be a member of the virtual array at its own site only.
2. Before creating a VPLEX high-availability virtual pool at the primary site, create a local pool at the secondary site. This is used as the target virtual pool when creating VPLEX distributed virtual volumes.
3. When creating the VPLEX high-availability virtual pool on the source site, select the source storage pool from the primary site, the remote virtual array, and the remote pool created in Step 2. This pool is used to create the remote mirror volume that makes up the remote leg of the VPLEX virtual volume.

Note: This pool is considered remote when creating the high-availability pool because it belongs to VPLEX cluster 2 and we are creating the high-availability pool from VPLEX cluster 1.
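The virtual pools configured here can also be inspected programmatically. The sketch below is a rough, hypothetical example using Python and the requests library against the ViPR Controller REST API; the /login and /block/vpools paths, port 4443, the X-SDS-AUTH-TOKEN header, and the response key are assumptions to be verified against the ViPR REST API reference for your release, and the hostname and credentials are placeholders.

```python
# Hypothetical sketch: authenticate to the ViPR Controller REST API and list the
# block virtual pools, for example to confirm the VPLEX distributed pool exists.
import requests

VIPR = "https://vipr.example.local:4443"        # assumed ViPR Controller address/port
session = requests.Session()
session.verify = False                          # lab only; use proper certificates in production
session.headers["Accept"] = "application/json"

# ViPR returns an auth token in the X-SDS-AUTH-TOKEN header after /login.
resp = session.get(VIPR + "/login", auth=("root", "password"))
resp.raise_for_status()
session.headers["X-SDS-AUTH-TOKEN"] = resp.headers["X-SDS-AUTH-TOKEN"]

vpools = session.get(VIPR + "/block/vpools").json()
for vpool in vpools.get("virtualpool", []):     # response key assumed
    print(vpool.get("name"), vpool.get("id"))
```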

Figure 20 shows this configuration, where VPLEX High Availability Virtual Pool represents the VPLEX high-availability pool being created.

Figure 20. Interactions between local and VPLEX distributed pools

As described in Site affinity for virtual machines, Federation Enterprise Hybrid Cloud workflows leverage the winning site in a VPLEX configuration to determine which site to map VMs to. To enable active/active clusters, it is therefore necessary to create two sets of datastores: one set that will win on Site A and another set that will win on Site B. To enable this, you need to configure an environment similar to Figure 20 for Site A, and the inverse of it for Site B (where the local pool is on Site A, and the high-availability pool is configured from Site B).

ViPR and VPLEX consistency groups interaction
VPLEX uses consistency groups to maintain common settings on multiple LUNs. To create a VPLEX consistency group using ViPR, a ViPR consistency group must be specified when creating a new volume. ViPR consistency groups are used to control multi-LUN consistent snapshots and have a number of important rules associated with them when creating VPLEX distributed devices:
- All volumes in any given ViPR consistency group must contain only LUNs from the same physical array. As a result of this, the Federation Enterprise Hybrid Cloud STaaS workflows create a new consistency group per physical array, per vSphere cluster, per site.
- All VPLEX distributed devices in a given ViPR consistency group must have source and target backing LUNs from the same pair of arrays.

As a result of these two rules, it is a requirement of the Federation Enterprise Hybrid Cloud that an individual ViPR virtual pool is created for every physical array that provides physical pools for use in a VPLEX distributed configuration.

Virtual Pool Collapser function
Federation Enterprise Hybrid Cloud STaaS workflows use the name of the chosen ViPR virtual pool as part of the naming for the vRealize Storage Reservation Policy (SRP) that the new datastore is added to. The Virtual Pool Collapser (VPC) function of Federation Enterprise Hybrid Cloud collapses the LUNs from multiple virtual pools into a single SRP. The VPC function can be used in the scenario where multiple physical arrays provide physical storage pools of the same configuration or service level to VPLEX, but through different virtual pools, and where it is required that all LUNs provisioned across those physical pools are collapsed into the same SRP.

VPC can be enabled or disabled at a global Federation Enterprise Hybrid Cloud level. When enabled, the Federation Enterprise Hybrid Cloud STaaS workflows examine the naming convention of the selected virtual pool to determine which SRP the datastore should be added to. If the virtual pool name has the string _VPC- in it, then Federation Enterprise Hybrid Cloud knows that it should invoke the VPC logic.

Virtual Pool Collapser example
Figure 21 shows an example of VPC in use. In this scenario, the administrator has enabled the VPC function and created two ViPR virtual pools:
- GOLD_VPC , which has physical pools from Array 1
- GOLD_VPC , which has physical pools from Array 2

When determining how to construct the SRP name to be used, the VPC function uses only that part of the virtual pool name that exists before _VPC-. In this example, that results in the term GOLD, which then contributes to the common SRP name of SITEA_GOLD_CA_Enabled. This makes it possible to conform to the rules of ViPR consistency groups, as well as providing a single SRP for all datastores of the same type, which maintains abstraction and balanced datastore usage at the vRealize layer.

Figure 21. Virtual Pool Collapser example

In the example shown in Figure 21, all storage is configured to win on a single site (Site A). To enable true active/active vSphere Metro Storage Clusters, additional pools should be configured in the opposite direction, as mentioned in Continuous availability storage considerations.

Note: For the DA release, you should only create a single virtual pool of type VPLEX distributed, and this pool should only contain physical pools from a single array. The GA release supports multiple VPLEX distributed pools, and provides a function that allows all LUNs provisioned from these pools to be consolidated into a single vRealize Automation storage reservation policy, if required.
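As an illustration of the Virtual Pool Collapser naming convention described above, the following minimal Python sketch shows how an SRP name could be derived from a virtual pool name containing the _VPC- marker. The site prefix and _CA_Enabled suffix are assumptions taken from the SITEA_GOLD_CA_Enabled example, and the pool names used as inputs are hypothetical; the real STaaS workflows run in vRealize Orchestrator and may differ in detail.

```python
# Minimal sketch of the Virtual Pool Collapser naming idea described above.
# The site prefix and "_CA_Enabled" suffix are assumptions taken from the
# SITEA_GOLD_CA_Enabled example; the real STaaS workflow logic may differ.

def srp_name(vpool_name: str, site: str) -> str:
    """Derive a storage reservation policy name from a ViPR virtual pool name."""
    marker = "_VPC-"
    if marker in vpool_name:
        # VPC logic: keep only the part of the pool name before the marker, so pools
        # of the same service level from different arrays collapse into one SRP.
        service_level = vpool_name.split(marker, 1)[0]
    else:
        # Without the marker, the pool name is used as-is.
        service_level = vpool_name
    return "{}_{}_CA_Enabled".format(site.upper(), service_level)

# Two pools of the same service level backed by different arrays map to one SRP
# (the pool names below are hypothetical):
print(srp_name("GOLD_VPC-ARRAY1", "SiteA"))   # SITEA_GOLD_CA_Enabled
print(srp_name("GOLD_VPC-ARRAY2", "SiteA"))   # SITEA_GOLD_CA_Enabled
```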

Storage provisioning
VPLEX distributed storage is provisioned to the Workload vSphere clusters in the environment using the Federation Enterprise Hybrid Cloud catalog item named Provision Cloud Storage. As shown in Figure 20, these VPLEX volumes can be backed by VMAX, VNX, or XtremIO arrays.

Note: The Federation recommends that you follow the best practice guidelines when deploying any of the supported platform technologies. The Federation Enterprise Hybrid Cloud does not require any variation from these best practices.

The workflow interacts with both ViPR and vRealize Automation to create the storage, presents it to the chosen vSphere cluster, and adds the new volume to the relevant vRealize storage reservation policy.

As with the single-site topology, vSphere clusters are made eligible for storage provisioning by tagging them with vRealize Automation custom properties. However, in this case they are defined as CA-enabled clusters, that is, they are part of a vMSC that spans both sites in the environment. This tagging is done during the installation and preparation of vSphere clusters for use by the Federation Enterprise Hybrid Cloud, using the CA Cluster Onboarding workflow provided as part of the Federation Enterprise Hybrid Cloud self-service catalog.

As local-only vSphere clusters can also be present in a CA topology, the Provision Cloud Storage catalog item automatically presents only ViPR VPLEX distributed virtual storage pools to provision from when you attempt to provision to a CA-enabled vSphere cluster.

Recovery of cloud management platform

Standard dual-site/single vCenter topology
This model provides no resilience/recovery for the cloud management platform. To enable this, you should use the CA dual-site/single vCenter variant.

CA dual-site/single vCenter topology
As all of the management pods reside on a vMSC, management components are recovered through vSphere HA mechanisms. Assuming the VPLEX Witness has been deployed in a third fault domain, this should happen automatically.

Dual-site/single vCenter topology backup
The primary option for backup in a dual-site/single vCenter topology is the Redundant Avamar/single vCenter configuration, though the Standard Avamar configuration may also be used if backup is only required on one of the two sites. Both options are described in Chapter 7.

This chapter presents the following topics:
- Overview
- Standard dual-site/dual vCenter topology
- Disaster recovery dual-site/dual vCenter topology
- Disaster recovery network considerations
- vCenter Site Recovery Manager considerations
- vRealize Automation considerations
- Disaster recovery storage considerations
- Recovery of cloud management platform
- Best practices
- Backup in dual-site/dual vCenter topology

This chapter describes networking and storage considerations for a dual-site/dual vCenter topology in the Federation Enterprise Hybrid Cloud solution.

When to use the dual-site/dual vCenter topology
The dual-site/dual vCenter Federation Enterprise Hybrid Cloud topology may be used in either of the following scenarios:
- Two sites are present that require management via independent vCenter instances and a single Federation Enterprise Hybrid Cloud management stack/portal. This model has no additional storage or network considerations above the single-site/single vCenter model because each site has totally independent infrastructure resources with independent vCenters, but is managed by the same Federation Enterprise Hybrid Cloud management platform/portal.
  Note: In this case, the scope of the term site is at the user's discretion. This can be separate individual geographical locations, or independent islands of infrastructure in the same geographical location, such as independent Vblock platforms.
- DR is required. This topology also requires that EMC RecoverPoint is available.
  Note: Typically, this model is used when the latency between the two physical data center locations exceeds 10 ms.

Standard dual-site/dual vCenter topology
The standard dual-site/dual vCenter Federation Enterprise Hybrid Cloud architecture controls two sites, each with independent islands of infrastructure, each using its own vCenter instance but controlled by a single Federation Enterprise Hybrid Cloud management platform/portal. This architecture provides a mechanism to extend an existing Federation Enterprise Hybrid Cloud by adding additional independent infrastructure resources to an existing cloud, when resilience of the management platform itself is not required, but where the resources being added either already belong to an existing vCenter or it is desirable for them to do so. Figure 22 shows the architecture used for this topology option.

49 Figure 22. Federation Enterprise Hybrid Cloud standard dual-site/dual vcenter architecture The DR dual-site/dual vcenter topology for the Federation Enterprise Hybrid Cloud solution provides protection and restart capability for workloads deployed to the cloud. Management and workload virtual machines are placed on storage protected by RecoverPoint and are managed from VMware vcenter Site Recovery Manager. This topology allows for multi-site resilience across two sites with DR protection for both the management platform and workload virtual machines on the surviving site. Figure 23 shows the overall architecture of the solution. 49

50 Figure 23. Federation Enterprise Hybrid Cloud DR dual-site/dual vcenter architecture Physical network design The Federation Enterprise Hybrid Cloud solution deploys a highly resilient and fault-tolerant network architecture for intra-site network, compute, and storage networking. To achieve this, it uses features such as redundant hardware components, multiple link aggregation technologies, dynamic routing protocols, and high availability deployment of logical networking components. The DR dual-site/dual vcenter topology of the Federation Enterprise Hybrid Cloud solution requires network connectivity across two sites using WAN technologies. It maintains the resiliency of the Federation Enterprise Hybrid Cloud by implementing a similarly high-availability and fault tolerant network design with redundant links and dynamic routing protocols. The high-availability features of the solution, which can minimize downtime and service interruption, address any component-level failure within the site. Throughput and latency requirements are other important aspects of physical network design. To determine these requirements, consider carefully both the size of the workload and data that must be replicated between sites and the requisite RPOs and RTOs for your 50

51 applications. Traffic engineering and QOS capabilities can be used to guarantee the throughput and latency requirements of data replication. Requirements based on the management model The DR dual-site/dual vcenter topology is supported on both the distributed management model and the collapsed management model. In the collapsed management model, the Automation Pod components must be on a different physical network to the Core and NEI Pod components so that they can be failed over using VMware Site Recovery Manager, and the Automation network re-converged without affecting the Core and NEI Pod components on the source site. Network Controller placement In a DR dual-site/dual vcenter topology, all NSX Controller components are duplicated with one full set residing on each sites corresponding NEI Pod. NSX best practice suggests that each controller be placed on separate physical hosts. When NSX is the chosen networking technology, this solution uses a minimum of three NSX controllers. When using the Federation Enterprise Hybrid Cloud Sizing tool, appropriate consideration should be given to the choice of server specification for the NEI Pod to ensure efficient use of hardware resources, given that a three-server minimum will be enforced. VMware Anti-Affinity Rules should be used to ensure that the NSX controllers reside on different hosts in optimum conditions. Figure 24 shows how the various NSX components are deployed independently on both sites within the topology. Figure 24. NEI Pods from the cloud vcenter Server instances on Site A and Site B NSX Manager The Core Pods on Site A and Site B host distinct instances of NSX Manager. On each site, NSX Manager integrates with the vcenter Server instance on that site to provide network and security management for the site s logical networking and security requirements. NSX controllers NSX controllers are deployed in a high-availability architecture on the NEI Pod at each site and are managed by the local NSX Manager. NSX controllers provide the learning and forwarding of data packets to support virtual machine communication. The deployment of 51

52 NSX controllers helps to simplify the physical infrastructure and eliminates the need for multicast support in the physical network infrastructure to enable intra-vxlan communication. Perimeter NSX Edge The Federation Enterprise Hybrid Cloud solution provides multitier security support and security policy enforcement by deploying NSX Edges as perimeter firewalls. An NSX Edge can be deployed at different tiers to support tiered security policy control. Each site's NSX Manager deploys corresponding NSX Edge Services Gateways (ESGs) configured for services such as firewall, DHCP, NAT, VPN, and SSL-VPN. Logical switches NSX provides logical networking support through logical switches corresponding to VXLAN segments. These logical switches support the extension of Layer 2 connections between various virtual machines and other networking components such as NSX Edges and logical routers. The use of VXLAN also increases the scalability of the solution. For the DR for the Federation Enterprise Hybrid Cloud solution topology, transit logical switches are required on both sites to provide connections between the DLRs and NSX Edges, as shown in Figure 25 and Figure 26. Duplicate logical switches are also needed on both sites for use by the workload virtual machines. Figure 25. Logical switches on Site A 52

Figure 26. Logical switches on Site B

Distributed logical router
Three-tier applications are the most commonly deployed model in enterprises and can be used to demonstrate the network and security provisioning capabilities of NSX when integrated with vRealize Automation. The web tier is external-facing and load balanced, serving web pages to users. Each web server needs to communicate with the application server; the application server in turn writes to and retrieves data from the database server.

Where vRealize Automation provisions multi-machine workloads to networks not created by vRealize Automation (that is, to a pre-provisioned deployment), the networks and router must be created before vRealize Automation can provision the virtual machines. The network adapters of the deployed virtual machines are connected to their respective DLR, and an IP address is assigned using either Dynamic Host Configuration Protocol (DHCP) or, as in this solution, a static IP address. The DLR provides the default gateway services for the virtual machines connected to the pre-provisioned application networks.

The use of the DLR optimizes the traffic flow and throughput for communication between the virtual machines of the multitier applications. Using the transit logical switch segment, the DLR provides a routed path to the ESG and thereby to the physical core for north/south traffic.

The DLR control virtual machine is deployed on the NEI Pod in high-availability mode. In this mode, two virtual machines are deployed on separate hosts as an active/passive pair. The active/passive pair maintains state tables and verifies each other's availability through heartbeats. When a failure of the active DLR is detected, the passive DLR immediately takes over and maintains the connection state and workload availability. A DLR kernel module is deployed to each NSX-enabled Workload Pod host to provide east/west traffic capability. To provide default gateway services on both sites, a corresponding DLR must be deployed on both sites, as shown in Figure 27.

Figure 27. DLR interfaces on Site A and Site B

IP mobility between the primary and recovery sites
This Federation Enterprise Hybrid Cloud solution supports migration of virtual machines to the recovery site without the need to change the IP addresses of the virtual machines. Default gateways on each site are created using DLRs. By configuring the DLRs on both sites identically, the same IP addresses and IP subnets are assigned to their corresponding network interfaces, as shown in Figure 27. In this way, there is no need to reconfigure the workloads' default gateway settings in a recovery scenario.

A dynamic routing protocol is configured for the logical networking and is integrated with the physical networking to support dynamic network convergence and IP mobility for the networks (subnets) supported for DR. This approach simplifies the solution and eliminates the need to deploy additional services to support IP address changes.

A prefix list configured on the DLR specifies the subnets of directly connected public networks. The DLR can also support private networks; these networks are reachable only within the DLR, with access prohibited from outside the DLR networks. A route redistribution policy is configured so that networks defined in the prefix list can be redistributed to the dynamic routing protocol on the primary site DLR where the virtual machines are deployed and running. The route redistribution policy on the recovery site DLR is configured to deny redistribution of networks connected to the recovery site, as shown in Figure 28.

55 Figure 28. Route redistribution policy on Site A and Site B When you execute a recovery plan in the event of a disaster or a planned migration, the plan can automatically modify the routing configuration of the primary site DLR to deny redistribution of failing over subnets after the virtual machines are powered off. A subsequent recovery step can modify the route redistribution policy on the recovery site DLR to permit directly connected public networks into the dynamic routing protocols before powering on the virtual machines on the recovery site. This dynamic network convergence ensures that the virtual machines can reach infrastructure services such as Domain Name System (DNS) and Microsoft Active Directory, and reduces the recovery time. You can implement an additional level of routing control from a site to the WAN peering point to ensure that only appropriate networks are advertised. To enable network failover with the same IP subnet on both sites, a network can be active only on the primary site or 55

the recovery site. To support this, the unit of failover for a network is restricted to a single compute cluster. All virtual machines on a compute cluster can fail over to the recovery site without affecting virtual machines running on other compute clusters. If the network spans multiple clusters, the administrator must configure the recovery plan to ensure that all virtual machines on the same network are active only on one site.

Security design
The Federation Enterprise Hybrid Cloud solution provides multitier security services for virtual machines deployed in the hybrid cloud environment.

Security policies
Security policies use security groups to simplify security policy management. A security group is a collection of objects, such as virtual machines, to which a security policy can be applied. A network security administrator or application security administrator configures the security policies to secure application traffic according to business requirements. To ensure consistent security policy enforcement for virtual machines on the recovery site, you must configure the security policies on both the primary and recovery sites.

Perimeter Edge security
Perimeter edges are deployed using NSX Edges on both the primary and recovery sites. The perimeter NSX Edge provides security features, such as stateful firewalls, and other services such as DHCP, NAT, VPN, and load balancing. The configuration of these services must be maintained on both the primary and recovery site perimeter edges. This ensures consistent security policy enforcement in the case of DR or planned migration of virtual machines to the recovery site.

Distributed firewall
The Federation Enterprise Hybrid Cloud solution supports the distributed firewall capability of NSX to protect virtual machine communication and optimize traffic flow. The distributed firewall is configured through the Networking and Security > Service Composer > Security Groups section of the vSphere Web Client. Figure 29 shows various security groups that may be pre-created in the NSX security configuration.

Figure 29. Security groups on the primary and recovery sites
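Because the security groups must exist with identical names on both sites, it can be useful to compare them programmatically. The sketch below is a hypothetical Python example against the NSX for vSphere REST API; the /api/2.0/services/securitygroup/scope/globalroot-0 path and the XML element names are recalled from the NSX-v API and should be verified against the NSX API guide for your release, and the manager addresses and credentials are placeholders.

```python
# Hypothetical sketch: compare security group names between the two NSX Managers,
# so that recovered virtual machines land in identically named groups on the other site.
import requests
import xml.etree.ElementTree as ET

def security_group_names(nsx_manager, user, password):
    # Endpoint assumed from the NSX for vSphere API (verify against the API guide).
    url = "https://{}/api/2.0/services/securitygroup/scope/globalroot-0".format(nsx_manager)
    resp = requests.get(url, auth=(user, password), verify=False)   # lab only
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    return {sg.findtext("name") for sg in root.iter("securitygroup")}

site_a = security_group_names("nsxmgr-a.example.local", "admin", "password")
site_b = security_group_names("nsxmgr-b.example.local", "admin", "password")

print("Groups missing on Site B:", sorted(site_a - site_b))
print("Groups missing on Site A:", sorted(site_b - site_a))
```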

57 The Federation Enterprise Hybrid Cloud solution provides an option to associate security group information with a machine blueprint. When a business user deploys the blueprint, the virtual machine is included in the security group configuration. This ensures enforcement of the applicable security policy as soon as the virtual machine is deployed. As shown in Figure 30, a corresponding security group must be created on the recovery site to ensure that a virtual machine that migrates to the recovery site is associated with the appropriate security groups and that security policies are consistently enforced on the recovery site. Figure 30. Security group on the recovery site Overview This DR for Federation Enterprise Hybrid Cloud solution incorporates storage replication using RecoverPoint, storage provisioning using ViPR, and integration with Site Recovery Manager to support DR services for applications and virtual machines deployed in the hybrid cloud. Site Recovery Manager natively integrates with vcenter and NSX to support DR, planned migration, and recovery plan testing. RecoverPoint and ViPR Storage Replication Adapters Site Recovery Manager integrates with EMC RecoverPoint storage replication and ViPR automated storage services via EMC Storage Replication Adapters (SRAs). The SRAs control the EMC RecoverPoint replication process. The EMC RecoverPoint SRA controls the Automation Pod datastores. The ViPR SRA controls protected Workload Pod datastores. Site mappings To support DR services, the Site Recovery Manager configuration must include resource mappings between the vcenter Server instance on the protected site and the vcenter Server instance on the recovery site. The mappings enable the administrator to define automated recovery plans for failing over application workloads between the sites according to defined RTOs and RPOs. The resources you need to map include resource pools, virtual machine folders, networks, and the placeholder datastore. The settings must be configured on both the protected and recovery sites to support application workload recovery between the two sites. Resource pool mappings A Site Recovery Manager resource pool specifies the compute cluster, host, or resource pool that is running a protected application. Resource pools must be mapped between the protected site and the recovery site in both directions so that, when an application fails over, the application can then run on the mapped compute resources on the recovery site. Folder mappings When virtual machines are deployed using the Federation Enterprise Hybrid Cloud solution, the virtual machines are placed in particular folders in the vcenter Server inventory to simplify administration. By default, virtual machines are deployed in a folder named VRM. This folder must be mapped between the protected and recovery sites in both directions. 57

Network mappings
Virtual machines connect to different networks when deployed. Applications deployed with DR support must be deployed on networks that have been configured as defined in the Disaster recovery network considerations section. The networks must be mapped in Site Recovery Manager between the protected and recovery sites in both directions. For testing recovery plans, you should deploy a test network and use test network mappings when you create the recovery plan.

Placeholder datastore
For every protected virtual machine, Site Recovery Manager creates a placeholder virtual machine on the recovery site. The placeholder virtual machine retains the virtual machine properties specified by the global inventory mappings or specified during protection of the individual virtual machine. A placeholder datastore must be accessible to the compute clusters that support the DR services. The placeholder datastore must be configured in Site Recovery Manager and must be associated with the compute clusters.

Disaster recovery support for Automation Pod vApps
The Federation Enterprise Hybrid Cloud uses several components that are deployed as vSphere vApps. Currently this list includes:
- EMC ViPR
- EMC ViPR SRM

Site Recovery Manager protects virtual machines, but does not preserve the vApp structure required for vCenter Operations Manager and ViPR virtual machines to function. The high-level steps to achieve recovery of vApps are:
1. Deploy the vApp identically in both sites.
2. Vacate the vApp on the recovery site (delete the virtual machines, but retain the virtual machine container).
3. Protect the vApp on the protected site through Site Recovery Manager, mapping the vApp containers from both sites.
4. Reapply the virtual machine vApp settings on the placeholder virtual machines.

For additional details on the process, and if other vApps in the environment require protection, see the VMware Knowledge Base topic: vCenter Operations Manager 5.0.x: Using Site Recovery Manager to Protect a vApp Deployment.

Protection groups
A protection group is the unit of failover in Site Recovery Manager. The Federation Enterprise Hybrid Cloud solution supports failover at the granularity of the Workload Pod; that is, all virtual machines running on a particular pod must fail over together. In the context of the DR dual-site/dual vCenter topology, two Workload Pods are assigned to a DR pair, where one pod is the primary and is considered the protected cluster, and the second pod is on the alternate site and is considered the recovery cluster. Consequently, for the DR dual-site/dual vCenter topology there is a 1:1 mapping between a DR pair and a protection group.

A protection group contains all the replicated datastores that a pod can access and all the virtual machines that are running on the pod. When you deploy new virtual machines on a pod, those virtual machines are automatically added to the corresponding protection group and fail over with that protection group.

59 Recovery plans Recovery plans enable administrators to automate the steps required for recovery between the primary and recovery sites. A recovery plan may include one or more protection groups. You can test recovery plans to ensure that protected virtual machines recover correctly to the recovery site. Running a recovery plan test does not affect the production virtual machines. The dynamic networking capabilities of this DR for Federation Enterprise Hybrid Cloud solution eliminate the need to change the IP addresses of the virtual machines when they failover from one site to the other. Instead, the application network moves with the virtual machines and supports virtual machine communication outside the network. You can modify a recovery plan to automate network convergence and to ensure security policy compliance on the recovery site. Note: A recovery plan must be manually created per DR-enabled cluster - two per pair to enable failover and failback. All further STaaS operations for that cluster are completely programmatic. Collapsed management model When configuring the protection group and recovery plans for the Automation Pod components under a collapsed management model, you must exclude all Core and NEI Pod components from the configurations. This is to ensure that system does not attempt to fail over the Core and NEI components from one site to the other. Configuring primary and recovery site endpoints The Federation Enterprise Hybrid Cloud solution uses vrealize Automation to provide automated provisioning and management of cloud resources such as storage and virtual machines, and to support multitenancy. To support DR services for cloud resources, you must configure vrealize Automation with at least two virtual (vcenter) endpoints. The first endpoint is configured to support IaaS services; this endpoint uses the vcenter Server instance where the storage and virtual machines for the primary site are deployed. The second endpoint is configured to serve as the recovery site for the resources of the primary site. To configure each endpoint, a separate IaaS agent must be installed on the IAAS server that is running vrealize Automation. Configuring the infrastructure for disaster recovery services The tenant administrator must assign the compute resources for the Workload Pods, on both the protected and recovery sites, to the fabric administrator for allocation to business groups. As a best practice, and to simplify DR and other operations, you should dedicate Workload Pods that support DR services to the deployment of virtual machines with DR support. You should deploy virtual machines without DR support requirement to Workload vsphere clusters that are tagged as single site, as described in Storage provisioning in Chapter 3. When replicated storage is provisioned to a protected Workload Pod, the fabric administrator must update the reservation policies for the relevant business groups to allocate the newly provisioned storage. Federation Enterprise Hybrid Cloud STaaS workflows automatically add newly provisioned storage to the appropriate protection group. This ensures that the virtual machines deployed on the storage are automatically protected and are included in the recovery plans defined for the Workload Pod. 59

60 Configuring application blueprints for disaster recovery Storage reservation policies are used to deploy virtual machine disks to a datastore that provides the required RPO. The Site Recovery Manager administrator must create storage reservation policies to reflect the RPOs of different datastores. The fabric administrator must then assign the policies to the appropriate datastores of the compute clusters. Business administrators can configure the blueprints for virtual machines so that business users can select an appropriate storage reservation policy when deploying an application. The business user requests a catalog item in the Federation Enterprise Hybrid Cloud tenant portal, selects storage for the virtual machines, and assigns an appropriate storage reservation policy for the virtual machines disks based on the required RPO. The virtual machines disks are then placed on datastores that support the required RPO. The virtual machines are automatically deployed with the selected DR protection service and associated security policy for both the primary and recovery sites. ViPR managed Workload Pod storage For the Workload Pods, ViPR SRA manages the protection of ViPR-provisioned storage. ViPR SRA provides an interface between Site Recovery Manager and ViPR Controller. ViPR Controller, which is part of the Automation Pod, must be running and accessible before the ViPR SRA can instruct ViPR to control the EMC RecoverPoint replication functions. This means that the ViPR vapp must be functioning before Site Recovery Manager can execute a recovery of the Workload Pods. Additional storage at each site The NEI clusters require site-specific storage that does not need to be protected by EMC RecoverPoint. These datastores hold the NSX controllers, NSX Edges, and NSX DLRs. Site Recovery Manager also requires site-specific datastores to contain the placeholder virtual machines on the tenant and automation clusters. ViPR virtual arrays There must be at least one virtual array for each site. By configuring the virtual arrays in this way, ViPR can discover the EMC RecoverPoint and storage topology. You should carefully plan and perform this step because it is not possible to change the configuration after resources have been provisioned, without first disruptively removing the provisioned volumes. ViPR virtual pools When you specify EMC RecoverPoint as the protection option for a virtual pool, the ViPR storage provisioning services create the source and target volumes and the source and target journal volumes, as shown in Figure 31. Figure 31. ViPR/EMC RecoverPoint protected virtual pool Each DR-protected/recovery cluster pair has storage that replicates (under normal conditions) in a given direction, for example, from Site A to Site B. To allow active / active site configuration, additional DR cluster pairs should be configured whose storage replicates 60

in the opposite direction. You must create two sets of datastores: one set that will replicate from Site A and another set that will replicate from Site B. To enable this, you need to configure an environment similar to Figure 31 for Site A, and the inverse of it for Site B (where the protected source pool is on Site B, and the local target pool is on Site A).

RecoverPoint journal considerations
Every RecoverPoint-protected LUN requires access to a journal LUN to maintain the history of disk writes to the LUN. The performance of the journal LUN is critical to the overall performance of the system attached to the RecoverPoint-protected LUN, and therefore its performance capability should be in line with the expected performance needs of that system. By default, ViPR uses the same virtual pool for both the target and the journal LUN for a RecoverPoint copy, but it does allow you to specify a separate or dedicated pool. In both cases, the virtual pool and its supporting physical pools should be sized to provide adequate performance.

Storage provisioning
EMC RecoverPoint-protected storage is provisioned to the Workload vSphere clusters in the environment using the catalog item named Provision Cloud Storage.

Note: The Federation recommends that you follow the best practice guidelines when deploying any of the supported platform technologies. The Federation Enterprise Hybrid Cloud does not require any variation from these best practices.

The workflow interacts with both ViPR and vRealize Automation to create the storage, present it to the chosen vSphere cluster, and add the new volume to the relevant vRealize storage reservation policy.

As with the single-site topology, vSphere clusters are made eligible for storage provisioning by tagging them with vRealize Automation custom properties. However, in this case they are defined as DR-enabled clusters, that is, they are part of a Site Recovery Manager configuration that maps protected clusters to recovery clusters. This tagging is done during the installation and preparation of vSphere clusters for use by the Federation Enterprise Hybrid Cloud, using the DR Cluster Onboarding workflow provided as part of the Federation Enterprise Hybrid Cloud self-service catalog.

As local-only vSphere clusters can also be present in a DR dual-site/dual vCenter topology, when you attempt to provision to a DR-enabled vSphere cluster, the Provision Cloud Storage catalog item automatically presents only EMC RecoverPoint-protected virtual storage pools to provision from.

Recovery of cloud management platform

Standard dual-site/dual vCenter topology
This model provides no resilience/recovery for the cloud management platform. To enable this, you should use the DR dual-site/dual vCenter variant.

DR dual-site/dual vCenter topology
In the DR dual-site/dual vCenter topology, EMC RecoverPoint and Site Recovery Manager protect the Automation Pod. This allows for recovery between Site A and Site B in various planned and unplanned recovery scenarios. The EMC RecoverPoint SRA for Site Recovery Manager is used to interact with EMC RecoverPoint during a failover of the Automation Pod's resources.

The Core and NEI Pods are created manually on both sites to mirror functionality such as NSX dynamic routing, NSX security groups, and NSX security policies (firewall rules), and to host the Site Recovery Manager servers. As a result, there is no need to protect them using EMC RecoverPoint or Site Recovery Manager.

In a distributed management model, this is accomplished by excluding the Core and NEI Pods from the process of creating associated datastore replications, protection groups, and recovery plans for the vSphere ESXi clusters hosting those functions. In a collapsed management model, all components are on the same vSphere ESXi cluster, so the Core and NEI components must be excluded from the Site Recovery Manager recovery plans and protection groups for that cluster. Despite residing on the same vSphere cluster, the Automation Pod components should be on a distinct network and a distinct set of datastores, so that they can be failed over between sites without affecting Core or NEI components.

The vCenter Server instances on each site manage the NEI, Automation, and Workload Pods on their respective sites and act as the vSphere endpoints for vRealize Automation. The vCenter Server instances are integrated using Site Recovery Manager, which maintains failover mappings for the networks, clusters, and folders between the two sites.

Naming conventions

VMware vCenter Site Recovery Manager protection groups
Protection group names must match the Workload Pod names. For example, if SAComputePod2 is the name of Workload Pod 2 on Site A, then the Site Recovery Manager protection group must also be named SAComputePod2. The solution relies on this correspondence when performing several of the automation tasks necessary for successful failover and subsequent virtual machine management through vRealize Automation.

VMware NSX security groups
Security group names must be the same on both sites.

VMware NSX security policies
Security policy names must be the same on both sites.

EMC ViPR virtual pools
ViPR virtual pool names must be meaningful because they are the default names for storage reservation policies. For example, when creating Tier 1 DR-protected storage with an RPO of 10 minutes, Tier 1 DR Enabled 10 Minute RPO is an appropriate name.

NSX logical networks
Each Workload Pod (compute cluster) must have its own transport zone. The NEI Pod must be a member of each transport zone. If a transport zone (Layer 3 domain) spans multiple compute clusters, the corresponding Site Recovery Manager protection groups must be associated with the same Site Recovery Manager recovery plan. The reason for this is that, when a transport zone spans multiple compute clusters, network mobility from Site A to Site B affects the virtual machines deployed across these clusters; therefore, the clusters must be failed over as a set.

DR dual-site/dual vCenter topology backup
The primary option for backup in a DR dual-site/dual vCenter topology is the Redundant Avamar/dual vCenter configuration. This option is described in detail in Chapter 7.
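The transport-zone rule above lends itself to a simple automated check. The following minimal Python sketch verifies that compute clusters sharing a transport zone are associated with the same recovery plan; the dictionaries are placeholders for data gathered from NSX and Site Recovery Manager, and the names shown are hypothetical.

```python
# Hypothetical sketch: check that compute clusters sharing an NSX transport zone are
# associated with the same Site Recovery Manager recovery plan, as required above.
# The dictionaries are placeholders for data gathered from NSX and SRM.

transport_zone_of = {          # cluster -> NSX transport zone
    "SAComputePod1": "TZ-Tenant1",
    "SAComputePod2": "TZ-Tenant1",
    "SAComputePod3": "TZ-Tenant2",
}
recovery_plan_of = {           # cluster (== protection group name) -> recovery plan
    "SAComputePod1": "RP-Tenant1",
    "SAComputePod2": "RP-Tenant1",
    "SAComputePod3": "RP-Tenant2",
}

plans_per_zone = {}
for cluster, zone in transport_zone_of.items():
    plans_per_zone.setdefault(zone, set()).add(recovery_plan_of[cluster])

for zone, plans in plans_per_zone.items():
    if len(plans) > 1:
        # Clusters in this transport zone would not fail over as a set.
        print("Transport zone {} spans recovery plans: {}".format(zone, sorted(plans)))
    else:
        print("Transport zone {} OK ({})".format(zone, plans.pop()))
```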

This chapter presents the following topics:
- Overview
- Logical network topologies

This solution provides a network architecture design that is resilient to failure and enables optimal throughput, multitenancy, and secure separation. This section presents a number of generic logical network topologies. Further network considerations specific to each topology are presented in the relevant chapters.

Supported networking technologies
The Federation Enterprise Hybrid Cloud supports three different virtual networking technologies:
- VMware NSX for vSphere
- VMware vCloud Networking and Security
- VMware vSphere Distributed Switches

The dynamic network services with vRealize Automation showcased in this solution require either NSX or vCloud Networking and Security. vSphere Distributed Switches support static networking configurations only, precluding the use of VXLANs. Only NSX for vSphere is supported in the DR dual-site/dual vCenter topology.

Physical connectivity
In designing the physical architecture, the main considerations are high availability, performance, and scalability. Each layer in the architecture should be fault tolerant, with physically redundant connectivity throughout. The loss of any one infrastructure component or link should not result in loss of service to the tenant; if scaled appropriately, there is no impact on service performance. Physical network and FC connectivity to the compute layer may be provided over a converged network to converged network adapters on each compute blade, or over any network and FC adapters that are supported by the hardware platform and vSphere.

Each logical topology is designed to address the requirements of multitenancy and secure separation of the tenant resources. It is also designed to align with security best practices for segmenting networks according to their purpose or traffic type.

In the distributed management platform option, a minimum of one distributed vSwitch is required for each of the External and Cloud vCenters, unless you run the Core Pod components on standard vSwitches. In that case, a minimum of one distributed vSwitch is required for the Cloud vCenter to support NSX/vCloud Networking and Security networks. Multiple distributed vSwitches are supported in both cases.

Note: While the minimum is one distributed vSwitch per vCenter, the Federation recommends two distributed vSwitches in the Cloud vCenter. The first distributed switch should be used for cloud management networks and the second distributed switch for tenant workload networks. The sample layouts provided later in this chapter use this model and indicate which networks are on each distributed switch by indicating vds1 or vds2. Additional distributed switches can be created for additional tenants if required.

In the collapsed management platform option, there must be at least one distributed vSwitch in the Cloud vCenter to support NSX/vCloud Networking and Security networks. Multiple distributed vSwitches are supported.

Network layouts
The following network layouts are sample configurations intended to assist in understanding the elements that need to be catered for in an Enterprise Hybrid Cloud network design. They do not represent a prescriptive list of the permitted configurations for logical networks in Enterprise Hybrid Cloud. The network layout should be designed based on individual requirements.

Layout 1
Figure 32 shows one possible logical-to-physical network layout where standard vSphere switches are used for the basic infrastructural networks. This layout may be preferable where:
- Additional NIC cards are available in the hosts to be used.
- Increased protection against errors in configuration at a distributed vSwitch level is required. It does this by placing the NFS, iSCSI, and vSphere vMotion networks on standard vSwitches.
- Dynamic networking technology is required through the use of NSX or vCloud Networking and Security.

Note: All VLAN suggestions are samples only and should be determined by the network team in each particular environment.

Figure 32. Network layout 1

66 Descriptions of each network are provided in Table 4. Table 4. Network layout 1 descriptions Name Type Switch type Location VLAN Description vmk_esxi_mgmt VMkernel Standard vswitch vmk_nfs VMkernel Standard vswitch vmk_iscsi VMkernel Standard vswitch vmk_vmotion VMkernel Standard vswitch vsphere ESXi hosts External vcenter and Cloud vcenter External vcenter and Cloud vcenter External vcenter and Cloud vcenter 100 VMkernel on each vsphere ESXi host that hosts the management interface for the vsphere ESXi host itself. DPG_Core network should be able to reach this network. 200 Optional VMkernel used to mount NFS datastores to the vsphere ESXi hosts. NFS File Storage should be connected to the same VLAN / subnet or routable from this subnet. 300 Optional VMkernel used to mount iscsi datastores to the vsphere ESXi hosts. iscsi network portals should be configured to use the same VLAN / subnet or routable from this subnet. 400 VMkernel used for vsphere vmotion between vsphere ESXi hosts. DPG_Core vsphere distributed port group Distributed vswitch 1 External vcenter 500 Port group to which the management interfaces of all the core management components connect DPG_NEI vsphere distributed port group Distributed vswitch 1 Cloud vcenter 600 Port group to which the NSX controllers on the NEI Pod connect. DPG_Core network should be able to reach this network. DPG_Automation vsphere distributed port group Distributed vswitch 1 Cloud vcenter 700 Port group to which the management interfaces of all the Automation Pod components connect DPG_Tenant_Uplink vsphere distributed port group Distributed vswitch 2 Cloud vcenter 800 Port group used for all tenant traffic to egress from the cloud. Multiples may exist. VXLAN_Transport NSX / vcns distributed port group Distributed vswitch 2 Cloud vcenter 900 Port group used for VTEP endpoints between vsphere ESXi hosts to allow VXLAN traffic. ESG_DLR_Transit NSX / vcns logical switch Distributed vswitch 2 Cloud vcenter Virtual wire VXLAN segments connecting Tenant Edge and Tenant DLRs. Multiples may exist. Workload NSX / vcns logical switch Distributed vswitch 2 Cloud vcenter Virtual wire Workload VXLAN segments. Multiples may exist. Avamar_Target (Optional) Primary PVLAN N/A Physical switches 1000 Promiscuous primary PVLAN to which physical Avamar grids are connected. This PVLAN has an associated secondary isolated PVLAN (1100) in which the Avamar proxies are placed 66

Name Type Switch type Location VLAN Description DPG_AV_Proxies (Optional) Secondary PVLAN / vsphere distributed port group Distributed vswitch 2 Physical switches/ Cloud vcenter 1100 Isolated secondary PVLAN to which Avamar Proxies virtual machines are connected. This PVLAN enables proxies to communicate with Avamar Grids on the Avamar_Target network but prevents proxies from communicating with each other

Layout 2
Figure 33 shows a second possible logical-to-physical network layout where distributed vSphere switches are used for all basic infrastructural networks other than the vSphere ESXi management network. This layout may be preferable where:
- Fewer NIC cards are available in the hosts to be used.
- Increased consolidation of networks is required. It does this by placing all bar the ESXi management interfaces on distributed vSwitches.
- Dynamic networking technology is required through the use of NSX or vCloud Networking and Security.

Note: All VLAN suggestions are samples only and should be determined by the network team in each particular environment.

Figure 33. Network layout 2

Descriptions of each network are provided in Table 5.

Table 5. Network layout 2 descriptions (each entry lists name, type, switch type, location, and VLAN, followed by a description)

vmk_esxi_mgmt: VMkernel, standard vSwitch, ESXi hosts, VLAN 100. VMkernel on each vSphere ESXi host that hosts the management interface for the ESXi host itself. The DPG_Core network should be able to reach this network.

vmk_nfs: VMkernel, Distributed vSwitch 1, External vCenter and Cloud vCenter, VLAN 200. Optional VMkernel used to mount NFS datastores to the vSphere ESXi hosts. NFS file storage should be connected to the same VLAN/subnet or be routable from this subnet.

vmk_iscsi: VMkernel, Distributed vSwitch 1, External vCenter and Cloud vCenter, VLAN 300. Optional VMkernel used to mount iSCSI datastores to the vSphere ESXi hosts. iSCSI network portals should be configured to use the same VLAN/subnet or be routable from this subnet.

vmk_vmotion: VMkernel, Distributed vSwitch 1, External vCenter and Cloud vCenter, VLAN 400. VMkernel used for vSphere vMotion between vSphere ESXi hosts.

DPG_Core: vSphere distributed port group, Distributed vSwitch 1, External vCenter, VLAN 500. Port group to which the management interfaces of all the core management components connect.

DPG_NEI: vSphere distributed port group, Distributed vSwitch 1, Cloud vCenter, VLAN 600. Port group to which the NSX controllers on the NEI Pod connect. The DPG_Core network should be able to reach this network.

DPG_Automation: vSphere distributed port group, Distributed vSwitch 1, Cloud vCenter, VLAN 700. Port group to which the management interfaces of all the Automation Pod components connect.

DPG_Tenant_Uplink: vSphere distributed port group, Distributed vSwitch 2, Cloud vCenter, VLAN 800. Port group used for all tenant traffic to egress from the cloud. Multiples may exist.

VXLAN_Transport: NSX/vCNS distributed port group, Distributed vSwitch 2, Cloud vCenter, VLAN 900. Port group used for VTEP endpoints between vSphere ESXi hosts to allow VXLAN traffic.

ESG_DLR_Transit: NSX/vCNS logical switch, Distributed vSwitch 2, Cloud vCenter, virtual wire. VXLAN segments connecting Tenant Edges and Tenant DLRs. Multiples may exist.

Workload: NSX/vCNS logical switch, Distributed vSwitch 2, Cloud vCenter, virtual wire. Workload VXLAN segments. Multiples may exist.

Table 5 (continued)

Avamar_Target (optional): primary PVLAN, N/A, physical switches, VLAN 1000. Promiscuous primary PVLAN to which physical Avamar grids are connected. This PVLAN has an associated secondary isolated PVLAN (1100) in which the Avamar proxies are placed.

DPG_AV_Proxies (optional): secondary PVLAN / vSphere distributed port group, distributed vSwitch, physical switches / Cloud vCenter, VLAN 1100. Isolated secondary PVLAN to which the Avamar proxy virtual machines are connected. This PVLAN enables proxies to communicate with Avamar grids on the Avamar_Target network but prevents proxies from communicating with each other.

Layout 3

Figure 34 shows a third possible logical-to-physical network layout where distributed vSphere switches are used for all networks other than the management network. This layout may be preferable where:
- There is no requirement for dynamic networking.
- Reduction of management host count is paramount (as it removes the need for the NEI Pod).

Note: All VLAN suggestions are samples only and should be determined by the network team in each particular environment.

Figure 34. Network layout 3

Descriptions of each network are provided in Table 6.

Table 6. Network layout 3 descriptions (each entry lists name, type, switch type, location, and VLAN, followed by a description)

vmk_esxi_mgmt: VMkernel, standard vSwitch, ESXi hosts, VLAN 100. VMkernel on each vSphere ESXi host that hosts the management interface for the ESXi host itself. The DPG_Core network should be able to reach this network.

vmk_nfs: VMkernel, standard vSwitch, External vCenter and Cloud vCenter, VLAN 200. Optional VMkernel used to mount NFS datastores to the vSphere ESXi hosts. NFS file storage should be connected to the same VLAN/subnet or be routable from this subnet.

vmk_iscsi: VMkernel, standard vSwitch, External vCenter and Cloud vCenter, VLAN 300. Optional VMkernel used to mount iSCSI datastores to the vSphere ESXi hosts. iSCSI network portals should be configured to use the same VLAN/subnet or be routable from this subnet.

vmk_vmotion: VMkernel, standard vSwitch, External vCenter and Cloud vCenter, VLAN 400. VMkernel used for vSphere vMotion between vSphere ESXi hosts.

DPG_Core: vSphere distributed port group, Distributed vSwitch 1, External vCenter, VLAN 500. Port group to which the management interfaces of all the core management components connect.

DPG_Automation: vSphere distributed port group, Distributed vSwitch 1, Cloud vCenter, VLAN 600. Port group to which the management interfaces of all the Automation Pod components connect.

DPG_Tenant_Uplink: vSphere distributed port group, Distributed vSwitch 2, Cloud vCenter, VLAN 700. Port group used for all tenant traffic to egress from the cloud. Multiples may exist.

DPG_Workload_1: vSphere distributed port group, Distributed vSwitch 2, Cloud vCenter, VLAN 800. Port group used for workload traffic.

DPG_Workload_2: vSphere distributed port group, Distributed vSwitch 2, Cloud vCenter, VLAN 900. Port group used for workload traffic.

Avamar_Target (optional): primary PVLAN, N/A, physical switches, VLAN 1000. Promiscuous primary PVLAN to which physical Avamar grids are connected. This PVLAN has an associated secondary isolated PVLAN (1100) in which the Avamar proxies are placed.

DPG_AV_Proxies (optional): secondary PVLAN / vSphere distributed port group, Distributed vSwitch 2, physical switches / Cloud vCenter, VLAN 1100. Isolated secondary PVLAN to which the Avamar proxy virtual machines are connected. This PVLAN enables proxies to communicate with Avamar grids on the Avamar_Target network but prevents proxies from communicating with each other.
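The three sample layouts differ mainly in which networks are carried on standard versus distributed vSwitches and in the sample VLAN numbering. As an illustration only, the following Python sketch captures layout 1 as a simple data structure and runs two basic sanity checks that are useful when adapting a layout to a specific environment: no VLAN ID is reused across networks, and the core infrastructure networks are present. The network names and VLAN IDs are the sample values from Table 4; the check itself is a hypothetical helper, not part of the solution.

```python
# Illustrative only: models the sample "layout 1" networks from Table 4
# and runs basic sanity checks. VLAN IDs are the sample values from the
# table and should be replaced with site-specific values.

LAYOUT_1 = {
    # name: (switch, vlan)  -- vlan is None for VXLAN virtual wires
    "vmk_esxi_mgmt":     ("standard vSwitch",      100),
    "vmk_nfs":           ("standard vSwitch",      200),
    "vmk_iscsi":         ("standard vSwitch",      300),
    "vmk_vmotion":       ("standard vSwitch",      400),
    "DPG_Core":          ("distributed vSwitch 1", 500),
    "DPG_NEI":           ("distributed vSwitch 1", 600),
    "DPG_Automation":    ("distributed vSwitch 1", 700),
    "DPG_Tenant_Uplink": ("distributed vSwitch 2", 800),
    "VXLAN_Transport":   ("distributed vSwitch 2", 900),
    "ESG_DLR_Transit":   ("distributed vSwitch 2", None),
    "Workload":          ("distributed vSwitch 2", None),
}

REQUIRED = {"vmk_esxi_mgmt", "vmk_vmotion", "DPG_Core", "DPG_Automation"}

def check_layout(layout):
    """Return a list of problems found in a layout definition."""
    problems = []
    vlans = [vlan for _, vlan in layout.values() if vlan is not None]
    if len(vlans) != len(set(vlans)):
        problems.append("duplicate VLAN IDs in layout")
    missing = REQUIRED - set(layout)
    if missing:
        problems.append("missing required networks: %s" % ", ".join(sorted(missing)))
    return problems

if __name__ == "__main__":
    for message in check_layout(LAYOUT_1) or ["layout passes basic checks"]:
        print(message)
```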

This chapter presents the following topics:
- Overview
- Concepts
- Standard Avamar configuration
- Redundant Avamar/single vCenter configuration
- Redundant Avamar/dual vCenter configuration

This chapter discusses the considerations for implementing data protection, also known as backup as a service (BaaS), in the context of the Federation Enterprise Hybrid Cloud. Backup and recovery of a multitenant hybrid cloud is a complicated undertaking in which many factors must be considered, including:
- Backup type and frequency
- Impact and interaction with replication
- Recoverability methods and requirements
- Retention periods
- Automation workflows
- Interface methods (workflows, APIs, GUI, CLI, scripts, and so on)
- Implementation in a CA or DR-enabled environment

VMware vCenter Orchestrator, which is central to all of the customizations and operations used in this solution, manages operations across several EMC and VMware products, including:
- VMware vRealize Automation
- VMware vCenter
- EMC Avamar and EMC Data Protection Advisor

This solution uses Avamar as the technology to protect your datasets. Using Avamar, this backup solution includes the following characteristics:
- Abstracts and simplifies backup and restore operations for cloud users
- Uses VMware vStorage APIs for Data Protection, which provide Changed Block Tracking for faster backup and restore operations
- Provides full image backups for running virtual machines
- Eliminates the need to manage backup agents for each virtual machine
- Minimizes network traffic by deduplicating and compressing data

Note: The Federation recommends that you engage an Avamar product specialist to design, size, and implement a solution specific to your environment and business needs.

73 Scalable backup architecture The Federation Enterprise Hybrid Cloud backup configurations add scalable backup by adding the ability to configure an array of Avamar instances. Federation Enterprise Hybrid Cloud BaaS workflows automatically distribute the workload in a round-robin way across the available Avamar instances, and provides a catalog item to enable additional Avamar instances (up to a maximum of 10 Avamar replication pairs) to be added to the configuration. When new Avamar instances are added, new virtual machine workloads are automatically assigned to those new instances until an equal number of virtual machines are assigned to all Avamar instances in the environment. Once that target has been reached, virtual machines are assigned in a round-robin way again. The configuration of the Avamar instances is stored by the Federation Enterprise Hybrid Cloud workflows for later reference when reconfiguring or adding instances. Avamar replication pairs An Avamar replication pair is defined as a relationship configured between two Avamar instances, and is used by the Federation Enterprise Hybrid Cloud workflows to ensure backup data is protected against the loss of a physical Avamar instance. Normally this is used to ensure that data backed up on one site is available to restore on a secondary site, but it could also be used to provide extra resilience on a single site if required. The Federation Enterprise Hybrid Cloud provides two different redundant Avamar configurations that use an array of Avamar replication pairs to achieve the same scalability as the standard Avamar configuration but with the added resilience that every instance of Avamar has a replication partner, to which it can replicate any backup sets that it receives. Note: In the standard Avamar configuration, each instance is technically configured as the first member of an Avamar replication pair. In this case, no redundancy exists, but it can be added later by adding a second member to each replication pair. To achieve this, the Federation Enterprise Hybrid Cloud uses the concepts of primary and secondary Avamar instances within each replication pair, and the ability to reverse these personalities so that, in the event of a failure, backup and restore operations can continue. The primary Avamar instance is where all scheduled backups are executed. It is also the instance that Federation Enterprise Hybrid Cloud on-demand backup and restore features communicate with in response to dynamic user requests. The primary Avamar instance also has all the currently active replication groups, making it responsible for replication of new backup sets to the secondary Avamar instance. The secondary Avamar instance has the same configurations for backup and replication policies, except that BaaS workflows initially configure these policies in a disabled state. If the primary Avamar instance becomes unavailable, the policies on the secondary Avamar instance can be enabled via the Toggle Single Avamar Pair Designations catalog item to enable backup and replication operations to continue. Note: Replication operations do not catch up until the original primary Avamar instance (now designated as secondary) becomes available again, at which time replication automatically transmits newer backup sets to the secondary system. In this solution, after a redundant Avamar configuration is enabled, the Federation Enterprise Hybrid Cloud workflows configure all subsequent backups with replication enabled. 
If one member of the Avamar replication pair is offline, backups taken to the surviving member of the pair will automatically be replicated once the offline member is brought back online.
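The balancing behavior described above (new workloads fill a newly added Avamar instance until the virtual machine counts even out, after which round-robin assignment resumes) can be summarized in a few lines. The following Python sketch is illustrative only and models the rule purely on virtual machine counts; it is not the actual Federation BaaS workflow logic.

```python
# Illustrative sketch of the balancing rule described above: each new
# virtual machine is assigned to the Avamar instance that currently
# protects the fewest virtual machines, with ties broken by instance order.

def assign_vm(vm_counts):
    """vm_counts maps instance name -> number of VMs it protects.
    Returns the instance the next VM should be assigned to."""
    return min(sorted(vm_counts), key=lambda name: vm_counts[name])

counts = {"Avamar-0": 40, "Avamar-1": 40}
counts["Avamar-2"] = 0          # a newly added instance starts empty

for vm in range(5):             # new workloads land on the new instance first
    target = assign_vm(counts)
    counts[target] += 1
    print("VM%d -> %s" % (vm + 1, target))
```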

How each Avamar instance in a replication pair operates varies based on which backup topology is configured, and is described in the context of each individual topology later in this chapter.

VMware vCenter folder structure and backup service level relationship

When a backup service level is created via the Create Backup Service Level vRealize Automation catalog item, it creates an associated set of folders in the cloud vCenter (or in both cloud vCenters if done in a dual-site/dual vCenter environment). The number of folders created depends on how many Avamar pairs are present, and these folders become part of the mechanism for distributing the backup load.

Note: In a DR dual-site/dual vCenter environment, the Create Backup Service Level catalog item automatically creates Site Recovery Manager folder mappings between the new folders created in the first cloud vCenter and their corresponding folders in the second vCenter.

Example

If you create a backup service level named Daily-7yr in your environment, and four Avamar replication pairs (numbered 0 through 3) are present, then the following folders are created in the relevant cloud vCenter servers:
- Daily-7yr-Pair0
- Daily-7yr-Pair1
- Daily-7yr-Pair2
- Daily-7yr-Pair3

When you assign a virtual machine to the Daily-7yr backup policy, the workflows use a selection algorithm to determine the Avamar pair with the least load, and then assign the virtual machine to the associated folder. So if Avamar-Pair2 is determined to be the best target, the virtual machine is moved to the Daily-7yr-Pair2 vCenter folder and is automatically backed up by Avamar-Pair2 as a result. How the Avamar instances are assigned to monitor and back up these folders differs depending on which backup topology is deployed, and is described in the context of each individual topology later in this chapter.

Avamar pair to vSphere cluster association

Avamar image-level backups work by mounting snapshots of VMDKs to Avamar proxy virtual machines and then backing up the data to the Avamar instance that the Avamar proxy is registered with. In a fully deployed Federation Enterprise Hybrid Cloud with up to 10,000 user virtual machines and hundreds of vSphere clusters, this could lead to Avamar proxy sprawl if not properly configured and controlled. To prevent this, the Federation Enterprise Hybrid Cloud associates vSphere clusters with a subset of the Avamar replication pairs, which means that a reduced number of Avamar proxy virtual machines is required to service the cloud. Associations between a vSphere cluster and an Avamar pair are created via the Federation Enterprise Hybrid Cloud BaaS Associate Avamar Pairs with vSphere Cluster catalog item.

Note: In a DR dual-site/dual vCenter topology, when a protected cluster is associated with an Avamar pair, the associated recovery cluster is automatically associated with the same Avamar pair to ensure continuity of service on failover.
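To make the folder-per-pair mechanism and the cluster association concrete, the following Python sketch (illustrative only; the cluster names and pair associations are hypothetical examples) generates the vCenter folder names for a backup service level and lists which pairs a given cluster may use, mirroring the Daily-7yr example above.

```python
# Illustrative only: generate the per-pair vCenter folder names for a
# backup service level, and show which Avamar pairs a cluster can use
# based on its associations. Cluster names and associations are examples.

def service_level_folders(service_level, pair_count):
    """Return the vCenter folder names created for a backup service level."""
    return ["%s-Pair%d" % (service_level, i) for i in range(pair_count)]

# Associations created via the "Associate Avamar Pairs with vSphere Cluster"
# catalog item (hypothetical sample data).
cluster_pairs = {"Cluster1": [0, 1], "Cluster2": [2, 3]}

def eligible_pairs(cluster):
    """Return the Avamar pair numbers a cluster's VMs may be assigned to."""
    return cluster_pairs.get(cluster, [])

print(service_level_folders("Daily-7yr", 4))
# ['Daily-7yr-Pair0', 'Daily-7yr-Pair1', 'Daily-7yr-Pair2', 'Daily-7yr-Pair3']
print(eligible_pairs("Cluster1"))   # [0, 1]
```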

75 Avamar designations In the redundant Avamar/single vcenter configuration, there are two Avamar instances in each pair, and both are assigned to monitor the same vcenter folder and to backup any virtual machines that folder contains. To ensure that this does not result in both instances backing up the same virtual machine and then replicating each backup (four copies in total), the Federation Enterprise Hybrid Cloud uses primary and secondary Avamar instances within each replication pair, and the ability to reverse these personalities so that, in the event of a failure, backup and restore operations can continue. The primary Avamar instance is where all scheduled backups are executed. It is also the instance that Federation Enterprise Hybrid Cloud on-demand backup and restore features communicate with in response to dynamic user requests. The primary Avamar instance also has all the currently active replication groups, making it responsible for replication of new backup sets to the secondary Avamar instance. The secondary Avamar instance has the same configurations for backup and replication policies, except that BaaS workflows initially configure these policies in a disabled state. If the primary Avamar instance becomes unavailable, the policies on the secondary Avamar instance can be enabled via the Toggle Single Avamar Pair Designations catalog item to enable backup and replication operations to continue. Note: Avamar designations are only relevant in the redundant Avamar/single vcenter topology, because the standard Avamar configuration does not have replication, and in the redundant Avamar/dual vcenter configuration each member of a pair is configured to monitor a folder from only one of the two vcenters. Avamar proxy server configuration To associate an Avamar pair with a vsphere cluster, an Avamar proxy virtual machine needs to be deployed to that cluster. Standard Avamar configuration In single-site topologies, all proxies are on the same site. Therefore, the minimum number of proxy virtual machines required per Avamar pair for each cluster is one. Two is recommended for high availability, if there is scope within the overall number of proxies that can be deployed to the environment. Ideally, this number should be in the region of 60 to 80 proxies. Redundant Avamar/single vcenter configuration As the virtual machines on every vsphere cluster could be backed up by either of the members of an Avamar replication pair at different points in time, proxies for both the primary and secondary Avamar instances of every associated Avamar replicated pair should be deployed to every vsphere cluster. This means a minimum of two proxies is required. Four proxies would provide additional resilience, if the scope exists within the overall number of proxies that can be deployed to the environment. If the environment also includes CA, then the proxies for the Site A instances should be bound to Site A using virtual machine DRS affinity groups with a DRS virtual machine to host rule that sets those virtual machines to must run on a host DRS group that contains the Site A hosts. Similarly, proxies for the Site B Avamar instance should be bound to Site B hosts. This ensures that no unnecessary cross-wan backups occur, as Avamar can use vstorage APIs for Data Protection to add VMDKs (from the local leg of the VPLEX volume) to proxy virtual machines bound to physical hosts on the same site as the primary Avamar instance. 75
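A minimal conceptual sketch of the primary/secondary designations described in the Avamar designations section, and of the effect of reversing them, is shown below. It illustrates the concept only and is not the actual Toggle Single Avamar Pair Designations workflow.

```python
# Conceptual sketch of an Avamar replication pair with primary/secondary
# designations. Toggling reverses the roles so that backup and replication
# policies run on the surviving member after a failure.

class AvamarInstance:
    def __init__(self, name, site):
        self.name = name
        self.site = site
        self.policies_enabled = False

class ReplicationPair:
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary
        self.primary.policies_enabled = True      # active policies live on the primary
        self.secondary.policies_enabled = False   # same policies exist but are disabled

    def toggle_designations(self):
        """Reverse primary/secondary, for example after losing the primary instance."""
        self.primary, self.secondary = self.secondary, self.primary
        self.primary.policies_enabled = True
        self.secondary.policies_enabled = False

pair = ReplicationPair(AvamarInstance("AV_INSTANCE_00", "Site A"),
                       AvamarInstance("AV_INSTANCE_01", "Site B"))
pair.toggle_designations()
print(pair.primary.name)   # AV_INSTANCE_01 now runs backups and replication
```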

76 Redundant Avamar/dual vcenter configuration In a dual-site / dual vcenter configuration, each vsphere cluster must have an Avamar proxy virtual machine for the local Avamar instance of every Avamar replicated pair associated with it. This ensures backups are taken locally and replicated to the other member of the Avamar pair. In a dual-site/dual vcenter configuration with DR, when a failover occurs, virtual machines will be moved from the vcenter folders on Site A to their corresponding vcenter folders on Site B, at which point the other member of the Avamar replication pair will assume responsibility for backing up and restoring those virtual machines. Therefore, each vsphere cluster still only requires a minimum of one Avamar proxy for every Avamar instance that is associated with it. Two will provide extra resilience. Note: In this configuration, if a failure of a single Avamar instance occurs without the failure of the vcenter infrastructure on the same site, then the second member of the Avamar replication pair will not automatically assume responsibility to backup virtual machines. To further protect against this scenario, additional resilience can be added on each site by using an Avamar RAIN grid. Avamar administratively full Determining that a backup target, in this case an Avamar instance, has reached capacity can be based on a number of metrics of the virtual machines it is responsible for protecting, including: The number of virtual machines assigned to the instance The total capacity of those virtual machines The rate of change of the data of those virtual machines The effective deduplication ratio that can be achieved while backing up those virtual machines The available network bandwidth and backup window size Because using these metrics can be somewhat subjective, the Federation Enterprise Hybrid Cloud provides the ability for an administrator to preclude an Avamar instance or Avamar replication pair from being assigned further workload by setting a binary Administrative Full flag set via the Set Avamar to Administrative Full vrealize Automation catalog item. When a virtual machine is enabled for data protection via Federation Enterprise Hybrid Cloud BaaS workflows, the available Avamar instances are assessed to determine the most suitable target. If an Avamar instance or Avamar replication pair has had the Administrative Full flag set, then that instance/pair is excluded from the selection algorithm but continues to back up its existing workloads through on-demand or scheduled backups. If workloads are retired and an Avamar instance or pair is determined to have free capacity, again the Administrative Full flag can be toggled back, including it in the selection algorithm. Policy-based replication Policy-based replication provides granular control of the replication process. With policybased replication, you create replication groups in Avamar Administrator to define the following replication settings: Members of the replication group, which are either entire domains or individual clients Priority for the order in which backup data replicates Types of backups to replicate based on the retention setting for the backup or the date on which the backup occurred Maximum number of backups to replicate for each client Destination server for the replicated backups 76

Replication group settings also include the schedule for replication and the retention period of replicated backups on the destination server.

The redundant Avamar configurations automatically create a replication group associated with each backup policy and configure it with a 60-minute stagger relative to the interval associated with the backup policy. This enables the backups to complete before the replication starts.

Note: This schedule can be manually altered within the Avamar GUI, but it is important that you make changes to both the primary and secondary versions of the replication group schedule so that replication operates as required if the Avamar personalities are reversed.

Replication control

If Data Domain is used as a backup target, Avamar is responsible for replication of Avamar data from the source Data Domain system to the destination Data Domain system. As a result, all configuration and monitoring of replication is done via the Avamar server. This includes the schedule on which Avamar data is replicated between Data Domain units. You cannot schedule replication of data on the Data Domain system separately from the replication of data on the Avamar server, and there is no way to track this replication by using the Data Domain administration tools.

Note: Do not configure Data Domain replication to replicate data to another Data Domain system that is configured for use with Avamar. When you use Data Domain replication, the replicated data does not refer to the associated remote Avamar server.
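As a simple illustration of the 60-minute replication stagger described above (not the actual workflow implementation), the following sketch derives a replication group start time from a backup policy start time.

```python
# Illustrative only: derive a replication group start time with the
# 60-minute stagger relative to the associated backup policy.
from datetime import datetime, timedelta

def replication_start(backup_start, stagger_minutes=60):
    """Return the replication start time for a given backup start time."""
    return backup_start + timedelta(minutes=stagger_minutes)

backup = datetime(2015, 9, 1, 22, 0)                 # backup policy runs at 22:00
print(replication_start(backup).strftime("%H:%M"))   # 23:00
```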

78 Architecture This section describes the features of the standard Avamar configuration shown in Figure 35 and the environments where it may be used. Figure 35. Standard Avamar configuration architecture Scenarios for use Best use The most logical fit for a standard Avamar configuration is a single-site Federation Enterprise Hybrid Cloud deployment. Alternate uses The standard Avamar configuration can be used in topologies such as CA dual-site and DR dual-site topologies with the following caveats: The architecture provides no resilience on the secondary site in either of the dual-site topologies. If the site that hosts the Avamar instances is lost, then there is no ability to restore from backup. In the CA dual-site/single vcenter topology, any virtual machines that reside on the site with no Avamar instances present will back up across the WAN connection. In the DR dual-site/dual vcenter topology, any virtual machines that reside on the recovery site (and therefore are registered with a different vcenter) have no ability to back up. 78

In the standard Avamar configuration, if the Create Backup Service Level workflow creates a folder named Daily-7yr, and there are four Avamar replication pairs present, then it configures the following backup policies with the Avamar replication pairs:
- Avamar-Pair0: Assigned to monitor vCenter folder Daily-7yr-Pair0
- Avamar-Pair1: Assigned to monitor vCenter folder Daily-7yr-Pair1
- Avamar-Pair2: Assigned to monitor vCenter folder Daily-7yr-Pair2
- Avamar-Pair3: Assigned to monitor vCenter folder Daily-7yr-Pair3

In this case, each pair has only one member, and therefore only one Avamar instance is monitoring each folder.

Characteristics

The characteristics of the standard Avamar configuration are:
- All Avamar instances are standalone, that is, backup sets are not replicated to a secondary Avamar system.
- It works in the context of a single cloud vCenter only.
- All Avamar instances contain active backup policies.

Note: An Avamar instance can be set to Administratively Full and still have active backup policies.

All Avamar instances are considered to be on the same site, and therefore the round-robin distribution of virtual machines to vCenter folders includes all Avamar instances that:
- Are assigned to the vSphere cluster that the virtual machine is on.
- Are not set to Administratively Full.

Distribution examples

The following scenarios convey how virtual machines are assigned to vCenter folders to distribute load evenly across Avamar instances, assuming the following configuration, as shown in Figure 35:
- Four Avamar instances and two vSphere clusters exist.
- AV_REP_PAIR0 and AV_REP_PAIR1 are assigned to Cluster 1.
- AV_REP_PAIR2 and AV_REP_PAIR3 are assigned to Cluster 2.

Note: In this example, all virtual machines are deployed to the backup policy named Daily-7yr.

Scenario 1: VM1 is deployed to Cluster 1 - No other workload virtual machines exist

AV_REP_PAIR2 and AV_REP_PAIR3 are ruled out because they are not assigned to Cluster 1. AV_REP_PAIR0 and AV_REP_PAIR1 are identified as potential targets. The expected results are:
- The virtual machine is deployed to Cluster 1.
- It is placed in a folder named Daily-7yr-Pair0, indicating assignment to AV_REP_PAIR0. AV_REP_PAIR1 is an equally viable candidate as both grids are empty, but AV_REP_PAIR0 is selected based on numerical order.
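The selection logic used in this and the following scenarios can be expressed compactly. The Python sketch below is an illustration of the described behavior only (filter by cluster association and the Administrative Full flag, pick the least-loaded pair, and break ties by pair number); it is not the Federation workflow code.

```python
# Illustrative sketch of the pair-selection behavior in the standard
# configuration scenarios: keep pairs associated with the VM's cluster and
# not marked administratively full, then pick the least-loaded pair
# (lowest pair number wins ties).

def select_pair(cluster, pairs):
    candidates = [p for p in pairs
                  if cluster in p["clusters"] and not p["admin_full"]]
    return min(candidates, key=lambda p: (p["vm_count"], p["number"]))

pairs = [
    {"number": 0, "clusters": {"Cluster1"}, "admin_full": False, "vm_count": 0},
    {"number": 1, "clusters": {"Cluster1"}, "admin_full": False, "vm_count": 0},
    {"number": 2, "clusters": {"Cluster2"}, "admin_full": False, "vm_count": 0},
    {"number": 3, "clusters": {"Cluster2"}, "admin_full": False, "vm_count": 0},
]

for vm, cluster in [("VM1", "Cluster1"), ("VM2", "Cluster1"),
                    ("VM3", "Cluster2"), ("VM4", "Cluster2")]:
    pair = select_pair(cluster, pairs)
    pair["vm_count"] += 1
    print("%s -> Daily-7yr-Pair%d" % (vm, pair["number"]))
# VM1 -> Pair0, VM2 -> Pair1, VM3 -> Pair2, VM4 -> Pair3, matching the scenarios
```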

80 Scenario 2: VM2 is deployed to Cluster 1 - VM1 exists AV_REP_PAIR2 and AV_REP_PAIR3 are ruled out because they are not assigned to Cluster 1. AV_REP_PAIR0 and AV_REP_PAIR1 are identified as potential targets. The expected results are: The virtual machine is deployed to Cluster 1. It is placed in a folder named Daily-7yr-Pair1 indicating assignment to AV_REP_PAIR1 because the round-robin algorithm determined that AV_REP_PAIR1 had fewer virtual machines than the other candidate AV_REP_PAIR0. VM3 follows a similar logic and ends up being managed by AV_REP_PAIR2 while VM4 is managed by AV_REP_PAIR3. Architecture This section describes the features of the redundant Avamar/single vcenter configuration shown in Figure 36 and the environments where it can be used. Figure 36. Redundant Avamar/single vcenter configuration 80

81 Scenarios for use Best use The most logical fit for a redundant Avamar/single vcenter configuration is a dual-site/single vcenter Federation Enterprise Hybrid Cloud deployment. Alternate uses The redundant Avamar/single vcenter configuration can be used in the single-site topology with no caveats to provide a backup infrastructure that can tolerate the loss of a physical Avamar instance. Note: The redundant Avamar/single vcenter should not be used in a DR dual-site topology because doing so imposes caveats that can be overcome using the redundant Avamar/dual vcenter configuration without the need for any extra components. vcenter folder assignments In the redundant Avamar/single vcenter configuration, if the Create Backup Service level workflow creates a folder called Daily-7yr and there are four Avamar replications pairs present then it will configure the following backup policies with the Avamar replication pairs: Avamar-Pair0: Assigned to monitor vcenter folder Daily-7yr-Pair0 Avamar-Pair1: Assigned to monitor vcenter folder Daily-7yr-Pair1 Avamar-Pair2: Assigned to monitor vcenter folder Daily-7yr-Pair2 Avamar-Pair3: Assigned to monitor vcenter folder Daily-7yr-Pair3 As there is only one vcenter, and therefore only one vcenter folder per Avamar replication pair, each Avamar instance in the pair is configured to monitor the same vcenter folder. At this point, the concept of primary and secondary Avamar members are employed to ensure that only one member of the pair is actively backing up and replicating the virtual machines at any given point in time. Characteristics The characteristics of the redundant Avamar/single vcenter configuration are: All Avamar instances are configured in pairs and all backups are replicated. It works in the context of a single cloud vcenter only. Fifty percent of the Avamar instances have active backup and replication polices at any given point in time (50 percent of the Avamar instances are primary, 50 percent are secondary.) Note: Primary means that the backup policies on that instance are enabled. An Avamar instance can be set to administratively full and still be considered primary. Avamar replication pairs are defined as split across sites, and therefore the roundrobin distribution of virtual machines to vcenter folders includes all Avamar pairs that: Are assigned to the vsphere cluster that the virtual machine is on. Have their primary member on the same site as the virtual machine DRS Affinity group that the virtual machine is a member of. Are not set to Administratively Full. 81

82 Distribution examples The following scenarios convey how virtual machines are assigned to vcenter folders in order to distribute load evenly across Avamar instances, assuming the following configuration (as shown in Figure 36): Six primary Avamar instances, six secondary instances. and two vsphere clusters exist AV_REP_PAIR0 through AV_REP_PAIR3 are assigned to Cluster 1 AV_REP_PAIR4 and AV_REP_PAIR5 are assigned to Cluster 2 Note: In this example, all virtual machines are deployed to a backup policy named Daily-7yr. Scenario 1: VM1 is deployed to Cluster 1, Site A - No other workload virtual machines exist AV_REP_PAIR4 and AV_REP_PAIR5 are ruled out because they are not assigned to Cluster 1. AV_REP_PAIR1 and AV_REP_PAIR3 are ruled out for being primary on Site B. AV_REP_PAIR0 and AV_REP_PAIR2 are identified as potential targets. The expected results are: The virtual machine is deployed to Cluster 1 Host CL1-H1. It is placed in a folder named Daily-7yr-Pair0 indicating assignment to AV_REP_PAIR0. AV_REP_PAIR2 is an equally viable candidate as both grids are empty, but AV_REP_PAIR0 is chosen based on numerical order. Scenario 2: VM2 is deployed to Cluster 1, Site A - VM1 exists AV_REP_PAIR4 and AV_REP_PAIR5 are ruled out because they are not assigned to Cluster 1. AV_REP_PAIR1 and AV_REP_PAIR3 are ruled out because their primary instances are on Site B. AV_REP_PAIR0 and AV_REP_PAIR2 are identified as potential targets. The expected results are: The virtual machine is deployed to Cluster 1 Host CL1-H1. It is placed in a folder named Daily-7yr-Pair2 indicating assignment to AV_REP_PAIR2 because the round-robin algorithm determined that AV_REP_PAIR2 had fewer virtual machines than the other candidate AV_REP_PAIR0. Scenario 3: VM3 is deployed to Cluster 1, Site B - VM1 and VM2 exist AV_REP_PAIR4 and AV_REP_PAIR5 are ruled out because they are not assigned to Cluster 1. AV_REP_PAIR0 and AV_REP_PAIR2 are ruled out because their primary instances are on Site A. AV_REP_PAIR1 and AV_REP_PAIR3 are identified as potential targets. The expected results are: The virtual machine is deployed to Cluster 1 Host CL1-H2. It is placed in a folder named Daily-7yr-Pair1 indicating assignment to AV_REP_PAIR1. AV_REP_PAIR3 is an equally viable candidate as both grids are empty, but AV_REP_PAIR1 is selected based on numerical order. 82

83 Scenario 4: VM4 is deployed to Cluster 1, Site B - VM1, VM2, and VM3 exist AV_REP_PAIR4 and AV_REP_PAIR5 are ruled out because they are not assigned to Cluster 1. AV_REP_PAIR0 and AV_REP_PAIR2 are ruled out because their primary instances are on Site A. AV_REP_PAIR1 and AV_REP_PAIR3 are identified as potential targets. The expected results are: The virtual machine is deployed to Cluster 1 Host CL1-H2. It is placed in a folder named Daily-7yr-Pair3 indicating assignment to AV_REP_PAIR3 because the round-robin algorithm determined that AV_REP_PAIR3 had fewer virtual machines than the other candidate AV_REP_PAIR1. VM5 follows a similar logic and ends up being managed by AV_REP_PAIR4 while VM6 is managed by AV_REP_PAIR5. 83
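In the redundant Avamar/single vCenter configuration, the selection additionally filters on the site of each pair's primary member relative to the virtual machine's DRS affinity group. The sketch below extends the earlier illustration with that filter and reproduces scenarios 1 through 4; again, it is illustrative only and the site assignments are the sample values from the scenarios.

```python
# Illustrative sketch: pair selection with the additional site-affinity
# filter used in the redundant Avamar/single vCenter configuration.
# Even-numbered pairs are primary on Site A, odd-numbered on Site B,
# matching the scenario descriptions above.

def select_pair(cluster, vm_site, pairs):
    candidates = [p for p in pairs
                  if cluster in p["clusters"]
                  and not p["admin_full"]
                  and p["primary_site"] == vm_site]
    return min(candidates, key=lambda p: (p["vm_count"], p["number"]))

pairs = [{"number": n, "clusters": {"Cluster1"}, "admin_full": False,
          "vm_count": 0, "primary_site": "Site A" if n % 2 == 0 else "Site B"}
         for n in range(4)]
pairs += [{"number": n, "clusters": {"Cluster2"}, "admin_full": False,
           "vm_count": 0, "primary_site": "Site A" if n % 2 == 0 else "Site B"}
          for n in (4, 5)]

for vm, cluster, site in [("VM1", "Cluster1", "Site A"), ("VM2", "Cluster1", "Site A"),
                          ("VM3", "Cluster1", "Site B"), ("VM4", "Cluster1", "Site B")]:
    pair = select_pair(cluster, site, pairs)
    pair["vm_count"] += 1
    print("%s -> Daily-7yr-Pair%d" % (vm, pair["number"]))
# VM1 -> Pair0, VM2 -> Pair2, VM3 -> Pair1, VM4 -> Pair3, matching the scenarios
```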

84 Architecture This section describes the features of the redundant Avamar/dual vcenter configuration shown in Figure 37 and the environments where it may be used. Figure 37. Redundant Avamar/dual vcenter configuration Scenarios for use Best use The most logical fit for a redundant Avamar/dual vcenter configuration is a dual-site/dual vcenter Federation Enterprise Hybrid Cloud deployment. Alternate uses There are no valid alternate uses for this configuration as no other topology uses dual-cloud vcenters. 84

vCenter folder assignments

In the redundant Avamar/dual vCenter configuration, if the Create Backup Service Level workflow creates a folder named Daily-7yr, and there are four Avamar replication pairs present, then it configures the following backup policies with the Avamar replication pairs:
- Avamar-Pair0: Assigned to monitor vCenter folder Daily-7yr-Pair0
- Avamar-Pair1: Assigned to monitor vCenter folder Daily-7yr-Pair1
- Avamar-Pair2: Assigned to monitor vCenter folder Daily-7yr-Pair2
- Avamar-Pair3: Assigned to monitor vCenter folder Daily-7yr-Pair3

Because there are two vCenters, each Avamar instance in a pair is configured to monitor one of the two corresponding vCenter folders, that is, the instance on Site A monitors the folder from the Site A vCenter, and the instance on Site B monitors the folder from the Site B vCenter. As a virtual machine can only be in one of the two folders (even in a DR dual-site/dual vCenter topology), there is no duplication of backups.

Note: When VMware Site Recovery Manager is used, placeholder virtual machines are created as part of the SRM protection process. To ensure that Avamar does not detect these placeholder virtual machines, additional folders are created in each vCenter with a _PH suffix, and placeholder virtual machines are located in these folders via SRM folder mappings. Before failing over a DR cluster, run the Prepare for DP Failover catalog item. This moves the production virtual machines out of their service level folders on the protected site, so that their placeholders are not created in an Avamar-monitored folder when SRM re-protects the virtual machines after failover.

Characteristics

The characteristics of the redundant Avamar/dual vCenter configuration are:
- All Avamar instances are configured in pairs and all backups are replicated.
- It works in the context of a dual cloud vCenter only.

Note: An Avamar instance can be set to Administratively Full and still have active backup and replication policies.

Avamar replication pairs are defined as being split across sites, and therefore the round-robin distribution of virtual machines to vCenter folders includes all Avamar pairs that are:
- Assigned to the vSphere cluster that the virtual machine is on.
- Not set to Administratively Full.

Distribution examples

The following scenarios convey how virtual machines are assigned to vCenter folders in order to distribute load evenly across Avamar instances, assuming the following configuration (as shown in Figure 37):
- Six active Avamar instances (in three Avamar replication pairs), three protected vSphere clusters, three recovery vSphere clusters, and two local clusters exist.
- AV_REP_PAIR0 and AV_REP_PAIR1 are assigned to Clusters 1 through 6.
- AV_REP_PAIR2 is assigned to Clusters 7 and 8.

Note: All virtual machines are deployed to the backup policy named Daily-7yr for the example.

Scenario 1: VM1 is deployed to Cluster 1 - No other workload virtual machines exist

AV_REP_PAIR2 is ruled out because it is not assigned to Cluster 1. AV_REP_PAIR0 and AV_REP_PAIR1 are identified as potential targets.

The expected results are:
- The virtual machine is deployed to Cluster 1 Host CL1-H1.
- It is placed in a folder named Daily-7yr-Pair0, indicating assignment to AV_REP_PAIR0. AV_REP_PAIR1 is an equally viable candidate as both grids are empty, but AV_REP_PAIR0 is selected based on numerical order.
- Because Cluster 1 is on Site A, AV_INSTANCE_00 will back up the virtual machine and replicate the backups to AV_INSTANCE_01.

Scenario 2: VM2 is deployed to Cluster 1 - VM1 exists

AV_REP_PAIR2 is ruled out as it is not assigned to Cluster 1. AV_REP_PAIR0 and AV_REP_PAIR1 are identified as potential targets. The expected results are:
- The virtual machine is deployed to Cluster 1 Host CL1-H1.
- It is placed in a folder named Daily-7yr-Pair1, indicating assignment to AV_REP_PAIR1, because the round-robin algorithm determined that AV_REP_PAIR1 had fewer virtual machines than the other candidate, AV_REP_PAIR0.
- Because Cluster 1 is on Site A, AV_INSTANCE_02 will back up the virtual machine and replicate the backups to AV_INSTANCE_03.

Scenario 3: VM3 is deployed to Cluster 3 - VM1 and VM2 exist

AV_REP_PAIR2 is ruled out as it is not assigned to Cluster 3. AV_REP_PAIR0 and AV_REP_PAIR1 are identified as potential targets. The expected results are:
- The virtual machine is deployed to Cluster 3 Host CL3-H1.
- It is placed in a folder named Daily-7yr-Pair0, indicating assignment to AV_REP_PAIR0. AV_REP_PAIR1 is an equally viable candidate as both have equal virtual machines (one assigned to each pair globally, but none to the instances on Site B), but AV_REP_PAIR0 is selected based on numerical order.

Scenario 4: VM4 is deployed to Cluster 3 - VM1, VM2, and VM3 exist

AV_REP_PAIR2 is ruled out because it is not assigned to Cluster 3. AV_REP_PAIR0 and AV_REP_PAIR1 are identified as potential targets. The expected results are:
- The virtual machine is deployed to Cluster 3 Host CL3-H1.
- It is placed in a folder named Daily-7yr-Pair1, indicating assignment to AV_REP_PAIR1, because the round-robin algorithm determined that AV_REP_PAIR1 had fewer virtual machines than the other candidate AV_REP_PAIR0.
- Because Cluster 3 is on Site B, AV_INSTANCE_03 will back up the virtual machine and replicate the backups to AV_INSTANCE_02.

Scenario 5: VM5 is deployed to Cluster 5 - VM1, VM2, VM3, and VM4 exist

AV_REP_PAIR2 is ruled out because it is not assigned to Cluster 5. AV_REP_PAIR0 and AV_REP_PAIR1 are identified as potential targets. The expected results are:
- The virtual machine is deployed to Cluster 5 Host CL5-H1.
- It is placed in a folder named Daily-7yr-Pair0, indicating assignment to AV_REP_PAIR0. AV_REP_PAIR1 is an equally viable candidate because both have equal virtual machines (two assigned to each pair globally), but AV_REP_PAIR0 is selected based on numerical order.

- Because Cluster 5 is on Site A, AV_INSTANCE_00 will back up the virtual machine and replicate the backups to AV_INSTANCE_01.

Scenario 6: VM6 is deployed to Cluster 6 - VM1, VM2, VM3, VM4, and VM5 exist

AV_REP_PAIR2 is ruled out because it is not assigned to Cluster 6. AV_REP_PAIR0 and AV_REP_PAIR1 are identified as potential targets. The expected results are:
- The virtual machine is deployed to Cluster 6 Host CL6-H1.
- It is placed in a folder named Daily-7yr-Pair1, indicating assignment to AV_REP_PAIR1, because the round-robin algorithm determined that AV_REP_PAIR1 had fewer virtual machines than the other candidate AV_REP_PAIR0.
- Because Cluster 6 is on Site B, AV_INSTANCE_03 will back up the virtual machine and replicate the backups to AV_INSTANCE_02.

Scenario 7: VM7 is deployed to Cluster 7 - VM1, VM2, VM3, VM4, VM5, and VM6 exist

AV_REP_PAIR0 and AV_REP_PAIR1 are ruled out because they are not assigned to Cluster 7. AV_REP_PAIR2 is identified as the only potential target. The expected results are:
- The virtual machine is deployed to Cluster 7 Host CL7-H1.
- It is placed in a folder named Daily-7yr-Pair2, indicating assignment to AV_REP_PAIR2. No other viable candidates exist.
- Because Cluster 7 is on Site A, AV_INSTANCE_04 will back up the virtual machine and replicate the backups to AV_INSTANCE_05.
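Across these scenarios, the active member of the selected pair follows the site of the cluster: the instance registered with the local vCenter takes the backup and replicates it to its partner on the other site. The sketch below illustrates that mapping, assuming the instance numbering used in the scenarios (pair N consists of instance 2N on Site A and instance 2N+1 on Site B); it is illustrative only.

```python
# Illustrative only: in the dual vCenter configuration, the pair member on
# the same site as the cluster takes the backup and replicates it to its
# partner on the other site. Assumes the instance numbering used in the
# scenarios (pair N = instances 2N on Site A and 2N+1 on Site B).

def backup_and_replica(pair_number, cluster_site):
    site_a, site_b = 2 * pair_number, 2 * pair_number + 1
    if cluster_site == "Site A":
        return site_a, site_b
    return site_b, site_a

for cluster, site, pair in [("Cluster1", "Site A", 0), ("Cluster3", "Site B", 1),
                            ("Cluster7", "Site A", 2)]:
    src, dst = backup_and_replica(pair, site)
    print("%s: AV_INSTANCE_%02d backs up, replicates to AV_INSTANCE_%02d"
          % (cluster, src, dst))
```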

This chapter presents the following topics:
- Overview
- Architectural assumptions
- vCenter Single Sign-On
- VMware vRealize tenants and business groups
- EMC ViPR tenants and projects
- General storage considerations
- VMware vCenter endpoints
- Permitted topology configurations
- Permitted topology upgrade paths
- Bulk import of virtual machines
- DR dual-site/dual vCenter topology restrictions
- Resource sharing
- Data protection considerations
- Software resources
- Sizing guidance

This chapter looks at the rules, configurations, and dependencies between the Federation Enterprise Hybrid Cloud components and their constructs, outlining how this influences the supported configurations within the cloud.

Assumptions and justifications

The following assumptions and justifications apply to the Federation Enterprise Hybrid Cloud architecture:
- The vCenter Server full installation is used because it:
  - Provides support for an external Microsoft SQL Server database
  - Resides on a Windows system that also supports the VMware Update Manager service, enabling minimal resource requirements in smaller configurations
- VMware vCenter Single Sign-On is used instead of the vRealize Automation Identity Appliance because it supports the multi-site single sign-on (SSO) requirements of the solution.

vCenter Single Sign-On is deployed on the vCenter server in each Core Pod (multiple Core Pods exist in the DR dual-site/dual vCenter topology) and an additional vCenter Single Sign-On role (Auto-SSO) is deployed on a server in the Automation Pod. The Auto-SSO server provides authentication services to all the Automation Pod management components requiring SSO integration. This configuration enables authentication services to fail over with the other automation components and enables a seamless transition between Site A and Site B. There is no need to change IP addresses, DNS, or management component settings.

Single sign-on domains

The Federation Enterprise Hybrid Cloud uses one or more SSO domains depending on the management platform deployed. SSO instances are configured within those domains according to the following model:
- External SSO domain (distributed management model only)
  - First external vCenter
  - Second external vCenter (DR dual-site/dual vCenter topology only)
- Cloud SSO domain (all topologies)
  - First cloud vCenter
  - Automation Pod SSO
  - Second cloud vCenter (DR dual-site/dual vCenter topology only)

Figure 38 shows the SSO domains and how each SSO instance and domain is required.

90 Figure 38. SSO domain and vcenter SSO instance relationships First vcenter Server Single Sign-On instance in each domain This first vcenter Single Sign-On deployed in each domain is deployed in Standalone vcenter Single Sign-On Server mode, enabling it to participate in the default vcenter Single Sign-On namespace (vsphere.local). This primary vcenter Single Sign-On server supports identity sources such as Active Directory, OpenLDAP, local operating system users, and SSO embedded users and groups. This is the default deployment mode when you use the Simple Install method to install vsphere 5.5. It is also the mode to use when deploying the first vsphere 5.5 instance during a custom install. Subsequent vcenter Single Sign-On instances in each domain Additional vcenter Single Sign-On instances being installed into existing domains are deployed in Multisite mode, making them part of the existing domain but in a new SSO site. When you create vcenter Single Sign-On servers with this mode, the deployed vcenter Single Sign-On instances all become members of the same authentication namespace. This deployment mode should only be used after you have deployed the first vcenter Single Sign-On server in the SSO domain. In vsphere 5.5, vcenter Single Sign-On data (such as policies, solution users, application users, and identity sources) is automatically replicated between each vcenter Single Sign- On server in the same authentication namespace every 30 seconds. Note: VMware recommends this deployment mode when installing any additional vcenter Single Sign-On servers in the environment. 90

91 vrealize tenant design The Federation Enterprise Hybrid Cloud can operate using single or multiple vrealize Automation tenants. STaaS operations rely on the tenant URL value configured as part of the vrealize Automation tenant, and therefore require an individual vcenter Orchestrator server per additional tenant, if the ability for multiple tenants to execute STaaS operations is required. The Federation Enterprise Hybrid Cloud foundation package needs to be installed on each of the vcenter Orchestrator servers, entering the relevant tenant URL during installation. This is also required in order to populate the vrealize Automation catalog in each tenant with the relevant STaaS catalog items. vrealize tenant best practice As the vrealize Automation IaaS administrator is a system-wide role, having multiple tenants configure endpoints and execute STaaS operations may not provide any additional value over and above the use of a single tenant with multiple business groups. Therefore, while multiple tenants are supported, Federation Enterprise Hybrid Cloud is normally deployed with a single tenant with respect to STaaS operations. vrealize business group design Federation Enterprise Hybrid Cloud uses two system business groups in each non-default tenant. The first, EHCSystem, is used as the target for installation of the vrealize Automation advanced services STaaS catalog items. It does not require any compute resources. The second standard business group, EHCOperations, is used as the group where Federation Enterprise Hybrid Cloud storage administrators are configured. It is given entitlements to the STaaS and Cluster Onboarding catalog items. It has no compute resource requirements. vrealize business best practice The Federation recommends that applications provisioned using vrealize Automation Application Services each have a separate business group per application type to enable administrative separation of blueprint creation and manipulation. Figure 39 shows an example where the EHCSystem and EHCOperations system business groups are configured alongside three tenant business groups (IT, HR, and Manufacturing) and three application business groups used by vrealize Automation Application Services for Microsoft SharePoint, Oracle, and Microsoft Exchange. 91

92 Figure 39. Software-defined data center tenant design and endpoints. ViPR tenants The Federation Enterprise Hybrid Cloud uses a single ViPR tenant. The default provider tenant or an additional non-default tenant can be used. ViPR projects Federation Enterprise Hybrid Cloud STaaS operations rely on a correlation existing between the tenant URL value of the user executing the request and a ViPR project name. Therefore, to enable STaaS for an additional vrealize tenant, you must create a corresponding ViPR project whose name and case match that of the vrealize tenant URL. As each project can have a maximum total storage capacity (quota) associated with it that cannot be exceeded, the use of multiple ViPR projects enables multiple vrealize Automation tenants within the Federation Enterprise Hybrid Cloud to provision storage from the same storage endpoints in a controlled or limited fashion. ViPR consistency groups ViPR consistency groups are an important component of the CA and DR topologies for the Federation Enterprise Hybrid Cloud solution. Consistency groups logically group volumes within a project to ensure that a set of common properties is applied to an entire group of volumes during a fault event. This ensures host-to-cluster or application-level consistency when a failover occurs. Consistency groups are created by Federation Enterprise Hybrid Cloud STaaS operations and are specified when CA or DR-protected volumes are provisioned. Consistency group names must be unique within the ViPR environment. 92
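Because STaaS operations depend on an exact, case-sensitive match between a vRealize Automation tenant URL and a ViPR project name, and because consistency group names must be unique, a simple pre-flight check along the lines of the sketch below can catch misconfigurations early. It is illustrative only, and the tenant, project, and consistency group names are hypothetical.

```python
# Illustrative pre-flight checks: every vRealize tenant URL needs a ViPR
# project whose name matches exactly (including case), and ViPR consistency
# group names must be unique. Names below are hypothetical examples.

def check_tenant_projects(tenant_urls, vipr_projects):
    """Return tenant URLs that have no exactly matching ViPR project."""
    return [url for url in tenant_urls if url not in vipr_projects]

def duplicate_consistency_groups(group_names):
    """Return any consistency group names that appear more than once."""
    seen, dupes = set(), set()
    for name in group_names:
        if name in seen:
            dupes.add(name)
        seen.add(name)
    return sorted(dupes)

tenants = ["tenant-a", "Tenant-B"]
projects = {"tenant-a", "tenant-b"}          # case mismatch for Tenant-B
print(check_tenant_projects(tenants, projects))                         # ['Tenant-B']
print(duplicate_consistency_groups(["cg-app1", "cg-app2", "cg-app1"]))  # ['cg-app1']
```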


RETHINK STORAGE. Transform the Data Center with EMC ViPR Software-Defined Storage. White Paper White Paper RETHINK STORAGE Transform the Data Center with EMC ViPR Software-Defined Storage Abstract The following paper opens with the evolution of the Software- Defined Data Center and the challenges

More information

EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, Symmetrix Management Console, and VMware vcenter Converter

EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, Symmetrix Management Console, and VMware vcenter Converter EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, VMware vcenter Converter A Detailed Review EMC Information Infrastructure Solutions Abstract This white paper

More information

VMware vcloud Networking and Security Overview

VMware vcloud Networking and Security Overview VMware vcloud Networking and Security Overview Networks and Security for Virtualized Compute Environments WHITE PAPER Overview Organizations worldwide have gained significant efficiency and flexibility

More information

White Paper: AirSembly Datacenter Architecture Models

White Paper: AirSembly Datacenter Architecture Models White Paper: AirSembly Datacenter Architecture Models AirSembly Version 1.6 August 2015 Abstract: This white paper outlines different scenarios in which AirSembly can be configured. It presents common

More information

EMC ViPR for On-Demand File Storage with EMC Syncplicity and EMC Isilon or EMC VNX

EMC ViPR for On-Demand File Storage with EMC Syncplicity and EMC Isilon or EMC VNX EMC ViPR for On-Demand File Storage with EMC Syncplicity and EMC Isilon or EMC VNX EMC Solutions Abstract This document describes how to deploy EMC ViPR software-defined storage in an existing EMC Isilon

More information

EMC ViPR Controller. Service Catalog Reference Guide. Version 2.3 XXX-XXX-XXX 01

EMC ViPR Controller. Service Catalog Reference Guide. Version 2.3 XXX-XXX-XXX 01 EMC ViPR Controller Version 2.3 Service Catalog Reference Guide XXX-XXX-XXX 01 Copyright 2015- EMC Corporation. All rights reserved. Published in USA. Published July, 2015 EMC believes the information

More information

Backup & Recovery for VMware Environments with Avamar 6.0

Backup & Recovery for VMware Environments with Avamar 6.0 White Paper Backup & Recovery for VMware Environments with Avamar 6.0 A Detailed Review Abstract With the ever increasing pace of virtual environments deployed in the enterprise cloud, the requirements

More information

TRANSFORM YOUR BUSINESS: BIG DATA AND ANALYTICS WITH VCE AND EMC

TRANSFORM YOUR BUSINESS: BIG DATA AND ANALYTICS WITH VCE AND EMC TRANSFORM YOUR BUSINESS: BIG DATA AND ANALYTICS WITH VCE AND EMC Vision Big data and analytic initiatives within enterprises have been rapidly maturing from experimental efforts to production-ready deployments.

More information

VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS

VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS Successfully configure all solution components Use VMS at the required bandwidth for NAS storage Meet the bandwidth demands of a 2,200

More information

EMC VPLEX FAMILY. Continuous Availability and Data Mobility Within and Across Data Centers

EMC VPLEX FAMILY. Continuous Availability and Data Mobility Within and Across Data Centers EMC VPLEX FAMILY Continuous Availability and Data Mobility Within and Across Data Centers DELIVERING CONTINUOUS AVAILABILITY AND DATA MOBILITY FOR MISSION CRITICAL APPLICATIONS Storage infrastructure is

More information

On-Demand Infrastructure with Secure Networks REFERENCE ARCHITECTURE

On-Demand Infrastructure with Secure Networks REFERENCE ARCHITECTURE REFERENCE ARCHITECTURE Table of Contents Executive Summary.... 3 Audience.... 3 Overview.... 3 What Is an On-Demand Infrastructure?.... 4 Architecture Overview.... 5 Cluster Overview.... 8 Management Cluster...

More information

What s New: vsphere Virtual Volumes

What s New: vsphere Virtual Volumes Virtual Volumes (VVols) Beta What s New What s New: vsphere Virtual Volumes VMware Storage Business Unit Documentation v 1.5/August 2015 TECHNICAL MARKETING DOCUMENTATION / 1 Contents INTRODUCTION... 3

More information

Brocade Solution for EMC VSPEX Server Virtualization

Brocade Solution for EMC VSPEX Server Virtualization Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 & 100 Virtual Machines Enabled by Microsoft Hyper-V, Brocade ICX series switch,

More information

場次: Track B-2 公司名稱: EMC 主講人: 藍基能

場次: Track B-2 公司名稱: EMC 主講人: 藍基能 場次: Track B-2 公司名稱: EMC 主題: EMC Software-Defined Data Protection 主講人: 藍基能 Three Transformational IT Waves Evolving Backup to Data Protection SERVER CENTRIC BACKUP INFRASTRUCTURE CENTRIC DATA PROTECTION

More information

Virtualizing the SAN with Software Defined Storage Networks

Virtualizing the SAN with Software Defined Storage Networks Software Defined Storage Networks Virtualizing the SAN with Software Defined Storage Networks Introduction Data Center architects continue to face many challenges as they respond to increasing demands

More information

Creating a VMware Software-Defined Data Center REFERENCE ARCHITECTURE VERSION 1.5

Creating a VMware Software-Defined Data Center REFERENCE ARCHITECTURE VERSION 1.5 Software-Defined Data Center REFERENCE ARCHITECTURE VERSION 1.5 Table of Contents Executive Summary....4 Audience....4 Overview....4 VMware Software Components....6 Architectural Overview... 7 Cluster...

More information

EMC VPLEX FAMILY. Transparent information mobility within, across, and between data centers ESSENTIALS A STORAGE PLATFORM FOR THE PRIVATE CLOUD

EMC VPLEX FAMILY. Transparent information mobility within, across, and between data centers ESSENTIALS A STORAGE PLATFORM FOR THE PRIVATE CLOUD EMC VPLEX FAMILY Transparent information mobility within, across, and between data centers A STORAGE PLATFORM FOR THE PRIVATE CLOUD In the past, users have relied on traditional physical storage to meet

More information

What s New in VMware Site Recovery Manager 6.1

What s New in VMware Site Recovery Manager 6.1 What s New in VMware Site Recovery Manager 6.1 Technical Overview AUGUST 2015 Table of Contents Introduction... 2 Storage profile based protection... 2 Stretched Storage and Orchestrated vmotion... 5 Enhanced

More information

全 新 企 業 網 路 儲 存 應 用 THE STORAGE NETWORK MATTERS FOR EMC IP STORAGE PLATFORMS

全 新 企 業 網 路 儲 存 應 用 THE STORAGE NETWORK MATTERS FOR EMC IP STORAGE PLATFORMS 全 新 企 業 網 路 儲 存 應 用 THE STORAGE NETWORK MATTERS FOR EMC IP STORAGE PLATFORMS Enterprise External Storage Array Capacity Growth IDC s Storage Capacity Forecast = ~40% CAGR (2014/2017) Keep Driving Growth!

More information

vcloud Suite Architecture Overview and Use Cases

vcloud Suite Architecture Overview and Use Cases vcloud Suite Architecture Overview and Use Cases vcloud Suite 5.8 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new

More information

EMC SOLUTIONS TO OPTIMIZE EMR INFRASTRUCTURE FOR CERNER

EMC SOLUTIONS TO OPTIMIZE EMR INFRASTRUCTURE FOR CERNER EMC SOLUTIONS TO OPTIMIZE EMR INFRASTRUCTURE FOR CERNER ESSENTIALS Mitigate project risk with the proven leader, many of largest EHR sites run on EMC storage Reduce overall storage costs with automated

More information

EMC VSPEX SOLUTION FOR INFRASTRUCTURE AS A SERVICE WITH MICROSOFT SYSTEM CENTER

EMC VSPEX SOLUTION FOR INFRASTRUCTURE AS A SERVICE WITH MICROSOFT SYSTEM CENTER DESIGN AND IMPLEMENTATION GUIDE EMC VSPEX SOLUTION FOR INFRASTRUCTURE AS A SERVICE WITH MICROSOFT SYSTEM CENTER EMC VSPEX Abstract This describes how to design virtualized Microsoft System Center resources

More information

Nutanix Tech Note. Configuration Best Practices for Nutanix Storage with VMware vsphere

Nutanix Tech Note. Configuration Best Practices for Nutanix Storage with VMware vsphere Nutanix Tech Note Configuration Best Practices for Nutanix Storage with VMware vsphere Nutanix Virtual Computing Platform is engineered from the ground up to provide enterprise-grade availability for critical

More information

HAVE YOUR AGILITY AND EFFICENCY TOO

HAVE YOUR AGILITY AND EFFICENCY TOO 1 HAVE YOUR AGILITY AND EFFICENCY TOO PRACTICAL STEPS FOR A SOFTWARE DEFINED INFRASTRCUTURE TRANSFORMATION 2 ROADMAP INFORMATION DISCLAIMER EMC makes no representation and undertakes no obligations with

More information

Stretching VMware clusters across distances with EMC's Vplex - the ultimate in High Availability.

Stretching VMware clusters across distances with EMC's Vplex - the ultimate in High Availability. Stretching VMware clusters across distances with EMC's Vplex - the ultimate in High Availability. VMware TechTalk Live Yury Magalif, Principal Architect Cloud Computing March 11, 2014 What is CDI? Value

More information

EMC ViPR Controller. User Interface Virtual Data Center Configuration Guide. Version 2.4 302-002-416 REV 01

EMC ViPR Controller. User Interface Virtual Data Center Configuration Guide. Version 2.4 302-002-416 REV 01 EMC ViPR Controller Version 2.4 User Interface Virtual Data Center Configuration Guide 302-002-416 REV 01 Copyright 2014-2015 EMC Corporation. All rights reserved. Published in USA. Published November,

More information

EMC XTREMIO AND MICROSOFT EXCHANGE DATABASES

EMC XTREMIO AND MICROSOFT EXCHANGE DATABASES EMC XTREMIO AND MICROSOFT EXCHANGE DATABASES Preliminary findings: Efficiency of various production samples Market overview and adoption of all-flash arrays Techniques for estimating efficiency EMC Solutions

More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

VMware for your hosting services

VMware for your hosting services VMware for your hosting services Anindya Kishore Das 2009 VMware Inc. All rights reserved Everybody talks Cloud! You will eat your cloud and you will like it! Everybody talks Cloud - But what is it? VMware

More information

EMC Data Protection Advisor 6.0

EMC Data Protection Advisor 6.0 White Paper EMC Data Protection Advisor 6.0 Abstract EMC Data Protection Advisor provides a comprehensive set of features to reduce the complexity of managing data protection environments, improve compliance

More information

OUTPERFORMING THE COMPETITION

OUTPERFORMING THE COMPETITION ETISALAT MISR Defining the future of telecommunications services ESSENTIALS Challenge Maintain competitive edge in a fast-changing marketplace through IT agility and ability to exploit third-platform technology

More information

BACKUP & RECOVERY FOR VMWARE ENVIRONMENTS WITH AVAMAR 7.2

BACKUP & RECOVERY FOR VMWARE ENVIRONMENTS WITH AVAMAR 7.2 BACKUP & RECOVERY FOR VMWARE ENVIRONMENTS WITH AVAMAR 7.2 A Detailed Review ABSTRACT With the ever-increasing pace of virtual environments deployed in the enterprise cloud, the requirements for protecting

More information

Software-Defined Networks Powered by VellOS

Software-Defined Networks Powered by VellOS WHITE PAPER Software-Defined Networks Powered by VellOS Agile, Flexible Networking for Distributed Applications Vello s SDN enables a low-latency, programmable solution resulting in a faster and more flexible

More information

Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011

Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011 Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011 Executive Summary Large enterprise Hyper-V deployments with a large number

More information

Brocade One Data Center Cloud-Optimized Networks

Brocade One Data Center Cloud-Optimized Networks POSITION PAPER Brocade One Data Center Cloud-Optimized Networks Brocade s vision, captured in the Brocade One strategy, is a smooth transition to a world where information and applications reside anywhere

More information

VMware vcloud Air Key Concepts

VMware vcloud Air Key Concepts vcloud Air This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document,

More information

Cisco Virtualized Multiservice Data Center Reference Architecture: Building the Unified Data Center

Cisco Virtualized Multiservice Data Center Reference Architecture: Building the Unified Data Center Solution Overview Cisco Virtualized Multiservice Data Center Reference Architecture: Building the Unified Data Center What You Will Learn The data center infrastructure is critical to the evolution of

More information

Using VMWare VAAI for storage integration with Infortrend EonStor DS G7i

Using VMWare VAAI for storage integration with Infortrend EonStor DS G7i Using VMWare VAAI for storage integration with Infortrend EonStor DS G7i Application Note Abstract: This document describes how VMware s vsphere Storage APIs (VAAI) can be integrated and used for accelerating

More information

Continuous Data Protection for any Point-in-Time Recovery: Product Options for Protecting Virtual Machines or Storage Array LUNs

Continuous Data Protection for any Point-in-Time Recovery: Product Options for Protecting Virtual Machines or Storage Array LUNs EMC RECOVERPOINT FAMILY Continuous Data Protection for any Point-in-Time Recovery: Product Options for Protecting Virtual Machines or Storage Array LUNs ESSENTIALS EMC RecoverPoint Family Optimizes RPO

More information

EMC HYBRID CLOUD SOLUTION FOR HEALTHCARE

EMC HYBRID CLOUD SOLUTION FOR HEALTHCARE EMC HYBRID CLOUD SOLUTION FOR HEALTHCARE Next-Generation Health IT at the Point-of-Care ESSENTIALS Delivering ITaaS via a trusted, well-run EMC Hybrid Cloud drives business alignment, efficiency, and end-user

More information

VMware Virtual SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014

VMware Virtual SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014 VMware SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014 VMware SAN Backup Using VMware vsphere Table of Contents Introduction.... 3 vsphere Architectural Overview... 4 SAN Backup

More information

Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V. Reference Architecture

Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V. Reference Architecture Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published February, 2011 EMC believes the information

More information

Expert Reference Series of White Papers. VMware vsphere Distributed Switches

Expert Reference Series of White Papers. VMware vsphere Distributed Switches Expert Reference Series of White Papers VMware vsphere Distributed Switches info@globalknowledge.net www.globalknowledge.net VMware vsphere Distributed Switches Rebecca Fitzhugh, VCAP-DCA, VCAP-DCD, VCAP-CIA,

More information

EMC ViPR Software Defined Storage

EMC ViPR Software Defined Storage EMC ViPR Software Defined Storage Virtualize Everything Compromise Nothing VIRTUALIZE EVERYTHING COMPROMISE NOTHING Dayne Turbitt Regional Sales Director EMC Advanced Software Division 1 IT is Being Transformed

More information

Transform Your Business and Protect Your Cisco Nexus Investment While Adopting Cisco Application Centric Infrastructure

Transform Your Business and Protect Your Cisco Nexus Investment While Adopting Cisco Application Centric Infrastructure White Paper Transform Your Business and Protect Your Cisco Nexus Investment While Adopting Cisco Application Centric Infrastructure What You Will Learn The new Cisco Application Centric Infrastructure

More information

VMware vsphere Data Protection 6.0

VMware vsphere Data Protection 6.0 VMware vsphere Data Protection 6.0 TECHNICAL OVERVIEW REVISED FEBRUARY 2015 Table of Contents Introduction.... 3 Architectural Overview... 4 Deployment and Configuration.... 5 Backup.... 6 Application

More information

EMC VSPEX PRIVATE CLOUD

EMC VSPEX PRIVATE CLOUD EMC VSPEX PRIVATE CLOUD VMware vsphere 5.5 for up to 125 Virtual Machines Enabled by Microsoft Windows Server 2012 R2, EMC VNXe3200, and EMC Powered Backup EMC VSPEX Abstract This document describes the

More information

VXLAN: Scaling Data Center Capacity. White Paper

VXLAN: Scaling Data Center Capacity. White Paper VXLAN: Scaling Data Center Capacity White Paper Virtual Extensible LAN (VXLAN) Overview This document provides an overview of how VXLAN works. It also provides criteria to help determine when and where

More information

Cisco Unified Data Center

Cisco Unified Data Center Solution Overview Cisco Unified Data Center Simplified, Efficient, and Agile Infrastructure for the Data Center What You Will Learn The data center is critical to the way that IT generates and delivers

More information

F5 PARTNERSHIP SOLUTION GUIDE. F5 and VMware. Virtualization solutions to tighten security, optimize performance and availability, and unify access

F5 PARTNERSHIP SOLUTION GUIDE. F5 and VMware. Virtualization solutions to tighten security, optimize performance and availability, and unify access F5 PARTNERSHIP SOLUTION GUIDE F5 and VMware Virtualization solutions to tighten security, optimize performance and availability, and unify access 1 W H AT 'S INS I DE Data Center Virtualization 3 Enterprise

More information

This Release Notes document provides details of new features and resources for the Federation Enterprise Hybrid Cloud v3.5.0. February 2016 V1.

This Release Notes document provides details of new features and resources for the Federation Enterprise Hybrid Cloud v3.5.0. February 2016 V1. This Release Notes document provides details of new features and resources for the Federation Enterprise Hybrid Cloud v3.5.0. February 2016 V1.0 Copyright 2015 EMC Corporation. All rights reserved. Published

More information

White Paper. Juniper Networks. Enabling Businesses to Deploy Virtualized Data Center Environments. Copyright 2013, Juniper Networks, Inc.

White Paper. Juniper Networks. Enabling Businesses to Deploy Virtualized Data Center Environments. Copyright 2013, Juniper Networks, Inc. White Paper Juniper Networks Solutions for VMware NSX Enabling Businesses to Deploy Virtualized Data Center Environments Copyright 2013, Juniper Networks, Inc. 1 Table of Contents Executive Summary...3

More information

CONVERGE APPLICATIONS, ANALYTICS, AND DATA WITH VCE AND PIVOTAL

CONVERGE APPLICATIONS, ANALYTICS, AND DATA WITH VCE AND PIVOTAL CONVERGE APPLICATIONS, ANALYTICS, AND DATA WITH VCE AND PIVOTAL Vision In today s volatile economy, an organization s ability to exploit IT to speed time-to-results, control cost and risk, and drive differentiation

More information

Three Paths to the Virtualized Private Cloud

Three Paths to the Virtualized Private Cloud The Essential Guide to Virtualizing Microsoft Applications on EMC VSPEX For organizations running mission-critical Microsoft enterprise applications like Microsoft Exchange, Microsoft SharePoint, and Microsoft

More information

Whitepaper. NexentaConnect for VMware Virtual SAN. Full Featured File services for Virtual SAN

Whitepaper. NexentaConnect for VMware Virtual SAN. Full Featured File services for Virtual SAN Whitepaper NexentaConnect for VMware Virtual SAN Full Featured File services for Virtual SAN Table of Contents Introduction... 1 Next Generation Storage and Compute... 1 VMware Virtual SAN... 2 Highlights

More information

HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010

HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010 White Paper HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010 Abstract This white paper demonstrates key functionality demonstrated in a lab environment

More information

Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES

Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES Table of Contents Introduction... 1 Network Virtualization Overview... 1 Network Virtualization Key Requirements to be validated...

More information

VMUG - vcloud Air Deep Dive. 2014 VMware Inc. All rights reserved.

VMUG - vcloud Air Deep Dive. 2014 VMware Inc. All rights reserved. VMUG - vcloud Air Deep Dive 2014 VMware Inc. All rights reserved. Agenda 1 Overview of vcloud Air 2 Advanced Networking Capabilities 3 Use Cases 4 Overview of Disaster Recovery Service 5 Questions 2 VMware

More information

VMware. 2014 VMware Inc. All rights reserved.

VMware. 2014 VMware Inc. All rights reserved. VMware 2014 VMware Inc. All rights reserved. 1 Today s Datacenter Not Yet A Private Cloud Hypervisor Compute challenge largely resolved Hardware abstracted and pooled Dynamic provisioning Self-service

More information

www.vce.com SAP Landscape Virtualization Management Version 2.0 on VCE Vblock System 700 series

www.vce.com SAP Landscape Virtualization Management Version 2.0 on VCE Vblock System 700 series www.vce.com SAP Landscape Virtualization Management Version 2.0 on VCE Vblock System 700 series Version 1.1 December 2014 THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." VCE MAKES NO REPRESENTATIONS

More information

CERNER EMR: OPTIMIZING IT INFRASTRUCTURES

CERNER EMR: OPTIMIZING IT INFRASTRUCTURES CERNER EMR: OPTIMIZING IT INFRASTRUCTURES Guidance for maximizing performance, availability and mobility of Cerner Millennium environments ABSTRACT Cerner Millennium EMR plays a critical role in delivering

More information

DESIGN AND IMPLEMENTATION GUIDE EMC DATA PROTECTION OPTION NS FOR VSPEXX PRIVATE CLOUD EMC VSPEX December 2014

DESIGN AND IMPLEMENTATION GUIDE EMC DATA PROTECTION OPTION NS FOR VSPEXX PRIVATE CLOUD EMC VSPEX December 2014 DESIGN AND IMPLEMENTATION GUIDE EMC DATA PROTECTION OPTIONS FOR VSPEX PRIVATE CLOUD EMC VSPEX December 2014 Copyright 2013-2014 EMC Corporation. All rights reserved. Published in USA. Published December,

More information

WHITE PAPER. www.fusionstorm.com. The Double-Edged Sword of Virtualization:

WHITE PAPER. www.fusionstorm.com. The Double-Edged Sword of Virtualization: WHiTE PaPEr: Easing the Way to the cloud: 1 WHITE PAPER The Double-Edged Sword of Virtualization: Solutions and Strategies for minimizing the challenges and reaping the rewards of Disaster recovery in

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information