EMC VSPEX with EMC VPLEX for VMware vSphere 5.1




Design and Implementation Guide

EMC VSPEX with EMC VPLEX for VMware vSphere 5.1

Abstract: This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with EMC VPLEX Metro, VMware vSphere, and EMC VNX for up to 125 virtual machines.

June 2013

Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published June 2013.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC Online Support website.

EMC VSPEX with EMC VPLEX for VMware vSphere 5.1
Part Number H11878

Contents

1. Executive Summary
2. Background and VPLEX Overview
   2.1 Document Purpose
   2.2 Target Audience
   2.3 Business Challenges
3. VSPEX with VPLEX Solution
   3.1 VPLEX Local
   3.2 VPLEX Metro
   3.3 VPLEX Platform Availability and Scaling Summary
4. VPLEX Overview
   4.1 Continuous Availability
   4.2 Mobility
   4.3 Stretched Clusters Across Distance
   4.4 vSphere HA and VPLEX Metro HA
   4.5 VPLEX Availability
   4.6 Storage/Service Availability
5. Solution Architecture
   5.1 Overview
   5.2 Solution Architecture: VPLEX Key Components
   5.3 VPLEX Cluster Witness
6. Best Practices and Configuration Recommendations
   6.1 VPLEX Back-End Storage
   6.2 VPLEX Host Connectivity
   6.3 VPLEX Network Connectivity
   6.4 VPLEX Cluster Connectivity
   6.5 Storage Configuration Guidelines
   6.6 VSPEX Storage Building Blocks
7. VPLEX Local Deployment
   7.1 Overview
   7.2 Physical Installation
   7.3 Preliminary Tasks
   7.4 Set Public IPv4 Address
   7.5 Run EZ-Setup Wizard
   7.6 Expose Back-End Storage
   7.7 Resume EZ-Setup
   7.8 Meta-volume
   7.9 Register VPLEX
   7.10 Enable Front-End Ports
   7.11 Configure VPLEX for ESRS Gateway
   7.12 Re-Verify Cluster Health
8. VPLEX Metro Deployment
   8.1 Overview
   8.2 Physical Installation
   8.3 Preliminary Tasks
   8.4 Set Public IPv4 Address
   8.5 Run EZ-Setup Wizard on Cluster 1
   8.6 Expose Back-End Storage
   8.7 Resume EZ-Setup
   8.8 Meta-volume
   8.9 Register Cluster 1
   8.10 Enable Front-End Ports
   8.11 Connect Cluster 2
   8.12 Verify the Product Version
   8.13 Verify Cluster 2 Health
   8.14 Synchronize Clusters
   8.15 Launch EZ-Setup on Cluster 2
   8.16 Expose Back-End Storage on Cluster 2
   8.17 Resume EZ-Setup on Cluster 2
   8.18 Create Meta-volume on Cluster 2
   8.19 Register Cluster 2
   8.20 Configure VPLEX for ESRS Gateway
   8.21 Complete EZ-Setup on Cluster 1
   8.22 Complete EZ-Setup on Cluster 2
   8.23 Configure WAN Interfaces
   8.24 Join the Clusters
   8.25 Create Logging Volumes
   8.26 Re-Verify Cluster Health
9. Provisioning Virtual Volumes with VPLEX Local
   9.1 Confirm Storage Pools
   9.2 Provision Storage
10. Adding VPLEX to an Existing VSPEX Solution
   10.1 Assumptions
   10.2 Integration Procedure
   10.3 Storage Array Mapping
   10.4 VPLEX Procedure
   10.5 Power-on Hosts
   10.6 Register Host Initiators
   10.7 Create Highly Available Datastores
11. Converting a VPLEX Local Cluster into a VPLEX Metro Cluster
   11.1 Gathering Information for Cluster-2
   11.2 Configuration Information for Cluster Witness
   11.3 Consistency Group and Detach Rules
   11.4 Create Distributed Devices between VPLEX Cluster-1 and Cluster-2
   11.5 Create Storage View for ESXi Hosts
12. Post-install Checklist
13. Summary
Appendix-A: References
Appendix-B: Tech Refresh using VPLEX Data Mobility
Appendix-C: VPLEX Configuration Limits
Appendix-D: VPLEX Pre-Configuration Worksheets

List of Figures

Figure 1: Private Cloud components for VSPEX with VPLEX Local
Figure 2: Private Cloud components for a VSPEX/VPLEX Metro solution
Figure 3: VPLEX delivers zero downtime
Figure 4: Application Mobility within a datacenter
Figure 5: Application and Data Mobility Example
Figure 6: Application and Data Mobility Example
Figure 7: Highly Available Infrastructure Example
Figure 8: VPLEX Local architecture for Traditional Single Site Environments
Figure 9: VPLEX Metro architecture for Distributed Environments
Figure 10: VSPEX deployed with VPLEX Metro using distributed volumes
Figure 11: Failure scenarios without VPLEX Witness
Figure 12: Failure scenarios with VPLEX Witness
Figure 13: VSPEX deployed with VPLEX Metro configured with 3rd-site VPLEX Witness
Figure 14: VMware virtual disk types
Figure 15: Storage Layout for 125 Virtual Machine Private Cloud Proven Infrastructure
Figure 16: Place VNX LUNs into VPLEX Storage Group
Figure 17: VPLEX Local System Status and Login Screen
Figure 18: Provisioning Storage
Figure 19: EZ-Provisioning Step 1: Claim Storage and Create Virtual Volumes
Figure 20: EZ-Provisioning Step 2: Register Initiators
Figure 21: EZ-Provisioning Step 3: Create Storage View
Figure 22: EZ-Provisioning
Figure 23: Create Virtual Volumes: Select Array
Figure 24: Create Virtual Volumes: Select Storage Volumes
Figure 25: Create Distributed Volumes: Select Mirrors
Figure 26: Create Distributed Volumes: Select Consistency Group
Figure 27: EZ-Provisioning: Register Initiators
Figure 28: View Unregistered Initiator Ports
Figure 29: View Unregistered Initiator Ports
Figure 30: EZ-Provisioning: Create Storage View
Figure 31: Create Storage View: Select Initiators
Figure 32: Create Storage View: Select Ports
Figure 33: Create Storage View: Select Virtual Volumes
Figure 34: VPLEX Metro System Status Page
Figure 35: VPLEX Consistency Group created for Virtual Volumes
Figure 36: VPLEX Distributed Devices
Figure 37: VPLEX Storage View for ESXi Hosts
Figure 38: Batch Migration: Create Migration Plan
Figure 39: Batch Migration: Start Migration
Figure 40: Batch Migration: Monitor Progress
Figure 41: Batch Migration: Change Migration State
Figure 42: Batch Migration: Commit the Migration

List of Tables

Table 1: VPLEX Components
Table 2: Hardware Resources for Storage
Table 3: IPv4 Networking Information
Table 4: Metadata Backup Information
Table 5: SMTP details to configure event notifications
Table 6: SNMP information
Table 7: Certificate Authority (CA) and Host Certificate information
Table 8: Product Registration Information
Table 9: VPLEX Metro IP WAN Configuration Information
Table 10: Cluster Witness Configuration Information
Table 13: IPv4 Networking Information
Table 14: Metadata Backup Information
Table 15: SMTP details to configure event notifications
Table 16: SNMP information
Table 17: Certificate Authority (CA) and Host Certificate information
Table 18: Product Registration Information
Table 19: IP WAN Configuration Information
Table 20: Cluster Witness Configuration Information

1. Executive Summary

Businesses face many challenges in delivering application availability while working within constrained IT budgets. Increased deployment of storage virtualization lowers costs and improves availability, but this alone will not give businesses the application access their users demand. This document provides an overview of VPLEX, its use cases, and how VSPEX with VPLEX solutions provide the continuous availability and mobility that mission-critical applications require for 24x7 operations.

This document is divided into sections that cover the VPLEX family, its use cases, the solution architecture, how VPLEX extends VMware capabilities, and the solution requirements and configuration details. The EMC VPLEX family, in its Local and Metro versions, provides continuous availability and non-disruptive data mobility for EMC and non-EMC storage within and across data centers. Additionally, this document covers the following:

- How VMware vSphere makes it simpler and less expensive to provide higher levels of availability for critical business applications. With vSphere, organizations can easily raise the baseline level of availability provided for all applications, and deliver higher levels of availability more easily and cost-effectively.
- How VPLEX Metro extends VMware vMotion, HA, DRS, and FT by stretching the VMware cluster across distance, providing solutions that go beyond traditional disaster recovery.
- Solution requirements for software and hardware, material lists, step-by-step sizing guidance and worksheets, and verified deployment steps to implement a VPLEX solution with VSPEX Private Cloud for VMware vSphere that supports up to 125 virtual machines.

2. Background and VPLEX Overview

2.1 Document Purpose

This document provides an overview of how to use VPLEX with the VSPEX Proven Infrastructure, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the overall system. This document applies to VSPEX deployed with EMC VPLEX Metro and VPLEX Witness. The details provided in this document are based on the following configuration:

- VPLEX GeoSynchrony 5.1 (patch 4) or higher
- VPLEX Metro
- VPLEX clusters are within 5 milliseconds (ms) of each other for VMware HA (10 ms is possible with a VMware Enterprise Plus license)
- VPLEX Witness is deployed to a third failure domain
- ESXi and vSphere 5.1 or later are used
- Any qualified pair of arrays (both EMC and non-EMC) listed on the EMC Simple Support Matrix (ESSM) found here: https://elabnavigator.emc.com/vault/pdf/emc_vplex.pdf

2.2 Target Audience

The readers of this document should be familiar with the VSPEX Proven Infrastructures and have the necessary training and background to install and configure VMware vSphere, EMC VNX series storage systems, VPLEX, and the associated infrastructure required by this implementation. External references are provided where applicable, and readers should be familiar with those documents. After purchase, implementers of this solution should focus on the configuration guidelines of the solution validation phase and the appropriate references and appendices.

2.3 Business Challenges

Most of today's organizations operate 24x7, with most applications being mission-critical. Continuous availability of these applications to all users is a primary goal of IT. A secondary goal is to have all applications up and running again as soon as possible if they stop processing. Hundreds of events can take infrastructure down: fires, flooding, natural disasters, application failures, or even simple mistakes in the computer room, most of which are outside of IT's control. Sometimes there are good reasons to take down applications for scheduled maintenance, tech refreshes, load balancing, or data center relocation. In all of these scenarios the outcome is the same:

applications stop processing. The ultimate goal of the IT organization is to maintain mission-critical application availability.

3. VSPEX with VPLEX Solution

VSPEX with VPLEX, built on best-of-breed technologies, delivers the power, performance, and reliability businesses need to be competitive. VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that let you make an informed decision at the hypervisor, server, and networking layers. Customers are increasingly deploying their business applications on consolidated compute, network, and storage environments. EMC VSPEX Private Cloud using VMware reduces the complexity of configuring every component of a traditional deployment model. With VSPEX, the complexity of integration management is reduced while the application design and implementation options are maintained.

VPLEX enhances the VSPEX value proposition by adding the continuous availability and non-disruptive data mobility use cases to the VSPEX infrastructure. VPLEX rounds out the VSPEX Data Protection portfolio by providing the ability to:

- Refresh technology non-disruptively within the storage arrays within VSPEX
- vMotion virtual machines non-disruptively from one VSPEX system to another (for example, for workload balancing or disaster avoidance)
- Automatically restart virtual machines from one VSPEX to another to deliver a higher level of protection to VMware environments on VSPEX

The following sections describe the VPLEX Local and VPLEX Metro products and how they deliver the value propositions listed above as part of a VSPEX solution.

3.1 VPLEX Local

This solution uses VSPEX Private Cloud for VMware vSphere 5.1 with VPLEX Local to provide simplified management and non-disruptive data mobility between multiple heterogeneous storage arrays within the data center. VPLEX removes physical barriers within the datacenter. With its unique scale-out architecture, VPLEX's advanced data caching and distributed cache coherency provide workload resiliency, automatic sharing, balancing, and failover of storage domains, and enable local access with predictable service levels.

Figure 1: Private Cloud components for VSPEX with VPLEX Local

Note: The above image depicts a logical configuration; physically, the VPLEX can be hosted within the VSPEX rack.

3.2 VPLEX Metro

The two-data-center solution referenced in this document uses VSPEX Private Cloud for VMware vSphere 5.1 with VPLEX Metro to provide simplified management and non-disruptive data mobility between multiple heterogeneous storage arrays across data centers. VPLEX Metro enhances the capabilities of VMware vMotion, HA, DRS, and FT to provide a solution that extends data protection strategies beyond traditional

disaster recovery. This solution provides a new type of deployment that achieves continuous availability over distance for today's enterprise storage and cloud environments. VPLEX Metro provides data access and mobility between two VPLEX clusters within synchronous distances. This solution builds on the VPLEX Local approach by creating a VPLEX Metro cluster between the two geographically dispersed datacenters. Once deployed, this solution provides distributed storage volumes that remain available across distance, and it makes VMware technologies such as vMotion, HA, DRS, and FT even better and easier to use.

Figure 2: Private Cloud components for a VSPEX/VPLEX Metro solution

The above image is a logical configuration depicting 125 VMs with their datastores stretched across a VMware vSphere 5.1 cluster. This infrastructure delivers continuous availability for the applications and enables non-disruptive workload mobility and balancing. The VPLEX appliances can be physically hosted within the VSPEX racks if space permits.

3.3 VPLEX Platform Availability and Scaling Summary

VPLEX addresses high-availability and data mobility requirements while scaling to the I/O throughput required by the front-end applications and back-end storage. Continuous availability (CA), high availability (HA), and data mobility are all characteristics of the VPLEX Local and VPLEX Metro solutions outlined in this document.

The basic building block of a VPLEX is an engine. To eliminate single points of failure, each VPLEX engine consists of two directors. A VPLEX cluster can consist of one, two, or four engines. Each engine is protected by a standby power supply (SPS), and each Fibre Channel switch gets its power through an uninterruptible power supply (UPS). In a dual-engine or quad-engine cluster, the management server also gets power from a UPS. The management server has a public Ethernet port, which provides cluster management services when connected to the customer network.

VPLEX scales both up and out. Upgrades from a single-engine to a dual-engine cluster, as well as from a dual-engine to a quad-engine cluster, are fully supported and are accomplished non-disruptively; this is referred to as scale-up. Upgrades from a VPLEX Local to a VPLEX Metro are also supported non-disruptively.
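To make the scale-up options above concrete, the following minimal Python sketch maps the engine count of a cluster to the director, SPS, and UPS arrangement described in this section. It is illustrative only and is not an EMC sizing or configuration tool.

from dataclasses import dataclass

@dataclass
class VplexCluster:
    engines: int  # 1, 2, or 4

    def __post_init__(self) -> None:
        if self.engines not in (1, 2, 4):
            raise ValueError("A VPLEX cluster is single-, dual-, or quad-engine")

    @property
    def directors(self) -> int:
        return self.engines * 2          # two directors per engine

    @property
    def standby_power_supplies(self) -> int:
        return self.engines              # one SPS protects each engine

    @property
    def mgmt_server_on_ups(self) -> bool:
        return self.engines >= 2         # dual- and quad-engine clusters only

if __name__ == "__main__":
    for engines in (1, 2, 4):
        c = VplexCluster(engines)
        print(engines, "engine(s):", c.directors, "directors,",
              c.standby_power_supplies, "SPS, management server on UPS:",
              c.mgmt_server_on_ups)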

4. VPLEX Overview

The EMC VSPEX with EMC VPLEX solution represents the next-generation architecture for continuous availability and data mobility for mission-critical applications. This architecture is based on EMC's more than 20 years of expertise in designing, implementing, and perfecting enterprise-class intelligent cache and distributed data protection solutions. The combined VSPEX with VPLEX solution provides a complete system architecture capable of supporting up to 125 virtual machines with a redundant server and network topology and highly available storage within or across geographically dispersed datacenters. VPLEX addresses three distinct customer requirements:

- Continuous availability: the ability to create a highly available storage infrastructure across synchronous distances with unmatched resiliency.
- Mobility: the ability to move applications and data across different storage installations within the same data center, across a campus, or within a geographical region.
- Stretched clusters across distance: the ability to extend VMware vMotion, HA, DRS, and FT outside the data center across distances, ensuring the continuous availability of VSPEX solutions.

4.1 Continuous Availability

The EMC VPLEX family provides continuous availability with zero unplanned downtime for applications within a data center and across data centers at synchronous distances. VPLEX enables users to have exactly the same information simultaneously read/write accessible in two locations, delivering the ability to stretch hypervisor clusters, such as VMware vSphere, across sites. Instead of idle assets at the second site, all infrastructure is utilized in an active-active state.

Figure 3: VPLEX delivers zero downtime

With VPLEX in place, customers have tremendous flexibility in the area of data mobility. This addresses compelling use cases such as array technology refreshes with no disruption to the applications and no planned downtime. It also enables performance load balancing for customers who want to dynamically move data to a higher-performing or higher-capacity array without affecting end users.

4.2 Mobility

EMC VPLEX Local provides connectivity to heterogeneous storage arrays, delivering seamless data mobility and the ability to manage storage provisioned from multiple heterogeneous arrays from a single interface within a data center. This gives you the ability to relocate, share, and balance infrastructure resources within a data center.

Figure 4: Application Mobility within a datacenter

VPLEX Metro configurations enable migrations within and across datacenters over synchronous distances. In combination with VMware vMotion, this allows you to transparently relocate virtual machines and their corresponding applications and data over synchronous distance, giving you the ability to relocate, share, and balance infrastructure resources between data centers. These capabilities save you money, both by reducing the time needed for data migrations and by balancing workloads across sites to fully utilize infrastructure at both sites.

Traditional data migrations using array replication or manual data moves are expensive, time-consuming, and often risky. They are expensive because companies typically pay someone to do the services work. They are time-consuming because the customer cannot simply shut down servers; instead, they must work through their business units to identify possible migration windows, which mostly fall on nights and weekends. Migrations can also be risky if the dependencies between applications are not well documented, and issues discovered during the migration may not be remediable without an outage until the following maintenance cycle.

VPLEX limits the risk of traditional migrations by making the process fully reversible. If performance or other issues are discovered when the new storage is put online, the new storage can be taken down and the old storage can continue serving I/O. Because migrations with VPLEX are straightforward, customers can perform them themselves, yielding significant services cost savings. New infrastructure can also be used immediately, with no need to wait for scheduled downtime to begin migrations. There is a powerful TCO benefit associated with VPLEX: all future refreshes and migrations are free of these migration costs.

Figure 5: Application and Data Mobility Example

A VPLEX cluster is a single virtualization I/O group that enables non-disruptive data mobility across the entire cluster. All directors in a VPLEX cluster have access to all storage volumes, making this what is referred to as an N-1 architecture. This type of architecture tolerates multiple director failures, down to a single surviving director, without loss of access to data.

During a VPLEX mobility operation, any jobs in progress can be paused or stopped without affecting data integrity. Data mobility creates a mirror of the source and target devices, allowing the user to commit or cancel the job without affecting the actual data. A record of all mobility jobs is maintained until the user purges the list for organizational purposes.

4.3 Stretched Clusters Across Distance

VPLEX Metro extends VMware vMotion, High Availability (HA), Distributed Resource Scheduler (DRS), and Fault Tolerance (FT) outside the data center across distances, ensuring the continuous availability of VSPEX solutions. Stretching vMotion across datacenters enables non-disruptive load balancing, maintenance, and workload relocation. VMware DRS provides full utilization of resources across domains.

Figure 6: Application and Data Mobility Example

4.4 vSphere HA and VPLEX Metro HA

Due to its core design, EMC VPLEX Metro provides the ideal foundation for VMware High Availability and Fault Tolerance clustering over distance, ensuring simple and transparent deployment of stretched clusters without added complexity. VPLEX Metro takes a single block storage device in one location and distributes it to provide single-disk semantics across two locations. This enables a distributed VMFS datastore to be created on that virtual volume. Furthermore, if the layer-2 network has also been stretched, then a single instance of vSphere (including a single logical datacenter) can be distributed into more than one location and VMware HA can be enabled for any given vSphere cluster. This is possible because the storage federation layer of VPLEX is completely transparent to ESXi; it therefore allows the user to add ESXi hosts at two different locations to the same HA cluster. Stretching an HA failover cluster (such as VMware HA) with VPLEX creates a federated HA cluster over distance. This blurs the boundary between local HA and disaster recovery, since the configuration combines the automatic restart capabilities of HA with the geographical distance typically associated with synchronous DR.

4.5 VPLEX Availability

VPLEX is built on a foundation of scalable and highly available processor engines and is designed to scale seamlessly from small to large configurations. VPLEX resides between the servers and heterogeneous storage assets, and uses a unique clustering architecture that allows servers at multiple data centers to have read/write access to the same data at two locations at the same time. Unique characteristics of this architecture include:

- Scale-out clustering hardware that lets you start small and grow big with predictable service levels
- Advanced data caching that utilizes large-scale SDRAM cache to improve performance and reduce I/O latency and array contention
- Distributed cache coherence for automatic sharing, balancing, and failover of I/O across the cluster
- A consistent view of one or more LUNs across VPLEX clusters (within a data center or across synchronous distances), enabling new models of high availability and workload relocation

With a unique scale-up and scale-out architecture, VPLEX advanced data caching and distributed cache coherency provide continuous availability, workload resiliency, automatic sharing, balancing, and failover of storage domains, and enable both local and remote data access with predictable service levels. EMC VPLEX has been architected for virtualization, enabling federation across VPLEX clusters. VPLEX Metro supports a maximum of 5 ms RTT over FC or 10 GbE connectivity.

To protect against an entire site failure causing application outages, VPLEX uses a VMware virtual machine located within a separate failure domain to provide a VPLEX Witness

between the VPLEX clusters that are part of a distributed/federated solution. The VPLEX Witness, known as Cluster Witness, resides in a third failure domain and monitors both VPLEX clusters for availability. This third site needs only IP connectivity to the VPLEX sites.

4.6 Storage/Service Availability

Each VPLEX site has a local VPLEX cluster with physical storage and hosts connected to that VPLEX cluster only. The VPLEX clusters themselves are interconnected across the sites to enable federation. A virtual volume is taken from each of the VPLEX clusters to create a distributed virtual volume. Hosts connected in Site A actively use the storage I/O capability of the storage in Site A; hosts in Site B actively use the storage I/O capability of the storage in Site B.

Figure 7: Highly Available Infrastructure Example

VPLEX distributed volumes are available from either VPLEX cluster and have the same LUN and storage identifiers when exposed from each cluster, enabling true concurrent read/write access across sites.

When using a distributed virtual volume across two VPLEX Clusters, if the storage in one of the sites is lost, all hosts continue to have access to the distributed virtual volume, with no disruption. VPLEX services all read/write traffic through the remote mirror leg at the other site.
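The behavior described above can be pictured with a short Python sketch. It is a conceptual model only; the class and method names are hypothetical and are not VPLEX interfaces. It shows a distributed volume mirroring writes to a storage leg at each site and continuing to service I/O through the surviving leg when one site's storage is lost.

class StorageLeg:
    def __init__(self, site: str):
        self.site = site
        self.online = True
        self.blocks: dict[int, bytes] = {}

class DistributedVolume:
    def __init__(self, leg_a: StorageLeg, leg_b: StorageLeg):
        self.legs = [leg_a, leg_b]

    def _healthy_legs(self):
        legs = [leg for leg in self.legs if leg.online]
        if not legs:
            raise IOError("all mirror legs lost; volume unavailable")
        return legs

    def write(self, lba: int, data: bytes) -> None:
        # Writes go to every healthy leg (synchronous RAID-1 semantics).
        for leg in self._healthy_legs():
            leg.blocks[lba] = data

    def read(self, lba: int) -> bytes:
        # Reads are normally serviced locally; after a leg failure they
        # are transparently serviced from the remaining leg.
        return self._healthy_legs()[0].blocks[lba]

if __name__ == "__main__":
    site_a, site_b = StorageLeg("A"), StorageLeg("B")
    vol = DistributedVolume(site_a, site_b)
    vol.write(0, b"vmfs-metadata")
    site_a.online = False                    # storage lost at Site A
    assert vol.read(0) == b"vmfs-metadata"   # hosts still see the data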

5. Solution Architecture

5.1 Overview

The VSPEX with VPLEX solution using VMware vSphere has been validated for configurations with up to 125 virtual machines. Figure 8 shows an environment with VPLEX Local only, virtualizing the storage and providing high availability across storage arrays. Because all ESXi servers can see the VPLEX volumes, VMware vMotion, HA, and DRS can seamlessly move virtual machines and restart them on any host. This configuration is a traditional virtualized environment; the VPLEX Metro environment extends it to provide high availability both within and across datacenters.

Figure 8: VPLEX Local architecture for Traditional Single Site Environments

Figure 9 characterizes both a traditional infrastructure validated with block-based storage in a single datacenter, and a distributed infrastructure validated with block-based storage federated across two datacenters, where 8 Gb FC carries storage traffic locally and 10 GbE carries storage, management, and application traffic across datacenter sites.

Figure 9: VPLEX Metro architecture for Distributed Environments

5.2 Solution Architecture: VPLEX Key Components

This solution adds the following VPLEX technology to the VSPEX Private Cloud for VMware vSphere 5.1 for 125 Virtual Machines solution:

Table 1: VPLEX Components (Cluster-1 and Cluster-2 are identical single-engine configurations)

- Directors: 2
- Redundant engine SPSs: Yes
- FE Fibre Channel ports (VS2): 8
- BE Fibre Channel ports (VS2): 8
- Cache size (VS2 hardware): 72 GB
- Management servers: 1
- Internal Fibre Channel switches (Local COM): None
- Uninterruptible Power Supplies (UPSs): None

The figure below shows a high-level physical topology of a VPLEX Metro distributed device. VPLEX dual-engine and quad-engine options can be found in the Appendix.

Figure 10: VSPEX deployed with VPLEX Metro using distributed volumes

Figure 10 is a physical representation of the logical configuration shown in Figure 9. Effectively, with this topology deployed, the distributed volume can be treated just like any

other volume; the only difference is that it is now distributed and available in two locations at the same time. Another benefit of this type of architecture is its simplicity: it is no more difficult to configure a cluster across distance than it is in a single data center.

Note: When deploying VPLEX Metro, you can interconnect your VPLEX clusters using either 8 Gb/s Fibre Channel or 10 GbE WAN connectivity. When using FC connectivity, this can be configured either with a dedicated channel (that is, separate, non-merged fabrics) or with an ISL-based fabric (where fabrics have been merged across sites). It is assumed that any WAN link will be fully routable between sites with physically redundant circuits.

Note: It is vital that VPLEX Metro has enough bandwidth between clusters to meet requirements. The Business Continuity Solution Designer (BCSD) tool can be used to validate the design, and EMC can assist in the qualification if desired: https://elabadvisor.emc.com/app/licensedtools/list

For an in-depth technology and architectural understanding of VPLEX Metro, VMware HA, and their interactions, refer to the VPLEX HA TechBook found here: http://www.emc.com/collateral/hardware/technical-documentation/h7113-vplexarchitecture-deployment.pdf

5.3 VPLEX Cluster Witness

VPLEX Metro goes beyond legacy active/passive replication technologies: it delivers true active/active storage over distance as well as federated availability. Three main items are required to deliver true "Federated Availability":

1. True active/active Fibre Channel block storage over distance.
2. VPLEX storage mirroring, which delivers one view of storage and makes data accessible immediately, with no waiting for mirroring to complete. This feature eliminates the need for host-based mirroring, saving host CPU cycles.
3. External arbitration, to ensure that automatic recovery is possible under all failure conditions.

The previous sections discussed items 1 and 2; this section looks at external arbitration, which is enabled by VPLEX Witness. VPLEX Witness is delivered as a zero-cost VMware virtual appliance (vApp) that runs on a customer-supplied ESXi server, or in a public cloud utilizing a VMware virtualized environment. The ESXi server resides in a failure domain physically separate from either VPLEX cluster and uses storage different from that used by the VPLEX clusters.

Using VPLEX Witness ensures that true Federated Availability can be delivered: regardless of a site failure or WAN-link failure, a copy of the data automatically remains online in at least one of the locations. When setting up a distributed volume, or a group of them, the user chooses a preference rule, a special property of each individual distributed volume or group of distributed volumes. The preference rule determines the outcome after failure conditions such as a site failure or link partition, and can be set to "cluster A preferred", "cluster B preferred", or "no automatic winner". At a high level, this has the following effect on a single distributed volume or group of distributed volumes under the failure conditions listed below:

Figure 11: Failure scenarios without VPLEX Witness

As Figure 11 shows, if only the preference rules were used without VPLEX Witness, then under some scenarios manual intervention would be required to bring the volume online at a given VPLEX cluster (for example, if site A is the preferred site and site A fails, site B would also suspend). This is where VPLEX Witness assists: because it can better diagnose failures through network triangulation, it ensures that at any time at least one of the VPLEX clusters has an active path to the data, as shown in the table below:

Figure 12: Failure scenarios with VPLEX Witness

As Figure 12 shows, VPLEX Witness converts a VPLEX Metro from an active/active mobility and collaboration solution into an active/active, continuously available storage cluster. Furthermore, once VPLEX Witness is deployed, failure scenarios become self-managing (that is, fully automatic), which keeps operations extremely simple: there is nothing to do regardless of the failure condition.

Figure 13: VSPEX deployed with VPLEX Metro configured with 3rd-site VPLEX Witness

As depicted in Figure 13 above, the Witness VM is deployed in a separate fault domain and connected to both VPLEX management stations via an IP network.

Note: VPLEX Witness supports a maximum round-trip latency of 1 second between the VPLEX Witness and the VPLEX clusters.
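The outcomes summarized in Figures 11 and 12 can be expressed as a small decision function. The following Python sketch is a conceptual model only, assuming a distributed volume whose preference rule is set to "cluster A preferred"; the scenario names and return values are illustrative and are not VPLEX CLI output.

def surviving_io(scenario: str, preferred: str = "A", witness: bool = True) -> set[str]:
    """Return the set of clusters that keep servicing I/O automatically."""
    other = "B" if preferred == "A" else "A"
    if scenario == "wan_partition":
        # Link partition: the preferred cluster continues in both cases.
        return {preferred}
    if scenario == f"site_{preferred}_failure":
        # Preferred site lost: without Witness the survivor also suspends
        # (manual intervention needed); with Witness it continues automatically.
        return {other} if witness else set()
    if scenario == f"site_{other}_failure":
        # Non-preferred site lost: the preferred cluster continues.
        return {preferred}
    raise ValueError(f"unknown scenario: {scenario}")

if __name__ == "__main__":
    for witness in (False, True):
        for scenario in ("wan_partition", "site_A_failure", "site_B_failure"):
            result = surviving_io(scenario, witness=witness)
            print(f"witness={witness!s:5} {scenario:15} ->",
                  result or "suspended (manual intervention required)")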

VPLEX Virtualized Storage for VMware ESXi

Using VPLEX to virtualize your VMware ESXi storage allows disk access without changing the fundamental dynamics of datastore creation and use. Whether using VPLEX Local with virtual volumes or VPLEX Metro with distributed devices via AccessAnywhere, the hosts still coordinate locking to ensure volume consistency; this is controlled by the cluster file system, Virtual Machine File System (VMFS), within each datastore. Each storage volume is presented to VPLEX, where a virtual volume or distributed device is created, presented to each ESXi host in the cluster, and formatted with the VMFS file system. Figure 14 below shows a high-level physical topology of how VMFS and RDM disks are passed to each ESXi host.

Figure 14: VMware virtual disk types

VMFS

VMware VMFS is a high-performance cluster file system for ESXi Server virtual machines that allows multiple ESXi servers to access the same virtual machine storage concurrently. VPLEX enhances this technology by adding the ability to take a virtual volume at one location and create a RAID-1 mirror, producing a distributed volume that provides single-disk semantics across two locations. This enables the VMFS datastore to be used transparently within and across datacenters.

Raw Device Mapping (RDM)

VMware also provides RDM, a SCSI pass-through technology that allows a virtual machine to pass SCSI commands for a volume directly to the physical storage array. RDMs

are typically used for quorum devices and/or other commonly shared volumes within a cluster.

6. Best Practices and Configuration Recommendations

6.1 VPLEX Back-End Storage

The following are best practices for VPLEX back-end storage:

- Implement dual-fabric designs for fabric redundancy and HA to avoid a single point of failure. This preserves data access even in the event of a full fabric outage.
- Each VPLEX director physically connects to both fabrics for both host (front-end) and storage (back-end) connectivity.
- Hosts connect to both an A director and a B director from both fabrics for the supported HA level of connectivity, as required by the Non-Disruptive Upgrade (NDU) pre-checks.
- Fabric zoning should consist of a set of zones, each with a single initiator and up to 16 targets.
- Avoid port-speed issues between the fabric and VPLEX by using dedicated port speeds, taking special care not to use oversubscribed ports on SAN switches.
- Each director in a VPLEX cluster must have a minimum of two I/O paths to every local back-end storage array and to every storage volume presented to that cluster (required).
- VPLEX allows a maximum of four active paths per director to a given LUN. This is considered optimal because each director load-balances across the four active paths to the storage volume.

6.2 VPLEX Host Connectivity

The following are best practices for VPLEX host connectivity:

- Dual-fabric designs are considered a best practice.
- The front-end I/O modules on each director should have a minimum of two physical connections, one to each fabric (required).
- Each host should have at least one path to an A director and one path to a B director on each fabric, for a total of four logical paths (required for NDU).
- Maximum availability for host connectivity is achieved by using hosts with multiple host bus adapters and zoning to all VPLEX directors.
- Multipathing or path-failover software is required at the host for access across the dual fabrics.
- Each host should have fabric zoning that provides redundant access to each LUN from a minimum of an A and a B director from each fabric.

- Four paths are required for NDU.
- Observe director CPU utilization and schedule the NDU for times when average consumption is at acceptable levels.

6.3 VPLEX Network Connectivity

The following are best practices for VPLEX network connectivity:

- An IPv4 address is required for the management server.
- The management server is configured for auto-negotiate (1 Gb/s NIC).
- VPN connectivity between management servers requires a routable/pingable connection between each cluster.
- Network QoS requires that the link latency between the management server and the VPLEX Witness server not exceed 1 second (not millisecond).
- Network QoS must be able to handle file transfers during the NDU procedure.
- The following firewall ports must be opened:
  - Internet Key Exchange (IKE): UDP port 500
  - NAT Traversal in the IKE (IPsec NAT-T): UDP port 4500
  - Encapsulating Security Payload (ESP): IP protocol number 50
  - Authentication Header (AH): IP protocol number 51
  - Secure Shell (SSH) and Secure Copy (SCP): TCP port 22

6.4 VPLEX Cluster Connectivity

The following are best practices for VPLEX cluster connectivity:

Metro over Fibre Channel (8 Gb/s):
- Each director's FC WAN ports must be able to see at least one FC WAN port on every other remote director (required). The director's local COM port is used for communication between directors within the cluster.
- Independent FC WAN links are strongly recommended for redundancy.
- Each director has two FC WAN ports, which should be configured on separate fabrics to maximize redundancy and fault tolerance.
- Use VSANs and zoning to isolate VPLEX Metro FC traffic from other traffic.
- Use VLANs to isolate VPLEX Metro Ethernet traffic from other traffic.

Metro over IP (10 GbE):
- Latency must be less than or equal to 5 ms RTT.
- Cache must be configured for synchronous write-through mode only.

A minimal sanity check of the host-connectivity rules above is sketched below.
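The following Python sketch is one way to check a host's front-end zoning against the rules above (at least one path to an A director and one to a B director on each fabric, for a minimum of four logical paths). The path representation is hypothetical; it is not output generated by VPLEX or the fabric switches.

from itertools import product

def meets_frontend_rules(paths: set[tuple[str, str]]) -> bool:
    """paths: set of (fabric, director_class) tuples, e.g. ("fabric-A", "A")."""
    # Every combination of fabric and director class must be present,
    # which also guarantees the four-logical-path minimum required for NDU.
    required = set(product(("fabric-A", "fabric-B"), ("A", "B")))
    return required <= paths and len(paths) >= 4

if __name__ == "__main__":
    good = {("fabric-A", "A"), ("fabric-A", "B"),
            ("fabric-B", "A"), ("fabric-B", "B")}
    bad = {("fabric-A", "A"), ("fabric-B", "A")}   # no B-director paths
    print(meets_frontend_rules(good))   # True
    print(meets_frontend_rules(bad))    # False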

6.5 Storage Configuration Guidelines

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance. The tested solutions described below use block storage via Fibre Channel. The storage layout described below adheres to all current best practices; a customer or architect with the necessary training and background can make modifications based on their understanding of the system usage and load if required. However, the building blocks described in this document ensure acceptable performance. The VSPEX storage building blocks document specifies recommendations for customization.

Table 2: Hardware Resources for Storage

EMC VNX array (block), common:
- 1 x 1 GbE NIC per Control Station for management
- 1 x 1 GbE NIC per SP for management
- 2 front-end ports per SP
- System disks for VNX OE

EMC VNX5300, for 125 virtual machines:
- 60 x 600 GB 15k rpm 3.5-inch SAS drives
- 2 x 600 GB 15k rpm 3.5-inch SAS hot spares
- 10 x 200 GB Flash drives
- 1 x 200 GB Flash drive as a hot spare
- 4 x 200 GB Flash drives for FAST Cache

EMC VPLEX Metro, Cluster-1 (single engine):
- (2) Directors
- (8) Front-end ports
- (8) Back-end ports
- (4) WAN COM ports

EMC VPLEX Metro, Cluster-2 (single engine):
- (2) Directors
- (8) Front-end ports
- (8) Back-end ports
- (4) WAN COM ports
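As a quick cross-check of the bill of materials above, the following Python snippet tallies the VNX5300 drive complement from Table 2. The capacities are the nominal drive sizes from the table; usable capacity after RAID protection, hot spares, and FAST Cache will be considerably lower.

drives = [
    # (count, size_gb, role)
    (60, 600, "15k SAS, virtual machine storage pools"),
    (2,  600, "15k SAS, hot spares"),
    (10, 200, "Flash, storage pools"),
    (1,  200, "Flash, hot spare"),
    (4,  200, "Flash, FAST Cache"),
]

total_drives = sum(count for count, _, _ in drives)
raw_tb = sum(count * size for count, size, _ in drives) / 1000.0

print(f"{total_drives} drives, {raw_tb:.1f} TB raw")  # 77 drives, 40.2 TB raw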

6.6 VSPEX Storage Building Blocks

Use the EMC VSPEX Private Cloud VMware vSphere 5.1 for up to 500 Virtual Machines document to properly size, plan, and implement your 125-virtual-machine deployment. Once the building block size has been established and the LUNs have been created on the back-end storage, they are virtualized by VPLEX and presented to the ESXi hosts for use.

Figure 15: Storage Layout for 125 Virtual Machine Private Cloud Proven Infrastructure

7. VPLEX Local Deployment

7.1 Overview

The VPLEX deployment process consists of two main steps: physical installation and configuration. The physical installation of VPLEX is the racking and cabling of VPLEX into the VSPEX rack. The installation process is well defined in the EMC VPLEX Procedure Generator and is therefore not replicated in this section. For detailed installation instructions, use the EMC VPLEX Field Deployment Guide found in the EMC VPLEX Procedure Generator.

The VPLEX configuration process consists of the tasks listed below. Tables within this chapter detail the information needed to complete the configuration; they are populated with sample data so that the expected format is clear. See Appendix-D VPLEX Pre-Configuration Worksheets for blank worksheets that can be printed and filled in. Having these worksheets filled out before beginning the configuration process is highly recommended.

Once the VPLEX has been configured for use, you can log in to a VPLEX management server to discover and claim your VSPEX building block LUNs from the VNX array. These LUNs are used to create virtual volumes for VPLEX Local and/or distributed volumes for VPLEX Metro implementations. For this document, we assume the VSPEX environment has been set up and configured as a VSPEX Private Cloud that supports up to 125 virtual machines. Physical installation and configuration of VPLEX are identical whether the VSPEX with VPLEX solution is already in production or newly installed.

The VPLEX pre-deployment data gathering consists of the items listed below. The first phase is to collect all appropriate site data and fill out the configuration worksheets, which can be found in the Installation and Configuration section of the VPLEX Procedure Generator. Throughout this chapter you will need to refer to the VPLEX Configuration Guide, or other referenced documents, for more detailed information on each step. Chapter 2 of the VPLEX Configuration Guide covers a VPLEX Local implementation, which is the focus of this chapter. For a VPLEX Metro deployment, review the tables in this chapter and then proceed to Chapter 8, VPLEX Metro Deployment.

7.2 Physical Installation

This is the physical installation of the VPLEX into the VSPEX cabinet. It includes the following tasks:

- Unpack the VPLEX equipment.
- Install and cable the standby power supply (SPS) and engine.
- Install the VPLEX management server.

- Connect the remaining internal VPLEX management cables.
- Power up and verify VPLEX operational status.
- Connect the VPLEX front-end and back-end I/O cables.

7.3 Preliminary Tasks

After the VPLEX has been physically installed into the rack, verify that your environment is ready for the deployment of the VPLEX. These tasks include the following:

- Install the VPLEX Procedure Generator
- Review the VPLEX Implementation and Planning Best Practice Guide
- Review the VPLEX Simple Support Matrix
- Review the VPLEX with GeoSynchrony 5.1 Release Notes
- Review the VPLEX Configuration Guide
- Review the ESX Host Connectivity Guide
- Review the Encapsulate Arrays on ESX Guide
- Verify that (4) metadata devices are available for the VPLEX install
- Review the EMC Secure Remote Support Gateway Install Procedure

Before moving forward with the configuration of VPLEX, complete all the relevant worksheets below to ensure that all the information necessary to complete the configuration is available. Blank worksheets are provided in Appendix-D VPLEX Pre-Configuration Worksheets. Review Chapter 2, Task 1 of the VPLEX GeoSynchrony v5.1 Configuration Guide. The following tables show sample configuration information as an example and should be replaced with actual values from the installation environment.

Table 3: IPv4 Networking Information

- Management server IP address: Public IP address for the management server on the customer IP network. Sample value: 192.168.44.171
- Network mask: Subnet mask for the management server IP network. Sample value: 255.255.255.0
- Hostname: Hostname for the management server. Once configured, this name replaces the default name (service) in the shell prompt each time you open an SSH session to the management server. Sample value: DC1-VPLEX
- EMC Secure Remote Support (ESRS) gateway: IP address for the ESRS gateway on the IP network. Sample value: 192.168.44.254

Table 4: Metadata Backup Information

- Day and time to back up the meta-volume: The day and time that the cluster's meta-volume will be backed up to a remote storage volume on a back-end array (selected during the cluster setup procedure). Sample value: 2013 MAY 30, 12:00
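The worksheet values lend themselves to a simple consistency check before EZ-Setup is run. The following Python sketch is illustrative only; it captures the Table 3 fields (using the sample values from the table) and verifies that the management server and ESRS gateway addresses fall on the subnet implied by the network mask.

from dataclasses import dataclass
from ipaddress import IPv4Address, IPv4Network

@dataclass
class ManagementNetworkWorksheet:
    mgmt_ip: str
    netmask: str
    hostname: str
    esrs_gateway: str

    def validate(self) -> None:
        # Derive the management subnet from the IP and mask on the worksheet.
        network = IPv4Network(f"{self.mgmt_ip}/{self.netmask}", strict=False)
        for label, addr in (("management server", self.mgmt_ip),
                            ("ESRS gateway", self.esrs_gateway)):
            if IPv4Address(addr) not in network:
                raise ValueError(f"{label} address {addr} is not on {network}")

if __name__ == "__main__":
    sheet = ManagementNetworkWorksheet(
        mgmt_ip="192.168.44.171",
        netmask="255.255.255.0",
        hostname="DC1-VPLEX",
        esrs_gateway="192.168.44.254",
    )
    sheet.validate()
    print("worksheet values are internally consistent")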

Table 5: SMTP details to configure event notifications

- Do you want VPLEX to send event notifications? Sending event notifications allows EMC to act on any issues quickly. The remaining rows in this table apply only if you answered yes. Sample value: Yes
- SMTP IP address of primary connection: SMTP address through which Call Home emails will be sent. EMC recommends using your ESRS gateway as the primary connection address. Sample value: 192.168.44.254
- First/only recipient's email address: Email address of a person (generally a customer employee) who will receive Call Home notifications. Sample value: user@companyname.com
- SMTP IP address for first/only recipient: SMTP address through which the first/only recipient's email notifications will be sent. EMC recommends distributing connections over multiple SMTP servers for better availability. These SMTP v4 IP addresses can be different from the addresses used for event notifications sent to EMC. Sample value: 192.168.44.25
- Event notification type: One or more people can receive email notifications when events occur. Notification types: 1. On Success or Failure, sends an email regardless of whether the email notification to EMC succeeded; 2. On Failure, sends an email each time an attempt to notify EMC has failed; 3. On All Failure, sends an email only if all attempts to notify EMC have failed; 4. On Success, sends an email each time EMC is successfully sent an email notification. Sample value: 1
- Second recipient's email address (optional): Email address of a second person who will receive Call Home notifications. Sample value: user2@companyname.com
- SMTP IP address for second recipient: SMTP address through which the second recipient's email notifications will be sent. Sample value: 192.168.44.25
- Event notification type for second recipient: See the description of the event notification type for the first recipient. Sample value: 1
- Third recipient's email address (optional): Email address of a third person who will receive Call Home notifications. Sample value: user3@companyname.com
- SMTP IP address for third recipient: SMTP address through which the third recipient's email notifications will be sent. Sample value: 192.168.44.25
- Event notification type for third recipient: See the description of the event notification type for the first recipient. Sample value: 1

- Do you want VPLEX to send system reports? Sending weekly system reports allows EMC to communicate known configuration risks, as well as newly discovered information that can optimize or reduce risks. Note that the connections for system reports are the same connections used for event notifications.
- Day of week and time to send system reports: Sample value: default

Table 6: SNMP information

- Do you want to use SNMP to collect performance statistics? You can collect statistics such as I/O operations and latencies, as well as director memory, by issuing SNMP GET, GET-NEXT, or GET-BULK requests. Sample value: No
- Community string (if you specified yes above): Sample value: private

Table 7: Certificate Authority (CA) and Host Certificate information

- CA certificate lifetime: How many years the cluster's self-signed CA certificate should remain valid before expiring. VPLEX uses self-signed certificates to ensure secure communication between VPLEX Metro clusters. Sample value: 5
- CA certificate key passphrase: This passphrase is used during installation to create the CA certificate necessary for this secure communication. Sample value: dc1-vplex
- Host certificate lifetime: How many years the cluster's host certificate should remain valid before expiring. Sample value: 2
- Host certificate key passphrase: This passphrase is used to create the host certificates necessary for secure communication between clusters. Sample value: dc1-vplex

Table 8: Product Registration Information

- Company site ID number (optional): EMC-assigned identifier used when the VPLEX cluster is deployed on the ESRS server. The EMC customer engineer or account executive can provide this ID. Sample value: 12345678
- Company name: Sample value: CompanyName
- Company contact: First and last name of a person to contact. Sample value: First Last
- Contact's business email address: Sample value: user@companyname.com

- Contact's business phone number: Sample value: xxx-xxx-xxxx
- Contact's business address: Street, city, state/province, ZIP/postal code, country. Sample value: 123 Main Street, City, State, 12345-6789
- Method used to send event notifications: Method by which the cluster will send event messages to EMC. Sample value: [X] 1. ESRS, [X] 2. Email
- Remote support method: Method by which the EMC Support Center can access the cluster. Sample value: [X] 1. ESRS, [X] 2. WebEx

Table 9: VPLEX Metro IP WAN Configuration Information

Local director discovery configuration details (default values work in most installations):
- Class-D network discovery address: Sample value: 224.100.100.100
- Discovery port: Sample value: 10000

Attributes for Cluster 1, Port Group 0:
- Listening port for communications between clusters (traffic on this port must be allowed through the network): Sample value: 11000
- Class-C subnet prefix for Port Group 0: The IP subnet must be different from the one used by the management servers and different from the Port Group 1 subnet in Cluster 1. Sample value: 192.168.11.0
- Subnet mask: Sample value: 255.255.255.0
- Cluster address (use the Port Group 0 subnet prefix): Sample value: 192.168.11.251
- Gateway for routing configurations (use the Port Group 0 subnet prefix): Sample value: 192.168.11.1
- MTU: The size must be set to the same value for Port Group 0 on both clusters. Likewise, the same MTU must be set for Port Group 1 on both clusters. Note: jumbo frames are supported. Sample value: 1500
- Port 0 IP address for director 1-1-A: Sample value: 192.168.11.35
- Port 0 IP address for director 1-1-B: Sample value: 192.168.11.36

Attributes for Cluster 1, Port Group 1:
- Class-C subnet prefix for Port Group 1: The IP subnet must be different from the one used by the management servers and different from the Port Group 1 subnet in Cluster 2. Sample value: 10.6.11.0
- Subnet mask: Sample value: 255.255.255.0
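The subnet and MTU rules in Table 9 can be checked with a short script before the WAN interfaces are configured. The following Python sketch is illustrative only; the field names and sample values are modeled on Table 9 and are not VPLEX configuration syntax.

from ipaddress import IPv4Network

def check_wan_config(mgmt_subnet: str,
                     pg_subnets: dict[str, str],
                     pg_mtu: dict[str, dict[str, int]]) -> list[str]:
    """pg_subnets: port group -> subnet (CIDR); pg_mtu: port group -> {cluster: MTU}."""
    problems = []
    mgmt = IPv4Network(mgmt_subnet)
    seen: dict[IPv4Network, str] = {}
    for name, cidr in pg_subnets.items():
        subnet = IPv4Network(cidr)
        # Each port group subnet must not overlap the management subnet.
        if subnet.overlaps(mgmt):
            problems.append(f"{name}: {subnet} overlaps the management subnet")
        # Port group subnets must not overlap each other.
        for other_net, other_name in seen.items():
            if subnet.overlaps(other_net):
                problems.append(f"{name} and {other_name} share subnet space")
        seen[subnet] = name
    # For a given port group, MTU must match on both clusters.
    for name, mtus in pg_mtu.items():
        if len(set(mtus.values())) != 1:
            problems.append(f"{name}: MTU differs across clusters {mtus}")
    return problems

if __name__ == "__main__":
    issues = check_wan_config(
        mgmt_subnet="192.168.44.0/24",
        pg_subnets={"port-group-0": "192.168.11.0/24",
                    "port-group-1": "10.6.11.0/24"},
        pg_mtu={"port-group-0": {"cluster-1": 1500, "cluster-2": 1500},
                "port-group-1": {"cluster-1": 1500, "cluster-2": 1500}},
    )
    print(issues or "WAN port group settings look consistent")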