Design and Implementation Guide

EMC VSPEX with EMC VPLEX for VMware vSphere 5.1

Abstract
This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with EMC VPLEX Metro, VMware vSphere, and EMC VNX for up to 125 virtual machines.

June 2013
Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published June 2013.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC Online Support website.

EMC VSPEX with EMC VPLEX for VMware vSphere 5.1
Part Number H11878
Contents

- Executive Summary
- Background and VPLEX Overview
  - Document purpose
  - Target Audience
  - Business Challenges
- VSPEX with VPLEX Solution
  - VPLEX Local
  - VPLEX Metro
  - VPLEX Platform Availability and Scaling Summary
- VPLEX Overview
  - Continuous Availability
  - Mobility
  - Stretched Clusters Across Distance
  - vSphere HA and VPLEX Metro HA
  - VPLEX Availability
  - Storage/Service Availability
- Solution Architecture
  - Overview
  - Solution Architecture: VPLEX Key Components
  - VPLEX Cluster Witness
- Best Practices and Configuration Recommendations
  - VPLEX Back-End Storage
  - VPLEX Host Connectivity
  - VPLEX Network Connectivity
  - VPLEX Cluster Connectivity
  - Storage Configuration Guidelines
  - VSPEX Storage Building Blocks
- VPLEX Local Deployment
  - Overview
  - Physical Installation
  - Preliminary Tasks
  - Set public IPv4 address
  - Run EZ-Setup Wizard
  - Expose Back-End Storage
  - Resume EZ-Setup
  - Meta-volume
  - Register VPLEX
  - Enable Front-End Ports
  - Configure VPLEX for ESRS Gateway
  - Re-Verify Cluster Health
- VPLEX Metro Deployment
  - Overview
  - Physical Installation
  - Preliminary Tasks
  - Set public IPv4 address
  - Run EZ-Setup Wizard on Cluster
  - Expose Back-End Storage
  - Resume EZ-Setup
  - Meta-volume
  - Register Cluster
  - Enable Front-End Ports
  - Connect Cluster
  - Verify the Product Version
  - Verify Cluster 2 Health
  - Synchronize Clusters
  - Launch EZ-Setup on Cluster
  - Expose Back-end Storage on Cluster
  - Resume EZ-Setup on Cluster
  - Create Meta-volume on Cluster
  - Register Cluster
  - Configure VPLEX for ESRS Gateway
  - Complete EZ-Setup on Cluster
  - Complete EZ-Setup on Cluster
  - Configure WAN Interfaces
  - Join the Clusters
  - Create Logging Volumes
  - Re-Verify Cluster Health
- Provisioning Virtual Volumes with VPLEX Local
  - Confirm Storage Pools
  - Provision Storage
- Adding VPLEX to an Existing VSPEX Solution
  - Assumptions
  - Integration Procedure
  - Storage Array Mapping
  - VPLEX Procedure
  - Power-on Hosts
  - Register Host Initiators
  - Create Highly Available Datastores
- Converting a VPLEX Local Cluster into a VPLEX Metro Cluster
  - Gathering Information for Cluster Configuration
  - Information for Cluster Witness
  - Consistency Group and Detach Rules
  - Create Distributed Devices between VPLEX Cluster-1 and Cluster
  - Create Storage View for ESXi Hosts
- Post-install checklist
- Summary
- Appendix-A: References
- Appendix-B: Tech Refresh using VPLEX Data Mobility
- Appendix-C: VPLEX Configuration Limits
- Appendix-D: VPLEX Pre-Configuration Worksheets
List of Figures

- Figure 1: Private Cloud components for VSPEX with VPLEX Local
- Figure 2: Private Cloud components for a VSPEX/VPLEX Metro solution
- Figure 3: VPLEX delivers zero downtime
- Figure 4: Application Mobility within a datacenter
- Figure 5: Application and Data Mobility Example
- Figure 6: Application and Data Mobility Example
- Figure 7: Highly Available Infrastructure Example
- Figure 8: VPLEX Local architecture for Traditional Single Site Environments
- Figure 9: VPLEX Metro architecture for Distributed Environments
- Figure 10: VSPEX deployed with VPLEX Metro using distributed volumes
- Figure 11: Failure scenarios without VPLEX Witness
- Figure 12: Failure scenarios with VPLEX Witness
- Figure 13: VSPEX deployed with VPLEX Metro configured with 3rd Site VPLEX Witness
- Figure 14: VMware virtual disk types
- Figure 15: Storage Layout for 125 Virtual Machine Private Cloud Proven Infrastructure
- Figure 16: Place VNX LUNs into VPLEX Storage Group
- Figure 17: VPLEX Local System Status and Login Screen
- Figure 18: Provisioning Storage
- Figure 19: EZ-Provisioning Step 1: Claim Storage and Create Virtual Volumes
- Figure 20: EZ-Provisioning Step 2: Register Initiators
- Figure 21: EZ-Provisioning Step 3: Create Storage View
- Figure 22: EZ-Provisioning
- Figure 23: Create Virtual Volumes: Select Array
- Figure 24: Create Virtual Volumes: Select Storage Volumes
- Figure 25: Create Distributed Volumes: Select Mirrors
- Figure 26: Create Distributed Volumes: Select Consistency Group
- Figure 27: EZ-Provisioning: Register Initiators
- Figure 28: View Unregistered Initiator-ports
- Figure 29: View Unregistered Initiator-ports
- Figure 30: EZ-Provisioning: Create Storage View
- Figure 31: Create Storage View: Select Initiators
- Figure 32: Create Storage View: Select Ports
- Figure 33: Create Storage View: Select Virtual Volumes
- Figure 34: VPLEX Metro System Status Page
- Figure 35: VPLEX Consistency Group created for Virtual Volumes
- Figure 36: VPLEX Distributed Devices
- Figure 37: VPLEX Storage View for ESXi Hosts
- Figure 38: Batch Migration: Create Migration Plan
- Figure 39: Batch Migration: Start Migration
- Figure 40: Batch Migration: Monitor Progress
- Figure 41: Batch Migrations: Change Migration State
- Figure 42: Batch Migrations: Commit the Migration
List of Tables

- Table 1: VPLEX Components
- Table 2: Hardware Resources for Storage
- Table 3: IPv4 Networking Information
- Table 4: Metadata Backup Information
- Table 5: SMTP details to configure event notifications
- Table 6: SNMP information
- Table 7: Certificate Authority (CA) and Host Certificate information
- Table 8: Product Registration Information
- Table 9: VPLEX Metro IP WAN Configuration Information
- Table 10: Cluster Witness Configuration Information
- Table 13: IPv4 Networking Information
- Table 14: Metadata Backup Information
- Table 15: SMTP details to configure event notifications
- Table 16: SNMP information
- Table 17: Certificate Authority (CA) and Host Certificate information
- Table 18: Product Registration Information
- Table 19: IP WAN Configuration Information
- Table 20: Cluster Witness Configuration Information
1. Executive Summary

Businesses face many challenges in delivering application availability while working within constrained IT budgets. Increased deployment of storage virtualization lowers costs and improves availability, but this alone will not allow businesses to provide the application access their users demand. This document provides an overview of VPLEX, its use cases, and how VSPEX with VPLEX solutions provide the continuous availability and mobility that mission-critical applications require for 24x7 operations. The document is divided into sections covering the VPLEX family, its use cases, the solution architecture, and how VPLEX extends VMware capabilities, along with solution requirements and configuration details. The EMC VPLEX family, in its Local and Metro versions, provides continuous availability and non-disruptive data mobility for EMC and non-EMC storage within and across data centers.

Additionally, this document covers the following:
- How VMware vSphere makes it simpler and less expensive to provide higher levels of availability for critical business applications. With vSphere, organizations can increase the baseline level of availability provided for all applications, as well as deliver higher levels of availability more easily and cost-effectively.
- How VPLEX Metro extends VMware vMotion, HA, DRS, and FT by stretching the VMware cluster across distance, providing solutions that go beyond traditional disaster recovery.
- Solution requirements for software and hardware, material lists, step-by-step sizing guidance and worksheets, and verified deployment steps to implement a VPLEX solution with VSPEX Private Cloud for VMware vSphere that supports up to 125 virtual machines.
2. Background and VPLEX Overview

2.1 Document purpose
This document provides an overview of how to use VPLEX with the VSPEX Proven Infrastructure, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the overall system. It applies to VSPEX deployed with EMC VPLEX Metro and VPLEX Witness. The details provided in this document are based on the following configurations:
- VPLEX GeoSynchrony 5.1 (patch 4) or higher
- VPLEX Metro
- VPLEX clusters within 5 milliseconds (ms) of each other for VMware HA (10 ms is possible with a VMware Enterprise Plus license)
- VPLEX Witness deployed to a third failure domain
- ESXi and vSphere 5.1 or later
- Any qualified pair of arrays (both EMC and non-EMC) listed on the EMC Simple Support Matrix (ESSM) found here: https://elabnavigator.emc.com/vault/pdf/emc_vplex.pdf

2.2 Target Audience
Readers of this document should be familiar with VSPEX Proven Infrastructures and have the necessary training and background to install and configure VMware vSphere, EMC VNX series storage systems, VPLEX, and the associated infrastructure required by this implementation. External references are provided where applicable, and readers should be familiar with those documents. After purchase, implementers of this solution should focus on the configuration guidelines of the solution validation phase and the appropriate references and appendices.

2.3 Business Challenges
Most of today's organizations operate 24x7, with most applications being mission-critical. Continuous availability of these applications to all users is a primary goal of IT. A secondary goal is to have all applications up and running as soon as possible if they stop processing.
There are hundreds of possibilities that can take infrastructure down, from fires, flooding, natural disasters, and application failures to simple mistakes in the computer room, most of which are outside of IT's control. Sometimes there are good reasons to take applications down for scheduled maintenance, tech refreshes, load balancing, or data center relocation. In all of these scenarios the outcome is the same: applications stop processing. The ultimate goal of the IT organization is to maintain mission-critical application availability.
3. VSPEX with VPLEX Solution

VSPEX with VPLEX, utilizing best-of-breed technologies, delivers the power, performance, and reliability businesses need to be competitive. VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision at the hypervisor, server, and networking layers. Customers are increasingly deploying their business applications on consolidated compute, network, and storage environments. EMC VSPEX Private Cloud using VMware reduces the complexity of configuring every component of a traditional deployment model. With VSPEX, the complexity of integration management is reduced while maintaining the application design and implementation options. VPLEX enhances the VSPEX value proposition by adding the continuous availability and non-disruptive data mobility use cases to the VSPEX infrastructure. VPLEX rounds out the VSPEX data protection portfolio by providing the ability to:
- Refresh technology non-disruptively within the storage arrays within VSPEX
- vMotion virtual machines non-disruptively from one VSPEX system to another (for example, for workload balancing or disaster avoidance)
- Automatically restart virtual machines from one VSPEX to another to deliver a higher level of protection to VMware environments on VSPEX

The following sections describe the VPLEX Local and VPLEX Metro products and how they deliver these value propositions as part of a VSPEX solution.

3.1 VPLEX Local
This solution uses VSPEX Private Cloud for VMware vSphere 5.1 with VPLEX Local to provide simplified management and non-disruptive data mobility between multiple heterogeneous storage arrays within the data center. VPLEX removes physical barriers within the datacenter.
With its unique scale-out architecture, VPLEX's advanced data caching and distributed cache coherency provide workload resiliency; automatic sharing, balancing, and failover of storage domains; and local access with predictable service levels.
Figure 1: Private Cloud components for VSPEX with VPLEX Local

Note: The above image depicts a logical configuration; physically, the VPLEX can be hosted within the VSPEX rack.

3.2 VPLEX Metro
The two-data-center solution referenced in this document uses VSPEX Private Cloud for VMware vSphere 5.1 with VPLEX Metro to provide simplified management and non-disruptive data mobility between multiple heterogeneous storage arrays across data centers. VPLEX Metro enhances the capabilities of VMware vMotion, HA, DRS, and FT to provide a solution that extends data protection strategies beyond traditional
Disaster Recovery. This solution provides a new type of deployment that achieves continuous availability over distance for today's enterprise storage and cloud environments. VPLEX Metro provides data access and mobility between two VPLEX clusters within synchronous distances. This solution builds on the VPLEX Local approach by creating a VPLEX Metro cluster between two geographically dispersed datacenters. Once deployed, this solution provides truly available distributed storage volumes over distance and makes VMware technologies such as vMotion, HA, DRS, and FT even better and easier to use.

Figure 2: Private Cloud components for a VSPEX/VPLEX Metro solution

The above image is a logical configuration depicting 125 VMs with their datastores stretched across a VMware vSphere 5.1 cluster. This infrastructure delivers continuous availability for the applications and enables non-disruptive workload mobility and balancing. The VPLEX appliances can be physically hosted within the VSPEX racks if space permits.
3.3 VPLEX Platform Availability and Scaling Summary

VPLEX addresses high-availability and data mobility requirements while scaling to the I/O throughput required for the front-end applications and back-end storage. Continuous Availability (CA), high availability (HA), and data mobility are all characteristics of the VPLEX Local and VPLEX Metro solutions outlined in this document. The basic building block of a VPLEX is an engine. To eliminate single points of failure, each VPLEX engine consists of two directors. A VPLEX cluster can consist of one, two, or four engines. Each engine is protected by a standby power supply (SPS), and each Fibre Channel switch gets its power through an uninterruptible power supply (UPS). In a dual-engine or quad-engine cluster, the management server also gets power from a UPS. The management server has a public Ethernet port, which provides cluster management services when connected to the customer network. VPLEX scales both up and out. Upgrades from a single-engine to a dual-engine cluster, as well as from a dual-engine to a quad-engine cluster, are fully supported and are accomplished non-disruptively; this is referred to as scale-up. Upgrades from a VPLEX Local to a VPLEX Metro are also supported non-disruptively.
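The scale-up rules above (two directors per engine; one, two, or four engines per cluster) can be sketched in a few lines of Python. This is purely an illustrative helper for reasoning about cluster sizes, not an EMC tool:

```python
# Illustrative only: encodes the engine/director relationship described
# above (each engine holds two directors; clusters have 1, 2, or 4 engines).

VALID_ENGINE_COUNTS = (1, 2, 4)

def directors_in_cluster(engines: int) -> int:
    """Number of directors in a VPLEX cluster with the given engine count."""
    if engines not in VALID_ENGINE_COUNTS:
        raise ValueError("a VPLEX cluster consists of 1, 2, or 4 engines")
    # Two directors per engine eliminate a single point of failure.
    return engines * 2

for engines in VALID_ENGINE_COUNTS:
    print(f"{engines} engine(s) -> {directors_in_cluster(engines)} directors")
```

A single-engine cluster, as used in this solution, therefore exposes two directors per site.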
4. VPLEX Overview

The EMC VSPEX with EMC VPLEX solution represents a next-generation architecture for continuous availability and data mobility for mission-critical applications. This architecture is based on EMC's 20+ years of expertise in designing, implementing, and perfecting enterprise-class intelligent cache and distributed data protection solutions. The combined VSPEX with VPLEX solution provides a complete system architecture capable of supporting up to 125 virtual machines with a redundant server and network topology and highly available storage within or across geographically dispersed datacenters. VPLEX addresses three distinct customer requirements:
- Continuous Availability: The ability to create a high-availability storage infrastructure across synchronous distances with unmatched resiliency.
- Mobility: The ability to move applications and data across different storage installations within the same data center, across a campus, or within a geographical region.
- Stretched Clusters Across Distance: The ability to extend VMware vMotion, HA, DRS, and FT outside the data center across distances, ensuring the continuous availability of VSPEX solutions.

4.1 Continuous Availability
The EMC VPLEX family provides continuous availability with zero unplanned downtime for applications within a data center and across data centers at synchronous distances. VPLEX enables users to have the exact same information simultaneously read/write accessible in two locations, delivering the ability to stretch hypervisor clusters, such as VMware vSphere clusters, across sites. Instead of idle assets at the second site, all infrastructure is utilized in an active-active state.

Figure 3: VPLEX delivers zero downtime
With VPLEX in place, customers have great flexibility in the area of data mobility. This addresses compelling use cases such as array technology refreshes with no disruption to the applications and no planned downtime. It also enables performance load balancing for customers who want to dynamically move data to a higher-performing or higher-capacity array without affecting end users.

4.2 Mobility
EMC VPLEX Local provides connectivity to heterogeneous storage arrays, delivering seamless data mobility and the ability to manage storage provisioned from multiple heterogeneous arrays through a single interface within a data center. This gives you the ability to relocate, share, and balance infrastructure resources within a data center.

Figure 4: Application Mobility within a datacenter

VPLEX Metro configurations enable migrations within and across datacenters over synchronous distances. In combination with VMware vMotion, this allows you to transparently relocate virtual machines and their corresponding applications and data over synchronous distance, giving you the ability to relocate, share, and balance infrastructure resources between data centers. These capabilities save money, both by reducing the time needed for data migrations and by balancing workloads across sites to fully utilize infrastructure at both sites.
Traditional data migration, using array replication or manual data moves, is an expensive, time-consuming, and often risky process. It is expensive because companies typically pay someone to do the services work. Migrations can be time-consuming because the customer can't simply shut down servers; instead, they must work with their business units to identify possible windows, which mostly fall during nights and weekends. Migrations can also be risky if all of the dependencies between applications aren't well documented, and any issues in the migration process may not be remediable until the following maintenance cycle without an outage. VPLEX limits the risk of traditional migrations by making the process fully reversible: if performance or other issues are discovered when the new storage is put online, the new storage can be taken down and the old storage can continue serving I/O. Because migrations are easy with VPLEX, customers can perform them themselves, yielding significant services cost savings. New infrastructure can also be used immediately, with no need to wait for scheduled downtime to begin migrations. There is a powerful TCO argument associated with VPLEX: all future refreshes and migrations are free.

Figure 5: Application and Data Mobility Example

A VPLEX cluster is a single virtualization I/O group that enables non-disruptive data mobility across the entire cluster. All directors in a VPLEX cluster have access to all storage volumes, making this what is referred to as an N-1 architecture. This type of architecture tolerates multiple director failures, down to a single surviving director, without loss of access to data.
During a VPLEX mobility operation, any jobs in progress can be paused or stopped without affecting data integrity. Data mobility creates a mirror of the source and target devices, allowing the user to commit or cancel the job without affecting the actual data. A record of all mobility jobs is maintained until the user purges the list for organizational purposes.

4.3 Stretched Clusters Across Distance
VPLEX Metro extends VMware vMotion, High Availability (HA), Distributed Resource Scheduler (DRS), and Fault Tolerance (FT) outside the data center across distances, ensuring the continuous availability of VSPEX solutions. Stretching vMotion across datacenters enables non-disruptive load balancing, maintenance, and workload relocation, while VMware DRS provides full utilization of resources across domains.

Figure 6: Application and Data Mobility Example

4.4 vSphere HA and VPLEX Metro HA
Due to its core design, EMC VPLEX Metro provides the perfect foundation for VMware High Availability and Fault Tolerance clustering over distance, ensuring simple and transparent deployment of stretched clusters without added complexity. VPLEX Metro takes a single block storage device in one location and distributes it to provide single-disk semantics across two locations. This enables a distributed VMFS datastore to be created on that virtual volume. Furthermore, if the layer 2 network has also been stretched, then a single instance of vSphere (including a single logical datacenter) can be distributed across more than one location, and VMware HA can be enabled for any given vSphere cluster. This is possible because the storage federation layer of VPLEX is completely transparent to ESXi; it therefore enables the user to add ESXi hosts at two different locations to the same HA cluster. Stretching an HA failover cluster (such as VMware HA) with VPLEX creates a federated HA cluster over distance. This blurs the boundaries between local HA and disaster recovery, since the configuration has the automatic restart capabilities of HA combined with the geographical distance typically associated with synchronous DR.

4.5 VPLEX Availability
VPLEX is built on a foundation of scalable and highly available processor engines and is designed to seamlessly scale from small to large configurations. VPLEX resides between the servers and heterogeneous storage assets, and uses a unique clustering architecture that allows servers at multiple data centers to have read/write access to the same data at two locations at the same time.
Unique characteristics of this architecture include:
- Scale-out clustering hardware that lets you start small and grow big with predictable service levels
- Advanced data caching that utilizes large-scale SDRAM cache to improve performance and reduce I/O latency and array contention
- Distributed cache coherence for automatic sharing, balancing, and failover of I/O across the cluster
- A consistent view of one or more LUNs across VPLEX clusters (within a data center or across synchronous distances), enabling new models of high availability and workload relocation

With a unique scale-up and scale-out architecture, VPLEX's advanced data caching and distributed cache coherency provide continuous availability, workload resiliency, automatic sharing, balancing, and failover of storage domains, and enable both local and remote data access with predictable service levels. EMC VPLEX has been architected for virtualization, enabling federation across VPLEX clusters. VPLEX Metro supports a maximum 5 ms RTT for FC or 10 GbE connectivity. To protect against an entire site failure causing application outages, VPLEX uses a VMware virtual machine located in a separate failure domain to provide a VPLEX Witness
between the VPLEX clusters that are part of a distributed/federated solution. The VPLEX Witness, known as Cluster Witness, resides in a third failure domain, monitoring both VPLEX clusters for availability. This third site needs only IP connectivity to the VPLEX sites.

4.6 Storage/Service Availability
Each VPLEX site has a local VPLEX cluster with physical storage and hosts connected to that VPLEX cluster only. The VPLEX clusters themselves are interconnected across the sites to enable federation. A virtual volume is taken from each of the VPLEX clusters to create a distributed virtual volume. Hosts connected in site A actively use the storage I/O capability of the storage in site A; hosts in site B actively use the storage I/O capability of the storage in site B.

Figure 7: Highly Available Infrastructure Example

VPLEX distributed volumes are available from either VPLEX cluster and have the same LUN and storage identifiers when exposed from each cluster, enabling true concurrent read/write access across sites.
When using a distributed virtual volume across two VPLEX clusters, if the storage in one of the sites is lost, all hosts continue to have access to the distributed virtual volume with no disruption: VPLEX services all read/write traffic through the remote mirror leg at the other site.
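Section 4.5 notes the 5 ms round-trip limit for VPLEX Metro connectivity, and section 2.1 notes that 10 ms is possible with a VMware Enterprise Plus license. A design review might sanity-check measured inter-cluster latency along these lines; the helper below is a hypothetical sketch, not part of any EMC or VMware tooling:

```python
# Hypothetical design-check helper. The 5 ms / 10 ms thresholds are the
# VPLEX Metro + VMware HA guidelines stated in this document; the function
# name and structure are illustrative assumptions.

def metro_rtt_ok(rtt_ms: float, enterprise_plus: bool = False) -> bool:
    """True if the inter-cluster round-trip time meets the Metro guideline."""
    limit_ms = 10.0 if enterprise_plus else 5.0
    return rtt_ms <= limit_ms

print(metro_rtt_ok(4.2))                        # within the 5 ms guideline
print(metro_rtt_ok(8.0))                        # too high for the standard limit
print(metro_rtt_ok(8.0, enterprise_plus=True))  # acceptable with Enterprise Plus
```

Measured RTT should of course come from the actual WAN circuits under load, not from a quiet network.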
5. Solution Architecture

5.1 Overview
The VSPEX with VPLEX solution using VMware vSphere has been validated for configurations with up to 125 virtual machines. Figure 8 shows an environment with VPLEX Local only, virtualizing the storage and providing high availability across storage arrays. Because all ESXi servers can see the VPLEX, VMware vMotion, HA, and DRS can seamlessly move and restart virtual machines on any host. This configuration is a traditional virtualized environment, in contrast to the VPLEX Metro environment, which provides high availability both within and across datacenters.

Figure 8: VPLEX Local architecture for Traditional Single Site Environments
Figure 9 characterizes both a traditional infrastructure validated with block-based storage in a single datacenter, and a distributed infrastructure validated with block-based storage federated across two datacenters, where 8 Gb FC carries storage traffic locally, and 10 GbE carries storage, management, and application traffic across datacenter sites.

Figure 9: VPLEX Metro architecture for Distributed Environments

5.2 Solution Architecture: VPLEX Key Components
This solution adds the following VPLEX technology to the VSPEX Private Cloud for VMware vSphere 5.1 for 125 Virtual Machines solution:
Table 1: VPLEX Components

Cluster-1 (single engine):
- Directors: 2
- Redundant engine SPSs: Yes
- FE Fibre Channel ports (VS2): 8
- BE Fibre Channel ports (VS2): 8
- Cache size (VS2 hardware): 72 GB
- Management servers: 1
- Internal Fibre Channel switches (Local Comm): None
- Uninterruptible power supplies (UPSs): None

Cluster-2 (single engine):
- Directors: 2
- Redundant engine SPSs: Yes
- FE Fibre Channel ports (VS2): 8
- BE Fibre Channel ports (VS2): 8
- Cache size (VS2 hardware): 72 GB
- Management servers: 1
- Internal Fibre Channel switches (Local Comm): None
- Uninterruptible power supplies (UPSs): None

The figure below shows a high-level physical topology of a VPLEX Metro distributed device. VPLEX dual- and quad-engine options can be found in the Appendix.

Figure 10: VSPEX deployed with VPLEX Metro using distributed volumes

Figure 10 is a physical representation of the logical configuration shown in Figure 9. Effectively, with this topology deployed, the distributed volume can be treated just like any
other volume; the only difference is that it is now distributed and available in two locations at the same time. Another benefit of this type of architecture is its simplicity, since it is no more difficult to configure a cluster across distance than it is within a single data center.

Note: When deploying VPLEX Metro you have the choice to interconnect your VPLEX clusters using either 8 Gb Fibre Channel or 10 GbE WAN connectivity. When using FC connectivity, this can be configured with either a dedicated channel (i.e., separate non-merged fabrics) or an ISL-based fabric (i.e., where fabrics have been merged across sites). It is assumed that any WAN link will be fully routable between sites with physically redundant circuits.

Note: It is vital that VPLEX Metro has enough bandwidth between clusters to meet requirements. The Business Continuity Solution Designer (BCSD) tool can be used to validate the design; EMC can assist in the qualification if desired. https://elabadvisor.emc.com/app/licensedtools/list

For an in-depth technology and architectural understanding of VPLEX Metro, VMware HA, and their interactions, please refer to the VPLEX HA Techbook found here: documentation/h7113-vplexarchitecture-deployment.pdf

5.3 VPLEX Cluster Witness
VPLEX Metro goes beyond the realms of legacy active/passive replication technologies, since it can deliver true active/active storage over distance as well as federated availability. Three main items are required to deliver true "Federated Availability":
1. True active/active Fibre Channel block storage over distance.
2. VPLEX storage mirroring, which delivers one view of storage, making data accessible immediately with no waiting for mirroring to complete. This eliminates the need for host-based mirroring, saving host CPU cycles.
3. External arbitration, to ensure that automatic recovery is possible under all failure conditions.
In the previous sections we discussed items 1 and 2; now we will look at external arbitration, which is enabled by VPLEX Witness. VPLEX Witness is delivered as a zero-cost VMware virtual appliance (vApp) which runs on a customer-supplied ESXi server, or in a public cloud utilizing a VMware virtualized environment. The ESXi server resides in a physically separate failure domain from either VPLEX cluster and uses different storage than the VPLEX clusters use.
Using VPLEX Witness ensures that true Federated Availability can be delivered: regardless of site or WAN-link failure, a copy of the data automatically remains online in at least one of the locations. When setting up a single distributed volume or a group of them, the user chooses a preference rule, a special property of each individual or group of distributed volumes. The preference rule determines the outcome after failure conditions such as site failure or inter-cluster link partition. It can be set to cluster A preferred, cluster B preferred, or no automatic winner. At a high level this has the following effect on a single or group of distributed volumes under the failure conditions listed below:

Figure 11: Failure scenarios without VPLEX Witness

As Figure 11 shows, if we used only the preference rules without VPLEX Witness, then under some scenarios manual intervention would be required to bring the volume online at a given VPLEX cluster (for example, if site A is the preferred site and site A fails, site B would also suspend). This is where VPLEX Witness assists: because it can better diagnose failures through network triangulation, it ensures that at any time at least one of the VPLEX clusters has an active path to the data, as shown in the table below:

Figure 12: Failure scenarios with VPLEX Witness
As one can see from Figure 12, VPLEX Witness converts a VPLEX Metro from an active/active mobility and collaboration solution into an active/active continuously available storage cluster. Furthermore, once VPLEX Witness is deployed, failure scenarios become self-managing (i.e., fully automatic), which makes operations extremely simple since there is nothing to do regardless of the failure condition.

Figure 13: VSPEX deployed with VPLEX Metro configured with 3rd Site VPLEX Witness

As depicted in Figure 13 above, the Witness VM is deployed in a separate fault domain and connected to both VPLEX management stations via an IP network.

Note: VPLEX Witness supports a maximum round-trip latency of 1 second between VPLEX clusters.
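The behavior summarized in Figures 11 and 12 can be expressed as a small decision table. The sketch below is a simplified, illustrative model of the preference rule with and without VPLEX Witness; it is inferred from the text above, ignores the "no automatic winner" setting, and is not VPLEX code:

```python
# Simplified model of distributed-volume behavior under failure,
# per the discussion of Figures 11 and 12. Illustrative only.

def online_clusters(failure: str, preferred: str, witness: bool) -> set:
    """Which cluster(s) keep the distributed volume online.

    failure:   "site-A-down", "site-B-down", or "wan-partition"
    preferred: "A" or "B" (the volume's preference rule)
    witness:   True if VPLEX Witness is deployed
    """
    if failure == "wan-partition":
        return {preferred}  # the preferred cluster continues I/O
    failed = "A" if failure == "site-A-down" else "B"
    survivor = "B" if failed == "A" else "A"
    if witness:
        return {survivor}   # Witness keeps the surviving site online
    # Without Witness: if the preferred site is the one that failed,
    # the surviving site suspends too, and manual intervention is needed.
    return {survivor} if preferred == survivor else set()

print(online_clusters("site-A-down", "A", witness=False))  # empty: manual intervention
print(online_clusters("site-A-down", "A", witness=True))   # site B stays online
```

The key difference is visible in the last two lines: with Witness deployed, the preferred-site failure no longer suspends the survivor.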
VPLEX Virtualized Storage for VMware ESXi

Using VPLEX to virtualize your VMware ESXi storage allows disk access without changing the fundamental dynamics of datastore creation and use. Whether using VPLEX Local with virtual volumes or VPLEX Metro with distributed devices via AccessAnywhere, the hosts still coordinate locking to ensure volume consistency. This is controlled by the cluster file system, Virtual Machine File System (VMFS), within each datastore. Each storage volume is presented to VPLEX; a virtual volume or distributed device is then created, presented to each ESXi host in the cluster, and formatted with the VMFS file system. Figure 14 below shows a high-level physical topology of how VMFS and RDM disks are presented to each ESXi host.

Figure 14: VMware virtual disk types

VMFS

VMware VMFS is a high-performance cluster file system for ESXi Server virtual machines that allows multiple ESXi servers to access the same virtual machine storage concurrently. VPLEX enhances this technology by adding the ability to take a virtual volume at one location and create a RAID-1 mirror, producing a distributed volume that provides single-disk semantics across two locations. This enables the VMFS datastore to be used transparently within and across data centers.

Raw Device Mapping (RDM)

VMware also provides RDM, a SCSI pass-through technology that allows a virtual machine to pass SCSI commands for a volume directly to the physical storage array. RDMs are typically used for quorum devices and other commonly shared volumes within a cluster.
6. Best Practices and Configuration Recommendations

6.1 VPLEX Back-End Storage

The following are best practices for VPLEX back-end storage:

- Implement dual-fabric designs for fabric redundancy and high availability to avoid a single point of failure. This provides data access even in the event of a full fabric outage.
- Each VPLEX director physically connects to both fabrics for both host (front-end) and storage (back-end) connectivity.
- Hosts connect to both an A director and a B director from both fabrics for the supported HA level of connectivity, as required by the Non-Disruptive Upgrade (NDU) pre-checks.
- Fabric zoning should consist of a set of zones, each with a single initiator and up to 16 targets.
- Avoid port-speed mismatches between the fabric and VPLEX by using dedicated port speeds, taking special care not to use oversubscribed ports on SAN switches.
- Each director in a VPLEX cluster must have a minimum of two I/O paths to every local back-end storage array and to every storage volume presented to that cluster.
- VPLEX allows a maximum of four active paths per director to a given LUN. This is considered optimal because each director load balances across the four active paths to the storage volume.

6.2 VPLEX Host Connectivity

The following are best practices for VPLEX host connectivity:

- Dual-fabric designs are considered a best practice.
- The front-end I/O modules on each director should have a minimum of two physical connections, one to each fabric (required).
- Each host should have at least one path to an A director and one path to a B director on each fabric, for a total of four logical paths (required for NDU).
- Maximum availability for host connectivity is achieved by using hosts with multiple host bus adapters and zoning to all VPLEX directors.
- Multipathing or path-failover software is required at the host for access across the dual fabrics.
- Each host should have fabric zoning that provides redundant access to each LUN from a minimum of an A and a B director on each fabric.
- Four paths are required for NDU.
- Observe director CPU utilization to schedule NDU for times when average consumption is at acceptable levels.

6.3 VPLEX Network Connectivity

The following are best practices for VPLEX network connectivity:

- An IPv4 address is required for the management server.
- The management server is configured for auto-negotiate (1 Gbps NIC).
- VPN connectivity between management servers requires a routable/pingable connection between each cluster.
- Network QoS requires that the link latency from the management server to the VPLEX Witness server does not exceed 1 second (not millisecond).
- Network QoS must be able to handle file transfers during the NDU procedure.
- The following firewall ports must be opened:
  o Internet Key Exchange (IKE): UDP port 500
  o NAT Traversal in the IKE (IPsec NAT-T): UDP port 4500
  o Encapsulating Security Payload (ESP): IP protocol number 50
  o Authentication Header (AH): IP protocol number 51
  o Secure Shell (SSH) and Secure Copy (SCP): TCP port 22

6.4 VPLEX Cluster Connectivity

The following are best practices for VPLEX cluster connectivity:

Metro over Fibre Channel (8 Gbps)

- Each director's FC WAN ports must be able to see at least one FC WAN port on every other remote director (required). The director's local COM port is used for communications between directors within the cluster.
- Independent FC WAN links are strongly recommended for redundancy.
- Each director has two FC WAN ports that should be configured on separate fabrics to maximize redundancy and fault tolerance.
- Use VSANs to isolate VPLEX Metro FC traffic from other traffic using zoning.
- Use VLANs to isolate VPLEX Metro Ethernet traffic from other traffic.

Metro over IP (10 GbE)

- Latency must be less than or equal to 5 ms round-trip time (RTT).
- Cache must be configured for synchronous write-through mode only.
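The four-logical-path rule above (at least one path to an A director and one to a B director on each fabric) is exactly the kind of condition the NDU pre-checks enforce. The following Python sketch is illustrative only, not EMC tooling; the fabric and host names are hypothetical.

```python
# Illustrative check of the VPLEX front-end zoning rule: each host needs
# >= 1 path to an A director and >= 1 path to a B director on each fabric
# (four logical paths in total, required for NDU).

def ndu_path_check(paths):
    """paths: list of (fabric, director_class) tuples for one host,
    e.g. ('fabric-A', 'A'). Returns the missing combinations."""
    required = {('fabric-A', 'A'), ('fabric-A', 'B'),
                ('fabric-B', 'A'), ('fabric-B', 'B')}
    missing = required - set(paths)
    return sorted(missing)

# This hypothetical host lacks a B-director path on fabric B,
# so an NDU pre-check would flag it.
host_paths = [('fabric-A', 'A'), ('fabric-A', 'B'), ('fabric-B', 'A')]
assert ndu_path_check(host_paths) == [('fabric-B', 'B')]
```

A fully zoned host returns an empty list, meaning all four required logical paths are present.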
6.5 Storage Configuration Guidelines

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance. The tested solutions described below use block storage via Fibre Channel. The storage layout described below adheres to all current best practices. A customer or architect with the necessary training and background can make modifications based on their understanding of system usage and load if required. However, the building blocks described in this document ensure acceptable performance. The VSPEX storage building blocks document specifies recommendations for customization.

Table 2: Hardware Resources for Storage Configuration

Component: EMC VNX Array (Block)
Common:
- 1 x 1 GbE NIC per Control Station for management
- 1 x 1 GbE NIC per SP for management
- 2 front-end ports per SP
- System disks for VNX OE
For 125 virtual machines: EMC VNX5300
- 60 x 600 GB 15k rpm 3.5-inch SAS drives
- 2 x 600 GB 15k rpm 3.5-inch SAS hot spares
- 10 x 200 GB flash drives
- 1 x 200 GB flash drive as a hot spare
- 4 x 200 GB flash drives for FAST Cache

Component: EMC VPLEX Metro (Single Engine)
Cluster-1:
- (2) Directors
- (8) Front-End Ports
- (8) Back-End Ports
- (4) WAN COM Ports
Cluster-2:
- (2) Directors
- (8) Front-End Ports
- (8) Back-End Ports
- (4) WAN COM Ports
6.6 VSPEX Storage Building Blocks

Use the EMC VSPEX Private Cloud: VMware vSphere 5.1 for up to 500 Virtual Machines document to properly size, plan, and implement your 125-virtual-machine deployment. Once the building block size has been established and the LUNs have been created on the back-end storage, they are virtualized by VPLEX and presented to the ESXi hosts for use.

Figure 15: Storage Layout for 125 Virtual Machine Private Cloud Proven Infrastructure
7. VPLEX Local Deployment

7.1 Overview

The VPLEX deployment process consists of two main steps: physical installation and configuration. The physical installation of VPLEX is racking and cabling VPLEX into the VSPEX rack. The installation process is well defined in the EMC VPLEX Procedure Generator and therefore is not replicated in this section. For detailed installation instructions, use the EMC VPLEX Field Deployment Guide found in the EMC VPLEX Procedure Generator.

The configuration process consists of several tasks that are listed below. The tables within this chapter detail the information needed to complete the configuration and are populated with sample data so the expected format is clear. See Appendix D, VPLEX Pre-Configuration Worksheets, for blank worksheets that can be printed and filled in. Filling out these worksheets before beginning the configuration process is highly recommended.

Once VPLEX has been configured for use, you can log in to a VPLEX management server to discover and claim your VSPEX building block LUNs from the VNX array. These LUNs are used to create virtual volumes for VPLEX Local and/or distributed volumes for VPLEX Metro implementations.

This document assumes that the VSPEX environment has been set up and is configured as a VSPEX Private Cloud that supports up to 125 virtual machines. Physical installation and configuration of VPLEX is identical whether the VSPEX with VPLEX solution is already in production or newly installed.

The pre-deployment phase consists of collecting all appropriate site data and filling out the configuration worksheets. These worksheets may be found in the Installation and Configuration section of the VPLEX Procedure Generator.
Throughout this chapter, refer to the VPLEX Configuration Guide, or another referenced document, for more detailed information on each step. Chapter 2 of the VPLEX Configuration Guide covers a VPLEX Local implementation, which is the focus of this chapter. For a VPLEX Metro deployment, review the tables in this chapter and then proceed to Chapter 8, VPLEX Metro Deployment.

7.2 Physical Installation

This is the physical installation of VPLEX into the VSPEX cabinet. It includes the following tasks:

- Unpack the VPLEX equipment.
- Install and cable the standby power supply (SPS) and engine.
- Install the VPLEX management server.
- Connect the remaining internal VPLEX management cables.
- Power up and verify VPLEX operational status.
- Connect the VPLEX front-end and back-end I/O cables.

7.3 Preliminary Tasks

After VPLEX has been physically installed in the rack, verify that your environment is ready for the deployment of VPLEX. These tasks include the following:

- Install the VPLEX Procedure Generator.
- Review the VPLEX Implementation and Planning Best Practices Guide.
- Review the VPLEX Simple Support Matrix.
- Review the VPLEX with GeoSynchrony 5.1 Release Notes.
- Review the VPLEX Configuration Guide.
- Review the ESX Host Connectivity Guide.
- Review the Encapsulate Arrays on ESX Guide.
- Verify that (4) metadata devices are available for the VPLEX install.
- Review the EMC Secure Remote Support Gateway Install Procedure.

Before moving forward with the configuration of VPLEX, complete all the relevant worksheets below to ensure all the information necessary to complete the configuration is available. Blank worksheets are provided in Appendix D, VPLEX Pre-Configuration Worksheets. Review Chapter 2, Task 1 of the VPLEX GeoSynchrony v5.1 Configuration Guide. The following tables show sample configuration information as an example; replace the sample values with actual values from the installation environment.

Table 3: IPv4 Networking Information

Information: Management server IP address
Additional description: Public IP address for the management server on the customer IP network.

Information: Network mask
Additional description: Subnet mask for the management server IP network.

Information: Hostname
Additional description: Hostname for the management server. Once configured, this name replaces the default name (service) in the shell prompt each time you open an SSH session to the management server.
Value: DC1-VPLEX

Information: EMC Secure Remote Support (ESRS) gateway
Additional description: IP address for the ESRS gateway on the IP network.

Table 4: Metadata Backup Information

Information: Day and time to back up the meta-volume
Additional description: The day and time that the cluster's meta-volume will be backed up to a remote storage volume on a back-end array (selected during the cluster setup procedure).
Value: MAY 30, 12:00
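Completed worksheets are easiest to act on if the entries are sanity-checked before EZ-Setup begins. The following Python sketch is illustrative only and not part of any EMC installer; the field names and sample values are hypothetical placeholders for the Table 3 entries.

```python
# Illustrative validation of the Table 3 IPv4 networking worksheet.
# Field names and sample values are hypothetical, not VPLEX syntax.
import ipaddress

def validate_worksheet(ws):
    """Return a list of problem fields (empty if the worksheet looks OK)."""
    problems = []
    for field in ('management_ip', 'esrs_gateway_ip'):
        try:
            ipaddress.IPv4Address(ws[field])       # must be a valid IPv4 address
        except (KeyError, ValueError):
            problems.append(field)
    try:
        # A dotted-quad netmask is valid if it forms a legal network prefix.
        ipaddress.IPv4Network('0.0.0.0/' + ws['network_mask'])
    except (KeyError, ValueError):
        problems.append('network_mask')
    if not ws.get('hostname'):
        problems.append('hostname')
    return problems

sample = {'management_ip': '10.1.1.50', 'network_mask': '255.255.255.0',
          'esrs_gateway_ip': '10.1.1.60', 'hostname': 'DC1-VPLEX'}
assert validate_worksheet(sample) == []
```

Running a check like this against each worksheet before starting the configuration catches transcription errors (bad octets, malformed masks, blank fields) while they are still cheap to fix.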
Table 5: SMTP Details to Configure Event Notifications

Information: Do you want VPLEX to send event notifications?
Additional description: Sending event notifications allows EMC to act on any issues quickly. Note: The remaining information in this table applies only if you answered yes to this question.
Value: Yes

Information: SMTP IP address of primary connection
Additional description: SMTP address through which Call Home emails will be sent. EMC recommends using your ESRS gateway as the primary connection address.

Information: First/only recipient's email address
Additional description: Email address of a person (generally a customer employee) who will receive Call Home notifications. One or more people can receive notifications when events occur.

Information: SMTP IP address for first/only recipient
Additional description: SMTP address through which the first/only recipient's notifications will be sent. EMC recommends distributing connections over multiple SMTP servers for better availability. These SMTP IPv4 addresses can be different from the addresses used for event notifications sent to EMC.

Information: Event notification type
Additional description: Notification types:
1. On Success or Failure - Sends an email regardless of whether the notification to EMC succeeded.
2. On Failure - Sends an email each time an attempt to notify EMC has failed.
3. On All Failure - Sends an email only if all attempts to notify EMC have failed.
4. On Success - Sends an email each time EMC is successfully sent an email notification.

Information: Second recipient's email address (optional)
Additional description: Email address of a second person who will receive Call Home notifications.

Information: SMTP IP address for second recipient
Additional description: SMTP address through which the second recipient's notifications will be sent.

Information: Event notification type for second recipient
Additional description: See the description of the event notification type for the first recipient.

Information: Third recipient's email address (optional)
Additional description: Email address of a third person who will receive Call Home notifications.

Information: SMTP IP address for third recipient
Additional description: SMTP address through which the third recipient's notifications will be sent.

Information: Event notification type for third recipient
Additional description: See the description of the event notification type for the first recipient.
Information: Do you want VPLEX to send system reports?
Additional description: Sending weekly system reports allows EMC to communicate known configuration risks, as well as newly discovered information that can optimize or reduce risks. Note that the connections for system reports are the same connections used for event notifications.

Information: Day of week and time to send system reports
Value: default

Table 6: SNMP Information

Information: Do you want to use SNMP to collect performance statistics?
Additional description: You can collect statistics such as I/O operations and latencies, as well as director memory, by issuing SNMP GET, GET-NEXT, or GET-BULK requests.
Value: No

Information: Community string (if you specified yes above)
Value: private

Table 7: Certificate Authority (CA) and Host Certificate Information

Information: CA certificate lifetime
Additional description: How many years the cluster's self-signed CA certificate should remain valid before expiring. VPLEX uses self-signed certificates to ensure secure communication between VPLEX Metro clusters.
Value: 5

Information: CA certificate key passphrase
Additional description: This passphrase is used during installation to create the CA certificate necessary for this secure communication.
Value: dc1-vplex

Information: Host certificate lifetime
Additional description: How many years the cluster's host certificate should remain valid before expiring.
Value: 2

Information: Host certificate key passphrase
Additional description: This passphrase is used to create the host certificates necessary for secure communication between clusters.
Value: dc1-vplex

Table 8: Product Registration Information

Information: Company site ID number (optional)
Additional description: EMC-assigned identifier used when the VPLEX cluster is deployed on the ESRS server. The EMC customer engineer or account executive can provide this ID.

Information: Company name
Value: CompanyName

Information: Company contact
Additional description: First and last name of a person to contact.
Value: First Last

Information: Contact's business email address

Information: Contact's business phone number
Value: xxx-xxx-xxxx

Information: Contact's business address
Additional description: Street, city, state/province, ZIP/postal code, country.
Value: 123 Main Street, City, State

Information: Method used to send event notifications
Additional description: Method by which the cluster will send event messages to EMC.
Value: _X_ 1. ESRS _X_ 2. Email

Information: Remote support method
Additional description: Method by which the EMC Support Center can access the cluster.
Value: _X_ 1. ESRS _X_ 2. WebEx

Table 9: VPLEX Metro IP WAN Configuration Information

Local director discovery configuration details (default values work in most installations):
- Class-D network discovery address
- Discovery port
- Listening port for communications between clusters (traffic on this port must be allowed through the network)

Attributes for Cluster 1, Port Group 0:
- Class-C subnet prefix for Port Group 0. The IP subnet must be different from the one used by the management servers and different from the Port Group 1 subnet in Cluster 1.
- Subnet mask
- Cluster address (use Port Group 0 subnet prefix)
- Gateway for routing configurations (use Port Group 0 subnet prefix)
- MTU: The size must be set to the same value for Port Group 0 on both clusters. Also, the same MTU must be set for Port Group 1 on both clusters. Note: jumbo frames are supported.
- Port 0 IP address for director 1-1-A
- Port 0 IP address for director 1-1-B

Attributes for Cluster 1, Port Group 1:
- Class-C subnet prefix for Port Group 1. The IP subnet must be different from the one used by the management servers and different from the Port Group 1 subnet in Cluster 2.
- Subnet mask
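The Table 9 constraints (distinct subnets per port group, no overlap with the management subnet, matching MTU on both clusters) lend themselves to a quick pre-flight check. The sketch below is illustrative only, not EMC tooling; all subnet and MTU values are hypothetical.

```python
# Illustrative sanity check of the Table 9 Metro IP WAN rules.
# Cluster/port-group values are hypothetical, not VPLEX syntax.
import ipaddress

def check_wan_config(cluster1, cluster2, mgmt_subnet):
    """Return a list of rule violations (empty if the plan looks OK)."""
    problems = []
    # MTU must be set to the same value for each port group on both clusters.
    for pg in ('pg0', 'pg1'):
        if cluster1[pg]['mtu'] != cluster2[pg]['mtu']:
            problems.append(f'{pg}: MTU differs between clusters')
    for name, cl in (('cluster1', cluster1), ('cluster2', cluster2)):
        subnets = [ipaddress.IPv4Network(cl[pg]['subnet'])
                   for pg in ('pg0', 'pg1')]
        # Each port group needs its own subnet...
        if subnets[0] == subnets[1]:
            problems.append(f'{name}: port groups share a subnet')
        # ...different from the management server subnet.
        if ipaddress.IPv4Network(mgmt_subnet) in subnets:
            problems.append(f'{name}: port group reuses the management subnet')
    return problems

c1 = {'pg0': {'subnet': '192.168.10.0/24', 'mtu': 1500},
      'pg1': {'subnet': '192.168.11.0/24', 'mtu': 1500}}
c2 = {'pg0': {'subnet': '192.168.20.0/24', 'mtu': 1500},
      'pg1': {'subnet': '192.168.21.0/24', 'mtu': 1500}}
assert check_wan_config(c1, c2, '10.1.1.0/24') == []
```

An empty result means the worksheet values satisfy the stated subnet and MTU rules; any returned string identifies the cluster and port group to correct before configuring the WAN COM ports.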