USING VMWARE VSPHERE WITH EMC VPLEX

White Paper: Best Practices Planning

Abstract

This white paper describes EMC VPLEX features and functionality relevant to VMware vSphere. It also presents the best practices for configuring a VMware environment to optimally leverage EMC VPLEX, and discusses methodologies for migrating an existing VMware deployment to the EMC VPLEX family.

July 2011

Copyright 2011 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware, ESX, ESXi, vCenter, VMotion, and vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.

Part Number h

Table of Contents

Executive summary
Audience
EMC VPLEX overview
EMC VPLEX architecture
EMC VPLEX family
EMC VPLEX clustering architecture
Provisioning VPLEX storage to VMware environments
EMC Virtual Storage Integrator and VPLEX
Connectivity considerations
Multipathing and load balancing
VMware ESX version 4.x and NMP
VMware ESX version 4.x with PowerPath/VE
PowerPath/VE features
PowerPath/VE management
Path Management feature of Virtual Storage Integrator
Migrating existing VMware environments to VPLEX
Nondisruptive migrations using Storage vMotion
Migration using encapsulation of existing devices
VMware deployments in a VPLEX Metro environment
VPLEX witness
VMware cluster configuration with VPLEX witness
VMware cluster configuration
VMware DRS Groups and Rules
Cross-connecting VMware vSphere environments to VPLEX clusters for increased resilience
VMware cluster configuration without VPLEX witness
Nondisruptive migration of virtual machines using vMotion in environments without VPLEX witness
Changing configuration of non-replicated VPLEX Metro volumes
Virtualized vCenter Server on VPLEX Metro
Conclusion
References

Executive summary

The EMC VPLEX family of products running the EMC GeoSynchrony operating system provides an extensive offering of new features and functionality for the era of cloud computing. EMC VPLEX breaks the physical barriers of data centers and allows users to access a single copy of data at different geographical locations concurrently, enabling transparent migration of running virtual machines between data centers. This capability allows transparent load sharing between multiple sites while providing the flexibility to migrate workloads between sites in anticipation of planned events. Furthermore, if an unplanned event disrupts services at one of the data centers, the failed services can be restarted at the surviving site with minimal effort, minimizing the recovery time objective (RTO).

VMware vSphere virtualizes the entire IT infrastructure, including servers, storage, and networks. The VMware software aggregates these resources and presents a uniform set of elements in the virtual environment. VMware vSphere 4 thus brings the power of cloud computing to the data center, reducing IT costs while increasing infrastructure efficiency. Furthermore, for hosting service providers, VMware vSphere 4 enables a more economic and efficient path to delivering cloud services that are compatible with customers' internal cloud infrastructures. VMware vSphere 4 delivers significant performance and scalability improvements that enable even the most resource-intensive applications, such as large databases, to be deployed on internal clouds. With these improvements, VMware vSphere 4 can enable a 100 percent virtualized internal cloud. The EMC VPLEX family is thus a natural fit for a virtualization environment based on VMware technologies.
The capability of EMC VPLEX to provide both local and distributed federation, allowing transparent cooperation of physical data elements within a single site or across two geographically separated sites, lets IT administrators break physical barriers and expand their VMware-based cloud offering. The local federation capabilities of EMC VPLEX allow a collection of heterogeneous data storage solutions at a physical site to be presented as a pool of storage resources for VMware vSphere, enabling the major tenets of a cloud offering. The extension of VPLEX's capabilities to span multiple data centers enables IT administrators to leverage either private or public cloud offerings from hosting service providers. The synergies provided by a VMware virtualization offering connected to EMC VPLEX thus help customers reduce total cost of ownership while providing a dynamic service that can rapidly respond to the changing needs of their business.

Audience

This white paper is intended for VMware administrators, storage administrators, and IT architects responsible for architecting, creating, managing, and using virtualized IT environments that utilize VMware vSphere and EMC VPLEX technologies. The white

paper assumes the reader is familiar with VMware technology, EMC VPLEX, and related software.

EMC VPLEX overview

The EMC VPLEX family with the EMC GeoSynchrony operating system is a SAN-based federation solution that removes physical barriers within a single virtualized data center and between multiple virtualized data centers. EMC VPLEX is the first platform in the world that delivers both local and distributed federation. Local federation provides the transparent cooperation of physical storage elements within a site, while distributed federation extends the concept between two locations across distance. Distributed federation is enabled by a breakthrough technology available with VPLEX, AccessAnywhere, which enables a single copy of data to be shared, accessed, and relocated over distance. The combination of a virtualized data center with the EMC VPLEX offering provides customers entirely new ways to solve IT problems and introduce new models of computing. Specifically, customers can:

- Move virtualized applications across data centers
- Enable workload balancing and relocation across sites
- Aggregate data centers and deliver 24 x forever

EMC VPLEX architecture

EMC VPLEX represents the next-generation architecture for data mobility and information access. The architecture is based on EMC's more than 20 years of expertise in designing, implementing, and perfecting enterprise-class intelligent cache and distributed data protection solutions. As shown in Figure 1, VPLEX is a solution for federating both EMC and non-EMC storage.
VPLEX resides between the servers and heterogeneous storage assets, and introduces a new architecture with unique characteristics:

- Scale-out clustering hardware that lets customers start small and grow big with predictable service levels
- Advanced data caching that utilizes large-scale SDRAM cache to improve performance and reduce I/O latency and array contention
- Distributed cache coherence for automatic sharing, balancing, and failover of I/O across the cluster
- A consistent view of one or more LUNs across VPLEX Clusters separated either by a few feet within a data center or by asynchronous distances, enabling new models of high availability and workload relocation

Figure 1. Capability of EMC VPLEX to federate heterogeneous storage

EMC VPLEX family

The EMC VPLEX family consists of three offerings:

- VPLEX Local: This solution is appropriate for customers that would like federation of homogeneous or heterogeneous storage systems within a data center, and for managing data mobility between physical data storage entities.
- VPLEX Metro: This solution is for customers that require concurrent access and data mobility across two locations separated by synchronous distances. The VPLEX Metro offering also includes the unique capability for a remote VPLEX Metro site to present LUNs without the need for physical storage for those LUNs at the remote site.
- VPLEX Geo: This solution is for customers that require concurrent access and data mobility across two locations separated by asynchronous distances. The VPLEX Geo offering is currently not supported for live migration of VMware vSphere virtual machines using VMware VMotion.

The EMC VPLEX family of offerings is shown in Figure 2.

Figure 2. EMC VPLEX family offerings

EMC VPLEX clustering architecture

VPLEX uses a unique clustering architecture to help customers break the boundaries of the data center and allow servers at multiple data centers to have concurrent read and write access to shared block storage devices. A VPLEX Cluster, shown in Figure 3, can scale up through the addition of more engines, and scale out by connecting multiple clusters to form a VPLEX Metro configuration. A VPLEX Metro supports up to two clusters, which can be in the same data center or at two different sites within synchronous distances (less than 5 ms round-trip time). VPLEX Metro configurations help users transparently move and share workloads, consolidate data centers, and optimize resource utilization across data centers. In addition, VPLEX Clusters provide nondisruptive data mobility, heterogeneous storage management, and improved application availability.

Figure 3. Schematic representation of EMC VPLEX Metro

A VPLEX Cluster is composed of one, two, or four engines. The engine is responsible for federating the I/O stream, and connects to hosts and storage using Fibre Channel connections as the data transport. Each VPLEX Engine consists of the following major components:

- Two directors, which run the GeoSynchrony software and connect to storage, hosts, and other directors in the cluster with Fibre Channel and Gigabit Ethernet connections
- One Standby Power Supply, which provides backup power to sustain the engine through transient power loss
- Two management modules, which contain interfaces for remote management of a VPLEX Engine

Each cluster also consists of:

- A management server, which manages the cluster and provides an interface from a remote management station
- An EMC standard 40U cabinet to hold all of the equipment of the cluster

Additionally, clusters containing more than one engine also have:

- A pair of Fibre Channel switches used for inter-director communication between the engines
- A pair of Uninterruptible Power Supplies that provide backup power for the Fibre Channel switches and allow the system to ride through transient power loss

Provisioning VPLEX storage to VMware environments

EMC VPLEX provides an intuitive, wizard-driven management interface to provision storage to various operating systems, including VMware vSphere. The wizard has both an EZ-Provisioning tab and an Advanced tab. The system also provides a command line interface (CLI) for advanced users. Figure 4 shows the GUI for provisioning storage from EMC VPLEX.

Figure 4. EMC VPLEX GUI management interface (EZ-Provisioning tab)

Figure 5. Provisioning illustration

The browser-based management interface, enlarged in Figure 5, schematically shows the various components involved in the process. Storage from EMC VPLEX is exposed using a logical construct called a Storage View, which is a union of three objects: Registered initiators, VPLEX ports, and Virtual Volumes. The Registered initiators object lists the WWPNs of the initiators that need access to the storage. In a VMware environment, the Registered initiators entity contains the WWPNs of the HBAs in the VMware ESX hosts connected to the EMC VPLEX. The VPLEX ports object contains the front-end ports of the VPLEX array through which the registered initiators access the virtual volumes. The Virtual Volumes object is a collection of volumes constructed from the storage volumes that the back-end storage arrays provide to the EMC VPLEX. As can be seen in the red boxed area of Figure 5, a virtual volume is constructed from a Device, which in turn can be a combination of different devices built on top of an abstract entity called an Extent. The figure also shows that an Extent is created from a Storage Volume exposed to the EMC VPLEX.
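The object hierarchy just described (storage volume, extent, device, virtual volume, all exposed through a storage view) can be summarized in a short, purely illustrative Python sketch. The class names, volume names, port name, and WWPN below are hypothetical and are not part of any VPLEX API; the sketch only mirrors the containment relationships shown in Figure 5.

```python
# Illustrative model of the VPLEX provisioning hierarchy (not a real API).
from dataclasses import dataclass, field

@dataclass
class StorageVolume:          # LUN presented by a back-end array
    name: str

@dataclass
class Extent:                 # slice of a storage volume (here: the whole volume)
    source: StorageVolume

@dataclass
class Device:                 # built from one or more extents
    extents: list

@dataclass
class VirtualVolume:          # the object ultimately presented to hosts
    device: Device
    name: str

@dataclass
class StorageView:            # union of initiators, front-end ports, and volumes
    initiators: list = field(default_factory=list)
    vplex_ports: list = field(default_factory=list)
    virtual_volumes: list = field(default_factory=list)

# Build the chain for one hypothetical back-end LUN.
sv = StorageVolume("Symm1852_0A0")
vv = VirtualVolume(Device([Extent(sv)]), "Symm1852_0A0_vol")

view = StorageView()
view.initiators.append("10:00:00:00:c9:6e:00:00")   # placeholder ESX HBA WWPN
view.vplex_ports.append("A0-FC00")
view.virtual_volumes.append(vv)
```

Walking `view.virtual_volumes[0].device.extents[0].source` back to the storage volume mirrors exactly the chain the EZ-Provisioning wizard builds for the user.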

Also shown in Figure 4 in the bottom callout are the three high-level steps required to provision storage from EMC VPLEX. The wizard supports a centralized mechanism for provisioning storage to different cluster members in the case of EMC VPLEX Metro or Geo.

The first step in provisioning storage from EMC VPLEX is the discovery of the storage arrays connected to it and the claiming of the storage that has been exposed to EMC VPLEX. The first part of this step rarely needs to be executed, since EMC VPLEX proactively monitors for changes to the storage environment. In this step the wizard not only claims the storage but also creates the extents on each Storage Volume and, finally, the Virtual Volume that is created on each extent. These components are called out in Figure 5. Figure 6 shows an example of running through Step 1 of the EZ-Provisioning wizard, which creates all objects from the storage volume to the virtual volume. As the figure shows, the VPLEX software simplifies the process by automatically suggesting user-friendly names for the devices that have been exposed from the storage arrays, and using those to generate names for both extents and devices.


Figure 6. Creating virtual volumes in the EZ-Provisioning VPLEX wizard

For the sake of simplicity in VMware environments, it is recommended to create a single extent on the storage volume that was created from the device presented by the storage array. The wizard does this automatically for the user.

The virtual volume can be exposed to VMware vSphere, as discussed earlier, by creating a storage view combining the objects Registered initiators, VPLEX ports, and Virtual Volumes. To do this, the WWNs of the initiators on the VMware ESX hosts first have to be registered on EMC VPLEX. This can be accomplished in Step 2 of the EZ-Provisioning wizard. When Step 2: Register initiators is selected, a Task Help screen appears, as seen in Figure 7. This dialog box explains how to register the initiators.

Figure 7. Task Help screen

Figure 8. Listing unregistered initiators logged in to the EMC VPLEX

When the initiators are zoned to the front-end ports of the EMC VPLEX, they automatically log in to EMC VPLEX. As seen in Figure 8, these initiators are displayed with the prefix UNREGISTERED- followed by the WWPN of the initiator. However, initiators can also be manually registered before they are zoned to the front-end ports of the VPLEX; the button highlighted in yellow in Figure 8 should be selected to perform this operation. The initiators logged in to EMC VPLEX can be registered by highlighting the unregistered initiator and clicking the Register button, as demonstrated in Figure 9. The inset in the figure shows the window that opens when the Register button is clicked, including the facility provided by EMC VPLEX to assign a user-friendly name to the unregistered initiator and to select a host type for the initiator being registered. Once the information is added, click OK to complete registration. Note that multiple unregistered initiators may be selected at once for registration.

Figure 9. Registering VMware HBAs on the EMC VPLEX

The final step in provisioning storage from EMC VPLEX to the VMware environment is the creation of the storage view. This is achieved by selecting the final step in the EZ-Provisioning wizard, Step 3: Create storage view, on the Provisioning Overview page of the VPLEX management system. Figure 10 shows the first window that opens. The left-hand pane of the window shows the steps that have to be performed to create a storage view.

Figure 10. Wizard for creating a VPLEX storage view

Stepping through the wizard provisions the appropriate virtual volumes to VMware vSphere using the defined set of VPLEX front-end ports, as shown in Figure 11. Note that the recommendation for the VPLEX ports that should be used when connecting VMware ESX hosts to EMC VPLEX is discussed in the section Connectivity considerations.

Figure 11. Selecting ports for the selected initiators

Once the available ports have been added, virtual volumes can be assigned to the view, as seen in Figure 12.

Figure 12. Adding virtual volumes to the storage view

Finally, Figure 13 details the results of the final step of the EZ-Provisioning wizard. With that screen's message of success, storage has been presented to the VMware vSphere environment and is available for use as a raw device mapping (RDM) or for the creation of a VMware datastore.

Figure 13. Final results view for the EZ-Provisioning wizard
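For administrators who prefer the command line interface mentioned earlier, the same three provisioning steps can be approximated from the VPLEX command shell. The session below is a hedged sketch only: all object names, the VPD identifier, the WWPN, and the port name are invented placeholders, and exact command syntax varies by GeoSynchrony release, so the VPLEX CLI Guide for the installed version should be treated as authoritative.

```shell
# Sketch of CLI equivalents for the EZ-Provisioning steps.
# Syntax is approximate and release-dependent; all names are placeholders.

# Step 1: claim a back-end storage volume, then build extent -> device -> virtual volume
storage-volume claim -d VPD83T3:60000970000192601852533030413041 --name Symm1852_0A0
extent create -d Symm1852_0A0
local-device create -n dev_Symm1852_0A0 -g raid-0 -e extent_Symm1852_0A0_1
virtual-volume create -r dev_Symm1852_0A0

# Step 2: register an ESX host initiator (placeholder WWPN)
export initiator-port register -i esx01_hba0 -p 0x10000000c96e0000

# Step 3: create the storage view and populate it
export storage-view create -n esx_view -p P000000003CA00136-A0-FC00
export storage-view addinitiatorport -v esx_view -i esx01_hba0
export storage-view addvirtualvolume -v esx_view -o dev_Symm1852_0A0_vol
```

As with the wizard, the end result is a storage view joining registered initiators, front-end ports, and virtual volumes.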

Figure 14 shows the storage view created using the wizard. The WWN of the virtual volume exposed through the view is highlighted in the figure. This information is used by VMware vSphere to identify the devices.

Figure 14. Viewing details of a storage view utilizing the VPLEX management interface

The newly provisioned storage can be discovered on the VMware ESX hosts by performing a rescan of the SCSI bus. The result of the scan is shown in Figure 15. It can be seen that the VMware ESX host has access to a device with WWN e01443ee283912b8. A quick comparison of the WWN with the information highlighted in green in Figure 14 confirms that the device discovered by the VMware ESX host is indeed the newly provisioned VPLEX virtual volume. The figure also shows the Fibre Channel organizationally unique identifier (OUI) for EMC VPLEX devices, 00:01:44.
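Because VPLEX virtual volumes carry the EMC VPLEX OUI (00:01:44), the array behind a discovered device can be inferred from its NAA identifier: in a registered NAA type-6 identifier, the three OUI bytes immediately follow the leading "6". The generic sketch below (not an EMC tool) demonstrates the check with a made-up but correctly structured example identifier.

```python
# Check whether an NAA-format device identifier carries the EMC VPLEX OUI.
VPLEX_OUI = "000144"  # 00:01:44, as reported for VPLEX devices

def oui_of(naa_id: str) -> str:
    """Extract the OUI from an NAA 'registered' (type 6) identifier."""
    ident = naa_id.lower().removeprefix("naa.")
    if not ident.startswith("6"):
        raise ValueError("not an NAA type-6 identifier")
    return ident[1:7]  # six hex digits = three OUI bytes

def is_vplex_device(naa_id: str) -> bool:
    return oui_of(naa_id) == VPLEX_OUI

# Hypothetical identifier shaped like a VPLEX volume WWN (not from a real system):
print(is_vplex_device("naa.6000144000000000e01443ee283912b8"))  # -> True
```

The same check against an identifier whose OUI bytes differ (for example, a Symmetrix-style identifier) returns False, which is what makes the comparison in Figure 14 and Figure 15 meaningful.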

Figure 15. Discovering newly provisioned VPLEX storage on a VMware ESX host

Once the VPLEX devices have been discovered by the VMware ESX hosts, they can be used for creating a VMware file system (datastore) or used as RDMs. However, for optimal performance it is important to ensure that I/Os to the EMC VPLEX are aligned to a 64 KB block boundary. A VMware file system created using the vSphere Client automatically aligns the file system blocks. However, a misaligned partition in a guest operating system can impact performance negatively. Therefore, it is critical to ensure that all partitions created in the guest operating system (either on a virtual disk presented from a VMware file system or on an RDM) are aligned to a multiple of 64 KB.

EMC Virtual Storage Integrator and VPLEX

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the VMware vSphere Client that provides a single management interface for managing EMC storage within the vSphere environment. Features can be added to and removed from VSI independently, providing flexibility for customizing VSI user environments. VSI provides a unified user experience, allowing each of the features to be updated independently and new features to be introduced rapidly in response to changing customer requirements. Examples of features available for VSI are: Storage Viewer (SV), Path Management, Storage Pool Management (SPM), Symmetrix SRA Utilities, and Unified Storage Management.
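As a quick check of the 64 KB alignment guidance given earlier in this section, alignment of a guest partition can be verified from its starting offset. This is a generic sketch, not an EMC tool: a start LBA is aligned when its byte offset is a whole multiple of 65,536 bytes. The classic MBR default of LBA 63 fails this check, while LBA 128 passes.

```python
# Check whether a partition's starting LBA is aligned to a 64 KB boundary.
SECTOR_SIZE = 512          # bytes per logical sector (typical)
ALIGNMENT = 64 * 1024      # 64 KB boundary recommended for VPLEX I/O

def is_aligned(start_lba: int, sector_size: int = SECTOR_SIZE) -> bool:
    return (start_lba * sector_size) % ALIGNMENT == 0

print(is_aligned(63))    # legacy MBR default: 63 * 512 = 32,256 bytes -> False
print(is_aligned(128))   # 128 * 512 = 65,536 bytes -> True
```

Any start LBA that is a multiple of 128 (with 512-byte sectors) satisfies the 64 KB rule.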

Storage Viewer feature

The Storage Viewer feature extends the vSphere Client to facilitate the discovery and identification of EMC Symmetrix, CLARiiON, Celerra, and VPLEX storage devices that are allocated to VMware ESX/ESXi hosts and virtual machines. SV presents the underlying storage details to the virtual datacenter administrator, merging the data of several different storage mapping tools into a few seamless vSphere Client views. SV enables users to resolve the underlying storage of Virtual Machine File System (VMFS) and Network File System (NFS) datastores and virtual disks, as well as raw device mappings (RDMs). In addition, Storage Viewer also lists storage arrays and devices that are accessible to the ESX and ESXi hosts in the virtual datacenter. When the underlying storage hosting a VMFS datastore is a VPLEX volume, Storage Viewer provides details of the Virtual Volumes, Storage Volumes, and Paths that make up the datastore or LUN. Figure 16 is a compilation of all three views together, though each may be displayed only one at a time in the plug-in.

Figure 16. Storage Viewer (VSI) datastore view of a VPLEX device

The LUNs view of Storage Viewer provides similar information. Note that this view, shown in Figure 17, includes a Used By column to inform the user how the LUN is being employed in the environment.

Figure 17. Storage Viewer (VSI) LUN view of a VPLEX device

The other feature of VSI that can be used with VPLEX is Path Management. It is addressed later in this paper.

Connectivity considerations

EMC VPLEX introduces a new type of storage federation paradigm that provides increased resiliency, performance, and availability. The following paragraphs discuss the recommendations for connecting VMware ESX hosts to EMC VPLEX. The recommendations ensure the highest level of connectivity and availability for VMware vSphere, even during abnormal operations.

As a best practice, each VMware ESX host in the VMware vSphere environment should have at least two physical HBAs, and each HBA should be connected to at least two front-end ports on director A and director B of the EMC VPLEX. This configuration ensures continued use of all HBAs on the VMware ESX host even if one of the front-end ports of the EMC VPLEX goes offline, whether for planned maintenance events or unplanned disruptions.

When a single VPLEX Engine configuration is connected to a VMware vSphere environment, each HBA should be connected to the front-end ports provided on both the A and B directors within the VPLEX Engine. Connectivity to the VPLEX front-end ports should consist of first connecting unique hosts to port 0 of each front-end I/O module before connecting additional hosts to the remaining ports on the I/O module. A schematic example of the wiring diagram for a four-node

VMware vSphere environment connected to a single VPLEX Engine is shown in Figure 18.

Figure 18. Connecting a VMware vSphere server to a single-engine VPLEX Cluster

If multiple VPLEX Engines are available, as is the case in the dual- and quad-engine VPLEX Cluster configurations, the HBAs from the VMware ESX hosts can be connected to different engines. Using both directors on the same engine minimizes cache coherency traffic, while using directors on different engines (with dual and quad configurations) provides greater resiliency. The decision on which configuration to select is based on the desired objectives. For example, one possible connectivity diagram for a four-node VMware ESX cluster connected to a two-engine VPLEX Cluster is shown schematically in Figure 19.

It is important to note that in both Figure 18 and Figure 19, the connectivity between the VPLEX Engines and the storage arrays has not been displayed. The connectivity

from the VPLEX Engines to the storage arrays should follow the best-practice recommendations for the array. A detailed discussion of the best practices for connecting the back-end storage is beyond the scope of this paper; interested readers should consult the TechBook EMC VPLEX Metro Witness Technology and High Availability.

Figure 19. Connecting ESX hosts to a multiple-engine VPLEX Cluster

When a VMware ESX host is connected to an EMC VPLEX using the best practices discussed in this section, the VMware kernel associates four paths with each device presented from the system. Figure 20 shows the paths available to, and used by, the VMware kernel for one of the federated devices presented from EMC VPLEX. As can be seen in the figure, the VMware kernel can access the device using any one of the four possible paths. It is important to note that EMC VPLEX is an active/active array that allows simultaneous access to any VPLEX device from any of the front-end ports. This fact is recognized by the VMware kernel automatically, and is highlighted in green in Figure 20. The screenshot is taken from the Virtual Storage Integrator plug-in for the vSphere Client. This plug-in is available for free on Powerlink.
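The four paths per device that the best practice yields can be enumerated directly: two host HBAs, each zoned to one front-end port on director A and one on director B. The sketch below uses hypothetical HBA and port names simply to make the counting concrete.

```python
# Enumerate host-to-VPLEX paths under the recommended connectivity scheme.
from itertools import product

hbas = ["vmhba1", "vmhba2"]           # two physical HBAs per ESX host (placeholder names)
ports = ["A0-FC00", "B0-FC00"]        # one port on director A, one on director B

# Each HBA is zoned to every listed front-end port,
# giving len(hbas) * len(ports) paths per device.
paths = [f"{hba} -> {port}" for hba, port in product(hbas, ports)]
for p in paths:
    print(p)
print(len(paths))   # -> 4
```

Losing any single HBA or front-end port removes two of the four paths, which is exactly why the remaining HBA and director still carry I/O during maintenance or failure.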

Figure 20. VMware kernel paths for a VPLEX device in Virtual Storage Integrator (VSI)

The connectivity from the VMware ESX hosts to a multiple-engine VPLEX Cluster can be scaled as more engines are added. The methodologies discussed in this section ensure all front-end ports are utilized, providing maximum potential performance and load balancing for VMware vSphere.

Multipathing and load balancing

The VMware ESX host provides native channel failover capabilities. For active/active storage systems, the ESX host by default assigns the path it discovers first to any SCSI-attached device as the preferred path, with a Fixed failover policy. This path is always used as the active path for sending I/O to that device unless the path is unavailable due to a planned or an unplanned event. The remaining paths discovered by the VMware ESX host for the device are used as passive failover paths and utilized only if the active path fails. As a result, VMware ESX hosts automatically queue all of the I/Os on the first available HBA in the system, while the other HBA is not actively used until a failure on the primary HBA is detected. This behavior leads to an unbalanced configuration on the ESX host and on the EMC VPLEX. There are a number of ways to address this; the most appropriate method, as discussed in the following sections, depends on the multipathing software that is used.

VMware ESX version 4.x and NMP

VMware ESX version 4.x includes advanced path management and load-balancing capabilities exposed through the policies Fixed, Round Robin, and Most Recently Used. The default policy used by the ESX kernel for active/active arrays is Fixed. For most active/active arrays, such as the EMC Symmetrix arrays, Round Robin is the most appropriate policy. However, the advanced cache management features provided by the EMC VPLEX can be disrupted by the use of a

simple load-balancing algorithm provided by the Round Robin policy. Therefore, for VMware ESX version 4.x connected to EMC VPLEX, EMC recommends the use of the Fixed policy with static load balancing achieved by changing the preferred path. These preferred-path changes should be performed on all of the ESX hosts accessing the VPLEX devices.

The preferred path on VMware ESX version 4 can be set using the vSphere Client. Figure 21 shows the procedure that can be used to set the preferred path for a physical disk in a VMware vSphere environment. Figure 22 shows the preferred path setting for two datastores, each residing on an EMC VPLEX device presented from front-end ports A0-FC00, A1-FC00, B0-FC00, and B1-FC00.

Figure 21. Setting the preferred path on VMware ESX version 4

Figure 22. EMC VPLEX devices with static load balancing on ESX version 4
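The static load balancing that EMC recommends with the Fixed policy amounts to spreading the preferred path for different datastores across the four available front-end ports. A simple round-robin assignment, sketched below with the port names from Figure 22, achieves an even spread; the datastore names are placeholders and this is an illustration of the idea, not an EMC utility.

```python
# Spread Fixed-policy preferred paths across VPLEX front-end ports, round-robin.
PORTS = ["A0-FC00", "A1-FC00", "B0-FC00", "B1-FC00"]

def assign_preferred_paths(datastores):
    """Return a {datastore: preferred_port} map cycling through the ports."""
    return {ds: PORTS[i % len(PORTS)] for i, ds in enumerate(datastores)}

plan = assign_preferred_paths(["DS01", "DS02", "DS03", "DS04", "DS05"])
for ds, port in plan.items():
    print(ds, "->", port)
# DS05 wraps around to A0-FC00, so each port carries at most two of the five.
```

The resulting plan is then applied manually in the vSphere Client (or via scripting) on every ESX host accessing the devices, so that all hosts agree on the same preferred path per device.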

VMware ESX version 4.x with PowerPath/VE

EMC PowerPath/VE delivers PowerPath multipathing features to optimize VMware vSphere virtual environments. PowerPath/VE enables standardization of path management across heterogeneous physical and virtual environments, and enables one to automate optimal server, storage, and path utilization in a dynamic virtual environment. With hyper-consolidation, a virtual environment may have hundreds or even thousands of independent virtual machines running, including virtual machines with varying levels of I/O intensity. I/O-intensive applications can disrupt I/O from other applications, and before the availability of PowerPath/VE, as discussed in previous sections, load balancing on an ESX host system had to be manually configured to correct for this. Manual load-balancing operations to ensure that all virtual machines receive their individual required response times are time-consuming and logistically difficult to achieve effectively.

PowerPath/VE works with VMware ESX and ESXi as a multipathing plug-in (MPP) that provides enhanced path management capabilities to ESX and ESXi hosts. PowerPath/VE is supported with vSphere (ESX version 4) only; previous versions of ESX do not have the Pluggable Storage Architecture (PSA), which is required by PowerPath/VE. PowerPath/VE installs as a kernel module on the vSphere host. As shown in Figure 23, PowerPath/VE plugs in to the vSphere I/O stack framework to bring the advanced multipathing capabilities of PowerPath, dynamic load balancing and automatic failover, to VMware vSphere.

Figure 23. PowerPath/VE vStorage API for multipathing plug-in

At the heart of PowerPath/VE path management is server-resident software inserted between the SCSI device-driver layer and the rest of the operating system. This driver

software creates a single pseudo device for a given array volume (LUN) regardless of how many physical paths on which it appears. The pseudo device, or logical volume, represents all physical paths to a given device. It is then used for creating a VMware file system or for raw device mapping (RDM); these entities can then be used for application and database access.

PowerPath/VE's value fundamentally comes from its architecture and position in the I/O stack. PowerPath/VE sits above the HBA, allowing heterogeneous support of operating systems and storage arrays. By integrating with the I/O drivers, all I/Os run through PowerPath, allowing it to be a single point of I/O control and management. Since PowerPath/VE resides in the ESX kernel, it sits below the guest OS level, application level, database level, and file system level. PowerPath/VE's unique position in the I/O stack makes it an infrastructure manageability and control point, bringing more value going up the stack.

PowerPath/VE features

PowerPath/VE provides the following features:

- Dynamic load balancing: PowerPath is designed to use all paths at all times. PowerPath distributes I/O requests to a logical device across all available paths, rather than requiring a single path to bear the entire I/O burden.
- Auto-restore of paths: Periodic auto-restore reassigns logical devices when restoring paths from a failed state. Once restored, the paths automatically rebalance the I/O across all active channels.
- Device prioritization: Setting a high priority for one or several devices improves their I/O performance at the expense of the remaining devices, while otherwise maintaining the best possible load balancing across all paths. This is especially useful when there are multiple virtual machines on a host with varying application performance and availability requirements.
- Automated performance optimization: PowerPath/VE automatically identifies the type of storage array and sets the highest-performing optimization mode by default. For VPLEX, the default mode is Adaptive.
- Dynamic path failover and path recovery: If a path fails, PowerPath/VE redistributes I/O traffic from that path to functioning paths. PowerPath/VE stops sending I/O to the failed path and checks for an active alternate path. If an active path is available, PowerPath/VE redirects I/O along that path. PowerPath/VE can compensate for multiple faults in the I/O channel (for example, HBAs, fiber-optic cables, Fibre Channel switch, storage array port).
- Monitor/report I/O statistics: While PowerPath/VE load balances I/O, it maintains statistics for all I/O for all paths. The administrator can view these statistics using rpowermt.
- Automatic path testing: PowerPath/VE periodically tests both live and dead paths. By testing live paths that may be idle, a failed path may be identified before an application attempts to pass I/O down it. By marking the path as failed

29 before the application becomes aware of it, timeout and retry delays are reduced. By testing paths identified as failed, PowerPath/VE will automatically restore them to service when they pass the test. The I/O load will be automatically balanced across all active available paths. PowerPath/VE management PowerPath/VE uses a command set, called rpowermt, to monitor, manage, and configure PowerPath/VE for vsphere. The syntax, arguments, and options are very similar to the traditional powermt commands used on all other PowerPath multipathing-supported operating system platforms. There is one significant difference in that rpowermt is a remote management tool. Not all vsphere installations have a service console interface. In order to manage an ESXi host, customers have the option to use VMware vcenter Server or vcli (also referred to as VMware Remote Tools) on a remote server. PowerPath/VE for vsphere uses the rpowermt command line utility for both ESX and ESXi. PowerPath/VE for vsphere cannot be managed on the ESX host itself. There is neither a local nor remote GUI for PowerPath on ESX. Administrators must designate a Guest OS or a physical machine to manage one or multiple ESX hosts. The utility, rpowermt, is supported on Windows 2003 (32-bit) and Red Hat 5 Update 2 (64-bit). When the vsphere host server is connected to the EMC VPLEX, the PowerPath/VE kernel module running on the vsphere host associates all paths to each device presented from the array and assigns a pseudo device name (as discussed earlier). An example of this is shown in Figure 24, which shows the output of rpowermt display host=x.x.x.x dev=emcpower11. Note in the output that the device has four paths and displays the default optimization mode for VPLEX devices ADaptive. The default optimization mode is the most appropriate policy for most workloads and should not be changed. Figure 24. 
Output of the rpowermt display command on a VPLEX device Path Management feature of Virtual Storage Integrator A far easier way to set the preferred path for all types of EMC devices is to use the Path Management feature of Virtual Storage Integrator. Using this feature, one can set the preferred path on all EMC devices at the ESX host level or at the cluster level. The feature also permits setting a different policy for each type of EMC device: 29

Symmetrix, CLARiiON, and VPLEX. This is extremely useful, as a customer may have many different types of devices presented to their vSphere environment. The Path Management feature can set the policy for both NMP and PowerPath/VE. Figure 25 shows the navigation to change the multipathing policy with VSI.

Figure 25. Navigating to the Path Management feature of VSI

The many options within Path Management are displayed in Figure 26.

Figure 26. VSI Path Management options

Through the feature, multiple changes to policies can be made at once. In the particular example shown in Figure 27, all hosts have PowerPath/VE installed. All Symmetrix devices under PowerPath control will be set to Symmetrix Optimization, while all VPLEX devices will be set to Adaptive.
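The same policies can also be set from the command line. The sketch below assumes ESX 4.x esxcli syntax for NMP and a hypothetical rpowermt management station and host name; the device identifier is a placeholder, and the appropriate NMP path selection policy for VPLEX devices should be confirmed against EMC's current support documentation.

```shell
# -- Native Multipathing (NMP), ESX 4.x syntax --
# List devices and their current path selection policies.
esxcli nmp device list

# Set the path selection policy for one VPLEX device
# (naa.<id> is a placeholder for the actual device identifier).
esxcli nmp device setpolicy --device naa.<id> --psp VMW_PSP_FIXED

# -- PowerPath/VE, run from the remote rpowermt management station --
# Show paths and the current policy for a pseudo device.
rpowermt display dev=emcpower11 host=esx01.example.com

# Set the adaptive policy (the VPLEX default) on all devices at once.
rpowermt set policy=ad dev=all host=esx01.example.com
```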


Figure 27. Making multiple multipathing changes

Migrating existing VMware environments to VPLEX

Existing deployments of VMware vSphere can be migrated to VPLEX environments, and there are a number of different alternatives that can be leveraged. The easiest method to migrate to a VPLEX environment is to use Storage vMotion. However, this technique is viable only if the storage array has sufficient free storage to accommodate the largest datastore in the VMware environment. Furthermore, Storage vMotion may be tedious if several hundred virtual machines or terabytes of data have to be converted, or if the virtual machines have existing snapshots. For these scenarios, it might be appropriate to leverage the capability of EMC VPLEX to encapsulate existing devices. However, this methodology is disruptive and requires a planned outage of VMware vSphere.

Nondisruptive migrations using Storage vMotion

Figure 28 shows the datastores available on VMware ESX version 4.1 managed by a vSphere vCenter Server. The view is available using the Storage Viewer feature of EMC Virtual Storage Integrator. The datastores are backed by both VPLEX and non-VPLEX devices.

Figure 28. Datastore view in Storage Viewer (VSI)

It can be seen from Figure 29 that the virtual machine W2K8 VM1 resides on datastore Management_Datastore_1698, hosted on device CA9 on a Symmetrix VMAX array.

Figure 29. Details of the EMC storage device displayed by EMC Storage Viewer

The migration of the data from the Symmetrix VMAX array to the storage presented from VPLEX can be performed using Storage vMotion once appropriate datastores are created on the devices presented from VPLEX. In this example the VM W2K8 VM1 will be migrated from its current datastore on the Symmetrix to the datastore vplex_boston_local, which resides on the VPLEX and is shown earlier in Figure 28.
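Besides the vSphere Client wizard, the migration can be driven from the vSphere CLI with the svmotion utility. A minimal sketch, assuming vCLI 4.x syntax and a hypothetical vCenter address and datacenter name (the datastore and VM names mirror this example); exact connection options vary by vCLI version:

```shell
# Interactive mode prompts for the datacenter, VM, and target datastore.
svmotion --url=https://vcenter.example.com/sdk --username=administrator --interactive

# Non-interactive mode: supply the VM's configuration file (on its current
# datastore) and the target datastore after the colon.
svmotion --url=https://vcenter.example.com/sdk --username=administrator \
  --datacenter='Boston DC' \
  --vm='[Management_Datastore_1698] W2K8 VM1/W2K8 VM1.vmx:vplex_boston_local'
```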

Figure 30 shows the steps required to initiate the migration of a virtual machine from Management_Datastore_1698 to the target datastore, vplex_boston_local. The Storage vMotion functionality is also available via a command line utility. A detailed discussion of Storage vMotion is beyond the scope of this white paper; further details can be found in the VMware documentation listed in the References section.

Figure 30. Using Storage vMotion to migrate virtual machines to VPLEX devices

Migration using encapsulation of existing devices

As discussed earlier, although Storage vMotion provides the capability to perform nondisruptive migration from an existing VMware deployment to EMC VPLEX, it might not always be a viable tool. For these situations, the encapsulation capabilities of EMC VPLEX can be leveraged. The procedure is disruptive, but the duration of the disruption can be minimized by proper planning and execution.

The following steps need to be taken to encapsulate and migrate an existing VMware deployment.

1. Zone the back-end ports of EMC VPLEX to the front-end ports of the storage array currently providing the storage resources.

2. Change the LUN masking on the storage array so that EMC VPLEX has access to the devices that host the VMware datastores. In the example below, the devices 4EC (for Datastore_1) and 4F0 (for Datastore_2) have to be masked to EMC VPLEX. Figure 31 shows the devices that are visible to EMC VPLEX after the masking changes have been made and a rescan of the storage array has been performed on EMC VPLEX. The figure also shows the SYMCLI output of the Symmetrix VMAX devices and their corresponding WWNs. A quick comparison clearly shows that EMC VPLEX has access to the devices that host the datastores that need to be encapsulated.

Figure 31. Discovering devices to be encapsulated on EMC VPLEX

3. Once the devices are visible to EMC VPLEX, they have to be claimed. This step is shown in Figure 32. The -appc flag during the claiming process ensures that the content of the device being claimed is preserved, and that the device is encapsulated for further use within EMC VPLEX.

Figure 32. Encapsulating devices in EMC VPLEX while preserving existing data

4. After claiming the devices, create a single extent that spans the whole disk. Figure 33 shows this step for the two datastores being encapsulated in this example.

Figure 33. Creating extents on encapsulated storage volumes claimed by VPLEX
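Steps 3 and 4 can be sketched in the VPLEX CLI as follows. The VPD83T3 identifier and the volume name are placeholders for the actual storage volume backing Datastore_1, and exact option spelling can vary by GeoSynchrony release; --appc is the application-consistency flag discussed above, which preserves the existing VMFS contents.

```
# Claim the array device application-consistently so the VMFS data survives
# (<wwn-of-device-4EC> is a placeholder for the volume's VPD83 identifier).
VPlexcli:/> storage-volume claim --appc -d VPD83T3:<wwn-of-device-4EC>

# Create a single extent spanning the entire claimed storage volume.
VPlexcli:/> extent create -d <claimed-volume-name>
```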

5. Create a VPLEX device (local device) with a single RAID 1 member using the extent created in the previous step. This is shown for the two datastores, Datastore_1 and Datastore_2, hosted on devices 4EC and 4F0, respectively, in Figure 34. The step should be repeated for all of the storage array devices that need to be encapsulated and exposed to the VMware environment.

Figure 34. Creating a VPLEX RAID 1 protected device on encapsulated VMAX devices

6. Create a virtual volume on each VPLEX device created in the previous step. This is shown in Figure 35 for the VMware datastores Datastore_1 and Datastore_2.

Figure 35. Creating virtual volumes on VPLEX to expose to VMware vSphere

7. Create a storage view on EMC VPLEX by manually registering the WWNs of the HBAs on the VMware ESX hosts that are part of the VMware vSphere domain. The storage view should be created in advance to allow VMware vSphere access to the virtual volume(s) created in step 6; doing so minimizes the disruption to service during the switchover from the original storage array to EMC VPLEX. An example of this step for the environment used in this study is shown in Figure 36.

Figure 36. Creating a storage view to present encapsulated devices to VMware ESX hosts

8. In parallel to the operations conducted on EMC VPLEX, create new zones that allow the VMware ESX hosts involved in the migration access to the front-end ports of EMC VPLEX, and add these zones to the appropriate zone set. Furthermore, the zones that provide the VMware ESX hosts access to the storage array whose devices are being encapsulated should be removed from the zone set. However, the modified zone set should not be activated until the maintenance window in which the VMware virtual machines can be shut down. It is important to ensure that the encapsulated devices are presented to the ESX hosts only through the VPLEX front-end ports. The migration of the VMware environment to VPLEX can fail if devices are presented to the VMware ESX hosts from both VPLEX and the storage subsystem simultaneously. Furthermore, there is a potential for data corruption if the encapsulated devices are presented simultaneously from the storage array and the VPLEX system.

9. When the maintenance window opens, first shut down gracefully all of the virtual machines that would be impacted by the migration. This can be done either with the vSphere Client or with command line utilities that leverage the VMware SDK.

10. Activate the zone set that was created in step 8. A manual rescan of the SCSI bus on the VMware ESX hosts should remove the original devices and add the encapsulated devices presented from the VPLEX system.

11. The devices presented from the VPLEX system host the original datastores. However, the VMware ESX hosts do not automatically mount the datastores: VMware ESX considers each datastore a snapshot, since the WWN of the devices exposed through the VPLEX system differs from the WWN of the devices presented from the Symmetrix VMAX system.

12. Figure 37 shows an example of this for a VMware vSphere environment. All of the original virtual machines in the environment are now marked as inaccessible. This occurs because the datastores, Datastore_1 and Datastore_2, created on the devices presented from the VMAX system, are no longer available.

Figure 37. Rescanning the SCSI bus on the VMware ESX hosts

VMware vSphere allows access to datastores that are considered snapshots in two different ways: the snapshot can be either resignatured or persistently mounted. In VMware vSphere environments, the resignaturing process of datastores that are considered snapshots can be performed on a device-by-device basis. This reduces the risk of mistakenly resignaturing the devices encapsulated by the VPLEX system. Nevertheless, for a homogeneous vSphere environment (that is, all ESX hosts in the environment are at version 4.0 or later), EMC recommends the use of persistent mounts for VMware datastores that are encapsulated by VPLEX. The use of persistent mounts also provides other advantages, such as retaining the history of all of the virtual machines. The datastores on devices encapsulated by VPLEX can also be accessed by resignaturing them. However, this method adds unnecessary complexity to the recovery process and is not recommended; the procedure to recover a VMware vSphere environment using it is therefore not discussed in this document.
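On ESX 4.x hosts, the persistent mount can be sketched with the esxcfg-volume utility. The datastore label below mirrors this example; verify the syntax against the documentation for your ESX version:

```shell
# List volumes detected as snapshots, with their VMFS UUIDs and labels.
esxcfg-volume -l

# Persistently mount a snapshot volume by label (or UUID); the uppercase -M
# keeps the mount across reboots, preserving the original UUID and label.
esxcfg-volume -M Datastore_1
```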

A detailed discussion of the process to persistently mount datastores is beyond the scope of this white paper. Readers should consult the VMware document Fibre Channel SAN Configuration Guide, available from VMware. The results after the persistent mounting of the datastores presented from EMC VPLEX are shown in Figure 38. It can be seen that all of the virtual machines that were inaccessible are now available. The persistent mount of the datastores considered snapshots retains both the UUID of the datastore and its label. Since the virtual machines are cross-referenced using the UUID of the datastores, the persistent mount enables vCenter Server to rediscover the virtual machines that were previously considered inaccessible.

Figure 38. Persistently mounting datastores on encapsulated VPLEX devices

VMware deployments in a VPLEX Metro environment

EMC VPLEX breaks the physical barriers of data centers and allows users to access data at different geographical locations concurrently. In a VMware context, this enables functionality that was not available previously. Specifically, the ability to concurrently access the same set of devices independent of physical location enables geographically stretched clusters based on VMware vSphere.¹ This allows for transparent load sharing between multiple sites while providing the flexibility of migrating workloads between sites in anticipation of planned events such as hardware maintenance. Furthermore, in case of an unplanned event that causes disruption of services at one of the data centers, the failed services can be quickly and easily restarted at the surviving site with minimal effort. Nevertheless, the design of the VMware environment has to account for a number of potential failure scenarios and mitigate the risk of service disruption. The following paragraphs discuss the best practices for designing the VMware environment to ensure an optimal solution. For further information on VPLEX Metro configurations, readers should consult the TechBook EMC VPLEX Metro Witness Technology and High Availability, available on Powerlink.

¹ The solution requires extension of the VLAN across the physical data centers. Technologies such as Cisco's Overlay Transport Virtualization (OTV) can be leveraged to provide this service.

VPLEX witness

VPLEX uses rule sets to define how a site or link failure should be handled in a VPLEX Metro or VPLEX Geo configuration. If two clusters lose contact, the rule set defines which cluster continues operation and which suspends I/O. The rule set is applied on a device-by-device basis or for a consistency group. The use of rule sets to control which site is the winner, however, adds unnecessary complexity in the case of a site failure, since manual intervention may be necessary to resume I/O at the surviving site. VPLEX with GeoSynchrony 5.0 introduces a new concept to handle such an event: the VPLEX Witness. VPLEX Witness is a virtual machine that runs in an independent (third) fault domain. It provides the following features:

- Active/active use of both data centers
- High availability for applications (no single points of storage failure, auto-restart)
- Fully automatic failure handling
- Better resource utilization
- Lower capital expenditures and lower operational expenditures as a result

Typically, data centers implement highly available designs within a data center and deploy disaster recovery functionality between data centers. This is because within the data center, components operate in active/active (or active/passive with automatic failover) mode. Between data centers, however, legacy replication technologies use active/passive techniques and require manual failover to use the passive component.

When VPLEX Metro active/active replication technology is used in conjunction with VPLEX Witness, the lines between local high availability and long-distance disaster recovery become somewhat blurred, because high availability is stretched beyond the data center walls. A configuration that uses any combination of VPLEX Metro and VPLEX Witness is considered a VPLEX Metro HA configuration. The key to this environment is AccessAnywhere: it allows both clusters to provide coherent read/write access to the same virtual volume. That means that at the remote site, the paths are up and the storage is available even before any failover happens. When this is combined with host failover clustering technologies such as VMware HA, one gets fully automatic application restart for any site-level disaster. The system rides through component failures within a site, including the failure of an entire array. VMware ESX can be deployed at both VPLEX clusters in a Metro environment to create a high availability environment. Figure 39 shows the Metro HA configuration that will be used in this paper.

Figure 39. VMware Metro HA with VPLEX Witness

In this scenario, a virtual machine can write to the same distributed device from either cluster. In other words, if the customer is using VMware Distributed Resource Scheduler (DRS), which automatically distributes virtual machine load across multiple ESX servers, a virtual machine can be moved from an ESX server attached to Cluster-1 to an ESX server attached to Cluster-2 without losing access to the underlying storage. This configuration allows virtual machines to move between two geographically disparate locations with up to 5 ms of latency, the limit to which VMware VMotion is supported. In the event of a complete site failure, VPLEX Witness automatically signals the surviving cluster to resume I/O rather than following the rule set. VMware HA detects the failure of the virtual machines and restarts them automatically at the surviving site with no external intervention. It is important to note that a data unavailability event can occur when there is not a full site outage but there is a VPLEX outage at Cluster-1 and the virtual machine is currently running on an ESX server attached to Cluster-1.


More information

Availability Guide for Deploying SQL Server on VMware vsphere. August 2009

Availability Guide for Deploying SQL Server on VMware vsphere. August 2009 Availability Guide for Deploying SQL Server on VMware vsphere August 2009 Contents Introduction...1 SQL Server 2008 with vsphere and VMware HA/DRS...2 Log Shipping Availability Option...4 Database Mirroring...

More information

EMC AVAMAR INTEGRATION WITH EMC DATA DOMAIN SYSTEMS

EMC AVAMAR INTEGRATION WITH EMC DATA DOMAIN SYSTEMS EMC AVAMAR INTEGRATION WITH EMC DATA DOMAIN SYSTEMS A Detailed Review ABSTRACT This white paper highlights integration features implemented in EMC Avamar with EMC Data Domain deduplication storage systems

More information

GUIDE TO MULTISITE DISASTER RECOVERY FOR VMWARE VSPHERE ENABLED BY EMC SYMMETRIX VMAX, SRDF, AND VPLEX

GUIDE TO MULTISITE DISASTER RECOVERY FOR VMWARE VSPHERE ENABLED BY EMC SYMMETRIX VMAX, SRDF, AND VPLEX White Paper GUIDE TO MULTISITE DISASTER RECOVERY FOR VMWARE VSPHERE ENABLED BY EMC SYMMETRIX VMAX, SRDF, AND VPLEX A Detailed Review EMC GLOBAL SOLUTIONS Abstract This white paper offers guidelines for

More information

HBA Virtualization Technologies for Windows OS Environments

HBA Virtualization Technologies for Windows OS Environments HBA Virtualization Technologies for Windows OS Environments FC HBA Virtualization Keeping Pace with Virtualized Data Centers Executive Summary Today, Microsoft offers Virtual Server 2005 R2, a software

More information

Backup & Recovery for VMware Environments with Avamar 6.0

Backup & Recovery for VMware Environments with Avamar 6.0 White Paper Backup & Recovery for VMware Environments with Avamar 6.0 A Detailed Review Abstract With the ever increasing pace of virtual environments deployed in the enterprise cloud, the requirements

More information

Solution Brief Availability and Recovery Options: Microsoft Exchange Solutions on VMware

Solution Brief Availability and Recovery Options: Microsoft Exchange Solutions on VMware Introduction By leveraging the inherent benefits of a virtualization based platform, a Microsoft Exchange Server 2007 deployment on VMware Infrastructure 3 offers a variety of availability and recovery

More information

VBLOCK DATA PROTECTION: BEST PRACTICES FOR EMC VPLEX WITH VBLOCK SYSTEMS

VBLOCK DATA PROTECTION: BEST PRACTICES FOR EMC VPLEX WITH VBLOCK SYSTEMS Best Practices for EMC VPLEX with Vblock Systems Table of Contents www.vce.com 5 VBLOCK DATA PROTECTION: BEST PRACTICES FOR EMC VPLEX WITH VBLOCK SYSTEMS July 2012 1 Contents Introduction...4 Business

More information

Implementing Storage Concentrator FailOver Clusters

Implementing Storage Concentrator FailOver Clusters Implementing Concentrator FailOver Clusters Technical Brief All trademark names are the property of their respective companies. This publication contains opinions of StoneFly, Inc. which are subject to

More information

Implementing disaster recovery solutions with IBM Storwize V7000 and VMware Site Recovery Manager

Implementing disaster recovery solutions with IBM Storwize V7000 and VMware Site Recovery Manager Implementing disaster recovery solutions with IBM Storwize V7000 and VMware Site Recovery Manager A step-by-step guide IBM Systems and Technology Group ISV Enablement January 2011 Table of contents Abstract...

More information

SAN Implementation Course SANIW; 3 Days, Instructor-led

SAN Implementation Course SANIW; 3 Days, Instructor-led SAN Implementation Course SANIW; 3 Days, Instructor-led Course Description In this workshop course, you learn how to connect Windows, vsphere, and Linux hosts via Fibre Channel (FC) and iscsi protocols

More information

MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION

MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION Reference Architecture Guide MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION EMC VNX, EMC VMAX, EMC ViPR, and EMC VPLEX Microsoft Windows Hyper-V, Microsoft Windows Azure Pack, and Microsoft System

More information

Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V. Reference Architecture

Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V. Reference Architecture Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published February, 2011 EMC believes the information

More information

QuickStart Guide vcenter Server Heartbeat 5.5 Update 2

QuickStart Guide vcenter Server Heartbeat 5.5 Update 2 vcenter Server Heartbeat 5.5 Update 2 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent

More information

EMC Data Protection Advisor 6.0

EMC Data Protection Advisor 6.0 White Paper EMC Data Protection Advisor 6.0 Abstract EMC Data Protection Advisor provides a comprehensive set of features to reduce the complexity of managing data protection environments, improve compliance

More information

EMC Replication Manager for Virtualized Environments

EMC Replication Manager for Virtualized Environments EMC Replication Manager for Virtualized Environments A Detailed Review Abstract Today s IT organization is constantly looking for ways to increase the efficiency of valuable computing resources. Increased

More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

VMware Certified Professional 5 Data Center Virtualization (VCP5-DCV) Exam

VMware Certified Professional 5 Data Center Virtualization (VCP5-DCV) Exam Exam : VCP5-DCV Title : VMware Certified Professional 5 Data Center Virtualization (VCP5-DCV) Exam Version : DEMO 1 / 9 1.Click the Exhibit button. An administrator has deployed a new virtual machine on

More information

Dell PowerVault MD32xx Deployment Guide for VMware ESX4.1 Server

Dell PowerVault MD32xx Deployment Guide for VMware ESX4.1 Server Dell PowerVault MD32xx Deployment Guide for VMware ESX4.1 Server A Dell Technical White Paper PowerVault MD32xx Storage Array www.dell.com/md32xx THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND

More information

EMC VSPEX with EMC VPLEX for VMware vsphere 5.1

EMC VSPEX with EMC VPLEX for VMware vsphere 5.1 Design and Implementation Guide EMC VSPEX with EMC VPLEX for VMware vsphere 5.1 Abstract This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with EMC VPLEX

More information

Best Practices Planning

Best Practices Planning White Paper EMC POWERPATH/VE FOR VMWARE vsphere Abstract EMC PowerPath /VE is a path management solution for VMware and Microsoft Hyper-V servers. This paper focuses on PowerPath/VE for VMware vsphere.

More information

EMC VPLEX 5.0 ARCHITECTURE GUIDE

EMC VPLEX 5.0 ARCHITECTURE GUIDE White Paper EMC VPLEX 5.0 ARCHITECTURE GUIDE Abstract This white paper explains the hardware and software architecture of the EMC VPLEX series with EMC GeoSynchrony. This paper will be of particular interest

More information

VM Instant Access & EMC Avamar Plug-In for vsphere Web Client

VM Instant Access & EMC Avamar Plug-In for vsphere Web Client White Paper VM Instant Access & EMC Avamar Plug-In for vsphere Web Client Abstract With the ever increasing pace of virtual environments deployed in the enterprise cloud, the requirements for protecting

More information

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Applied Technology Abstract By migrating VMware virtual machines from one physical environment to another, VMware VMotion can

More information

Microsoft SharePoint 2010 on VMware Availability and Recovery Options. Microsoft SharePoint 2010 on VMware Availability and Recovery Options

Microsoft SharePoint 2010 on VMware Availability and Recovery Options. Microsoft SharePoint 2010 on VMware Availability and Recovery Options This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/download/patents.html. VMware

More information

Storage Pool Management Feature in EMC Virtual Storage Integrator

Storage Pool Management Feature in EMC Virtual Storage Integrator Storage Pool Management Feature in EMC Virtual Storage Integrator Version 4.0 Installation and Configuration of SPM Detailed Use Cases Customer Example Drew Tonnesen Lee McColgan Bill Stronge Copyright

More information

Configuration Maximums VMware vsphere 4.1

Configuration Maximums VMware vsphere 4.1 Topic Configuration s VMware vsphere 4.1 When you select and configure your virtual and physical equipment, you must stay at or below the maximums supported by vsphere 4.1. The limits presented in the

More information

AX4 5 Series Software Overview

AX4 5 Series Software Overview AX4 5 Series Software Overview March 6, 2008 This document presents an overview of all software you need to configure and monitor any AX4 5 series storage system running the Navisphere Express management

More information

VMware vsphere 5.0 Boot Camp

VMware vsphere 5.0 Boot Camp VMware vsphere 5.0 Boot Camp This powerful 5-day 10hr/day class is an intensive introduction to VMware vsphere 5.0 including VMware ESX 5.0 and vcenter. Assuming no prior virtualization experience, this

More information

How To Use A Virtualization Server With A Sony Memory On A Node On A Virtual Machine On A Microsoft Vpx Vx/Esxi On A Server On A Linux Vx-X86 On A Hyperconverged Powerpoint

How To Use A Virtualization Server With A Sony Memory On A Node On A Virtual Machine On A Microsoft Vpx Vx/Esxi On A Server On A Linux Vx-X86 On A Hyperconverged Powerpoint ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent

More information

OmniCube. SimpliVity OmniCube and Multi Federation ROBO Reference Architecture. White Paper. Authors: Bob Gropman

OmniCube. SimpliVity OmniCube and Multi Federation ROBO Reference Architecture. White Paper. Authors: Bob Gropman OmniCube SimpliVity OmniCube and Multi Federation ROBO Reference Architecture White Paper Authors: Bob Gropman Date: April 13, 2015 SimpliVity and OmniCube are trademarks of SimpliVity Corporation. All

More information

Esri ArcGIS Server 10 for VMware Infrastructure

Esri ArcGIS Server 10 for VMware Infrastructure Esri ArcGIS Server 10 for VMware Infrastructure October 2011 DEPLOYMENT AND TECHNICAL CONSIDERATIONS GUIDE Table of Contents Introduction... 3 Esri ArcGIS Server 10 Overview.... 3 VMware Infrastructure

More information

E-Series. NetApp E-Series Storage Systems Mirroring Feature Guide. NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S.

E-Series. NetApp E-Series Storage Systems Mirroring Feature Guide. NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. E-Series NetApp E-Series Storage Systems Mirroring Feature Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone: +1 (888)

More information

Configuration Maximums VMware vsphere 4.0

Configuration Maximums VMware vsphere 4.0 Topic Configuration s VMware vsphere 4.0 When you select and configure your virtual and physical equipment, you must stay at or below the maximums supported by vsphere 4.0. The limits presented in the

More information

EMC PowerPath Family

EMC PowerPath Family DATA SHEET EMC PowerPath Family PowerPath Multipathing PowerPath Migration Enabler PowerPath Encryption with RSA The enabler for EMC host-based solutions The Big Picture Intelligent high-performance path

More information

Module: Business Continuity

Module: Business Continuity Upon completion of this module, you should be able to: Describe business continuity and cloud service availability Describe fault tolerance mechanisms for cloud infrastructure Discuss data protection solutions

More information

Mastering Disaster Recovery: Business Continuity and Virtualization Best Practices W H I T E P A P E R

Mastering Disaster Recovery: Business Continuity and Virtualization Best Practices W H I T E P A P E R Mastering Disaster Recovery: Business Continuity and Virtualization Best Practices W H I T E P A P E R Table of Contents Introduction.......................................................... 3 Challenges

More information

N_Port ID Virtualization

N_Port ID Virtualization A Detailed Review Abstract This white paper provides a consolidated study on the (NPIV) feature and usage in different platforms and on NPIV integration with the EMC PowerPath on AIX platform. February

More information

VMware Best Practice and Integration Guide

VMware Best Practice and Integration Guide VMware Best Practice and Integration Guide Dot Hill Systems Introduction 1 INTRODUCTION Today s Data Centers are embracing Server Virtualization as a means to optimize hardware resources, energy resources,

More information

EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Celerra Unified Storage Platforms Using iscsi

EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Celerra Unified Storage Platforms Using iscsi EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Applied Technology Abstract Microsoft SQL Server includes a powerful capability to protect active databases by using either

More information

Veritas InfoScale Availability

Veritas InfoScale Availability Veritas InfoScale Availability Delivers high availability and disaster recovery for your critical applications Overview protects your most important applications from planned and unplanned downtime. InfoScale

More information

Windows Server 2008 Hyper-V Backup and Replication on EMC CLARiiON Storage. Applied Technology

Windows Server 2008 Hyper-V Backup and Replication on EMC CLARiiON Storage. Applied Technology Windows Server 2008 Hyper-V Backup and Replication on EMC CLARiiON Storage Applied Technology Abstract This white paper provides an overview of the technologies that are used to perform backup and replication

More information

MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX

MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX White Paper MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX Abstract This white paper highlights EMC s Hyper-V scalability test in which one of the largest Hyper-V environments in the world was created.

More information

Table of Contents. vsphere 4 Suite 24. Chapter Format and Conventions 10. Why You Need Virtualization 15 Types. Why vsphere. Onward, Through the Fog!

Table of Contents. vsphere 4 Suite 24. Chapter Format and Conventions 10. Why You Need Virtualization 15 Types. Why vsphere. Onward, Through the Fog! Table of Contents Introduction 1 About the VMware VCP Program 1 About the VCP Exam 2 Exam Topics 3 The Ideal VCP Candidate 7 How to Prepare for the Exam 9 How to Use This Book and CD 10 Chapter Format

More information

Downtime, whether planned or unplanned,

Downtime, whether planned or unplanned, Deploying Simple, Cost-Effective Disaster Recovery with Dell and VMware Because of their complexity and lack of standardization, traditional disaster recovery infrastructures often fail to meet enterprise

More information

QNAP in vsphere Environment

QNAP in vsphere Environment QNAP in vsphere Environment HOW TO USE QNAP NAS AS A VMWARE DATASTORE VIA ISCSI Copyright 2010. QNAP Systems, Inc. All Rights Reserved. V1.8 Document revision history: Date Version Changes Jan 2010 1.7

More information

Fibre Channel SAN Configuration Guide ESX Server 3.5, ESX Server 3i version 3.5 VirtualCenter 2.5

Fibre Channel SAN Configuration Guide ESX Server 3.5, ESX Server 3i version 3.5 VirtualCenter 2.5 Fibre Channel SAN Configuration Guide ESX Server 3.5, ESX Server 3i version 3.5 VirtualCenter 2.5 This document supports the version of each product listed and supports all subsequent versions until the

More information

EMC Business Continuity for VMware View Enabled by EMC SRDF/S and VMware vcenter Site Recovery Manager

EMC Business Continuity for VMware View Enabled by EMC SRDF/S and VMware vcenter Site Recovery Manager EMC Business Continuity for VMware View Enabled by EMC SRDF/S and VMware vcenter Site Recovery Manager A Detailed Review Abstract This white paper demonstrates that business continuity can be enhanced

More information

Drobo How-To Guide. Deploy Drobo iscsi Storage with VMware vsphere Virtualization

Drobo How-To Guide. Deploy Drobo iscsi Storage with VMware vsphere Virtualization The Drobo family of iscsi storage arrays allows organizations to effectively leverage the capabilities of a VMware infrastructure, including vmotion, Storage vmotion, Distributed Resource Scheduling (DRS),

More information

VMware vsphere on NetApp. Course: 5 Day Hands-On Lab & Lecture Course. Duration: Price: $ 4,500.00. Description:

VMware vsphere on NetApp. Course: 5 Day Hands-On Lab & Lecture Course. Duration: Price: $ 4,500.00. Description: Course: VMware vsphere on NetApp Duration: 5 Day Hands-On Lab & Lecture Course Price: $ 4,500.00 Description: Managing a vsphere storage virtualization environment requires knowledge of the features that

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information

Optimized Storage Solution for Enterprise Scale Hyper-V Deployments

Optimized Storage Solution for Enterprise Scale Hyper-V Deployments Optimized Storage Solution for Enterprise Scale Hyper-V Deployments End-to-End Storage Solution Enabled by Sanbolic Melio FS and LaScala Software and EMC SAN Solutions Proof of Concept Published: March

More information

Using VMware ESX Server With Hitachi Data Systems NSC or USP Storage ESX Server 3.0.2

Using VMware ESX Server With Hitachi Data Systems NSC or USP Storage ESX Server 3.0.2 Technical Note Using VMware ESX Server With Hitachi Data Systems NSC or USP Storage ESX Server 3.0.2 This technical note discusses using ESX Server hosts with a Hitachi Data Systems (HDS) NSC or USP SAN

More information

Virtualizing Microsoft Exchange Server 2010 with NetApp and VMware

Virtualizing Microsoft Exchange Server 2010 with NetApp and VMware Virtualizing Microsoft Exchange Server 2010 with NetApp and VMware Deploying Microsoft Exchange Server 2010 in a virtualized environment that leverages VMware virtualization and NetApp unified storage

More information

vsphere Networking ESXi 5.0 vcenter Server 5.0 EN-000599-01

vsphere Networking ESXi 5.0 vcenter Server 5.0 EN-000599-01 ESXi 5.0 vcenter Server 5.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions

More information

EMC Virtual Infrastructure for Microsoft SQL Server

EMC Virtual Infrastructure for Microsoft SQL Server Microsoft SQL Server Enabled by EMC Celerra and Microsoft Hyper-V Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication is accurate

More information

EMC Business Continuity for Microsoft SQL Server 2008

EMC Business Continuity for Microsoft SQL Server 2008 EMC Business Continuity for Microsoft SQL Server 2008 Enabled by EMC Celerra Fibre Channel, EMC MirrorView, VMware Site Recovery Manager, and VMware vsphere 4 Reference Architecture Copyright 2009, 2010

More information