Implementing Virtual Provisioning on EMC CLARiiON and Celerra with VMware Infrastructure


Applied Technology

Abstract
This white paper provides a detailed description of the technical aspects and benefits of deploying VMware Infrastructure version 3.5 on EMC Celerra and CLARiiON devices using Virtual Provisioning.

February 2009

Copyright 2009 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part Number h6131

Table of Contents

Executive summary
Introduction
  Audience
  Terminology
Overview
  CLARiiON Virtual Provisioning
    Requirements
  Celerra Virtual Provisioning
    Celerra NFS Virtual Provisioning
    Celerra iSCSI Virtual Provisioning
    Requirements
Considerations for VMware Infrastructure with Virtual Provisioning
  VMFS, NFS, and RDM considerations with Virtual Provisioning
    Device visibility and access
    VMware File System datastore on thin devices
    NFS datastore on virtually provisioned file systems
    Raw device mapping volumes on thin devices
  Virtual machine considerations on thin devices
    Creating VMware virtual machines on thin devices
    Impact of guest operating system activities on thin pool utilization
    Exhaustion of oversubscribed datastores
    Nondisruptive expansion of virtual disk on thin devices
    Expansion of a virtual datastore on thin devices
  vCenter Server considerations with Virtual Provisioning
    Cloning virtual machines using VMware Infrastructure Client
    Cloning virtual machines using VMware vCenter Converter
    VMware VMotion, DRS, HA, and Virtual Provisioning
    Cold migration and Virtual Provisioning
    Hot migration using Storage VMotion and Virtual Provisioning
  Considerations for storage-based features on thin devices
    CLARiiON
    Celerra
Performance considerations
  CLARiiON
  Celerra
CLARiiON Virtual Provisioning management
  Thin pool management
  Thin pool monitoring
Celerra Virtual Provisioning management
  Thin file system and storage pool management
  Thin file system and storage pool monitoring
Exhaustion of oversubscribed pools
Conclusion
References

Executive summary

EMC CLARiiON and Celerra array technologies continue to evolve to deliver enhanced product capabilities that improve effective storage capacity utilization, optimize performance, increase protection and security, broaden interoperability support, and improve ease of use. The Virtual Provisioning feature in these products is aimed at improving storage utilization and optimizing performance delivery. Virtual Provisioning, generally known in the industry as thin provisioning, enables organizations to improve ease of use and increase capacity utilization for certain applications and workloads. The implementation of Virtual Provisioning for CLARiiON and Celerra storage arrays directly addresses improvements in storage infrastructure utilization, as well as associated operational requirements and efficiencies.

One of the biggest challenges for storage administrators is provisioning storage for new applications. Administrators typically allocate space based on anticipated future growth of applications. This is done to reduce future operational tasks, such as incrementally increasing storage allocations or adding discrete blocks of storage as existing space is consumed. This approach results in overprovisioning (allocating more physical storage than an application will need for a long time) and incurring a higher cost than is necessary. Overprovisioning also leads to increased power, cooling, and floor space requirements. Even with the most careful planning, it may be necessary to provision additional storage in the future, which could potentially require an application outage.

A second layer of storage overprovisioning occurs when a server and application administrator overallocate storage for their environment. The operating system sees the space as completely allocated, although only a fraction of the allocated space is actually used. For example, a file system that is created on a 100 GB LUN can hold a number of different files whose total size cannot exceed 100 GB. But when the file system is initially created and two user files of 10 GB each are created in it, effectively only 20 GB of the 100 GB of allocated storage is in use. The remaining 80 GB of unused space cannot be reallocated to a different application, since the file system logically owns the entire 100 GB.

EMC Virtual Provisioning addresses both of these issues. It allows more storage to be presented to an application than is physically available. More importantly, Virtual Provisioning allocates physical storage only when the storage is actually written to. This allows more flexibility in predicting future growth, reduces the initial cost of provisioning storage to an application, eliminates the waste that occurs as a result of overprovisioning, and eliminates the need to use further resources for subsequent storage allocations.

Consolidating and optimizing IT resource utilization to manage and reduce IT cost has become a key focus for many enterprises that invest in and rely heavily on technologies supporting their business goals. Virtualization technologies, such as VMware for server virtualization, have been gaining rapid adoption. Storage virtualization technology, such as the Virtual Provisioning feature offered in EMC storage systems, complements VMware technologies to deliver the full cost benefits of an effective IT virtualization implementation.
Introduction

This white paper addresses the considerations for deploying VMware Infrastructure version 3.x environments on thinly provisioned devices. An understanding of the principles presented here will allow the reader to deploy VMware Infrastructure environments with Virtual Provisioning in the most effective manner.

Audience

This white paper is intended for storage architects, and server and VMware administrators responsible for deploying VMware Infrastructure on the CLARiiON CX4 family using FLARE release 28 and Celerra using the DART operating environment release 5.5 (or later) with Virtual Provisioning.

Terminology

Table 1. Protocol specification terms

Fibre Channel (FC): A high-speed networking protocol primarily used in storage area networks. The word Fibre is used as a generic term that can indicate copper or optical implementations of Fibre Channel products.
Internet Small Computer System Interface (iSCSI): A protocol that enables transport of block data over IP networks and transfers data by carrying SCSI commands over IP networks.
Network File System (NFS): A distributed file system that allows systems on the network to share remote file systems by allowing the systems to share a single copy of a directory.

Table 2. Basic CLARiiON array and Virtual Provisioning terms

CLARiiON LUN: Logical subdivisions of RAID groups in a CLARiiON storage system.
LUN Migration: A CLARiiON feature that dynamically migrates data to another LUN or metaLUN without disrupting running applications.
MetaLUNs: A collection of traditional LUNs that are striped or concatenated together and presented to a host as a single LUN. Additional LUNs can be added dynamically, allowing metaLUNs to be expanded on the fly.
MirrorView: Software designed for disaster recovery solutions by mirroring local production data to a remote disaster recovery site. It offers two products: MirrorView/Synchronous and MirrorView/Asynchronous.
RAID groups: One or more disks grouped together under a unique identifier in a CLARiiON storage system.
Storage pool: A general term used to describe RAID groups and thin pools. In the Navisphere Manager GUI, the storage pool node contains RAID groups and thin pool nodes.
SAN Copy: Data mobility software that runs on the CLARiiON.
SnapView: Software used to create replicas of a source LUN. These point-in-time replicas can be pointer-based snapshots or full binary copies called clones or BCVs.
Thin LUN: A logical unit of storage where physical space allocated on the storage system may be less than the user capacity seen by the host server.
Thin pool: A group of disk drives used specifically by thin LUNs. There may be zero or more thin pools on a system. Disks may be a member of no more than one thin pool. Disks that are in a thin pool cannot also be in a RAID group.

Table 3. Basic Celerra array and Virtual Provisioning terms

Automatic File System Extension: Configurable Celerra file system feature that automatically extends a file system created or extended with AVM when the high water mark (HWM) is reached.
Automatic Volume Management (AVM): Feature of the Celerra Network Server that creates and manages volumes automatically. AVM organizes volumes into storage pools that can be allocated to file systems.

Celerra Logical Unit Number (LUN): Identifying number of a SCSI or iSCSI object in Celerra that processes SCSI commands. The LUN is the last part of the SCSI address for a SCSI object. (The LUN is actually an ID for the logical unit, but the term is often used to refer to the logical unit itself.)
Celerra Replicator: A Celerra service that produces a read-only, point-in-time copy of a source file system or iSCSI LUN. This point-in-time copy can be either on the same Celerra system (local replication) or on another Celerra system (remote replication). The service periodically updates the copy, making it consistent with the source file system or iSCSI LUN.
Disk Volume: A physical storage unit as exported from the storage array. All other volume types are created from disk volumes.
File system: Method of cataloging and managing the files and directories on a storage system.
High water mark (HWM): Trigger point at which the Celerra Network Server performs one or more actions, such as extending a file system, as directed by the related feature's software/parameter settings.
iSCSI snapshot: A point-in-time copy of an iSCSI LUN.
Persistent Block Reservation (PBR): Technique of reserving an adequate number of blocks in a file system to support the creation of a logical unit of a specified size. The blocks are reserved for the logical unit whether or not they are in use.
Regular iSCSI LUN: iSCSI LUN that uses Persistent Block Reservation (PBR) to ensure that the file system has sufficient space for all data that might be written to the LUN.
Snapshot Logical Unit (SLU): An iSCSI snapshot promoted to logical unit status and configurable as a disk device through an iSCSI initiator.
SnapSure: On a Celerra system, a feature that provides read-only point-in-time copies, also known as checkpoints, of a file system.
Storage pool: Automatic Volume Management (AVM), a Celerra feature, organizes available disk volumes into groupings called storage pools. Storage pools are used to allocate available storage to Celerra file systems. Storage pools can be created automatically by AVM or manually by the user.

Table 4. Related VMware Infrastructure terms

Cluster: A collection of ESX hosts and associated virtual machines that share resources and a management interface.
Cold migration: The ability to migrate a virtual machine from one physical ESX host to another, and/or from one storage system to another, with application service interruption.
Data center: The primary organizational structure used in vCenter Server, which contains hosts and virtual machines.
Datastore: A special logical container, analogous to a file system, that hides the specifics of each storage device and provides a uniform model for storing virtual machine files. Depending on the type of storage used, ESX datastores can have a VMFS or NFS file system format.
ESX (formerly named ESX Server): VMware's high-end server product that installs directly on the physical hardware and therefore offers the best performance.
Guest operating system: An operating system that runs on a virtual machine.
Network File System (NFS): File system on a NAS storage device. ESX 3.x supports NFS version 3 over TCP/IP. ESX can access a designated NFS volume located on an NFS server. ESX mounts the NFS volume and uses it for its storage needs. NFS is one of the two datastore formats that are available with ESX (the other is VMFS).

Raw device mapping (RDM): A raw device mapping volume consists of a pointer in a .vmdk file and a physical raw device. The pointer in the .vmdk file points to the physical raw device. The .vmdk file resides on a VMFS volume, which must reside on shared storage.
Storage VMotion: Technology that allows users to migrate a virtual machine from one storage system to another while the virtual machine is up and running.
Templates: A way to import virtual machines and store them as templates that can be deployed at a later time to create new virtual machines.
VMware Infrastructure (VI) Client: A VMware application that provides an interface for data center management and virtual machine access. VI Client is one of the available methods for accessing a VMware virtual data center.
Virtual disks: Disks presented to a virtual machine from a VMFS volume.
Virtual machine: A virtualized x86 PC environment on which a guest operating system and associated application software can run. Multiple virtual machines can operate on the same physical machine concurrently.
vCenter Server (formerly named VirtualCenter): A VMware Infrastructure management product that manages and provides valuable services for virtual machines and underlying virtualization platforms from a central, secure location.
VMotion: VMotion technology provides the ability to migrate a running virtual machine from one physical ESX host to another without application service interruption.
VMware File System (VMFS): A clustered file system that stores virtual disks and other files that are used by virtual machines. VMFS is one of the two datastore formats that are available with ESX (the other is NFS).

Overview

This section gives an overview of CLARiiON and Celerra Virtual Provisioning and outlines the requirements for enabling the Virtual Provisioning feature of CLARiiON and Celerra storage systems.

CLARiiON Virtual Provisioning

CLARiiON thin LUNs are logical LUNs that can be used in many of the same ways that traditional CLARiiON LUNs are used. Unlike traditional CLARiiON LUNs, thin LUNs do not need to have physical storage completely allocated at the time the LUN is created and presented to a host. A thin LUN is not usable until it has been bound to a shared storage pool known as a thin pool. Multiple thin LUNs may be bound to any given thin pool. The thin pool is comprised of disks that provide the actual physical storage to support the thin LUN allocations.

When a thin LUN is created, 2 GB of physical storage is mapped from the thin pool to the LUN. When a write to a thin LUN requires more storage than the 2 GB allocated up front, the CLARiiON's mapping service allocates more storage to the thin LUN from the thin pool; it allocates the amount of storage needed for the write in 8 KB chunks (extents) that are optimally packed. This approach reduces the amount of storage that is actually consumed. When a read is performed on a thin LUN, the data is retrieved from the appropriate disk in the thin pool with which the thin LUN is associated. If for some reason a read is performed against an unallocated portion of the thin device, zeroes are returned to the reading process. When more physical storage is required to service existing or future thin devices, the thin pool can be expanded by adding disks to existing thin pools dynamically (without a system outage).
A thin pool can be expanded when it is approaching full storage allocation, and new thin LUNs can be created and associated with existing thin pools. The following figure depicts the relationships between thin LUNs and their associated thin pools. There are six LUNs associated with Thin Pool A and three LUNs associated with Thin Pool B.

Figure 1. Thin LUNs and thin pools containing disk drives

Requirements

CX4 models running FLARE release 28.5 support the Virtual Provisioning feature. The CLARiiON Virtual Provisioning Enabler must be purchased and installed on the storage system to create and manage thin pools and thin LUNs. Please check the latest FLARE 28.5 release notes available on Powerlink for additional information.

Celerra Virtual Provisioning

Celerra Virtual Provisioning technology is available for NFS and CIFS file systems, and for iSCSI LUNs. Celerra Virtual Provisioning allows you to allocate storage based on actual usage rather than on long-term usage projections. Although it appears to users that the maximum amount of storage has been allocated to them, in reality a much smaller amount of storage has been allocated, and more storage is allocated when they actually need it.

Celerra NFS Virtual Provisioning

Virtual Provisioning is a feature that can be enabled on a Celerra NFS file system; this feature must be used with the Automatic File System Extension feature. When configuring these features, the user must select values for the maximum size parameter and the high water mark (HWM) parameter. The Celerra Control Station extends the file system when needed, depending on the values of these parameters. Automatic File System Extension guarantees that the file system usage (measured by the ratio of used space to allocated space) will always be at least 3 percent below the HWM. With Automatic File System Extension, when the file system usage reaches the HWM, an automatic extension event notification is sent to the Celerra sys_log and the file system is automatically extended. If Virtual Provisioning is enabled, the maximum size (rather than the amount of storage actually allocated) is presented to the NFS, CIFS, or FTP clients.

If there is not enough free storage space to extend the file system to the requested size, Automatic File System Extension extends the file system to use all of the available storage. (For example, if Automatic File System Extension requires 6 GB but only 3 GB is available, the file system automatically extends by 3 GB.) In this case, an error message appears indicating there was not enough storage space available to perform automatic extension. When there is no available storage, Automatic File System Extension fails. If this happens, the file system must be manually extended.

Celerra iSCSI Virtual Provisioning

A Celerra iSCSI LUN is created within a standard Celerra file system, and emulates a SCSI disk device by using a dedicated file called a file storage object. The file storage object provides the physical storage space for data stored on the iSCSI LUN. By default, an iSCSI LUN is created using the Persistent Block Reservation (PBR) storage method (also called a regular iSCSI LUN). With PBR, the entire requested disk size is reserved for the LUN although it is not taken from the reservation pool. However, when a virtually provisioned iSCSI LUN is created (using the Virtual Provisioning storage method), space is not reserved on the disk for the LUN. Additional space is allocated to the LUN only when it is actually required by the user. Therefore, it is important to ensure that file system space is available before data is added to the LUN. For this reason, Automatic File System Extension must be enabled on the file system on which the virtually provisioned LUN is created. EMC also recommends that you enable Virtual Provisioning on this file system to optimize storage utilization.

When using a high water mark (HWM) for Automatic File System Extension, the number of blocks that have been used determines when the HWM is reached. File system extension can occur even when usage of the production LUN appears to be low from the host's perspective. Snapshots, for example, consume additional file system space. In addition, deleting data from a LUN does not reduce the number of blocks allocated to the LUN. By default, a snapshot LUN (SLU) of a virtually provisioned iSCSI LUN is virtually provisioned, and an SLU of a regular iSCSI LUN is fully provisioned. For an SLU of a regular iSCSI LUN to be virtually provisioned, the sparsetws parameter on the Celerra Data Mover must be adjusted.

Requirements

Celerra Virtual Provisioning is supported by all Celerra models with release 5.5 (or later). For NFS, Virtual Provisioning must be used with Automatic File System Extension; the file system should be created or extended using an AVM storage pool; and the Control Station must be running and operating properly for Automatic File System Extension to work correctly. Further information on using Virtual Provisioning with Celerra file systems can be found in the Managing EMC Celerra Volumes and File Systems with Automatic Volume Management technical module available on Powerlink.

A virtually provisioned iSCSI LUN can be created only on a file system that has automatic extension enabled. Further information on using Virtual Provisioning with Celerra iSCSI LUNs can be found in the Configuring iSCSI Targets on EMC Celerra technical module available on Powerlink.
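To make this concrete, the following Control Station command sketch shows one way such a virtually provisioned file system might be created from an AVM storage pool. The file system name, pool name, sizes, HWM value, and export options are placeholders, and the exact option syntax should be verified against the Celerra man pages for your DART release.

    # Create a 10 GB file system from an AVM pool, enable Automatic File System
    # Extension and Virtual Provisioning, and present a 500 GB maximum size.
    nas_fs -name fs_thin -create size=10G pool=clar_r5_performance \
           -auto_extend yes -vp yes -max_size 500G -hwm 90%

    # Mount the file system on a Data Mover and export it so that ESX can use
    # it as an NFS datastore (the ESX VMkernel IP address is a placeholder).
    server_mountpoint server_2 -create /fs_thin
    server_mount server_2 fs_thin /fs_thin
    server_export server_2 -Protocol nfs -option root=<esx_vmkernel_ip>,access=<esx_vmkernel_ip> /fs_thin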

Considerations for VMware Infrastructure with Virtual Provisioning

This section discusses things to consider when deploying a VMware Infrastructure using Virtual Provisioning. In this configuration, the behavior of VMware Infrastructure features depends on the feature itself, how it is used, and the guest operating system running in the VMware Infrastructure environment. This section also discusses how the various VMware Infrastructure features interact with virtually provisioned devices.

VMFS, NFS, and RDM considerations with Virtual Provisioning

Device visibility and access

CLARiiON thin LUNs appear like any other SCSI attached device to the VMware Infrastructure. An example of this is shown in Figure 2. The highlighted devices (vmhba0:0:5 and vmhba0:0:6) are thin LUNs that have been presented from a CLARiiON storage system. A thin LUN can be used to create a VMware File System, or be assigned exclusively to a virtual machine as a raw device mapping (RDM). Similarly, virtually provisioned Celerra iSCSI LUNs are discovered by the iSCSI software adapter (or by an iSCSI HBA connected to the ESX host).

Figure 2. CLARiiON thin LUNs viewed in VMware Infrastructure Client

Virtually provisioned network file systems appear like any other network file system to the VMware Infrastructure. An example of this is shown in Figure 3. The export /fs_thin is a virtually provisioned Celerra file system on a Data Mover that is reached through its IP address. The virtually provisioned file system can be used to create an NFS datastore (such as fs_thin in this example).
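As a hedged host-side illustration (the adapter, names, and Data Mover address are placeholders; confirm the esxcfg options against the ESX 3.5 documentation), newly presented thin devices and thin NFS exports could be brought into ESX from the service console as follows:

    # Rescan a storage adapter so that newly presented CLARiiON thin LUNs or
    # Celerra iSCSI LUNs are discovered (vmhba0 is a placeholder).
    esxcfg-rescan vmhba0

    # Mount a virtually provisioned Celerra file system as an NFS datastore
    # and list the NFS datastores known to this host.
    esxcfg-nas -a fs_thin -o <datamover_ip> -s /fs_thin
    esxcfg-nas -l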

Figure 3. Virtually provisioned file system viewed in VMware Infrastructure Client

VMware File System datastore on thin devices

This section outlines the impact of creating a VMFS datastore on a CLARiiON thin LUN or a Celerra virtually provisioned iSCSI LUN.

VMware File System datastore creation and formatting

The VMware File System (VMFS) has interesting and valuable characteristics when used in a virtually provisioned environment. In Figure 4, a VMFS is created on a 500 GB CLARiiON thin LUN (device vmhba0:0:5). The amount of storage required to store the VMFS metadata is a function of the size of the thin LUN or device. The metadata for the VMFS on the thin LUN vmhba0:0:5 consumes 563 MB of storage. This is shown in Figures 4 and 5. Figure 6 shows how the CLARiiON operating system responds to the write activity generated by formatting the VMFS. In this case, since a capacity of 2 GB is already allocated during thin LUN creation in Navisphere, creating the 563 MB VMFS does not affect the consumed size of the thin LUN. Therefore, the VMFS does not write all of its metadata to disk when it is created. The VMFS formats and uses the reserved area for metadata as requirements arise. This also applies to a virtually provisioned Celerra iSCSI LUN.

Figure 4. VMware File System creation on a CLARiiON thin LUN

Figure 5. Metadata area reserved on the VMware File System

Figure 6. Consumed capacity from a thin pool in response to VMware File System format activity in Navisphere

NFS datastore on virtually provisioned file systems

This section outlines the impact of creating an NFS datastore on a virtually provisioned Celerra file system.

NFS datastore creation and formatting

With Celerra Virtual Provisioning, it is possible to set an initial allocation for the file system. Once created, the NFS datastore is presented to ESX with the maximum size (which the user specifies in the maximum size parameter) of the virtually provisioned file system rather than its actual allocated size. This file system, along with the NFS datastore that was created in it, is automatically extended as the file system usage grows. This extension is performed according to the settings of the Automatic File System Extension feature.

Unlike VMFS, an NFS datastore is managed by the NAS storage system rather than by ESX itself. Therefore, in this case, no datastore metadata is written by ESX. The Celerra file system metadata is stored on the Celerra file system, and its size is a function of the size of the file system. Unlike the VMFS metadata, the file system metadata is written by Celerra and is not virtually provisioned. The capacity consumed by the file system metadata is not presented to ESX. As shown in Figure 7, an NFS datastore fs_thin was created on the virtually provisioned Celerra file system /fs_thin. ESX is presented with only the maximum size of the file system (close to 500 GB). ESX is unaware that only 10 GB was allocated to the file system in Celerra. Upon its creation, the NFS datastore consumes 592 KB (NFS datastore used space).

Figure 7. NFS datastore creation on a virtually provisioned Celerra file system

As shown in Figure 8, the file system is virtually provisioned with a maximum size of 500 GB and an initial size of 10 GB. As seen in this figure, the Celerra file system metadata, which is 156 MB in size, is not presented to ESX. Due to this file system metadata, ESX is presented with a datastore slightly smaller than 500 GB.

Figure 8. Consumed capacity from a virtually provisioned Celerra file system following an NFS datastore creation

Raw device mapping volumes on thin devices

This section outlines the impact of creating an RDM volume on a CLARiiON thin LUN or a virtually provisioned iSCSI LUN. The creation of an RDM does not have any impact on the thin LUN, since the RDM volume does not format the thin LUN at the ESX level. Therefore, when an RDM volume (with physical or virtual compatibility) is presented to a virtual machine, and I/O is generated by the guest operating system running in the virtual machine, the VMware kernel does not play a direct role in transferring this I/O. In this configuration, the considerations for using thin LUNs are the same as the considerations for physical servers running the same operating system and applications.

Virtual machine considerations on thin devices

Creating VMware virtual machines on thin devices

The same wizard (the New Virtual Machine wizard provided by the VMware Infrastructure Client) is used to configure and manage VMware datastores on thin devices that is used for other datastores. To the wizard, these datastores are all the same. This is true for both VMFS and NFS datastores.

Creating virtual machines on a VMFS datastore

Figure 9 shows the final step (in the New Virtual Machine wizard) to create a virtual machine with a 16 GB virtual disk on a VMFS datastore hosted on a thin device. (In this example the thin device is a virtually provisioned Celerra iSCSI LUN.) When Finish is clicked, the VMware Infrastructure Client performs a number of actions, including creating the virtual disk to support the virtual machine. Figure 10 shows the storage utilization of the VMFS, and the thin pool supporting the datastore, after the New Virtual Machine wizard is finished. The figure shows that a small amount of storage is initialized when a virtual machine is created. However, as shown in Figure 11, the VMware kernel reserves the storage requirement for the virtual machine on the VMFS for future use.

Figure 9. Using the VMware Infrastructure Client to create a new virtual machine

Figure 10. Thin pool utilization when creating a new virtual machine

Figure 11. VMware datastore utilization on creation of a new virtual machine

Virtual Provisioning works particularly well in the VMware Infrastructure, due to the default allocation mechanism provided by the VMware kernel API for creating new virtual disks, zeroedthick. In this allocation mechanism, the storage required for the virtual disks is reserved in the datastore, but the VMware kernel does not initialize all the blocks. The blocks are initialized by the guest operating system as writes are made to uninitialized blocks [1]. Therefore, capacity from the virtually provisioned device is allocated to the virtual disk only when it is needed. The VMware kernel provides a number of allocation mechanisms for creating virtual disks, in addition to zeroedthick. All the VMware kernel allocation mechanisms are listed in Table 5.

Table 5. Allocation policies when creating new virtual disks on a VMware datastore

zeroedthick: All space is allocated at creation but it is not initialized with zeroes. However, the allocated space is wiped clean of any previous contents. This is the default policy when creating new virtual disks.
eagerzeroedthick: This allocation mechanism allocates all of the space and initializes all of the blocks with zeroes. It performs a write to every block of the virtual disk, and hence results in equivalent storage use in the thin pool.
thick: A thick disk has all the space allocated at creation time. If the guest operating system performs a read from a block before writing to it, the VMware kernel may return stale data. EMC recommends not using this format.
thin: This allocation mechanism does not reserve any space on the VMFS when it creates a virtual disk. The space is allocated and zeroed on demand. This is the default allocation scheme when using the NFS protocol.
rdm: The virtual disk created in this mechanism is a mapping file that contains pointers to the blocks of the SCSI disk it is mapping. However, the SCSI INQ information of the physical media is virtualized. This format is commonly known as the virtual compatibility mode of raw device mapping.
rdmp: This format is similar to the rdm format. However, the SCSI INQ information of the physical media is not virtualized. This format is commonly known as pass-through raw device mapping.
raw: This mechanism can be used to address all SCSI devices supported by the kernel except for SCSI disks.
2gbsparse: The virtual disk created using this format is broken into multiple sparsely allocated extents (if needed), with each extent no more than 2 GB in size.

It can be seen from Table 5 that the eagerzeroedthick format is not ideal for use with virtually provisioned devices. Also, while the thin allocation policy appears ideal for virtually provisioned devices, it is not recommended when using a thin LUN. This is because the risk of exceeding the thin pool capacity is much higher when virtual disks are allocated using this policy, since the oversubscription of physical storage occurs at two independent layers that currently do not communicate with each other. It is important to note that the current version of VMware Infrastructure does not offer an option to select a different allocation format when creating virtual disks for virtual machines [2]. Creating virtual disks using the thin allocation mechanism requires the use of the CLI utility (vmkfstools) on the service console, or the remote CLI for ESXi. The use of the command line should be avoided unless there is no other alternative.
[1] The VMFS returns zeroes to the guest operating system if it attempts to read blocks of data that it has not previously written to. This is true even in cases where information from a previous allocation is available; the VMFS does not present stale data to the guest operating system when the virtual disk is created using the zeroedthick format.
[2] The New Virtual Machine wizard offers the capability of creating virtual or physical compatibility raw device mappings.
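For the cases where the thin format is nevertheless required on a VMFS datastore, a minimal sketch of the vmkfstools invocation is shown below; the datastore path, folder, and size are hypothetical, and the available -d options should be confirmed for your ESX 3.5 build.

    # Create a 16 GB virtual disk using the thin allocation mechanism
    # on a VMFS datastore (path and size are placeholders).
    vmkfstools -c 16g -d thin /vmfs/volumes/VP_VM_DS/VM_thin/VM_thin.vmdk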

Creating virtual machines on an NFS datastore

With an NFS datastore, on the other hand, a different behavior is observed, largely due to the inherent nature of the NFS protocol and its difference from block-level protocols such as Fibre Channel or iSCSI. To illustrate the NFS behavior, the New Virtual Machine wizard was used to create a virtual machine with a 16 GB virtual disk on an NFS datastore hosted on a virtually provisioned Celerra file system. Following the execution of this wizard, the VMware Infrastructure Client performs a number of actions, including creation of the virtual disk that is required to support the virtual machine.

Figure 12 shows the storage utilization of the NFS datastore and the virtually provisioned file system supporting the datastore after the New Virtual Machine wizard has completed. The figure clearly shows that only 0.2 MB of storage was initialized when a new virtual machine was created. As shown in Figure 13, only a small amount of storage was reserved for the new virtual machine on the NFS datastore, reflecting the little that was actually written to the datastore when the virtual machine was created (the virtual machine configuration files). This is unlike VMFS, in which the entire storage requirement for the new virtual machine was reserved when it was created (as shown in Figure 11). Figure 14 provides further insight into the structure of the virtual disk. Although the encapsulated file of the virtual disk, VM_thin_nfs-flat.vmdk, is listed as 16 GB in size, it actually consumes only 152 KB on the file system.

This behavior of the VMware Infrastructure with a virtually provisioned NFS file system is a result of the NFS protocol, not of Virtual Provisioning. With NFS, storage for a virtual machine is not reserved in advance; it is reserved when data is actually written to the virtual machine. This is because the NFS protocol is thinly provisioned by default. Data blocks in the file system are allocated to the NFS client (ESX in this case) only when they are needed. For this reason, unlike the VMFS datastore, the NFS datastore usage information in the VI Client matches the usage of the corresponding Celerra file system (as shown in Figures 12 and 13).

Figure 12. File system utilization when creating a new virtual machine over NFS

Figure 13. NFS datastore utilization on the creation of a new virtual machine

Figure 14. Structure of the virtual disk that was created for the new virtual machine on the NFS datastore

It is important to note that the current version of VMware Infrastructure does not offer an option to select a different allocation format when creating virtual disks for virtual machines [3]. Creating virtual disks using the thin allocation mechanism requires the use of the CLI utility (vmkfstools) on the service console, or the remote CLI for ESXi. For NFS, this is not required, as the protocol is already thin by default.

[3] The New Virtual Machine wizard offers the capability of creating virtual or physical compatibility raw device mappings.
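This thin-by-default behavior can also be observed from the ESX service console by comparing the apparent size of the flat .vmdk with the blocks it actually consumes on the NFS datastore (the datastore and file names below are placeholders):

    # Apparent (logical) size of the virtual disk as recorded in the directory entry.
    ls -lh /vmfs/volumes/fs_thin/VM_thin_nfs/VM_thin_nfs-flat.vmdk

    # Space actually consumed by the same file on the Celerra file system.
    du -h /vmfs/volumes/fs_thin/VM_thin_nfs/VM_thin_nfs-flat.vmdk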

Impact of guest operating system activities on thin pool utilization

As discussed in the previous section, only a small amount of storage is allocated when a new virtual machine is created. However, the utilization of the thin pool grows rapidly as the user performs various activities in the virtual machine. For example, the installation of an operating system in the virtual machine causes write I/Os to previously uninitialized blocks of data. These I/Os result in allocating additional storage in the thin pool associated with the thin device. The amount of storage used by the virtual machine depends on the behavior of the operating system, the logical volume manager, the file systems, and the applications running inside the virtual machine. Poor allocation and reuse of blocks freed from deleted files can quickly result in thin pool allocation that is equal to the size of the virtual disks presented to the virtual machines. Nevertheless, thin pool allocation (to support a virtual machine) can never exceed the size of the virtual disks. Therefore, users should consider the behavior of the guest operating system and applications when configuring virtual disks for new virtual machines.

Exhaustion of oversubscribed datastores

In some cases, when using thin devices with VMware Infrastructure, it may not be possible to create additional virtual machines even when the thin device that contains the datastore is not full. This is because with Virtual Provisioning the actual allocated capacity is not presented to ESX. With VMFS, as previously shown in Figure 11, ESX reserves the entire virtual disk capacity of a newly created virtual machine, despite the fact that only a fraction of this capacity may actually be allocated on the thin device itself (Figure 10). Therefore, even if the thin device itself is not full, the creation of a new virtual machine fails when the reserved datastore capacity is exceeded. Figure 15 shows the error message that is displayed in this case. Nevertheless, it is still possible to use the unallocated thin device capacity to create other datastores or for non-VMware use.

With NFS, on the other hand, such a scenario does not occur, because the NFS protocol is thinly provisioned by design. As previously shown in Figures 12 and 13, the virtual disk capacity of a newly created virtual machine is not reserved on the file system. Storage is reserved and allocated only when it is needed. Therefore, the NFS datastore utilization always matches the file system utilization in Celerra. Additional virtual machines can be created on this datastore even when the combined capacity of all their virtual disks exceeds the datastore size.

For both datastore formats, the CLARiiON and Celerra thin pool monitoring capabilities should be used to get advance notification that the thin pool that contains the datastore will soon be oversubscribed.
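As one hedged way to get that early warning on the Celerra side (the Data Mover and file system names are placeholders; Navisphere Manager provides the equivalent view for CLARiiON thin pools):

    # Report used and available capacity for the file systems on a Data Mover.
    server_df server_2

    # Show the current and maximum sizes of the virtually provisioned file system.
    nas_fs -size fs_thin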

Figure 15. Error message when oversubscribing a VMFS datastore

Nondisruptive expansion of virtual disk on thin devices

When a virtual machine is created, a virtual disk is formatted for it. This virtual disk should be sized according to the virtual machine's needs. The use of Virtual Provisioning guarantees that an underutilized virtual disk does not consume any unneeded storage capacity. However, in some cases, the virtual disk must be expanded to accommodate the needs of the virtual machine. ESX 3.5 provides two methods to nondisruptively expand a virtual disk. One method uses the vmkfstools CLI utility that is available on the ESX service console or the ESXi remote CLI. The other method uses VMware vCenter Converter and its ability to reconfigure an existing virtual machine and extend its virtual disk. Both methods work well with virtually provisioned storage, including VMFS and NFS datastores. Following the extension, the virtual disk maintains its virtually provisioned characteristics.

It is important to note that these two methods only extend the virtual disk. After it is extended, the guest OS in the virtual machine should discover the additional storage and format it as a new disk partition. As shown in Figure 16, the virtual disk of a virtual machine was extended from 16 GB to 32 GB. The Windows guest OS discovered this additional space as an unallocated partition within the same physical disk. The Disk Management Windows utility can now be used to format this partition so that it can be used.
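A minimal sketch of the vmkfstools method follows; the new size and virtual disk path are placeholders, and the virtual machine is assumed to be powered off while the disk is grown.

    # Grow the existing virtual disk to 32 GB; the guest OS must then discover
    # and format the added space, as described above.
    vmkfstools -X 32g /vmfs/volumes/VP_VM_DS/VM_thin/VM_thin.vmdk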

Figure 16. Device structure as discovered by the Windows guest OS following a virtual disk expansion

Expansion of a virtual datastore on thin devices

A properly configured thin device presented to VMware Infrastructure should not need to be expanded even when some virtual disks in it need to be expanded. However, in some cases, a need to expand virtual disks may require first expanding the datastore in which they are provisioned. The use of Celerra together with the capabilities of Virtual Provisioning technology makes it possible to address such a need while still keeping storage utilization optimal.

VMFS datastore expansion

Celerra provides ways to dynamically extend a thin device while preserving its virtually provisioned characteristics. The use of these array-based dynamic LUN extension features ensures that the datastore is optimally distributed even after the thin LUN is extended. For a VMFS datastore, such an extension should be followed by an extension of the VMFS datastore. This is done using the VMFS extents feature to add an additional extent to the datastore on the expanded thin storage. Figure 17 shows a dynamic extension of a virtually provisioned Celerra iSCSI LUN. In this example, the thin device was extended from 10 GB to 20 GB. Figures 18 and 19 illustrate the steps to extend the VMFS datastore on this thin device.

Figure 17. Dynamic extension of a thin LUN (virtually provisioned iSCSI LUN)

Figure 18. Discovery of an extended thin device following an array-based LUN extension

Figure 19. VMFS datastore extension on the extended thin device

It is important to note that with ESX 3.5, following an array-based LUN extension, the virtual machines on the VMFS datastore that needs to be extended must first be powered off or suspended. Figure 20 shows the error message that appears when a VMFS datastore is extended to occupy an extended LUN while the virtual machines provisioned on the datastore are powered on.

Figure 20. VMFS datastore extension error following a thin device expansion with virtual machines powered on

If it is not possible to power off these virtual machines, one workaround is to create another thin device on CLARiiON or Celerra, and then add it as an additional VMFS extent. With this method the datastore data is concatenated across the two thin devices. This allows nondisruptive datastore extension. This workaround is less favorable, as it may affect the overall performance of the datastore.

NFS datastore expansion

As with iSCSI, Celerra provides ways to dynamically extend a virtually provisioned file system. Furthermore, Automatic File System Extension provides a way for the file system to be extended automatically, without manual system administrator intervention. Unlike VMFS, the extension of an NFS datastore is done only on the NAS system, and no further configuration is required in the VMware Infrastructure. This is because the NFS datastore, unlike VMFS, is managed by the NAS system and not by ESX. For this reason it is possible to nondisruptively extend an NFS datastore while the virtual machines on it are powered on. After an extension, a refresh of the vCenter Server GUI may be needed to discover the changes on the NAS system. Figure 21 shows how the virtually provisioned file system extension is presented in the vCenter Server GUI.
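Where a manual extension is preferred over, or needed in addition to, Automatic File System Extension, a hedged Control Station sketch follows (the file system name and size are placeholders; verify the syntax for your DART release):

    # Manually extend the allocated capacity of the virtually provisioned
    # file system by 10 GB; refresh the vCenter Server GUI afterward.
    nas_fs -xtend fs_thin size=10G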

Figure 21. NFS datastore extension following an extension of a virtually provisioned file system

vCenter Server considerations with Virtual Provisioning

VMware administrators must understand the limitations of using VMware Infrastructure with CLARiiON and Celerra Virtual Provisioning technology. The behavior of VMware Infrastructure features in a virtually provisioned configuration depends on virtual machine usage and the guest operating system running on the infrastructure. This section describes how the various VMware Infrastructure features interact with virtually provisioned devices.

VMware's DRS and VMotion technologies are not affected by Virtual Provisioning because they do not involve any storage relocation. Virtual machine clones, templates, and hot migration (using the VMware Infrastructure Client) are not thin-friendly because cloning fully allocates all blocks. However, there is a workaround for this using VMware vCenter Converter that is described in a later section. Like cloning, VMware templates also allocate all blocks. The workaround is to shrink the .vmdk files before creating a template and to use the Compact option.

Cloning virtual machines using VMware Infrastructure Client

In a large VMware Infrastructure environment it is impractical to manually install and configure the guest operating system on every new virtual machine. To address this, VMware provides multiple mechanisms to simplify the process of creating preconfigured new virtual machines:

Cloning: This is a wizard-driven method that allows users to select a virtual machine [4] and clone it to a new virtual machine.

Creating a template: Templates can be created either by cloning an existing virtual machine to a new template or by converting an existing virtual machine in place. The process to clone an existing virtual machine to a new template involves copying the virtual disks associated with the source virtual machine.

Deploying from a template: Users can use a virtual machine [4] to create a template. Once the template has been created, new virtual machines can be deployed from the template and customized to meet specific requirements.

VMware Infrastructure provides wizards for these activities. A detailed discussion of these options is beyond the scope of this paper. Readers should consult the VMware documentation for details.

Cloning virtual machines and the impact on virtually provisioned devices

The VI Client wizard used to clone virtual machines offers users a number of customizations. The users can select the ESX cluster on which the cloned virtual machine will be hosted, the resource pool to associate the virtual machine with, and the VMware datastore on which to deploy the cloned virtual machine. The only option that impacts the virtually provisioned devices is the process that is used to deploy the virtual disk for the cloned virtual machine.

In the example presented here, a thin LUN VP_VM_DS is presented to a storage group that has the ESX1 host connected to it. Figure 22 shows a screenshot of the virtual LUN on CLARiiON after installing the Windows 2003 operating system on a VM. The virtual LUN has 25 GB of space and only 4 GB has been consumed for the installation of the operating system.

Figure 22. Thin pool allocation before the cloning process

[4] Not all virtual machines can be cloned or converted to a template. Please consult the VMware documentation for further details.

Figure 23 shows a screenshot of the VMware Infrastructure Client while the cloning is occurring. The example shows the cloning of the virtual machine W2K3 VP VM 1 to a new virtual machine with the name W2K3 VP VM 2. The virtual disk associated with both virtual machines is located on the VMware datastore (VP_VM_DS) on a virtual LUN.

Figure 23. Cloning virtual machines using VMware Infrastructure Client

Figure 24 shows the thin pool allocation in Navisphere Manager after the cloning process is complete. The Navisphere screenshot shows that approximately 8 GB of storage, rather than 4 GB, is now consumed as a result of cloning the source virtual machine. It should be noted that the cloning process converts the virtual disk on the source virtual machine, which had been created using the "zeroedthick" format, to the "eagerzeroedthick" format on the cloned virtual machine. Since VMware Infrastructure currently does not provide a mechanism to change the allocation policy for the virtual disk on the cloned virtual machine, the cloning process is inherently detrimental to the use of virtually provisioned devices.

Figure 24. Thin pool utilization after cloning a virtual machine

Figure 25 shows the VMware Infrastructure Client after the cloning process has completed. It can be observed from the screenshot that VMware Infrastructure sees 16 GB of space used for the two installed virtual machines.

Figure 25. Completed cloning of a virtual machine using VMware Infrastructure

This limitation of the virtual machine cloning wizard was observed on both VMFS and NFS datastores. VMware is aware of this limitation and is working closely with EMC engineering to rectify the behavior. Future releases of VMware Infrastructure will include enhancements that will make the cloning process Virtual Provisioning-friendly. Until then, EMC does not recommend using the cloning wizard to provision new virtual machines.

Creating templates from existing virtual machines

As discussed earlier, VMware Infrastructure provides three mechanisms to simplify the provisioning of new virtual machines. The first option, cloning, was discussed in the previous section. Templates, the second option, can be created either by cloning an existing virtual machine or by converting an existing virtual machine in place. The process to clone an existing virtual machine to a new template involves copying the virtual disks associated with the source virtual machine. The problem described in the previous section also occurs during the process of cloning a virtual machine to a template. This was observed on both VMFS and NFS datastores. Figure 26 shows the utilization of a virtually provisioned Celerra file system before and after a template was created from a virtual machine. In this example, the source virtual machine includes a virtually provisioned 16 GB virtual disk that consumed about 5 GB on disk. But following its creation, the template was allocated the entire virtual disk size, and about 21 GB was allocated on the file system.

Figure 26. Virtually provisioned file system allocation before/after the template creation from a virtual machine

Therefore, if the datastore holding the templates is a thin device, the user will not benefit from the storage optimization provided by Virtual Provisioning technology. However, unlike the cloning wizard, the Clone to Template wizard offers the option of using the Compact format. When this option is selected, the cloned virtual disk is allocated using the thin format.

Deploying new virtual machines from templates

Once a template has been created, new virtual machines can be deployed from the template and customized to meet specific requirements. However, on both VMFS and NFS datastores, when a new virtual machine is created from a thin template, the disk is created using the eagerzeroedthick format. This, once again, defeats the purpose of a virtually provisioned environment, as it results in a fully allocated virtual disk.

Cloning virtual machines using VMware vCenter Converter

Using VMware vCenter Converter, a pre-existing virtual machine can be copied and deployed in one step that allows for customization if necessary. By using a wizard, VMware vCenter Converter supplies functionality to copy and import a physical or standalone virtual machine into any VMware virtualization platform. Unlike the previous options, through the use of the VMware vCenter Converter Import Virtual Machine wizard (Figure 27) it is possible to deploy virtual machines with correct thin allocations, on both VMFS and NFS datastores, in a properly virtually provisioned environment.

Figure 27. VMware vCenter Converter Import Wizard

In order to use VMware vCenter Converter to deploy a new machine, the source virtual machine must be powered off. Once the virtual machine is powered off, the copy process can be initiated. Once a source virtual machine is selected, the option to alter the virtual disk settings is offered. For VMware vCenter Converter to allocate space in a way that is aligned with the ideals of Virtual Provisioning, the virtual disk size must be altered. If the disk size is maintained, the process will result in a fully allocated vmdk file on the target thin pool. To correct this, the disk size must be increased by at least 1 MB (Figure 28). If the disk size is increased, the wizard will allocate space using the thin format instead of the eagerzeroedthick format, making it thin-friendly.

Figure 28. VMware vCenter Converter Wizard

This process results in a correctly provisioned virtual machine that makes the most of what Virtual Provisioning has to offer. The new virtual machine takes up only 1.57 GB in the thin pool instead of the 8.00 GB that is allocated inside the virtual machine. VMware vCenter Converter calls this a growable disk. In other words, the vmdk file takes up additional space on the specified datastore only when data is actually written to it.

VMware VMotion, DRS, HA, and Virtual Provisioning

When VMware Distributed Resource Scheduling (DRS) and VMware High Availability (HA) are used with VMotion technology, they provide load balancing and automatic failover for virtual machines with ESX 3.x or ESXi. To use VMware DRS and HA, a cluster definition must be created using vCenter Server. The ESX hosts in a cluster share resources including CPU, memory, and disks. All virtual machines and their configuration files in such a cluster must reside on shared storage, such as CLARiiON or Celerra storage, so that users can power on the virtual machines from any host in the cluster. Furthermore, the hosts must be configured to have access to the same virtual machine network so VMware HA can monitor heartbeats between hosts on the console network for failure detection.

EMC's CLARiiON and Celerra thin devices behave like any other SCSI disk or network file system attached to the ESX kernel. If a thin device is presented to all nodes of a VMware DRS cluster group, vCenter Server allows live migration of viable virtual machines on the thin device from one node of the cluster to another using VMware VMotion. Because the virtual machines remain in their current datastore, these vCenter Server technologies do not trigger any conversion or reformatting of virtual disks and therefore do not affect the use of Virtual Provisioning.

Cold migration and Virtual Provisioning

VMware Infrastructure supports cold migration of virtual machines from one ESX host to another. The cold migration process can also be used to change the datastore hosting the virtual machine. There is no impact to the cold migration process when a virtual machine is moved from one ESX host to another while maintaining the location of the datastore containing the virtual machine files. However, changing the datastore location as part of the migration process can have a negative impact. The migration of the data to a VMFS or NFS datastore on a thin device is performed using the eagerzeroedthick format and results in unnecessary allocation from the thin pool. Therefore, the cold migration functionality should not be used for migrating virtual machines from non-thin devices to thin devices.

Hot migration using Storage VMotion and Virtual Provisioning

VMware Storage VMotion is a solution that enables users to perform live migration of virtual machine disk files across heterogeneous storage arrays with complete transaction integrity and no interruption in service for critical applications. Since the process of hot migration involves migrating the virtual disk associated with the source virtual machine, this process is not thin-friendly. The problem described in previous sections also occurs during hot migration, on both VMFS and NFS datastores. Therefore, if the datastore holding the VMs is a thin device, the user will not benefit from the storage optimization provided by Virtual Provisioning technology.

Considerations for storage-based features on thin devices

CLARiiON

SnapView

SnapView replicates data only within the same CLARiiON storage array. SnapView supports two forms of data replication: SnapView snapshots and SnapView clones.

SnapView snapshots

SnapView snapshots are logical point-in-time images of a LUN that take only seconds to create. When a snapshot session is started for a LUN, SnapView software uses a pointer-based, copy-on-first-change method to keep track of how the source LUN looks at a particular point in time.

SnapView clones

SnapView clones provide users the ability to create fully populated binary copies of LUNs within a single storage system. Once populated, clones can be fractured from the source and presented to a secondary server to provide point-in-time replicas of data.

Users will be able to perform local "thin to thin" replication with CLARiiON thin devices by using standard SnapView operations. This includes SnapView snapshots and clones. Following is an example of cloning a "Thin Source LUN" using CLARiiON's SnapView clone technology. "Thin Source LUN" is a thin LUN on Thin Pool 0. It has a user capacity of 5 GB and a consumed capacity of 3 GB. Thin Source LUN is assigned to a Windows server and has been cloned to "Thin Source LUN_Clone_1." As indicated in the screenshot, the clone of the source image is using only the consumed capacity of 3 GB.

Figure 29. Snap clone replication

Also note that thin LUNs cannot be private LUNs. This means that reserved LUN pools for SnapView snapshots and clone private LUNs (CPLs) for SnapView clones cannot be thin LUNs. Because thin LUNs share a pool of storage, it is possible for a thin LUN to run out of space even though its configured user capacity has not been fully consumed. If this happens, clone functionality can be adversely affected.

LUN migration
CLARiiON LUN migration allows users to change performance and other characteristics of existing LUNs without disrupting host applications. It moves data from a source LUN to a destination LUN of the same or larger size, applying the change in characteristics that the user selects. LUN migration can be used on thin LUNs, traditional LUNs, and metaLUNs.

Figure 30 illustrates the three types of behavior that different CLARiiON local replication and LUN migration operations exhibit in relation to thin LUNs. Table 6 describes the behavior of each specific operation.

Figure 30. SnapView clone and LUN migration behavior on virtual LUNs

Cloning or migrating a traditional LUN to a thin LUN does not save space initially. However, the new thin LUN can be expanded so that its configured user capacity exceeds its allocated capacity, thereby adding some of the benefits of Virtual Provisioning in the process. Table 6 indicates the specific replication support semantics for the different CLARiiON replication products. If replication is supported, the respective cell indicates the resulting destination LUN type. X indicates that the operation is not supported in the first release of Virtual Provisioning.

Table 6. Replication and LUN migration support for thin and traditional LUNs

Source LUN  | Destination LUN | LUN migration         | SnapView snapshots | SnapView clones   | RecoverPoint      | SAN Copy | MirrorView
Traditional | Traditional     | Traditional           | Snapshot (5)       | Traditional       | Traditional       | Traditional | Traditional
Traditional | Thin            | Fully provisioned (6) | Snapshot (5)       | Fully provisioned | Fully provisioned | X (7)    | X
Thin        | Thin            | Thin                  | Snapshot (5)       | Thin              | Thin              | X        | X (4)
Thin        | Traditional     | Traditional           | Snapshot (5)       | Traditional       | Traditional       | X        | X (4)

(4) Remote replication is supported through RecoverPoint. RecoverPoint supports replication for virtual to virtual LUNs and traditional to virtual LUNs. Support for MirrorView and SAN Copy will be added in future releases.
(5) A thin LUN cannot be a reserved LUN, so the destination LUN type does not apply in this case. The result is a traditional snapshot LUN.
(6) Fully provisioned for the size of its source LUN, since migration can be to a larger LUN.
(7) SAN Copy does not allow a thin LUN push or pull copy. In cases where SAN Copy does not know the type of the remote target participating in the SAN Copy session, it allows the remote target to be configured as thin. If such a copy is initiated, the remote thin LUN becomes fully provisioned after the copy completes, since the source LUN is not thinly provisioned.
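For readers who prefer to consume the support matrix programmatically, the following Python sketch encodes Table 6 (as reconstructed above) as a simple lookup. The operation and LUN-type names are illustrative labels, not EMC API identifiers.

```python
# Illustrative encoding of Table 6 (as reconstructed above). "X" means the
# operation is not supported in the first release of Virtual Provisioning.
REPLICATION_BEHAVIOR = {
    # (source, destination): {operation: resulting destination LUN type}
    ("traditional", "traditional"): {
        "lun_migration": "traditional", "snapview_snapshot": "snapshot",
        "snapview_clone": "traditional", "recoverpoint": "traditional",
        "san_copy": "traditional", "mirrorview": "traditional",
    },
    ("traditional", "thin"): {
        "lun_migration": "fully provisioned", "snapview_snapshot": "snapshot",
        "snapview_clone": "fully provisioned", "recoverpoint": "fully provisioned",
        "san_copy": "X", "mirrorview": "X",
    },
    ("thin", "thin"): {
        "lun_migration": "thin", "snapview_snapshot": "snapshot",
        "snapview_clone": "thin", "recoverpoint": "thin",
        "san_copy": "X", "mirrorview": "X",
    },
    ("thin", "traditional"): {
        "lun_migration": "traditional", "snapview_snapshot": "snapshot",
        "snapview_clone": "traditional", "recoverpoint": "traditional",
        "san_copy": "X", "mirrorview": "X",
    },
}

def replication_result(source, destination, operation):
    """Return the destination LUN type for a given operation, or 'X' if the
    combination is not supported."""
    return REPLICATION_BEHAVIOR[(source, destination)][operation]

print(replication_result("thin", "thin", "snapview_clone"))  # thin
```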

Celerra
Celerra SnapSure
Celerra SnapSure creates a point-in-time snapshot of a Celerra file system (a checkpoint file system). For a virtually provisioned file system, only allocated blocks are included in the snapshot (after they are modified). Furthermore, a file restored from a checkpoint file system requires the same amount of storage that it consumed on the virtually provisioned file system. Figure 31 shows the utilization of a virtually provisioned file system that includes a single virtual machine with about 4 GB allocated to it.

Figure 31. Virtually provisioned file system utilization before a snapshot was created for the file system

The virtual machine was then deleted from disk after a snapshot was created for the file system. The virtual machine was then restored from the checkpoint file system. Figure 32 shows the utilization of the file system after the restore was completed. As seen in this figure, the storage allocated to the restored virtual machine is the same as the storage capacity that was allocated to the virtual machine before it was deleted.

Figure 32. Virtually provisioned file system utilization after the virtual machine was restored from the checkpoint file system

Celerra iSCSI snapshots
An iSCSI snapshot is a point-in-time representation of the data stored on an iSCSI LUN. A snapshot of a virtually provisioned iSCSI LUN requires only as much space as the data that was changed in the production LUN (8). A file restored from an iSCSI snapshot requires the same amount of storage that it consumed on the virtually provisioned iSCSI LUN.

(8) This is the default behavior for a virtually provisioned Celerra iSCSI LUN. However, a snapshot of a fully provisioned LUN will, by default, be fully provisioned. This default behavior can be altered by adjusting the sparsetws Data Mover parameter.
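The space behavior described above for Celerra iSCSI snapshots can be summarized in a few lines. The following Python sketch is purely illustrative of the default rule; it is not Celerra code, and the capacities used are hypothetical.

```python
def iscsi_snapshot_space_gb(lun_type, lun_size_gb, changed_gb):
    """Approximate the space a Celerra iSCSI snapshot consumes, per the
    default behavior described above: a snapshot of a virtually provisioned
    LUN needs only the changed data, while a snapshot of a fully provisioned
    LUN is, by default, fully provisioned. Illustrative sketch only."""
    if lun_type == "virtually provisioned":
        return changed_gb
    if lun_type == "fully provisioned":
        return lun_size_gb
    raise ValueError("unknown LUN type")

# Hypothetical 200 GB LUN with 12 GB changed since the snapshot was taken
print(iscsi_snapshot_space_gb("virtually provisioned", 200, 12))  # 12
print(iscsi_snapshot_space_gb("fully provisioned", 200, 12))      # 200
```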

Celerra Replicator
Celerra Replicator is an asynchronous remote replication mechanism. It produces a read-only, point-in-time copy of a source file system or an iSCSI LUN, and then periodically updates this copy to make it consistent with the source object. With a virtually provisioned file system or iSCSI LUN, only data that is allocated on the source object is copied to the target object. Therefore, as with SnapSure and iSCSI snapshots, the destination file system or iSCSI LUN is also virtually provisioned.

Performance considerations
CLARiiON and Celerra Virtual Provisioning are designed to provide ease of provisioning and improve capacity utilization for certain applications. This section describes the performance implications of Virtual Provisioning.

CLARiiON
Virtual Provisioning is appropriate for applications that can tolerate some performance variability. Some workloads see performance improvements from using wide striping with thin provisioning. CLARiiON Virtual Provisioning trades CPU cycles for lower administrative overhead and space efficiency. A mapping service handles data placement according to best practices. This automated data placement adds indexing overhead and requires CPU cycles to manage, so thin LUNs deliver lower performance than traditional LUNs.

When creating larger thin pools, a larger number of LUNs can be leveraged by VMware Infrastructure for I/O. However, when multiple thin LUNs contend for the shared spindle resources in a given pool, and when utilization reaches higher levels, the performance for a given application can become more variable. If this variability is not desirable for a particular application, that application can be dedicated to its own moderately sized thin pool. Alternatively, Navisphere Quality of Service Manager can be used to manage resource contention within the pool, as well as contention between LUNs in different thin pools and RAID groups.

Fibre Channel and SATA drives should be deployed in separate pools. Where possible, drives in a thin pool should be of the same rpm and the same size. In a VMware Infrastructure environment, larger LUN sizes are usually configured and presented to multiple ESX hosts; therefore, larger thin pools with multiple disk drives are needed. The VMware best practice guidelines outlined for traditional LUNs also apply to thin LUNs and can be found in the EMC CLARiiON integration with VMware ESX Server white paper available on Powerlink.

Celerra
In general, some applications, I/O workloads, and storage deployment scenarios see performance improvements from using Virtual Provisioning. However, it is important to note that these improvements may change over time as the virtually provisioned file system expands and as the data is used, deleted, or modified.

In a virtually provisioned file system, a performance improvement is noticed mostly with random and mixed read I/O. Because the virtually provisioned file system initially occupies less space on disk than a fully provisioned file system, smaller disk seeks are required for random reads. Disk seeks are a large component of I/O latency, so minimizing seeks can improve performance. With sequential read I/O, on the other hand, disk seeks are already infrequent, and therefore a performance improvement would not be noticed. Write I/O will also not see much performance improvement, as disk seeks are usually unnecessary or minimal (except for random overwriting), and writes are in large part cached anyway.
It should be emphasized that this performance improvement may decrease over time as the file system is further used and extended, increasing the size of disk seeks and the corresponding latency. For a virtually provisioned iSCSI LUN this performance improvement can be even greater because of the way iSCSI LUNs are implemented in Celerra and how this affects disk seek time. A Celerra iSCSI LUN is a single file object within a Celerra file system, and the file system attempts to keep the file spatially contiguous. Therefore, the virtually provisioned LUN remains in contiguous disk space as it is created and extended.

A fully provisioned iSCSI LUN, by contrast, is built across a much larger space, and the guest OS may spread data across that space according to its own organizational rules. Therefore, the fully provisioned iSCSI LUN may see much more disk seek activity than the virtually provisioned LUN would. As with virtually provisioned file systems, this performance advantage will decrease over time as the LUN is extended.

Furthermore, provisioning a group of similar virtual machines simultaneously, which is a common VMware Infrastructure scenario, in a virtually provisioned file system or iSCSI LUN can effectively defragment data among the provisioned virtual machines. In this case, the infrequently accessed OS and application binaries from all the provisioned virtual machines will be clustered in one area of the disk, and the more frequently accessed data from all virtual machines will be clustered in another. This should be good for performance, especially initially, as all of the data hotspots will be contiguous on disk. It is expected that this benefit will decrease over time as data grows, more space is allocated, and more fragmentation occurs as the data hotspots move.

The VMware best practice guidelines outlined for fully provisioned file systems and iSCSI LUNs also apply to virtually provisioned ones and can be found in the VMware ESX Server using EMC Celerra Storage Systems Solutions Guide available on Powerlink.

CLARiiON Virtual Provisioning management
Thin pool management
When storage is provisioned from a thin pool to support multiple thin devices, there is usually more virtual storage provisioned to hosts than is supported by the underlying data devices. This is one of the main reasons for using Virtual Provisioning. However, there is a possibility that applications using a thin pool may grow rapidly and request more storage capacity from the thin pool than is actually there. This is an undesirable situation. The next section discusses the steps necessary to avoid running into this condition on CLARiiON storage platforms.

Thin pool monitoring
Virtual Provisioning includes several methods to monitor the capacity consumption of thin pools. Navisphere Manager can be used to monitor storage pool utilization as well as display the current space allocations. Users can also add alerts to objects that need to be monitored with Event Monitor, which can send alerts as e-mail, page, SNMP traps, and so forth.

Usable pool capacity is the total physical capacity available to all LUNs in the pool. Allocated capacity is the total physical capacity currently assigned to all thin LUNs. Subscribed capacity is the total host-reported capacity supported by the pool. When the thin LUN allocations begin to approach the capacity of the pool, the administrator is alerted. Two non-dismissible pool alerts are provided to track the pool %full. One is a user-settable %full threshold at the warning severity level, which can range from 1% to 84%. Once the pool %full reaches 85%, the pool issues a built-in alert at the critical severity level. Both alerts trigger an associated event that can be configured for notification. Both the user-settable alert and the built-in alert continue to track the actual %full value as the pool continues to fill.

In Figure 33, the Thin Pool Alerts field on the Advanced tab of the Thin Pool Properties dialog box is set at 2%. Figure 34 shows the Navisphere alerts generated when the thin pool reaches its user-settable %full threshold.

Figure 33. User-settable %full threshold

Figure 34. Navisphere Manager %full threshold alerts
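The capacity quantities defined above (usable, allocated, and subscribed capacity) and the two %full alerts can be combined into a simple monitoring calculation. The Python sketch below is illustrative only and is not Navisphere code; the pool sizes and the 70% warning threshold are hypothetical, while the 85% critical level is the built-in value described above. Figures 35 and 36, which follow, depict the same quantities graphically.

```python
def thin_pool_status(usable_gb, allocated_gb, subscribed_gb, warn_threshold_pct):
    """Combine the pool capacity quantities and %full alerts described above
    into one report. Illustrative only, not Navisphere code: the warning
    threshold is user-settable (1%-84%); the critical alert is built in
    at 85% full."""
    if not 1 <= warn_threshold_pct <= 84:
        raise ValueError("user-settable threshold must be between 1% and 84%")
    percent_full = 100.0 * allocated_gb / usable_gb
    alerts = []
    if percent_full >= warn_threshold_pct:
        alerts.append("warning (user-settable threshold)")
    if percent_full >= 85:
        alerts.append("critical (built-in 85% alert)")
    return {
        "percent_full": percent_full,
        # Host-visible capacity beyond what the pool can physically back
        "oversubscribed_gb": max(0.0, subscribed_gb - usable_gb),
        # Physical capacity still available to the pool's thin LUNs
        "available_shared_gb": usable_gb - allocated_gb,
        "active_alerts": alerts,
    }

# Hypothetical pool: 1,000 GB usable, 870 GB allocated, 2,500 GB subscribed,
# user-settable warning threshold set to 70%
print(thin_pool_status(1000, 870, 2500, 70))
```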

Figure 35. Pool %full threshold (diagram labels: Usable Pool Capacity, Allocated Capacity, Subscribed Capacity, Oversubscribed Capacity, %Full Threshold)

Adding drives to the pool nondisruptively increases the usable pool capacity available to all attached LUNs. Allocated capacity is reclaimed by the pool when LUNs are deleted.

Figure 36. Consumed and available capacity (diagram labels: Usable Pool Capacity, Allocated Capacity, Consumed Capacity, Available Shared Capacity)

System administrators and storage administrators must put processes in place to monitor the capacity of thin pools to make sure that they do not fill up. The pools can be dynamically expanded to include more data devices without application impact.

Celerra Virtual Provisioning management
Thin file system and storage pool management
Celerra provides various settings to better manage the operation of virtually provisioned file systems and iSCSI LUNs for existing and future needs. These settings include High Water Mark and Maximum Capacity, which are used by the Automatic File System Extension feature of the virtually provisioned file system.

The file system is automatically extended whenever its usage exceeds the configured High Water Mark (HWM). It is extended enough to bring usage to 3% below the configured HWM, up to the configured Maximum Capacity. This enables the file system to expand according to changes in demand. A virtually provisioned iSCSI LUN, which is an object in the file system, can then be provisioned from this file system and will expand based on its use, up to its configured size. Figure 37 shows the available setting options for managing a virtually provisioned file system.

Figure 37. Management settings for a virtually provisioned file system

As with CLARiiON Virtual Provisioning, there is still a possibility that applications using a Celerra thin device may grow rapidly and request more storage capacity than is actually available (oversubscription). This is an undesirable situation. The next section discusses the steps necessary to avoid oversubscription on Celerra storage platforms.

Thin file system and storage pool monitoring
Celerra provides several methods to proactively monitor the utilization of virtually provisioned file systems and the storage pools on which they were created. Celerra also provides trending and prediction graphs for the utilization of virtually provisioned file systems and storage pools. Figures 38 and 39 show the information that is provided on the utilization of a virtually provisioned file system and a virtually provisioned iSCSI LUN.

Figure 38. Using Celerra Manager to find the utilization of a virtually provisioned file system

Figure 39. Using Celerra Manager to find the utilization of a virtually provisioned iSCSI LUN
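Returning to the Automatic File System Extension behavior described earlier (extension is triggered when usage crosses the High Water Mark and grows the file system to 3% below the HWM, capped at the configured Maximum Capacity), the following Python sketch simulates that rule. It is an illustration of the stated behavior only; Celerra's internal implementation may differ, and the sizes used are hypothetical.

```python
def auto_extend_size(current_size_gb, used_gb, hwm_pct, max_capacity_gb):
    """Simulate the Automatic File System Extension rule described above:
    when usage exceeds the High Water Mark, extend the file system so that
    usage falls to 3% below the HWM, never exceeding the configured Maximum
    Capacity. Illustrative sketch only."""
    usage_pct = 100.0 * used_gb / current_size_gb
    if usage_pct <= hwm_pct:
        return current_size_gb                  # no extension needed
    target_pct = hwm_pct - 3                    # aim for 3% below the HWM
    target_size = used_gb * 100.0 / target_pct  # size that yields target usage
    return min(max(target_size, current_size_gb), max_capacity_gb)

# Hypothetical: 100 GB file system, 92 GB used, HWM at 90%, maximum 500 GB
print(round(auto_extend_size(100, 92, 90, 500), 1))  # ~105.7 GB
```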

Celerra Manager can also be used to configure proactive alerts when a virtually provisioned file system or storage pool is close to being oversubscribed. It is possible to customize these alert notifications according to file system and storage pool utilization, predicted time-to-fill, and overprovisioning. The alert notifications include logging the event in an event log file, sending an e-mail, or generating a Simple Network Management Protocol (SNMP) trap.

Two types of Storage Used notifications can be configured:
- Current size: how much of the currently allocated file system/storage pool capacity is used
- Maximum size: how much of the configured maximum file system/storage pool capacity is used (that is, when the file system/storage pool will be fully extended)

Figure 40 illustrates the two types of Storage Used alert notifications that can be configured on a virtually provisioned Celerra file system or on a storage pool. These alert notifications can also be configured based on capacity levels (for example, MB, GB, or TB) rather than on percentages.

Figure 40. Storage Used alert notifications for virtually provisioned Celerra file systems and storage pools
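The two Storage Used notification types can be evaluated with straightforward percentage checks. The Python sketch below is illustrative only, not Celerra Manager code; the capacities and thresholds are hypothetical, and in Celerra Manager the thresholds could equally be expressed as absolute capacities rather than percentages.

```python
def storage_used_notifications(used_gb, allocated_gb, max_gb,
                               current_threshold_pct, maximum_threshold_pct):
    """Evaluate the two Storage Used notification types described above:
    'current size' compares usage against the currently allocated capacity,
    and 'maximum size' compares usage against the configured maximum
    (fully extended) capacity. Illustrative only."""
    notifications = []
    if 100.0 * used_gb / allocated_gb >= current_threshold_pct:
        notifications.append("Storage Used (current size)")
    if 100.0 * used_gb / max_gb >= maximum_threshold_pct:
        notifications.append("Storage Used (maximum size)")
    return notifications

# Hypothetical thin file system: 95 GB used of 100 GB allocated, 500 GB maximum
print(storage_used_notifications(95, 100, 500, 90, 75))
# ['Storage Used (current size)']
```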

Figure 41 shows how to configure a Storage Used alert notification on file systems or storage pools using Celerra Manager.

Figure 41. Storage Used alert notification configuration for file systems or storage pools

Similarly, as shown in Figure 42, alert notifications can be configured based on the Celerra time-to-fill predictions for file systems and storage pools.

Figure 42. Storage Projection alert notification configuration for file systems or storage pools
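The Storage Projection alerts are based on Celerra's time-to-fill predictions. The exact prediction algorithm is not described in this paper; as a purely illustrative stand-in, the following Python sketch fits a linear trend to hypothetical daily usage samples and estimates the number of days until a pool or file system would be full.

```python
def days_until_full(samples_gb, capacity_gb):
    """Estimate time-to-fill from daily usage samples using a simple linear
    trend (least-squares slope). Purely illustrative: the actual projection
    algorithm behind Celerra's Storage Projection alerts is not described
    in this paper."""
    n = len(samples_gb)
    days = range(n)
    mean_d = sum(days) / n
    mean_u = sum(samples_gb) / n
    slope = (sum((d - mean_d) * (u - mean_u) for d, u in zip(days, samples_gb))
             / sum((d - mean_d) ** 2 for d in days))  # GB of growth per day
    if slope <= 0:
        return None                                   # no growth trend
    return (capacity_gb - samples_gb[-1]) / slope

# Hypothetical daily usage samples for a 500 GB storage pool
print(round(days_until_full([300, 310, 322, 331, 340], 500), 1))  # 15.8
```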
