FILE ARCHIVING FROM EMC CELERRA TO DATA DOMAIN WITH EMC FILE MANAGEMENT APPLIANCE


White Paper

Abstract
This white paper is intended to guide administrators through the process of deploying the EMC File Management Appliance technology in environments containing EMC Celerra and Data Domain network storage.

October 2010

Copyright 2010 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners. Part Number h8083

Table of Contents
Executive summary
Audience
Deploying File Management Appliance
File Management Appliance architecture
Network requirements
Configuring FMA
Environment configuration
Preparing Celerra blades and virtual Data Movers for archiving
Preparing Data Domain appliances for archiving
Preparing Celerra NFS exports for archiving
High availability and load balancing for archiving and recall services
HA for archiving services
Load balancing for archiving services
HA for recall services
Load balancing for recall services
Backup and disaster recovery
Restoring orphan management
Archiving and recall architecture
FMA software architecture
Archiver system architecture
Data archived off Celerra
FileMover API
Language settings
Authentication and authorization
Overview of the archiving process
Overview of the recall process
Data archived to Data Domain systems
Authentication and authorization
Interoperability with quotas
Implementing an archiving strategy
Developing archiving policies
Classifying datasets
Performance requirements
Creating File Matching Expressions
Developing archiving policies
Simulation tasks
Creating and monitoring archiving tasks
Monitoring the progress of a running task
Scheduling archive tasks
Managing archived data
Managing Data Domain repositories
Stub re-creation
Stub scanner
Orphan file management
The File Management Appliance database
Features that utilize the File Management Appliance database
User interaction
Database maintenance and backup
Troubleshooting
Performing a packet capture
Log files
Additional troubleshooting utilities
Conclusion

Executive summary

EMC File Management Appliance is used to implement a tiered storage strategy through file-level archiving, thereby facilitating significant storage savings. This white paper is intended to guide administrators through the process of deploying the technology in environments containing EMC Celerra and Data Domain network storage.

There are six sections in this paper:
Deploying File Management Appliance
Archiving and recall architecture
Implementing an archiving strategy
Managing archived data
The File Management Appliance database
Troubleshooting

The information contained in this paper applies to the software versions listed in the Interoperability Matrix. This paper may be updated periodically, so it is recommended to ensure you have the latest version prior to applying any information contained herein.

File Management Appliance software is loaded onto a physical appliance called FMA or a virtual appliance called FMA/VE. The general term File Management Appliance or FMA will be used whenever the content in this document applies to both types of appliance. All content discussing EMC Celerra and EMC Data Domain is in reference to the specific software versions listed in the Interoperability Matrix. FMA will interoperate with any Celerra and Data Domain hardware that runs a supported software version.

Audience

This white paper is intended for use by administrators as well as storage administrators responsible for a production environment where FMA will be deployed. This paper does not discuss practices for using FMA in environments containing EMC Centera or NetApp storage.

This paper contains supplemental information to the standard documentation available for FMA and assumes the reader is familiar with all information contained therein. If you have not reviewed those documents, please do so in addition to reading this paper in order to gain a complete understanding of the technology.

Deploying File Management Appliance

This section describes how to deploy File Management Appliance in production environments.

File Management Appliance architecture

File Management Appliance can be delivered in the form of either a physical or virtual appliance. The capabilities and features available on these appliances are the same, and either can be used to archive from Celerra to Data Domain to create a robust solution. The File Management Appliance (FMA) and the File Management Appliance/VE (FMA/VE) are suitable for small and large enterprise environments requiring the capability to archive up to 250 million files per appliance.

Figure 1. File Management Appliance deployment

In this figure:
1. File Management Appliances will read files off of primary storage and write them to the Data Domain storage using the NFS protocol. The original file on NAS is then converted to a stub file using the FileMover interface of the Celerra blade.

Aside from the use of the FileMover interface, FMA will appear just as if it were another NFS client.
2. When clients attempt to read/write a stub file on primary storage, the Celerra is triggered to perform a recall of the file data.
3. When recall is needed, the Celerra blade will read the contents of the stub file and will connect to the NFS export where the associated file data is located. The data is read back and passed on to the client that triggered the recall.

Network requirements

File Management Appliance relies on IP and Ethernet networking to communicate with Celerra and Data Domain storage systems. FMA must be able to establish IP connectivity to the systems with which it will interact. Specific sites may have performance requirements for archiving and recall speed. In such cases, it is recommended to ensure that FMA has appropriate bandwidth between itself and the Celerra and Data Domain systems that will be involved in archiving, and that the network latency is acceptable.

As a best practice, File Management Appliance and the file servers that will be involved in archiving should be deployed at the same site. When multiple sites are involved, network performance can have greater impact due to WAN conditions, causing slower archiving or recall speeds.

Configuring FMA

By default, the rfhsetup utility is launched when logging in to the FMA CLI. This utility will allow you to configure each of the network interfaces of FMA. For specific details on networking configuration, please consult the EMC File Management Appliance and File Management Appliance/VE Getting Started Guide.

Environment configuration

Data can only be archived and recalled if the configuration steps described in this section have been performed. These steps may need to be repeated when new Data Movers and/or file systems will be archived.
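As an illustrative preflight, basic IP reachability from the appliance's network to the storage systems can be checked with a short script before any FMA configuration is attempted. This is not an FMA utility; the helper and the default port list (111 for the portmapper used to locate the Mount/NFS/NLM RPC services, 2049 for NFS) are assumptions for demonstration, and the ports that apply should be confirmed for your environment.

```python
import socket

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def preflight(host: str, ports=(111, 2049)) -> dict:
    """Check each port and report reachability, e.g. {111: True, 2049: False}."""
    return {port: reachable(host, port) for port in ports}
```

Running `preflight("datadomain.domain.prv")` (hostname as used in the examples later in this section) before registering the systems in FMA can save a failed-task troubleshooting cycle.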
Preparing Celerra blades and virtual Data Movers for archiving

In order to archive NFS data off of a Celerra blade, File Management Appliance will require access to the Mount v3, NFS v3, and NLM v4 RPC services running on the blade. When the Celerra will be used as the source for archiving tasks, FMA must also have access to the FileMover service and its TCP port on each blade/virtual Data Mover (VDM). Access to these interfaces is necessary whenever the Celerra will be used as an archiving source.

In order to take advantage of the File Management Appliance capability to automatically create DHSM connections for file systems, the File Management Appliance will need access to the XML APIv2 interface of the Celerra Control Station

that manages the file system. Direct command line access to the Celerra Control Station is not used by the File Management Appliance. This feature is only available when the Celerra is running DART 5.6 software.

Configuring a Celerra as an archiving source

The following tasks must be completed in order to use File Management Appliance to archive data off of a Celerra blade. You will need to repeat these steps for each blade that will serve as the source of an archiving task.

Enable filename translation on the Celerra Control Station

The FMA, FMHA, or FMA/VE expects that all filenames are derived from the Celerra Network Server in UTF-8 format. To preserve filenames correctly, perform the following:
1. Log in to the Celerra Control Station as nasadmin.
2. Use a text editor to open the file /nas/site/locale/xlt.cfg.
3. Locate the last line of the file. Typically the last line appears as:
:::: txt: Any thing that didn't match above will be assumed to be latin-1
4. Add the following line immediately above the last line:
::FMA_IP_ADDR::: FMA requires no translation (UTF-8)
where FMA_IP_ADDR is the IP address of your appliance.
5. To update the configuration, type: /nas/sbin/uc_config -update xlt.cfg
6. To verify the new configuration, type: /nas/sbin/uc_config -verify FMA_IP_ADDR -mover ALL
where FMA_IP_ADDR is the IP address of your appliance. Output will appear in the format:
server_name : FMA_IP_ADDR is UTF-8

Create the FileMover API user on the blade

Log in to the Celerra Control Station that manages the blade as the root user and then create a new user on the blade by running the command:
/nas/sbin/server_user <data_mover> -add -md5 -passwd <user>
As an example:
/nas/sbin/server_user server_2 -add -md5 -passwd rffm

Authorize the FileMover API user and the FMA IP address

Log in to the Celerra Control Station as nasadmin and append the newly created user to the list of users privileged to access the blade's FileMover interface. Also

append all of the IP addresses of the appliance to the list of privileged IPs that can access the blade's FileMover interface. Run the command:
server_http <data_mover> -append dhsm -users <user> -hosts <fm_ip_addresses>
As an example (with placeholder IP addresses):
server_http server_2 -append dhsm -users rffm -hosts <fm_ip_1>,<fm_ip_2>

Start the FileMover service (DART 5.6 only)

The FileMover service must be enabled on the blade. Run the command:
server_http <data_mover> -service dhsm start

(Optional) Create an XML APIv2 user

When Celerra is running software version DART 5.6, FMA has the capability to automatically create the DHSM links needed to perform archiving. In order to utilize this feature, FMA must be given credentials for a Control Station user that is authorized to access the XML APIv2 interface and to modify the FileMover configuration. To create the new user, log in to the Celerra GUI as root and then navigate to the Security > Administrators page. Select the option to create a new user and provide privileges to access the XML APIv2 interface. Select the option to make the user a member of the filemover group.

Note that you must use the exact same username and password for the Control Station user as for the FileMover API blade user created in the previous step. If you fail to create the user with the same username and password, then automatic creation of DHSM links or archiving will fail.

Define the Celerra server in FMA

When defining the Celerra blade in the FMA configuration, you will need to supply:
A logical name that will be used to reference the file server
IP addresses of the blade
Credentials for a FileMover API user
(Optional, Celerra 5.6 only) Control Station IP address

When defining a VDM in the FMA configuration, do not select the "File server is a VDM" checkbox in the FMA GUI. Use of this option prevents FMA from discovering NFS exports available via the IP interface of the VDM and will prevent NFS archiving tasks using the source VDM.
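The per-blade setup above lends itself to scripting when many blades are involved. The sketch below simply assembles the command strings shown in this section for review before they are run; the helper name is hypothetical and not part of any EMC tool, and the commands must still be executed on the Celerra Control Station itself (the first as root, the others as nasadmin).

```python
def filemover_setup_commands(data_mover: str, user: str, fma_ips: list) -> list:
    """Assemble the Control Station commands from this section for one blade."""
    return [
        # Create the FileMover API user on the blade.
        f"/nas/sbin/server_user {data_mover} -add -md5 -passwd {user}",
        # Authorize the user and all FMA IP addresses for the FileMover interface.
        f"server_http {data_mover} -append dhsm -users {user} -hosts {','.join(fma_ips)}",
        # Start the FileMover service (DART 5.6 only).
        f"server_http {data_mover} -service dhsm start",
    ]
```

For example, `filemover_setup_commands("server_2", "rffm", ["10.0.0.5"])` yields the three commands with the example values from this section filled in (the IP address here is a placeholder).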
Preparing Data Domain appliances for archiving

In order to archive NFS data to a Data Domain system, File Management Appliance will require access to the Mount v3 and NFS v3 RPC services running on the system.

When Data Domain will be the target of archiving from Celerra, File Management Appliance will use the Mount v3 and NFS v3 RPC services to read/write to the storage filesystem. File Management Appliance does not support archiving to a Data Domain system using CIFS. Only NFS metadata (UNIX owner ID and group ID, mode bits) will be written with the file contents during archiving. As a result, only NFS metadata can be recovered with a stub file when leveraging the stub re-creation functionality of FMA.

Configuring a Data Domain as an archiving destination

The following tasks must be completed in order to use File Management Appliance to archive data to a Data Domain system. You will need to repeat these steps for each system that will serve as the target of an archiving task.

1. Create the repository path on Data Domain

In order to archive to a Data Domain system, a repository needs to be created and exported to one or more File Management Appliances. This repository directory structure must be created manually before a DHSM connection can be created from a Celerra blade. To create the repository path, mount the desired repository filesystem from a Linux client and create the desired path using the mkdir command. For example:
# mkdir /mnt/datadomain
# mount datadomain.domain.prv:/backup /mnt/datadomain
# mkdir /mnt/datadomain/repository

2. Create the repository export on Data Domain

There must be an export created on the Data Domain server for the File Management Appliance to mount and subsequently write data to during archiving. The export must also be configured to allow access to any Celerra blade that needs to recall archived data.
Create a new export path by running the command from the Data Domain CLI:
nfs add <path> <client IP list> [(option list)]
As an example (with placeholder IP addresses):
nfs add /backup/repository <fma_ip>,<celerra_blade_ip> (rw,no_root_squash,no_all_squash,insecure)

Note: In order to create a DHSM connection to the Data Domain repository from a Celerra blade, the insecure option must be used when creating the NFS path on Data Domain. This option allows RPC requests to originate on a non-privileged TCP port.

When specifying the client IP list, note that any File Management Appliance that will use this repository must be listed. This permits the specified client to access the repository path with read/write permissions. Similarly, any Celerra blade network interface that can be used to connect to the repository to recall archived data must be listed, or recalls will fail, resulting in data unavailability.

3. Configure NFS read and write parameters

Celerra Data Movers running certain earlier DART releases will be unable to recall from a Data Domain repository without modifying two NFS parameters on the Data Domain system. This is due to a known Celerra DART issue resolved in later code versions. Contact EMC Support for the procedure to resolve this issue.

Preparing Celerra NFS exports for archiving

Note: A single Celerra blade can be configured in multiple File Management Appliances as an archiving source at the same time, but more than one File Management Appliance should never be used to archive data from a single file system.

The IP addresses of the File Management Appliance must be added to the root and read/write ACLs for any NFS export that will be used as the source or destination of an archiving task. In addition, the IP addresses of the primary Celerra blade will need to be given root and read/write access to the NFS export on the Data Domain system to which data will be archived. This can be done from the Data Domain CLI as described in the previous section.

In order to use an NFS export as the source of an archiving task, DHSM must be enabled for the Celerra file system from which the NFS export was created. If DHSM is not enabled, then the Celerra can be registered in FMA but archiving and recall will fail for the export. File Management Appliance can automatically enable DHSM for file systems that will be used as an archiving source if the source Celerra system is running DART version 5.6 and the optional step was taken to create an XML APIv2 user on the Celerra Control Station. Note that the connection is created when archiving tasks are run. DHSM can be manually enabled using the command:
fs_dhsm -modify <primary_fs> -state enabled
As an example:
fs_dhsm -modify filesystem1 -state enabled

In addition to enabling DHSM, a connection must be defined linking the primary Celerra file system to the NFS export(s) to which its data will be archived.
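Because a missing blade address only surfaces later as a failed recall, it can be worth checking the export's client list against both sets of required addresses up front. The following sketch is a hypothetical helper (not an FMA or Data Domain command) that illustrates the check:

```python
def missing_export_clients(export_clients, fma_ips, blade_ips):
    """Return the addresses that still need root and read/write access.

    export_clients: IPs currently listed on the Data Domain NFS export.
    fma_ips:        FMA interface IPs (needed for archiving).
    blade_ips:      Celerra blade interface IPs (needed for recall).
    """
    required = set(fma_ips) | set(blade_ips)
    return required - set(export_clients)
```

An empty result means both archiving and recall access are covered; any address returned should be appended to the export before archiving tasks are run.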
File Management Appliance can archive files off of a single NFS export to multiple NFS exports, and therefore multiple connections may need to be defined, one for each destination export. As with the previous step, File Management Appliance can automatically create DHSM connections if the source Celerra is running DART version 5.6 and an XML APIv2 user has been created on the Celerra Control Station. DHSM connections can be manually created using the command:
fs_dhsm -connection <primary_fs> -create -type nfsv3 -secondary <secondary_server>:/<repository_path> -proto TCP -useRootCred True
As an example:

fs_dhsm -connection filesystem1 -create -type nfsv3 -secondary datadomain.mydomain.prv:/backup/repository -proto TCP -useRootCred True

Note that the target of a connection should be specified as a hostname/FQDN when running the fs_dhsm -connection <primary_fs> -create command. When a blade needs to establish a connection to secondary storage, it will first attempt to resolve the hostname in the local hosts file. If the name cannot be resolved locally, a DNS query is issued by the blade. When archiving from Celerra to Data Domain, if the local hostname resolution of the source blade is not going to be used, a DNS A record is required to resolve the FQDN of the secondary storage server to IP addresses. A PTR record (reverse DNS) is also required to map the IP addresses of the secondary storage server to the FQDN.

If the source filesystem on Celerra has deduplication enabled and is being backed up to a Data Domain system, the Celerra space reduced NDMP backup functionality should be disabled on the filesystem if archiving to the same Data Domain system. This allows for more effective deduplication on the Data Domain system for archiving and backup of the same data blocks, compared to compressed NDMP backups.

Note: Celerra File Level Retention (FLR) enabled file systems cannot be used as an archiving source.

As a final step, you will need to create a repository for the Data Domain NFS export using the FMA GUI or the rffm addnasrepository command.

High availability and load balancing for archiving and recall services

HA for archiving services

HA for archiving services is not a feature of File Management Appliance. If a File Management Appliance fails, then archiving services will cease until a replacement appliance is implemented. In the event of a failure, the disaster recovery procedure should be followed to implement the replacement appliance.

Load balancing for archiving services

A single Celerra file system can be managed by only one EMC File Management Appliance.
Multiple File Management Appliances can be configured to archive data off of the same Celerra file server, but never the same file system. Therefore, archiving activities can be load balanced between multiple File Management Appliances against a single Celerra blade, but not against a single Celerra file system.

HA for recall services

Celerra blades will recall archived data directly from secondary storage. File Management Appliance is not involved in the recall process. No specific actions are required in relation to File Management Appliance in order to provide HA for recall services. When properly configured, recall will succeed as long as the primary and secondary storage servers can communicate over the network.
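A simple way to honor the one-appliance-per-file-system rule while still spreading work across appliances is to partition file systems round-robin. This sketch is purely illustrative; FMA has no such assignment API, and the function name is invented.

```python
def assign_filesystems(filesystems, appliances):
    """Map each file system to exactly one FMA, round-robin.

    A file system never appears under two appliances (the hard rule above),
    while one blade's file systems may be split across several appliances.
    """
    return {fs: appliances[i % len(appliances)] for i, fs in enumerate(filesystems)}
```

For example, three file systems on one blade split across two appliances alternate fma-a, fma-b, fma-a, and each file system is configured as a source in only its assigned appliance.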

Load balancing for recall services

Load balancing is not applicable for recall services when using Celerra tiered storage with Data Domain.

Backup and disaster recovery

For File Management Appliance, disaster recovery refers to the processes required to ensure that an equivalent functional environment can be created using an alternate appliance in the event that the existing appliance is irrecoverably damaged. The following sections describe major areas of functional equivalence that should be considered in the development of any disaster recovery plan. When recovering from a disaster, the tasks should be performed in the order they are discussed in this guide. The failure of the motherboard or of multiple hard drives of an FMA is an example of a disaster that would require the execution of this procedure. The corruption of the virtual disks associated with an FMA/VE could be another example. The following steps should be taken in the order listed, as needed.

Note: If only the File Management Appliance is lost, then Celerra blades will still be able to recall data from the Data Domain system and no data unavailability will occur.

Step 1. Recover the File Management Appliance

If a File Management Appliance is lost, a new appliance should be loaded with the same version of the software that was previously running. An upgrade to the latest software version can be applied later if needed. For FMA, a clean installation of the software should be performed using the fm_clean option when booting from CD. For FMA/VE, a matching version of the FMA/VE virtual appliance should be imported to ESX. The networking should then be configured for the appliance. Note that the new appliances do not need to use the same IP addresses as the old appliances. However, when the IP addresses change, there will be additional steps needed to reconfigure the environment.

Step 2.
Restore the software configuration

The most convenient method to restore the configuration of a File Management Appliance after a disaster is from a backup of the File Management Appliance. It is highly recommended to run backup tasks periodically using the automated Backup/Recovery feature of FMA and save the output file to a secure location on NAS or EMC Centera. In the event of a disaster, the latest version of the backup file can be restored to the appliance using the fmrestore command. The EMC File Management Appliance and File Management Appliance/VE Getting Started Guide provides instructions on how to recover backup files from EMC Centera and NAS after a disaster. The use of these utilities will ensure that primary and secondary servers are defined in the new appliance configuration using identical values from the original appliance. In addition, the contents of the File Management Appliance database are backed up

and restored using these commands. Therefore, archived file lists, policies, tasks, and so on will all be preserved and restored with this method.

In the event that a backup of the File Management Appliance has not been performed or is not available, an administrator can manually configure the FMA by defining the primary and secondary servers. Particular care should be taken to maintain identical configurations wherever possible. At a minimum, the logical names used to define primary and secondary servers must remain the same. Afterwards, archiving policies and tasks should be defined and run. Each time an archiving task is run, a new stub scanner task is created if it does not already exist. The creation of stub scanning tasks is important to rebuild the orphan management capabilities of the appliance. Note that it is not necessary to define and run policies and tasks if a recent backup of the configuration was restored to the appliance. These actions only need to be taken if no backup was available or if the backup was very outdated.

Step 3. Adjust the environment after appliances have been recovered

The file server definition in the File Management Appliance configuration may need to be updated if a Celerra system was lost and the replacement did not maintain identical settings such as IP addresses or FileMover users. It is critical to ensure that any elements defined in the File Management Appliance configuration reflect the state of the new environment. If the IP addresses of the appliance have changed, then the export permissions for all NFS NAS repositories should be checked to ensure sufficient privileges have been granted to the new IP addresses. It may also be desirable to remove privileges granted to the old IP addresses. In addition, any local hosts files or DNS records referencing the Celerra blades or File Management Appliance should be updated.
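Since both an A record and a PTR record are needed for recall (as noted in the DHSM connection discussion earlier), forward and reverse resolution are worth verifying after a recovery. The helper below is an illustrative sketch using standard resolver calls, not an EMC utility:

```python
import socket

def check_resolution(fqdn, expected_ip):
    """Verify forward (A) and reverse (PTR) resolution for a storage server."""
    result = {"forward_ok": False, "reverse_ok": False}
    try:
        # Forward: the FQDN must resolve to the expected address.
        result["forward_ok"] = socket.gethostbyname(fqdn) == expected_ip
    except OSError:
        pass
    try:
        # Reverse: the address must map back to the same name.
        name, _, _ = socket.gethostbyaddr(expected_ip)
        result["reverse_ok"] = name.rstrip(".").lower() == fqdn.rstrip(".").lower()
    except OSError:
        pass
    return result
```

For example, `check_resolution("datadomain.mydomain.prv", "<repository_ip>")` should report both directions as True before recalls are attempted (the hostname and IP here are placeholders).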
Restoring orphan management

The orphan file management feature allows File Management Appliance to clean up unused data on secondary storage. In order to remove a piece of data from secondary storage, the appliance must have an entry in its database to indicate that it was the creator of that piece of data. Therefore, a File Management Appliance will not delete anything from a secondary storage location unless it has a database entry that references it. Due to this requirement, it is very important to preserve the integrity of the File Management Appliance database. It is highly recommended to perform periodic backups of the database using the automated Backup/Recovery functionality mentioned above. However, it should be noted that even when periodic backups are taken, there may still be some orphan data on secondary storage that cannot be identified by the appliance. Consider the following sequence of events:
1. The administrator runs a backup task on FMA.
2. An archiving task is launched.

3. A user deletes a stub file created by the archiving task that was just launched.
4. The administrator needs to recover from a disaster by rebuilding a new FMA, so the backup file is loaded using the fmrestore command.

In this scenario, there will not be a record in the FMA database referencing the object on secondary storage, nor will a stub exist on primary storage. FMA will not be able to delete the object on secondary storage because it cannot confirm that it created it. In order to minimize the impact of this scenario, an administrator should take frequent backups of the FMA database, especially during periods of heavy archiving activity. If the user did not delete the stub, then the FMA stub scanner would have re-created the FMA database entry when it read the stub contents during its weekly scan. This would restore the ability to perform orphan file management. In a worst-case scenario where no backup of the FMA database was ever taken, the new FMA will be able to perform orphan management for all stubs found by the stub scanner, but data that was already orphaned before the stub scanner runs cannot be cleaned up by the appliance.

Archiving and recall architecture

This section describes the architecture and mechanism for archiving and recall functionality in environments utilizing File Management Appliance.

FMA software architecture

File Management Appliance runs a Linux-based operating system loaded with specialized software. Aside from the operating system, the core component of File Management Appliance is the File Management Daemon (FMD), a process that is part of the group of components referred to as the filemanagement service. The FMD accepts input through an XML-RPC interface from a handful of sources including the CLI, GUI, and other processes running on the system. The FMD does not monitor the CLI and GUI for events; the CLI and GUI components send requests to the XML-RPC interface of the FMD. Therefore, all components run independently.
If the FMD is stopped, the CLI and GUI will still be accessible but will not be able to query or command the FMD. Similarly, the FMD can run with the web GUI shut down.

Archiver system architecture

The archiver is a component of the filemanagement service that is spawned and controlled by the FMD. The archiver itself is broken down into two major components: the filewalker and a pool of archiving threads. When an archiving task is run, the FMD creates a thread that spawns and manages an archiver process. The archiver will instruct the filewalker to collect and analyze metadata from files within the archiving source. Filewalking threads will use CIFS or NFS operations based upon the protocol specified for the archiving task. Filewalking threads compare the file metadata to the archiving policy and note files that should be archived by creating an entry in an internal queue in the appliance memory.

Archiving threads monitor the queue and are responsible for carrying out the archiving process detailed in the Overview of the archiving process section. When multiple archiving tasks are running concurrently, they will compete for the threads in these two pools. The File Management Appliance software is designed with performance in mind, and the entire system resources can be dedicated to the quick completion of an archiving task provided there are no other bottlenecks in the environment. An archiving activity can typically be completed faster by running multiple archiving tasks concurrently. This is a delicate balancing act, and the administrator should monitor the CPU and memory resources of the appliance and other environment equipment closely when running concurrent tasks.

Files and data streams smaller than 8 KB on Celerra file systems will not be archived to Data Domain systems by File Management Appliance. This rule prevents file data from being archived to secondary storage if the stub data that will be written to primary storage takes up the same number of file system blocks.

Data archived off Celerra

This section describes the interactions between File Management Appliance and Celerra.

FileMover API

The FileMover API is a purpose-built interface designed to facilitate Distributed Hierarchical Storage Management (DHSM). FileMover provides a range of API calls that standardize the creation and management of stub files. This ability to provide a standard interface for controlling stub files and a standard format for stub data makes the FileMover interface and DHSM architecture well suited for archiving purposes.

Language settings

The DART file system architecture allows characters to be included in filenames that may not be able to be displayed on Windows or UNIX/Linux clients. As an example, Linux clients mounting with NFS can create filenames that end with the period character (".").
In response to CIFS access from a Windows client, the filer would truncate the filename and append a tilde and a number ("~1"). In order to archive files off Celerra, File Management Appliance requires that filenames be displayed the same between NFS and CIFS. Unicode must be enabled for CIFS, and the character encoding presented to File Management Appliances must be set to UTF-8. It is recommended to engage EMC Celerra support for assistance when the language settings of a Celerra need to be modified.

Authentication and authorization

To archive data from an NFS export, the IP addresses of the FMA network interfaces must be given root and read/write permissions. In addition, the IP addresses of the primary Celerra blade/VDM will need to be given root and read/write access to the NFS export on the Data Domain system to which data will be archived. Note that

archiving may succeed even if the IP addresses of the primary Celerra blade are not given appropriate privileges for the destination export. However, subsequent recalls will fail.

File Management Appliance also requires access to the FileMover interface of Celerra blades with data that will be used as an archiving source. Note that Celerra VDMs do not have a FileMover interface and that FMA will connect to the host blade's FileMover interface. Some of these requirements are checked when an administrator first attempts to define a server, and an error will be returned to inform the administrator that nothing was added to the File Management Appliance configuration. Some problems cannot be detected until an archiving or data collection task is run or until a file needs to be recalled.

Best practice: Before archiving any file from a production file system, create, archive, and recall a test file using the same primary and secondary storage locations. Perform these tests every time you want to archive data off of a new source file system or to a new destination. This will ensure that configuration problems do not affect the availability of real production data.

Overview of the archiving process

Archiving threads perform the following actions when delayed stubbing is disabled:
1. A file matches an archiving policy and File Management Appliance sends a FileMover API call to the Celerra blade to determine the last modified timestamp.
2. The source filename and NFS owner ID, GID, and mode bits are read.
3. A new file is created in the destination Data Domain repository. The name of the file and its location in the repository are generated by an algorithm; the original name and location are not used. The owner ID, GID, and mode bits are set identically to those of the source file for an NFS archiving task.
4. The file data is read from the source by File Management Appliance and written to the destination.
5.
The last modified timestamp of the destination file is set to match that of the original file on the source.
6. The FileMover API is used by File Management Appliance to convert the source file into a stub file; this results in the CIFS offline bit being set. The blade verifies that the last modified time read in step 1 matches the current last modified time of the file. If the times differ because the file has changed, the file will not be stubbed.
7. An entry is inserted into the File Management Appliance database to record the archiving operation.

With delayed stubbing enabled in the archiving policy, step 6 in the sequence above does not take place, but an entry is still placed in the FMA database. This entry records the time at which the file should be stubbed, calculated from the time of archiving plus the delayed stubbing period. A background archiving thread queries the FMA database on a daily basis for files whose recorded stubbing time has been exceeded by the current time on the FMA appliance's system clock. When such a file is found, step 6 is executed to stub it.

Overview of the recall process

Celerra blades take an active role in the recall process by reading data directly from secondary storage. During archiving of an NFS export, File Management Appliance uses the FileMover API to convert file data on primary storage to stub data. The stub data indicates the protocol and the unique path or ID that the blade should use to access the archived data. Without the involvement of File Management Appliance, file data archived to an NFS export on a Data Domain system is read back directly by Celerra blades when a recall is triggered.

The recall process is triggered by read and write I/O to files that have been archived. In response to read operations, blades offer fine-grained control over how file data is recalled when it is not stored locally and whether or not it is saved back to primary storage. This option is referred to as the read policy override, and there are four methods for processing recall events that can be specified for file systems, DHSM connections, and within stub data.

None: This method can be applied to file systems and DHSM connections. When data needs to be recalled, the read policy override for the associated DHSM connection is checked first. If it is set to none, the read policy override for the file system containing the stub is checked. If that is also set to none, the read policy specified within the stub data is used.

Full: All file data is read from secondary storage and saved to primary storage.

Passthrough: File data is read from secondary storage and passed to clients without being written to primary storage. This is the read policy specified by File Management Appliance when a stub file is created.
Partial: Individual blocks of file data requested by clients are read from secondary storage and written to primary storage.

Note: Write operations always trigger a Full recall.

Data archived to Data Domain systems

This section describes the architecture and mechanism for archiving and recall functionality in environments using FMA with Data Domain systems as an archiving destination.

Authentication and authorization

In order to archive data to an NFS repository on a Data Domain system, you must configure the IP addresses of the FMA with root and read/write access to the export hosting the repository. The steps to accomplish this are described in the Preparing Data Domain appliances for archiving section. Failure to provide authorization for FMA IP addresses will prevent archiving from completing successfully.

In addition, the IP addresses of all source Celerra blades must be given root and read/write access to the repository export. Failure to provide authorization for blade IP addresses will not prevent the successful completion of archiving operations. However, data unavailability will occur when a blade fails to service a recall request because it is not authorized to read the data from secondary storage.

Interoperability with quotas

Celerra servers support the enforcement of quotas, which limit the amount of data or number of files a user or group can create. Data archived from a quota directory/quota tree no longer counts toward a user's quota. As an example, a user with a 100 MB quota who creates a 100 MB file will not be able to create any additional data; however, after the file is archived, the user will be able to write an additional 100 MB of data. Note that only byte quotas are affected by archiving; file-count limits remain the same. For additional information on the effects of quotas on archiving and recall, please reference the Using EMC Celerra FileMover document available from EMC Powerlink.

Implementing an archiving strategy

The quality of an archiving strategy is often judged on how it affects users of primary storage tiers, since file data reaching the end of its lifecycle is likely to be stored on slower disks. Key factors include the number or percentage of archived files a user must work with and the impact on the average service response time for operations the user performs on those files. While the performance aspect is based mostly on environmental factors, the number of archived files a user must deal with is directly related to the quality of the archiving policies created by storage administrators. Good archiving policies qualify the probability that a file will be used in the future, whereas bad archiving policies result in excessive amounts of data being recalled from secondary storage.
Good archiving or tiering strategies minimize the impact on users of primary storage when archived data is accessed. File data should be kept on storage with performance levels appropriate to the stage of the information lifecycle it has reached; as data transitions through the stages of the lifecycle, its performance requirements decrease. FMA allows administrators to model and then implement archiving strategies to create tiered storage systems and realize lower TCO. It can be used to:

- Classify files sprawled across a NAS environment into distinct datasets
- Manage the locations where individual file data is stored
- Shrink large datasets and file systems by migrating file data to alternate locations (also decreasing backup window requirements)
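The first of these tasks, classifying files into datasets, can be prototyped outside FMA with a short scan of a directory tree. The sketch below is illustrative only: the bucket boundaries and the `classify_by_age` helper are hypothetical, and in practice FMA gathers this information through its data collection tasks rather than a script.

```python
# Hypothetical sketch: bucket the files in a dataset by last-modified age,
# the kind of classification FMA automates with data collection tasks.
import os
import time

def classify_by_age(root, bucket_days=(30, 90, 365)):
    """Return {bucket_label: (file_count, total_bytes)} for a directory tree."""
    now = time.time()
    labels = [f"<= {d}d" for d in bucket_days] + [f"> {bucket_days[-1]}d"]
    buckets = {label: [0, 0] for label in labels}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            st = os.stat(os.path.join(dirpath, name))
            age_days = (now - st.st_mtime) / 86400
            for limit, label in zip(bucket_days, labels):
                if age_days <= limit:
                    break
            else:
                label = labels[-1]          # older than the largest bucket
            buckets[label][0] += 1          # file count
            buckets[label][1] += st.st_size # total bytes
    return {label: tuple(v) for label, v in buckets.items()}
```

A report like this makes it easy to see what fraction of a dataset would be selected by an age-based policy before any archiving task is created.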

As a result of the fast and transparent archiving and recall architecture, these benefits can be attained without significantly impacting the performance experienced by end users.

Developing archiving policies

Classifying datasets

The key to designing a good archiving policy is being able to determine the probability that a file will need to be read or written over time. To make this determination, we must define the boundaries of a dataset (the primary storage location) and classify the files it contains. This may be an iterative process: while classifying the files in a dataset, it may become apparent that there is more than one distinct dataset, forcing the boundaries to be reset and classification to begin again.

Classification of files in a dataset requires knowledge from two sources and goes hand in hand with the creation of archiving policies. First, the metadata maintained on the primary storage tier provides valuable information about the last time files were modified and accessed, as well as file sizes, names, and locations within a dataset. Second, the end users of the storage can provide insight into how the dataset is used in general. Together, this information guides an administrator in designing archiving policies.

As an example, consider the NAS environment of a typical hospital. The first step in designing an archiving policy is to examine the layout of the existing primary storage tier, identify the contacts who have requested storage, and determine how the end users are accessing their storage. During this step, you may find that various departments throughout the hospital are using NAS to store digital images of X-rays, MRIs, and CT scans. Other departments are using it to store copies of patient bills and payments, or medical records. Still others are using the NAS to hold general home directory data.
In further discussion with the contacts from the hospital, it is discovered that the X-ray, MRI, and CT images are used for very different purposes and are typically needed for different lengths of time. Due to the ways in which the hospital applies X-ray technology, the images are typically required for the treatment of patients for about six months. However, MRI and CT images are only required for treatment over a period of a few days. Due to regulatory requirements, the hospital must keep bills and payments on file for five years; however, once a bill has been paid, the finance department reviews the copy at the end of the month and the copy is not typically accessed again. The stored medical records are not needed unless a patient visits the hospital again, and typically only the last two years of records are required. However, because the immediate availability of patient records is critical, the hospital contact requests that only medical records older than two years be archived. Given these conditions, a good archiving policy will need to account for one or more of the following:

- X-ray images must not be archived until they are six months old.
- MRI and CT images must not be archived until they are two weeks old.
- Copies of bills and payments must not be archived until they are 45 days old.
- Medical records must not be archived until they are two years old.
- Data in user home directories should not be archived if it has been accessed recently.

Note that the first four conditions are all based upon the length of time since a file was last modified, while the last condition is based upon when a file was last accessed. This is because the images and records affected by the first four conditions are static files that will not change over time, whereas the home directories contain dynamic files. Therefore, in designing an archiving policy for dynamic files, we assume that the more recently a file was last accessed, the more likely it is to be accessed again soon. When designing an archiving policy for static files, we assume that as data ages, it becomes less likely to be accessed again soon.

Metadata stored on NAS will typically be required in order to translate the conditions listed above into an archiving policy. As an example, assume that the X-ray, MRI, and CT images are stored on an NFS export of file system fs1 inside a directory named repository. Filenames for X-ray images begin with the letter X, MRIs begin with the letter M, and CT images begin with the letter C. Copies of bills and payments are stored in the scans directory in the same export on fs1. Medical records are stored in the mr directory and home directories are stored in the hd directory on file system fs2, which is both shared and exported. The medical records are written and accessed through NFS, and the home directories also utilize NFS.

In this scenario, an NFS archiving task would be developed for the repository directory on the fs1 file system utilizing an archiving policy with the following rules:

1. Archive a file if the name begins with X and the last modified timestamp is > six months old.
2. Archive a file if the name begins with M or C and the last modified timestamp is > two weeks old.
3. Or else do not archive the file.

A second NFS task would be developed for the scans directory on the fs1 file system utilizing an archiving policy with the following rules:

1. Archive a file if the last modified timestamp is > 45 days old.
2. Or else do not archive the file.

An NFS task would also be developed for the mr directory on the fs2 file system utilizing an archiving policy with the following rules:

1. Archive a file if the last modified timestamp is > two years old.
2. Or else do not archive the file.

In order to prepare a task for the user home directories, we must qualify what it means for a file to be accessed recently. We define recent as a particular length of time since a file was last accessed, and we then classify files in a dataset based on whether the last accessed timestamp falls within that length of time. As we increase or decrease the length of time, we select a larger or smaller percentage of the files in the dataset. With this premise in mind, we would develop an additional NFS task for the hd directory on the fs2 file system using an archiving policy with the following rules:

1. Archive a file if the last accessed timestamp is greater than some period of time.
2. Or else do not archive the file.

Now we are faced with the challenge of quantifying some period of time. A common method involves choosing a percentage of the total number of files or total amount of file data within a dataset that should be archived, and then adjusting the length of time used in the policy until that percentage is reached. This method utilizes data collection tasks and previews and is discussed elsewhere in this paper. It does not necessarily result in optimal performance for users or maximum reduction in TCO; however, for very dynamic datasets (where any file data can change or be accessed at any time with no discernible trends), it may be the preferred method for the sake of simplicity.

A more advanced method involves analyzing the last accessed times of all files within a dataset in order to gain a more accurate understanding of how the age of a last accessed timestamp correlates with the likelihood that a file will be used again in the future. For typical datasets, there is an exponential relationship between the age of a last accessed timestamp and the probability of future use of a file. An example of a dataset with this type of relationship would be copies of ISO images generated as part of software development practices. There may be many users accessing the latest build of the software (for instance, to perform QA testing). There may also be many users accessing the various general availability builds that were provided to customers. The general availability builds are all copied to a special directory path in the file system.
It is extremely unlikely that anyone will access software images older than 30 days for builds that were not provided to customers, as no testing or development would be occurring off that code base aside from the occasional bug fix. Given such a dataset, we can be very precise and design an archiving policy that balances both the age of a file since it was created and how recently it was accessed. We may design an archiving policy with this logic:

1. Do not archive any files in the specific directory containing the GA releases.
2. Archive all files not accessed within the last 45 days.
3. Archive all files created more than 45 days ago that have not been accessed in the last 7 days.

This policy will prevent GA software images from being archived while ensuring that data that hasn't been accessed in a long time is moved to secondary storage. In addition, the third rule ensures that a recalled software image will not remain on primary storage for another 45 days; after one week of not being accessed, the data will be moved back to secondary storage.
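As a sketch, the three rules above can be expressed as an ordered predicate where the first matching rule decides the outcome. The directory path and the `should_archive` helper below are hypothetical (FMA policies are defined through its administration interface, not in code), but the logic mirrors the policy:

```python
# Hypothetical sketch of the three-rule policy for the software-build dataset.
# Rules are evaluated in order; the first match decides the outcome.
import time

GA_DIR = "/fs/builds/ga"   # assumed path holding general availability builds
DAY = 86400                # seconds per day

def should_archive(path, created, last_accessed, now=None):
    """Return True if the file qualifies for archiving under the example policy."""
    now = now or time.time()
    # Rule 1: never archive GA release images.
    if path.startswith(GA_DIR + "/"):
        return False
    # Rule 2: archive anything not accessed in the last 45 days.
    if now - last_accessed > 45 * DAY:
        return True
    # Rule 3: archive files created more than 45 days ago that have sat unused
    # for 7 days, so recalled images return to secondary storage after a week.
    if now - created > 45 * DAY and now - last_accessed > 7 * DAY:
        return True
    return False
```

Evaluating the rules in order makes the exclusion for GA images take precedence over the age-based rules, exactly as the prose policy intends.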

Performance requirements

NAS clients may have performance requirements for the speed at which they can access archived data. As an example, a user trying to stream back a video file encoded at 1024 KB/s from NAS is unlikely to accept any lower data transfer rate. However, service levels can decrease significantly if archiving is not applied correctly. The key to meeting performance requirements is to ensure that data is stored on an appropriate tier of storage based upon its progress through the information lifecycle. Archiving policies should be developed to select files at particular stages of the information lifecycle and place them on appropriate storage tiers. You should determine the performance requirements for the data that will be selected as you develop the archiving policy. Each of the following points may affect recall performance:

- Will the secondary disk storage provide sufficient performance when clients need to access archived data?
- Will the primary and secondary Celerra blades have sufficient system resources (CPU, RAM) to provide the required performance levels for recall operations?
- Is there sufficient network bandwidth between the primary and secondary Celerra blades for recall operations? Will the network latency have a significant effect?
- What type of recall policy override will be used for Celerra DHSM connection strings? The Partial and Full recall modes require data to be written to the disks of primary storage, increasing the time it takes to service recall requests.

Creating File Matching Expressions

A File Matching Expression (FME) is one or more conditions used by File Management Appliance when data is being archived. A statement in an FME consists of an attribute, an operator, and a value. The attributes supported by the FMA policy engine are the CIFS/NFS last accessed and last modified timestamps, as well as the size of file data and the format of a filename.
Operators are applied to the specified file attribute and compared against a value in order to return a true or false result. As an example, the operators that can be applied to file size are greater than (>), less than (<), greater than or equal to (>=), and less than or equal to (<=). The value supplied when evaluating the file size attribute is a number of bytes, allowing rules to be crafted that select files in varying ranges of sizes. These same operators can be applied to the last accessed and last modified timestamps, but the value is interpreted as a period of time, allowing rules to be crafted that select files that have not been accessed or modified during that period. When using the filename attribute, operators allow exact filenames to be specified ("equals") or compared to a regular expression ("matches regex"). An FME can consist of one or more statements. When multiple statements are part of a single FME, the statements are logically combined using the AND operator.
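Conceptually, an FME can be modeled as a list of (attribute, operator, value) statements whose results are combined with AND. The following sketch is a simplified illustration of that structure, not FMA's actual implementation; the attribute names and the regex-matching semantics here are assumptions:

```python
# Hypothetical model of a File Matching Expression: each statement is an
# (attribute, operator, value) triple, and all statements must match (AND).
import operator
import re

OPS = {
    ">": operator.gt, "<": operator.lt,
    ">=": operator.ge, "<=": operator.le,
    "equals": operator.eq,
    "matches regex": lambda attr, pattern: re.match(pattern, attr) is not None,
}

def fme_matches(statements, file_attrs):
    """Evaluate an FME (a list of statements) against a dict of file attributes."""
    return all(OPS[op](file_attrs[attr], value) for attr, op, value in statements)

# Example FME: files larger than 1 MiB whose names start with "X"
fme = [
    ("size", ">", 1024 * 1024),
    ("name", "matches regex", r"X.*"),
]
```

Because the statements are AND-combined, a file must satisfy every statement in the list to be selected by the expression.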


The safer, easier way to help you pass any IT exams. Exam : E20-895. Backup Recovery - Avamar Expert Exam for Implementation Engineers. http://www.51- pass.com Exam : E20-895 Title : Backup Recovery - Avamar Expert Exam for Implementation Engineers Version : Demo 1 / 7 1.An EMC Avamar customer is currently using a 2 TB Avamar Virtual Edition

More information

Interworks. Interworks Cloud Platform Installation Guide

Interworks. Interworks Cloud Platform Installation Guide Interworks Interworks Cloud Platform Installation Guide Published: March, 2014 This document contains information proprietary to Interworks and its receipt or possession does not convey any rights to reproduce,

More information

EMC Data Domain Boost for Oracle Recovery Manager (RMAN)

EMC Data Domain Boost for Oracle Recovery Manager (RMAN) White Paper EMC Data Domain Boost for Oracle Recovery Manager (RMAN) Abstract EMC delivers Database Administrators (DBAs) complete control of Oracle backup, recovery, and offsite disaster recovery with

More information

VMware vsphere Data Protection Evaluation Guide REVISED APRIL 2015

VMware vsphere Data Protection Evaluation Guide REVISED APRIL 2015 VMware vsphere Data Protection REVISED APRIL 2015 Table of Contents Introduction.... 3 Features and Benefits of vsphere Data Protection... 3 Requirements.... 4 Evaluation Workflow... 5 Overview.... 5 Evaluation

More information

Setting Up Resources in VMware Identity Manager

Setting Up Resources in VMware Identity Manager Setting Up Resources in VMware Identity Manager VMware Identity Manager 2.4 This document supports the version of each product listed and supports all subsequent versions until the document is replaced

More information

EMC Backup and Recovery for Microsoft Exchange 2007 SP2

EMC Backup and Recovery for Microsoft Exchange 2007 SP2 EMC Backup and Recovery for Microsoft Exchange 2007 SP2 Enabled by EMC Celerra and Microsoft Windows 2008 Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the

More information

ITCertMaster. http://www.itcertmaster.com. Safe, simple and fast. 100% Pass guarantee! IT Certification Guaranteed, The Easy Way!

ITCertMaster. http://www.itcertmaster.com. Safe, simple and fast. 100% Pass guarantee! IT Certification Guaranteed, The Easy Way! ITCertMaster Safe, simple and fast. 100% Pass guarantee! http://www.itcertmaster.com IT Certification Guaranteed, The Easy Way! Exam : E20-895 Title : Backup Recovery - Avamar Expert Exam for Implementation

More information

WHY SECURE MULTI-TENANCY WITH DATA DOMAIN SYSTEMS?

WHY SECURE MULTI-TENANCY WITH DATA DOMAIN SYSTEMS? Why Data Domain Series WHY SECURE MULTI-TENANCY WITH DATA DOMAIN SYSTEMS? Why you should take the time to read this paper Provide data isolation by tenant (Secure logical data isolation for each tenant

More information

Quick Start - Virtual Server idataagent (Microsoft/Hyper-V)

Quick Start - Virtual Server idataagent (Microsoft/Hyper-V) Page 1 of 31 Quick Start - Virtual Server idataagent (Microsoft/Hyper-V) TABLE OF CONTENTS OVERVIEW Introduction Key Features Complete Virtual Machine Protection Granular Recovery of Virtual Machine Data

More information

VMware vsphere Data Protection 6.0

VMware vsphere Data Protection 6.0 VMware vsphere Data Protection 6.0 TECHNICAL OVERVIEW REVISED FEBRUARY 2015 Table of Contents Introduction.... 3 Architectural Overview... 4 Deployment and Configuration.... 5 Backup.... 6 Application

More information

EMC VNXe3200 UFS64 FILE SYSTEM

EMC VNXe3200 UFS64 FILE SYSTEM White Paper EMC VNXe3200 UFS64 FILE SYSTEM A DETAILED REVIEW Abstract This white paper explains the UFS64 File System architecture, functionality, and features available in the EMC VNXe3200 storage system.

More information

CONFIGURATION GUIDELINES: EMC STORAGE FOR PHYSICAL SECURITY

CONFIGURATION GUIDELINES: EMC STORAGE FOR PHYSICAL SECURITY White Paper CONFIGURATION GUIDELINES: EMC STORAGE FOR PHYSICAL SECURITY DVTel Latitude NVMS performance using EMC Isilon storage arrays Correct sizing for storage in a DVTel Latitude physical security

More information

Quick Start - NetApp File Archiver

Quick Start - NetApp File Archiver Page 1 of 19 Quick Start - NetApp File Archiver TABLE OF CONTENTS OVERVIEW Introduction Key Features Terminology SYSTEM REQUIREMENTS DEPLOYMENT Installation Method 1: Interactive Install Method 2: Install

More information

Network Attached Storage. Jinfeng Yang Oct/19/2015

Network Attached Storage. Jinfeng Yang Oct/19/2015 Network Attached Storage Jinfeng Yang Oct/19/2015 Outline Part A 1. What is the Network Attached Storage (NAS)? 2. What are the applications of NAS? 3. The benefits of NAS. 4. NAS s performance (Reliability

More information

EMC DATA DOMAIN ENCRYPTION A Detailed Review

EMC DATA DOMAIN ENCRYPTION A Detailed Review White Paper EMC DATA DOMAIN ENCRYPTION A Detailed Review Abstract The proliferation of publicized data loss, coupled with new governance and compliance regulations, is driving the need for customers to

More information

RSA Authentication Manager 7.1 to 8.1 Migration Guide: Upgrading RSA SecurID Appliance 3.0 On Existing Hardware

RSA Authentication Manager 7.1 to 8.1 Migration Guide: Upgrading RSA SecurID Appliance 3.0 On Existing Hardware RSA Authentication Manager 7.1 to 8.1 Migration Guide: Upgrading RSA SecurID Appliance 3.0 On Existing Hardware Contact Information Go to the RSA corporate website for regional Customer Support telephone

More information

IBRIX Fusion 3.1 Release Notes

IBRIX Fusion 3.1 Release Notes Release Date April 2009 Version IBRIX Fusion Version 3.1 Release 46 Compatibility New Features Version 3.1 CLI Changes RHEL 5 Update 3 is supported for Segment Servers and IBRIX Clients RHEL 5 Update 2

More information

Backup and Recovery for SAP Environments using EMC Avamar 7

Backup and Recovery for SAP Environments using EMC Avamar 7 White Paper Backup and Recovery for SAP Environments using EMC Avamar 7 Abstract This white paper highlights how IT environments deploying SAP can benefit from efficient backup with an EMC Avamar solution.

More information

VMware Mirage Web Manager Guide

VMware Mirage Web Manager Guide Mirage 5.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document,

More information

Release Notes. LiveVault. Contents. Version 7.65. Revision 0

Release Notes. LiveVault. Contents. Version 7.65. Revision 0 R E L E A S E N O T E S LiveVault Version 7.65 Release Notes Revision 0 This document describes new features and resolved issues for LiveVault 7.65. You can retrieve the latest available product documentation

More information

An Oracle Technical White Paper May 2015. How to Configure Kaspersky Anti-Virus Software for the Oracle ZFS Storage Appliance

An Oracle Technical White Paper May 2015. How to Configure Kaspersky Anti-Virus Software for the Oracle ZFS Storage Appliance An Oracle Technical White Paper May 2015 How to Configure Kaspersky Anti-Virus Software for the Oracle ZFS Storage Appliance Table of Contents Introduction... 2 How VSCAN Works... 3 Installing Kaspersky

More information

Introduction to Virtual Datacenter

Introduction to Virtual Datacenter Oracle Enterprise Manager Ops Center Configuring a Virtual Datacenter 12c Release 1 (12.1.1.0.0) E27347-01 June 2012 This guide provides an end-to-end example for how to use Oracle Enterprise Manager Ops

More information

Integration Guide. EMC Data Domain and Silver Peak VXOA 4.4.10 Integration Guide

Integration Guide. EMC Data Domain and Silver Peak VXOA 4.4.10 Integration Guide Integration Guide EMC Data Domain and Silver Peak VXOA 4.4.10 Integration Guide August 2013 Copyright 2013 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate

More information

NSS Volume Data Recovery

NSS Volume Data Recovery NSS Volume Data Recovery Preliminary Document September 8, 2010 Version 1.0 Copyright 2000-2010 Portlock Corporation Copyright 2000-2010 Portlock Corporation Page 1 of 20 The Portlock storage management

More information

VMware Data Recovery. Administrator's Guide EN-000193-00

VMware Data Recovery. Administrator's Guide EN-000193-00 Administrator's Guide EN-000193-00 You can find the most up-to-date technical documentation on the VMware Web site at: http://www.vmware.com/support/ The VMware Web site also provides the latest product

More information

LifeSize UVC Video Center Deployment Guide

LifeSize UVC Video Center Deployment Guide LifeSize UVC Video Center Deployment Guide November 2013 LifeSize UVC Video Center Deployment Guide 2 LifeSize UVC Video Center LifeSize UVC Video Center records and streams video sent by LifeSize video

More information

Setting Up a Unisphere Management Station for the VNX Series P/N 300-011-796 Revision A01 January 5, 2010

Setting Up a Unisphere Management Station for the VNX Series P/N 300-011-796 Revision A01 January 5, 2010 Setting Up a Unisphere Management Station for the VNX Series P/N 300-011-796 Revision A01 January 5, 2010 This document describes the different types of Unisphere management stations and tells how to install

More information

EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, Symmetrix Management Console, and VMware vcenter Converter

EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, Symmetrix Management Console, and VMware vcenter Converter EMC Virtual Infrastructure for SAP Enabled by EMC Symmetrix with Auto-provisioning Groups, VMware vcenter Converter A Detailed Review EMC Information Infrastructure Solutions Abstract This white paper

More information

EMC NetWorker Module for Microsoft Applications Release 2.3. Application Guide P/N 300-011-105 REV A02

EMC NetWorker Module for Microsoft Applications Release 2.3. Application Guide P/N 300-011-105 REV A02 EMC NetWorker Module for Microsoft Applications Release 2.3 Application Guide P/N 300-011-105 REV A02 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright

More information

EMC VNX Series. Using VNX File Deduplication and Compression. Release 7.0 P/N 300-011-809 REV A01

EMC VNX Series. Using VNX File Deduplication and Compression. Release 7.0 P/N 300-011-809 REV A01 EMC VNX Series Release 7.0 Using VNX File Deduplication and Compression P/N 300-011-809 REV A01 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2009-2011

More information

VMware vsphere Data Protection 5.8 TECHNICAL OVERVIEW REVISED AUGUST 2014

VMware vsphere Data Protection 5.8 TECHNICAL OVERVIEW REVISED AUGUST 2014 VMware vsphere Data Protection 5.8 TECHNICAL OVERVIEW REVISED AUGUST 2014 Table of Contents Introduction.... 3 Features and Benefits of vsphere Data Protection... 3 Additional Features and Benefits of

More information

EMC NetWorker. Licensing Guide. Release 8.0 P/N 300-013-596 REV A01

EMC NetWorker. Licensing Guide. Release 8.0 P/N 300-013-596 REV A01 EMC NetWorker Release 8.0 Licensing Guide P/N 300-013-596 REV A01 Copyright (2011-2012) EMC Corporation. All rights reserved. Published in the USA. Published June, 2012 EMC believes the information in

More information

Domain Management with EMC Unisphere for VNX

Domain Management with EMC Unisphere for VNX White Paper Domain Management with EMC Unisphere for VNX EMC Unified Storage Solutions Abstract EMC Unisphere software manages EMC VNX, EMC Celerra, and EMC CLARiiON storage systems. This paper discusses

More information

OnCommand Performance Manager 1.1

OnCommand Performance Manager 1.1 OnCommand Performance Manager 1.1 Installation and Setup Guide For Red Hat Enterprise Linux NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501

More information

How to Backup and Restore a VM using Veeam

How to Backup and Restore a VM using Veeam How to Backup and Restore a VM using Veeam Table of Contents Introduction... 3 Assumptions... 3 Add ESXi Server... 4 Backup a VM... 6 Restore Full VM... 12 Appendix A: Install Veeam Backup & Replication

More information

Migrating to vcloud Automation Center 6.1

Migrating to vcloud Automation Center 6.1 Migrating to vcloud Automation Center 6.1 vcloud Automation Center 6.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a

More information

EMC DOCUMENTUM xplore 1.1 DISASTER RECOVERY USING EMC NETWORKER

EMC DOCUMENTUM xplore 1.1 DISASTER RECOVERY USING EMC NETWORKER White Paper EMC DOCUMENTUM xplore 1.1 DISASTER RECOVERY USING EMC NETWORKER Abstract The objective of this white paper is to describe the architecture of and procedure for configuring EMC Documentum xplore

More information

FILE ARCHIVAL USING SYMANTEC ENTERPRISE VAULT WITH EMC ISILON

FILE ARCHIVAL USING SYMANTEC ENTERPRISE VAULT WITH EMC ISILON Best Practices Guide FILE ARCHIVAL USING SYMANTEC ENTERPRISE VAULT WITH EMC ISILON Abstract This white paper outlines best practices for deploying EMC Isilon OneFS scale-out storage with Symantec Enterprise

More information

Metalogix SharePoint Backup. Advanced Installation Guide. Publication Date: August 24, 2015

Metalogix SharePoint Backup. Advanced Installation Guide. Publication Date: August 24, 2015 Metalogix SharePoint Backup Publication Date: August 24, 2015 All Rights Reserved. This software is protected by copyright law and international treaties. Unauthorized reproduction or distribution of this

More information

Cisco and EMC Solutions for Application Acceleration and Branch Office Infrastructure Consolidation

Cisco and EMC Solutions for Application Acceleration and Branch Office Infrastructure Consolidation Solution Overview Cisco and EMC Solutions for Application Acceleration and Branch Office Infrastructure Consolidation IT organizations face challenges in consolidating costly and difficult-to-manage branch-office

More information

TECHNICAL PAPER. Veeam Backup & Replication with Nimble Storage

TECHNICAL PAPER. Veeam Backup & Replication with Nimble Storage TECHNICAL PAPER Veeam Backup & Replication with Nimble Storage Document Revision Date Revision Description (author) 11/26/2014 1. 0 Draft release (Bill Roth) 12/23/2014 1.1 Draft update (Bill Roth) 2/20/2015

More information

IM and Presence Disaster Recovery System

IM and Presence Disaster Recovery System Disaster Recovery System, page 1 Access the Disaster Recovery System, page 2 Back up data in the Disaster Recovery System, page 3 Restore scenarios, page 9 Backup and restore history, page 15 Data authentication

More information

CA ARCserve Backup for Windows

CA ARCserve Backup for Windows CA ARCserve Backup for Windows Agent for Microsoft SharePoint Server Guide r15 This documentation and any related computer software help programs (hereinafter referred to as the "Documentation") are for

More information

WHITE PAPER. Dedupe-Centric Storage. Hugo Patterson, Chief Architect, Data Domain. Storage. Deduplication. September 2007

WHITE PAPER. Dedupe-Centric Storage. Hugo Patterson, Chief Architect, Data Domain. Storage. Deduplication. September 2007 WHITE PAPER Dedupe-Centric Storage Hugo Patterson, Chief Architect, Data Domain Deduplication Storage September 2007 w w w. d a t a d o m a i n. c o m - 2 0 0 7 1 DATA DOMAIN I Contents INTRODUCTION................................

More information

Microsoft Exchange 2003 Disaster Recovery Operations Guide

Microsoft Exchange 2003 Disaster Recovery Operations Guide Microsoft Exchange 2003 Disaster Recovery Operations Guide Microsoft Corporation Published: December 12, 2006 Author: Exchange Server Documentation Team Abstract This guide provides installation and deployment

More information

VMware Site Recovery Manager with EMC RecoverPoint

VMware Site Recovery Manager with EMC RecoverPoint VMware Site Recovery Manager with EMC RecoverPoint Implementation Guide EMC Global Solutions Centers EMC Corporation Corporate Headquarters Hopkinton MA 01748-9103 1.508.435.1000 www.emc.com Copyright

More information

Installing Windows XP Professional

Installing Windows XP Professional CHAPTER 3 Installing Windows XP Professional After completing this chapter, you will be able to: Plan for an installation of Windows XP Professional. Use a CD to perform an attended installation of Windows

More information

CA ARCserve Backup for Windows

CA ARCserve Backup for Windows CA ARCserve Backup for Windows Agent for Sybase Guide r16 This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as the Documentation

More information

Data ONTAP 8.2. MultiStore Management Guide For 7-Mode. NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S.

Data ONTAP 8.2. MultiStore Management Guide For 7-Mode. NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Data ONTAP 8.2 MultiStore Management Guide For 7-Mode NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1(408) 822-6000 Fax: +1(408) 822-4501 Support telephone: +1(888) 4-NETAPP Web:

More information

EMC AVAMAR 6.0 GUIDE FOR IBM DB2 P/N 300-011-636 REV A01 EMC CORPORATION CORPORATE HEADQUARTERS: HOPKINTON, MA 01748-9103 1-508-435-1000 WWW.EMC.

EMC AVAMAR 6.0 GUIDE FOR IBM DB2 P/N 300-011-636 REV A01 EMC CORPORATION CORPORATE HEADQUARTERS: HOPKINTON, MA 01748-9103 1-508-435-1000 WWW.EMC. EMC AVAMAR 6.0 FOR IBM DB2 GUIDE P/N 300-011-636 REV A01 EMC CORPORATION CORPORATE HEADQUARTERS: HOPKINTON, MA 01748-9103 1-508-435-1000 WWW.EMC.COM Copyright and Trademark Notices Copyright 2002-2011

More information

GRAVITYZONE HERE. Deployment Guide VLE Environment

GRAVITYZONE HERE. Deployment Guide VLE Environment GRAVITYZONE HERE Deployment Guide VLE Environment LEGAL NOTICE All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including

More information

EMC Celerra Version 5.6 Technical Primer: Public Key Infrastructure Support

EMC Celerra Version 5.6 Technical Primer: Public Key Infrastructure Support EMC Celerra Version 5.6 Technical Primer: Public Key Infrastructure Support Technology Concepts and Business Considerations Abstract Encryption plays an increasingly important role in IT infrastructure

More information

Achieving Storage Efficiency through EMC Celerra Data Deduplication

Achieving Storage Efficiency through EMC Celerra Data Deduplication Achieving Storage Efficiency through EMC Celerra Data Deduplication Applied Technology Abstract This white paper describes Celerra Data Deduplication, a feature of EMC Celerra Network Server that increases

More information

Symantec NetBackup AdvancedDisk Storage Solutions Guide. Release 7.5

Symantec NetBackup AdvancedDisk Storage Solutions Guide. Release 7.5 Symantec NetBackup AdvancedDisk Storage Solutions Guide Release 7.5 21220064 Symantec NetBackup AdvancedDisk Storage Solutions Guide The software described in this book is furnished under a license agreement

More information

Oracle VM Server Recovery Guide. Version 8.2

Oracle VM Server Recovery Guide. Version 8.2 Oracle VM Server Recovery Guide Version 8.2 Oracle VM Server for x86 Recovery Guide The purpose of this document is to provide the steps necessary to perform system recovery of an Oracle VM Server for

More information