CommVault Building Block Configuration White Paper
CommVault Building Block Configuration White Paper, June 2011
Copyright 2011 CommVault Systems, Incorporated. All rights reserved. CommVault, CommVault and logo, the "CV" logo, CommVault Systems, Solving Forward, SIM, Singular Information Management, Simpana, CommVault Galaxy, Unified Data Management, QiNetix, Quick Recovery, QR, CommNet, GridStor, Vault Tracker, InnerVault, Quick Snap, QSnap, SnapProtect, Recovery Director, CommServe, CommCell, ROMS, and CommValue are trademarks or registered trademarks of CommVault Systems, Inc. All other third party brands, products, service names, trademarks, or registered service marks are the property of and used to identify the products or services of their respective owners. All specifications are subject to change without notice. The information in this document has been reviewed and is believed to be accurate. However, neither CommVault Systems, Inc. nor its affiliates assume any responsibility for inaccuracies, errors, or omissions that may be contained herein. In no event will CommVault Systems, Inc. or its affiliates be liable for direct, indirect, special, incidental, or consequential damages resulting from any defect or omission in this document, even if advised of the possibility of such damages. CommVault Systems, Inc. reserves the right to make improvements or changes to this document and the information contained within, and to the products and services described, at any time, without notice or obligation.

June 2011. Content in this document is subject to change without notice.
Contents

1. Introduction: What is a Building Block?
   1.1. Physical Layer
        - At a Glance: Specifications and Configurations
        - Examples of Servers that Meet Building Block Requirements
   1.2. Logical View
        - Average Throughput
        - Deduplication Databases
        - Number of Deduplication Databases per Building Block
        - Deduplication Building Block Size Settings
        - Managing Multiple DDBs and Hardware Requirements
        - Disk Space Required for DDBs
        - Disk Library
   1.3. Disk Attachment Considerations
2. Global Deduplication Storage Policy
   2.1. Block Size
   2.2. Disk Libraries
   2.3. Remote Offices
   2.4. Global Deduplication Storage Policy Caveats
   2.5. Streams
   2.6. Data Path Configuration
   2.7. Use Store Priming Option with Source-Side Deduplication
3. Deduplication Database Availability Considerations
4. Building Block Design
   4.1. Choosing the Right Building Block
   4.2. Building Block Configuration Examples
5. Conclusion
Introduction: What is a Building Block?
1. What is a Building Block?

A large data center requires a data management solution that is flexible, scalable and hardware agnostic. This paper illustrates how the CommVault Building Block Data Management Solution delivers on all three. Building Blocks are flexible because they can grow by adding mount paths, and they can accommodate different retentions and different data types within the same deduplication framework. Building Blocks are scalable because they can grow to hundreds of TB of unique data; by staggering full backups, they can protect large amounts of data with minimal infrastructure, which holds down cost and liability. Building Blocks are hardware agnostic because they require hardware classes instead of specific models. Within this paper we describe six examples of adequate servers from three major
manufacturers. Additionally, the solution is completely flexible with respect to the storage infrastructure, including disk types, connectivity and brand.

A Building Block is a modular approach to data management. A single Building Block is capable of managing 64 TB of deduplicated data within a Disk Library, and each Building Block provides processing throughput of at least 2 TB/hr. The Deduplication Building Block design comprises two layers: the physical layer and the logical layer. The physical layer is the actual hardware specification and configuration. The logical layer is the CommCell configuration that controls that hardware.

Physical Layer: there are FOUR design considerations that make up the Building Block's physical layer:
- Server
- Data Throughput Rate
- Disk Library Hardware
- Deduplication Database (DDB) LUN

Logical Layer: there are SEVEN aspects that comprise the Building Block logical layer:
- Average Throughput
- Deduplication Databases
- Number of Deduplication Databases per Building Block
- Deduplication Building Block Size Settings
- Managing Multiple Global Deduplication Databases and Hardware Requirements
- Disk Space Required for the Deduplication Database
- Disk Library
1.1. The Physical Layer

The physical layer comprises the hardware of the solution. In addition to servers, storage and networking play a part in the physical layer.

At a Glance: Specifications and Configurations

Minimum Server Specifications
- 64-bit OS (Windows/Linux)
- 2 CPUs, quad core
- 32 GB RAM

Minimum Data Throughput Port Specifications
- Option 1 (recommended): 1 exclusive 10 GigE port
- Option 2: 4 x 1 GigE ports with NIC teaming on the host
Disk Library Configuration

Option 1 (recommended): Network Attached Storage (NAS)
- Exclusive 10 GigE port
- 7.2K RPM SAS spindles

Option 2: SAS/FC/iSCSI
- SAS: 6 Gbps HBA
- FC: 8 Gbps HBA
- iSCSI: exclusive 10 GigE NIC
- 7.2K RPM SATA/SAS spindles
- Minimum RAID 5, with RAID groups of 7+ spindles each
- 2 TB LUNs, up to 50 LUNs
- Dedicated storage adaptors

Minimum DDB LUN Specifications

Note: the LUN hosting the DDB should be 3x the size of the active DDB in order to allow for recovery point reconstruction operations.

Option 1: Internal Disk
- 6 Gbps SAS HBA
- DDB volume: 15K RPM SAS spindles; RAID 0 with 4 spindles, or RAID 10 with 8 spindles

Option 2: SAN Disk
- FC: 8 Gbps HBA, or iSCSI: exclusive 10 GigE NIC
- DDB volume: 15K RPM physical disks; RAID 0 with 4 spindles, or RAID 10 with 8 spindles
Examples of Servers that Meet Building Block Requirements

Servers
- Dell R710 with H700 and H800 controllers and MD storage
- HP DL380 G6 with 480i internal controller and FC / 10 GigE iSCSI / 6 Gbps SAS for external storage
- IBM x3550 or above with internal SAS controller and external SAS / FC / 10 GigE iSCSI controller

Blades
- Dell M610 blades in a Dell M1000e enclosure with 10 GigE backplane, with EqualLogic or MD3000i storage or an 8 Gbps FC fabric
- HP BL460 or BL600 blades in an HP c7000 enclosure with 8 Gbps FC fabric and 10 GigE Ethernet fabric
- IBM JS, PS or HS blade servers with FC / 10 GigE fabrics
1.2. The Logical Layer

The logical layer is the software and configuration that controls the hardware. A properly configured logical layer allows the physical layer to achieve its potential.

Average Throughput

The Building Block has a minimum throughput rate of 2 TB/hr, up to a maximum of 4 TB/hr. A single Building Block can therefore transfer between 48 TB and 96 TB in a 24-hour period. A typical streaming backup window is 8 hours, which allows a Building Block to transfer 16 TB to 32 TB of data. The following table shows the amount of data transferred for representative windows and throughputs. Most designs should be scaled from an assumption of 2 TB/hr, assuming a configuration as recommended in this document.

Backup Window | 2 TB/hr | 3 TB/hr | 4 TB/hr
8 hours       | 16 TB   | 24 TB   | 32 TB
24 hours      | 48 TB   | 72 TB   | 96 TB
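The window arithmetic above reduces to a single multiplication. A minimal sketch (the 2 to 4 TB/hr rates are the figures quoted in this section):

```python
def window_capacity_tb(window_hours, rate_tb_per_hr):
    """Total TB a Building Block can transfer in a backup window."""
    return window_hours * rate_tb_per_hr

# 8-hour window at the minimum and maximum quoted rates
low = window_capacity_tb(8, 2)    # 16 TB
high = window_capacity_tb(8, 4)   # 32 TB

# A full 24-hour day spans 48 TB to 96 TB, matching the text above
day_low = window_capacity_tb(24, 2)
day_high = window_capacity_tb(24, 4)
```

Any row of the table above can be reproduced by plugging in the window length and rate.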
Deduplication Database

The Simpana v9 deduplication engine utilizes a multi-threaded C-Tree server mode database. This database can scale to a maximum of 750 million records. This record limit is equivalent to 90 TB of data residing on the Disk Library, or 900 TB of application data assuming a 10:1 deduplication ratio. The DDB has a recommended maximum of 50 concurrent connections, or streams. Any configuration above 50 concurrent DDB connections will have a negative impact on the Building Block's performance and scalability.

Deduplication Database Characteristics
- Database: C-Tree server mode
- Threading: multi-threaded
- DDB rows: 500 million to a maximum of 750 million records
- Capacity: up to 90 TB of unique block data
- Application data: up to 900 TB at a 10:1 deduplication ratio
- Connections: 50 concurrent

Number of Deduplication Databases per Building Block

CommVault recommends hosting a single deduplication database per Building Block. However, certain workloads may require higher concurrency but lower capacity; the Simpana Desktop/Laptop Solution is a perfect example of this workload. For such workloads it is possible to host up to 2 DDBs per Building Block, known as DDB Extended Mode. The additional DDB allows a total of 100 streams per Building Block, enabling higher concurrency for these workloads. In DDB Extended Mode the two DDBs share the Building Block's total capacity; for example, one DDB may scale to 20 TB of raw data and the other to 40 TB. There is no way to easily predict the size to which a DDB will grow. In this configuration it is a best practice to stagger the backups so that only one DDB is utilized at a time; this ensures that each DDB scales closer to its full raw data capacity.
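The 750-million-record ceiling lines up with the quoted 90 TB figure if each record tracks one unique block at the 128 K deduplication block size. A rough back-of-the-envelope check (the one-record-per-unique-block mapping is an assumption for illustration; the engine's internal record layout is not documented here):

```python
# 750 million unique-block records at a 128 KiB block size
records = 750_000_000
block_bytes = 128 * 1024          # 128 KiB deduplication block

store_tib = records * block_bytes / 2**40   # unique data on the Disk Library
app_tib = store_tib * 10                     # application data at 10:1

# store_tib comes out just under 90 TiB, consistent with the
# 90 TB Disk Library / 900 TB application data figures above
```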
Deduplication Block Size Setting

It is a CommVault best practice to configure Simpana v9 Deduplication Storage Policy block sizes at a minimum of 128 K. This recommendation covers all data types other than databases larger than 100 GB. Large databases can be configured at 256 K (1 TB to 5 TB) or 512 K (> 5 TB) block sizes and should be configured WITHOUT software compression enabled at the Storage Policy level. This setting represents the block size that the data stream is cut into. In Simpana v9, enhancements have been made to eliminate the need for separate Storage Policies per data type: any block from 16 K up to the configured block size will automatically be hashed and checked into the deduplication database, eliminating the complexity of multiple storage policies per data type.

Managing Multiple DDBs and the Hardware Requirements

The scalability of a DDB is highly dependent upon the deduplication block size; the larger the block size, the more data can be stored in the Disk Library. Assuming a standard block size of 128 K, a DDB using a single store can comfortably grow to 64 TB without a performance penalty. Using this conservative number as a guide, one can predict the number of DDBs required for a given amount of unique data.

By default, the software will generate hashes for blocks that are smaller than the specified size, down to a minimum size of 16 K. In Simpana v9, the minimum can be reduced further with the SignatureMinFallbackDataSize registry key, from 16 K down to 4 K. With a 128 K block storage policy, any block of 4 K or larger will then be checked into the deduplication database. This registry key is ideal for Client-Side Deduplication or a network-optimized DASH copy over a slow network.

SignatureMinFallbackDataSize
- Location: MediaAgent
- Type: DWORD
- Value: 4096

This registry key should be installed on the MediaAgent or client performing signature generation.
It can be pushed out via the CommCell GUI; the MediaAgent subkey will be created on the client.
Disk Space Required for the DDB

The amount of disk space required for the DDB will depend on the amount of data protected, deduplication ratios, retention, and change rate. This information should be placed in the Storage_Policy_Plan table of the Deduplication Calculator. The top number (in yellow outline) is the total amount of disk space required for the active DDBs. The lower number (in blue outline) is the individual total used by each Storage Policy copy. These numbers do not take into account the DDB recovery point or the working space required, which is 3 times the store size.

Disk Library

A best practice is to create a single Disk Library for deduplication with no more than three Building Blocks, as illustrated in the following table.

Data per DDB | Total in the Disk Library | Application data at a 10:1 ratio | Throughput
60 TB        | 180 TB                    | 1.8 PB                           | 6 TB/hr
90 TB        | 270 TB                    | 2.7 PB                           | 6 TB/hr

Non-deduplicated data should back up to a separate Disk Library whenever possible. Sequestering the data types into separate Disk Libraries allows for easier reporting on the overall deduplication savings. Mixing deduplicated and non-deduplicated data in a single library will skew the overall disk usage information and make space usage prediction difficult.
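The sizing rows above follow from multiplying the per-DDB figure by the three Building Blocks, the 10:1 deduplication ratio, and the 2 TB/hr per-block throughput. A sketch of that arithmetic (not an official sizing tool):

```python
def library_sizing(tb_per_ddb, building_blocks=3, dedupe_ratio=10,
                   tb_per_hr_per_bb=2):
    """Aggregate capacity and throughput for one deduplication Disk Library."""
    total_unique = tb_per_ddb * building_blocks       # TB of unique data on disk
    application = total_unique * dedupe_ratio          # TB of protected application data
    throughput = tb_per_hr_per_bb * building_blocks    # combined TB/hr
    return total_unique, application, throughput

# conservative row: 60 TB per DDB -> 180 TB library, 1.8 PB application, 6 TB/hr
# maximum row:      90 TB per DDB -> 270 TB library, 2.7 PB application, 6 TB/hr
conservative = library_sizing(60)
maximum = library_sizing(90)
```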
Each Building Block can support 100 TB of disk storage. The disk storage should be partitioned into 2 to 4 TB LUNs and configured as mount points in the operating system. This equates to 50 x 2 TB LUNs, 33 x 3 TB LUNs, or 25 x 4 TB LUNs. This LUN size is recommended to allow for ease of maintenance of the Disk Library; additionally, a larger array of smaller LUNs reduces the impact of the failure of any given LUN. Additional disk capacity should be added in 2 to 4 TB LUNs matching the original LUN configuration where possible. When GridStor is used, apply an equal amount of capacity across all MediaAgents. For example, three MediaAgents would require a total of 6 TB, 2 TB per Building Block.

It is not recommended to use third-party real-time disk defragmentation software on a Disk Library or DDB LUN. This can cause locks on files that are being accessed by backup, restore, DASH copy and data aging operations. Third-party software can be used to defragment a mount path after it has been taken offline. Antivirus software should also be configured NOT to scan CommVault Disk Libraries and DDB LUNs.

1.3. Disk Attachment Considerations

Mount paths can be of two types: NAS paths (Disk Library over shared storage) or direct attached block storage (Disk Library over direct attached storage). With direct attached block storage (SAN), the mount paths are locally attached to the MediaAgent. With NAS, the disk storage is on the network and the MediaAgent connects via a network protocol.

The NAS mount path is the preferred method for a mount path configuration. It provides several benefits over the direct attached configuration. If a MediaAgent goes offline, the Disk Library is still accessible by the other MediaAgents in the library; with direct attached, if a MediaAgent is lost then the Disk Library is offline. Secondly, all network communication to the mount path occurs from the MediaAgent to the NAS device.
During restores and DASH copies there is no intermediate communication between MediaAgents; with direct attached, all communication must pass through the hosting MediaAgent in order to service the DASH copy or restore. Backup activities are not affected by the mount path choice.

In a direct attached design, configure the mount paths as mount points instead of drive letters. This allows larger-capacity solutions to configure more mount paths than there are drive letters. Smaller-capacity sites can use drive letters as long as they do not exceed the number of available drive letters. From an administration perspective it is better to stick with either drive letters or mount paths and not mix the two; there are no performance advantages to either configuration.

Each MediaAgent should have no more than 50 writers across all of its mount paths. A MediaAgent with 10 x 2 TB mount paths (20 TB of raw capacity) would have 5 writers per mount path. The purpose is to evenly distribute the load across all mount paths and to ensure the number of concurrent connections to the DDB remains under the 50-connection limit. In a 3 Building Block GridStor configuration, the total number of writers should not exceed 150: 50 writers per MediaAgent.

Configure the Disk Library to use Spill and fill mount paths, as this load-balances the writers evenly across all mount paths in the library. This setting is located in the Disk Library Properties > Mount Paths tab. For further information please refer to Establish the Parameters for Mount Path Usage.

Regardless of the type of disk being used, SAN or NAS, the configuration is the same. The Disk Library consists of disk devices that point to the location of the Disk Library folders. Each disk device will have a read/write path and a read-only path. The read/write path is for the MediaAgent
controlling the mount path to perform backups. The read-only path allows an alternate MediaAgent to read the data from the host MediaAgent, enabling restores or auxiliary copy operations while the local MediaAgent is busy. For step-by-step instructions on configuring a shared Disk Library with alternate data paths, please reference Configuring a Shared Disk Library with Alternate Data Paths.
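The writer distribution described above (50 writers per MediaAgent, spread evenly over its mount paths) can be sketched as:

```python
def writers_per_mount_path(total_writers=50, mount_paths=10):
    """Evenly spread a MediaAgent's writer allowance across its mount paths."""
    return total_writers // mount_paths

# 10 x 2 TB mount paths (20 TB raw) -> 5 writers per mount path
per_path = writers_per_mount_path(50, 10)

# 3 Building Block GridStor configuration: 50 writers per MediaAgent,
# never more than 150 writers in total
grid_total = 3 * 50
```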
Global Deduplication Storage Policy
2. Global Deduplication Storage Policy

The Global Deduplication Policy introduces the concept of a common deduplication store that can be shared by multiple Storage Policy copies, Primary or DASH, to provide one large global deduplication store. Each Storage Policy copy defines its own retention rules; however, all participating Storage Policy copies share the same data paths, which consist of MediaAgents and Disk Library mount paths.

A Global Deduplication Storage Policy (GDSP) should be used instead of a standard deduplication storage policy whenever possible. A GDSP allows multiple standard deduplication policies to be associated with it, allowing for global deduplication across all associated clients. The requirements for a standard Deduplication Storage Policy to be associated with a GDSP are a common block size and a common Disk Library.

2.1. Block Size

All associated standard Deduplication Policies are configured with the same block size regardless of which copy is associated with the GDSP. For example, suppose the primary copy has a standalone deduplication database and the DASH copy is associated with a GDSP; both the Primary and DASH copies will require the same block size. This is because the block size is configured at the Storage Policy level and all copies adhere to that value. Trying to associate a Storage Policy copy to a GDSP with a different block size will generate an error.
2.2. Disk Libraries

All storage policies associated with a GDSP will back up to the same Disk Library. If a different Disk Library is required, then a different GDSP will be needed. All disk-based library configurations are supported for a GDSP.

There is no limit to the number of standard Deduplication Policies that can be associated with a GDSP; however, there are operational benefits to maintaining a simple design. Create standard Deduplication Policies based on client-specific requirements and retention needs such as compression, signature generation, and encryption requirements. With a standard Deduplication Policy, each specific backup requirement noted above would need a separate DDB.

2.3. Remote Offices

Remote offices with local restorability requirements typically have small data sets and low retention. Although a single standard Deduplication Policy will, in most cases, service a remote site's requirements for data availability, it is recommended to use a GDSP. Remote sites may need flexibility to handle special data such as legal information; in this case, a GDSP would allow this data to deduplicate with other data at the site.

2.4. Global Deduplication Storage Policy Caveats

There are three important considerations when using Global Deduplication Storage Policies:
- Client computers cannot be associated to a GDSP; only to standard storage policies.
- Once a storage policy copy has been associated to a GDSP, there is no way to change that association.
- Multiple copies within a storage policy cannot use the same GDSP.
2.5. Streams

The stream configuration in a Storage Policy design is also important. When a Round-Robin design is configured, ensure the total number of streams across the storage policies associated with the GDSP does not exceed 50. This ensures that no more than 50 jobs will protect data at a given time and overload the DDB. For example, a GDSP may have four associated storage policies with 50 streams each, for a total of 200 streams. If all policies were in concurrent use, the DDB would have 200 connections and performance would degrade. By limiting the number of writers to a total of 50, all 200 jobs may start, but only 50 will run at any one time; as resources become available from completing jobs, the waiting jobs will resume.

2.6. Data Path Configuration

When using SAN storage for the mount path, use Alternate Data Paths -> When Resources are Offline -> Immediately. In a GridStor environment this ensures the backups are configured to go through the designated Building Block; if a data path fails or is marked offline for maintenance, the job will fail over to the next data path configured in the Data Path tab. Although Round-Robin between data paths will work for SAN storage, it is not recommended because of the performance penalty during DASH copies and restores, caused by the multiple hops that must occur in order to restore or copy the data. When using Use Alternate Data Path with When Resources are Offline, the number of streams per client storage policy should not exceed 50.
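The queuing behaviour described above, where a burst of jobs drains in batches of at most 50, can be modelled with a quick calculation (a toy illustration, not how the Job Controller is implemented):

```python
import math

def job_waves(jobs_started, stream_limit=50):
    """Number of 'waves' of concurrently running jobs when a burst of
    jobs_started jobs is throttled to stream_limit concurrent streams."""
    return math.ceil(jobs_started / stream_limit)

# 4 storage policies x 50 streams = 200 jobs submitted at once;
# only 50 run at a time, so the burst drains in 4 waves
waves = job_waves(200)
```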
When using NAS storage for the mount path, Round-Robin between data paths is recommended. This is configured in the Storage Policy copy properties -> Data Path Configuration tab of the storage policy associated with the GDSP, not in the GDSP properties. NAS mount paths do not carry the same performance penalty because the network communication is between the servicing MediaAgent and the NAS mount path directly.

2.7. Use Store Priming Option with Source-Side Deduplication

The store priming feature queries a previously sealed DDB for a hash lookup before requesting that a client send the data. The purpose of this feature is to leverage existing protected data in the Disk Library before sending new data over the network. The feature is designed for slow-network backup only, which includes Client-Side Deduplication and DASH copies. It is not recommended for LAN-based backup or network links faster than 1 Gbps: lab testing has shown that using this feature on the LAN can actually hinder backup performance, because it is faster to request the data from the client than to perform the queries against the previously sealed DDB. This feature does not eliminate the need to re-baseline after a deduplication database is sealed; it only eliminates the need for the client to send the data over the network to the MediaAgent. This feature requires Source-Side Deduplication to be enabled.
Deduplication Database Availability
3. Deduplication Database Availability

The DDB recovery point is a copy of the active DDB, used to rebuild the DDB in the event of a failure. When the recovery point process is initiated, all communication to the active DDB is paused. The information in memory is committed to disk to ensure the DDB is in a quiesced state, and the DDB is then copied from the active location to the backup location. After a DDB has been backed up successfully, the previous recovery point is deleted and all communication to the DDB is resumed. Throughout this time, the Job Controller will show the jobs in a running state.

By default, the DDB recovery point is placed in a folder called BACKUP in the DDB location. Since this is a copy of the active DDB, the LUN hosting the DDB will need THREE times the amount of disk space as the active DDB: the active DDB, the DDB recovery point, and an equal amount of working space. The DDB recovery point can be moved to an alternate location if more space is required. If this approach is used, the DDB LUN requires enough disk space for the active DDB plus growth, and the DDB recovery point location requires two times the size of the active DDB (the recovery point plus the working space for the recovery point process). The best practice is to use the Disk Library as the recovery point destination.

The default interval for recovery point creation is 8 hours, controlled by the Create Recovery Points Every registry key. Once the time interval has been reached, the next backup will create the recovery point. It is not recommended to lower the Create Recovery Points Every setting below 4 hours; doing so can have a negative impact on backup performance, for two reasons. First, the recovery point flushes the in-memory DDB to disk, and when jobs resume the DDB has to be loaded back into memory.
This process can be time consuming. Secondly, all backup activity pauses while the active DDB is copied to the recovery point.
3.1. Considerations

Changing the DDB recovery point interval requires the DDB engine to be restarted. This can be done by restarting the Media Management services from the CommVault Services Control Panel. To view the running time interval, locate the following entry in the SIDBEngine.log file; the value in brackets represents the interval in seconds. The valid range for the DDB recovery point interval is 0-99 hours.

### Backup interval set to [28800]

When moving the DDB recovery point to a network share, take the network speed into consideration when choosing the destination; best practice is to use the fastest network connection available. During the DDB recovery point operation, if the copy of the DDB to the backup folder takes longer than 20 minutes, the running jobs will move into a pending state. This is because clients, by default, wait a maximum of 20 minutes when there is no response from the DDB. While the default value can be changed, the best practice is to ensure the DDB recovery point process completes within 20 minutes.

In order to extend the wait time, three registry keys may need to be applied; the examples that follow are all set for one hour. If the timeout value is set to accommodate the backup time for the DDB, the backup will wait until the SIDB starts allowing threads to continue and will not go pending or show any errors.

MediaAgent, when Source-Side Deduplication is not being used:
- Location: MediaAgent
- Key: SIDBReplyTimeoutInS
- Type: DWORD
- Value: 3600

Client, for Source-Side Deduplication:
- Location: iDataAgent
- Key: SignatureWaitTimeSeconds
- Type: DWORD
- Value: 3600
MediaAgent, for DASH copy (which uses the same code path as Source-Side Deduplication):
- Location: MediaAgent
- Key: SignatureWaitTimeSeconds
- Type: DWORD
- Value: 3600

When using a Disk Library as a recovery point destination, ensure that the mount path reserve space is set appropriately to accommodate the DDB recovery point. Otherwise the mount path could run out of disk space and fail all DDB recovery point operations until free space is available.

To move the DDB recovery point to a network path, the following registry values must be created. This change requires a support case to be opened, as the SIDBBackupPathPassword string must be encrypted via a proprietary encryption tool that is not publicly available.

SIDBBackupPath
- Location: MediaAgent
- Type: String
- Value: local or network path

SIDBBackupPathUser
- Location: MediaAgent
- Type: String
- Value: Domain\User (only required for a network share)

SIDBBackupPathPassword
- Location: MediaAgent
- Type: String
- Value: encrypted by a CommVault tool (only required for a network share)
Building Block Design
4. Building Block Design

Designing the operational architecture involves several important considerations: backup windows, data sets, throughput and retention.

4.1. Choosing the Right Building Block

- Backup Window: the total amount of time allotted to protect the data set
- Data Set: the amount of data to protect in the backup window
- Throughput: the throughput required to protect the data set within the backup window
- Retention: how long the data is to be kept before aging off the system

To determine the correct Building Block configuration, the Deduplication Calculator can be populated with the appropriate data. The summary page of the Deduplication Calculator provides the backup window, the total amount of data to protect in a full cycle, and the number of DDBs needed to protect the data for the required retention. To determine the required throughput, divide the production site size by the backup window; the result is the throughput the Building Blocks need in order to protect the data within the backup window. Divide the required throughput by 2 to arrive at the number of Building Blocks required to protect the data set.
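The two divisions described above translate directly into a short calculation, assuming the conservative 2 TB/hr per Building Block used throughout this document:

```python
import math

def building_blocks_needed(data_set_tb, backup_window_hours,
                           tb_per_hr_per_bb=2):
    """Required throughput (TB/hr) and Building Block count for a data set."""
    required_tb_per_hr = data_set_tb / backup_window_hours
    blocks = math.ceil(required_tb_per_hr / tb_per_hr_per_bb)
    return required_tb_per_hr, blocks

# 16 TB in an 8-hour window -> 2 TB/hr -> 1 Building Block
# 48 TB in an 8-hour window -> 6 TB/hr -> 3 Building Blocks
small = building_blocks_needed(16, 8)
large = building_blocks_needed(48, 8)
```

These two inputs reproduce Examples 1 and 2 in the configuration examples that this document works through.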
When using Building Blocks, different block sizes for different storage policies will require another deduplication database. Each deduplication database has specific hardware requirements, as outlined in this document.

4.2. Building Block Configuration Examples

This section covers several configuration examples: a 1 Building Block configuration, a 3 Building Block configuration, and a staggered full backup configuration.

Example 1: 1 Building Block. Full backups performed one day a week. Information obtained from the Deduplication Calculator:
- Data Set: 16 TB
- Backup Window: 8 hours
- Retention: 4 weeks
- Daily Change Rate: 2%, or 320 GB

Only one Building Block is required to protect the amount of data specified during the backup window. The daily change rate is 320 GB, which can be protected by a single Building Block.

[Diagram: Clients -> MediaAgent (DDB, 50 writers) -> Disk]
Example 2: 3 Building Blocks. Full backups performed one day a week. Information obtained from the Deduplication Calculator:
- Data Set: 48 TB
- Backup Window: 8 hours
- Retention: 4 weeks
- Daily Change Rate: 2%, or 950 GB

This site requires three Building Blocks in order to protect the data within the backup window. The incremental change rate is 950 GB, which the Building Blocks can handle. This allows a total of 150 concurrent streams, with the unique data spread across the 3 nodes' DDBs. The Deduplication Calculator estimates the deduplication store at 42 TB, and the total required Disk Library space at 52 TB. Using 2 TB LUNs yields 26 mount paths (52 TB / 2 TB LUN = 26). Rounding the number of mount paths up to 27 results in each node hosting 9 mount paths (27 mount paths / 3 Building Blocks = 9); increasing the number of mount paths to 27 also increases the disk space to 54 TB.

[Diagram: Clients -> 3 MediaAgents, each with DDB and 50 writers -> Disk]
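The mount-path arithmetic for this example can be sketched as follows (assumes 2 TB LUNs and an even distribution across the 3 Building Blocks, as in the text):

```python
import math

library_tb = 52        # required Disk Library space from the calculator
lun_tb = 2
building_blocks = 3

mount_paths = math.ceil(library_tb / lun_tb)   # 26 paths needed
# round up to the next multiple of 3 so each node hosts the same count
while mount_paths % building_blocks:
    mount_paths += 1

per_node = mount_paths // building_blocks      # paths per MediaAgent
capacity_tb = mount_paths * lun_tb             # resulting raw capacity
```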
4.3. Staggering Full Backups

Staggering full backups can have a major impact on the overall architecture and design. The next example shows the architectural impact of staggered full backups in a large environment.

Example 3, Part 1: Traditional Backups

Full backups performed one day a week. Information obtained from the Deduplication Calculator:

Data Set:           120 TB
Backup Window:      8 Hours
Retention:          4 Weeks
Daily Change Rate:  2% or 2.4 TB
Number of DDBs:     2

This site would require eight Building Blocks in order to protect the data within the backup window. The incremental change rate is 2.4 TB and can be handled by the Building Blocks. To protect the data within the backup window, 8 DDBs will be required. This will allow a total of 400 concurrent operational streams and an overall deduplication capacity between the eight nodes of TB of unique data across all the DDBs. The Deduplication Calculator estimates the deduplication store to be 106 TB. Per the Deduplication Calculator, the total required Disk Library space is 130 TB. Using 2 TB LUNs would yield 65 mount paths (130 TB / 2 TB LUN = 65). For evenly distributed mount paths the number would have to decrease to 64 or increase to 72. Decreasing the mount paths to 64 would reduce the overall capacity to 128 TB; increasing them to 72 would increase the capacity to 144 TB. In this case, keep the mount paths at 65: configure 8 mount paths on seven of the MediaAgents and 9 mount paths on the eighth.
Example 3, Part 2: Staggered Full Backups

Full backups performed six days a week. Information obtained from the Deduplication Calculator:

Data Set:           120 TB
Backup Window:      8 Hours
Full Backup:        Monday - Saturday
Retention:          4 Weeks
Daily Change Rate:  2% or 2.4 TB
Number of DDBs:     2

In this scenario, the site has a total data set of 120 TB. The full backups will occur Friday through Wednesday, leaving Thursday available for data aging operations. To determine the daily amount of data to protect, divide the data set by the number of full backup days:

    120 TB / 6 days = 20 TB/day

Next, determine the number of Building Blocks required for that daily data rate.
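The staggered-full arithmetic above can be sketched as follows. This is an illustration under a stated assumption: the 16 TB-per-Building-Block throughput figure is inferred from Examples 1 and 2 (it is not a value the document states directly), and the variable names are ours.

```python
import math

# Sizing inputs from Example 3, Part 2.
data_set_tb = 120          # total protected data set
full_backup_days = 6       # fulls staggered across six days of the week

# Daily full backup load when fulls are spread across six days:
daily_fulls_tb = data_set_tb / full_backup_days   # 20 TB/day

# Building Blocks required, assuming ~16 TB per block per 8-hour window
# (a figure inferred from Examples 1 and 2, not stated by the calculator).
blocks = math.ceil(daily_fulls_tb / 16)

print(daily_fulls_tb, blocks)  # 20.0 2
```

This reproduces the result stated below: staggering the fulls brings the requirement down from eight Building Blocks to two.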
Staggering the full backups would only require this site to use two Building Blocks in order to protect the data footprint from Part 1 within the backup window. The Deduplication Calculator only calls for 2 DDBs for the amount of data being protected and the retention. This will allow a total of 100 concurrent streams and an overall deduplication capacity between the two nodes of TB of unique data across the DDBs. The Deduplication Calculator estimates the deduplication store to be 106 TB, and the total required Disk Library space is 130 TB. This is the same deduplication footprint as in Part 1. Using 2 TB LUNs would yield 65 mount paths (130 TB / 2 TB LUN = 65). For evenly distributed mount paths the number would have to increase to 66, which also increases the total capacity to 132 TB. Each Building Block would have 33 mount paths. Staggering the backups across the week significantly reduces the overall infrastructure required to protect the data set.

[Diagram: Clients -> 2x MediaAgent (DDB, 50 Writers) -> Disk]
5. Conclusion

The Building Block data management solution is flexible, scalable and hardware agnostic. Building Blocks are flexible because they can grow by adding mount paths, and they can accommodate different retentions and different data types within the same deduplication framework. Building Blocks are scalable because they can grow to hundreds of TB of unique data; by staggering full backups, they can protect large amounts of data with minimal infrastructure, which holds down cost and liability. Building Blocks are hardware agnostic because they require hardware classes rather than specific models; as detailed in the preceding sections, six examples of adequate servers are available across three major manufacturers.