HP B6200 Backup System Recommended Configuration Guidelines


Contents

Introduction
Purpose of this guide
Executive summary
    Challenges in Enterprise Data Protection
    A summary of HP B6200 Backup System best practices
Related documentation
Concept Refresh
Scenario 1 - Choosing the correct network template
    Planning for FC connection
    Planning for network configuration
    Understanding the IP address allocation
        Physical IP ports
        VIF addresses
    High availability and cabling
    Gateway setup and network templates
        Template 1, 1GbE and 10GbE subnets
        Template 2, 1GbE for data, replication and management
        Template 3, 10GbE for data, replication and management
        Template 4, two 1GbE subnets, one for management, the other for data
    Example of configuring the network
        IP address allocation after net set config
        VIF address requirements
        Physical Ethernet connection requirements
Scenario 2 - Configuring shares and libraries to align with backup job segmentation
    Generic best practices
    Why multiplexing is a bad practice
    VTL best practices
    NAS best practices
    Understanding the maximum of devices supported per service set
    Worked Example
        Key considerations
        Working out and applying the mapping
        Pass 1
        Pass 2
        Pass 3
        Room for growth
Scenario 3 - How to get the best out of B6200 Replication
    A review of replication best practices
    Seeding and the HP B6200 Backup System
        Co-location
        Temporary increased WAN link speed
        Floating D2D
        Copy to physical tape
    Implementing replication best practices with HP B6200
        Using dedicated nodes for replication targets (Active/Passive replication)
        Adding local backups to replication target nodes
        Active/Active configuration
Scenario 4 - Configuring Many-to-One replication

    Implementing floating D2D seeding
    Balancing Many-to-One replication
    Replication and load balancing
Scenario 5 - How to get the best from HP autonomic failover
    What happens during autonomic failover?
    Failover support with backup applications
    Designing for failover
Scenario 6 - Monitoring the HP B6200 Backup System
    Events reporting
        Events generated if couplet storage fills
        Housekeeping load
    Storage reporting
    Hardware Problem Report alerts
    SNMP reporting
    HP Insight Remote Support
    Microsoft SCOM (System Center Operations Manager)
    HP Replication Manager
Appendix A - FC failover supported configurations
    Key Failover FC zoning considerations
    Fibre Channel port presentations
    FC failover scenario 1, single fabric with dual switches, recommended
        FC configuration
        B6200 VTL configuration
    FC failover scenario 2, single fabric with dual switches, not advised
        FC configuration
        B6200 VTL configuration
    FC failover scenario 3, dual fabric with dual switches, recommended
        FC configuration
        B6200 VTL configuration
        What happens if a fabric fails?
    FC failover scenario 4, dual fabric with dual switches, not advised
        FC configuration
        B6200 VTL configuration
    Other factors to consider
Appendix B - B6200 Key Configuration Parameters
Appendix C - B6200 Sizing Considerations
    Replication Designer wizard
Appendix D - Glossary of Terms
Appendix E - Increasing NAS session timeout
Appendix F - Power Distribution Unit Options
For more information

Introduction

The Enterprise StoreOnce B6200 Backup System is a deduplication backup appliance supporting VTL and NAS emulations. It provides scale-up and scale-out performance with a user capacity of up to 512 TB and throughput of up to 28 TB/hour. The architecture uses high levels of redundancy supported by 2-node couplets that allow autonomic failover to the other node in a couplet should one node fail. Any backups restart automatically after failover. The whole appliance is managed by a single graphical user interface (GUI) and also supports a command line interface (CLI). The HP B6200 Backup System is replication compatible with existing HP StoreOnce Backup Systems and can support a fan-in of up to 384 concurrent replication streams (up to 48 per node).

Figure 1: HP B6200 StoreOnce Backup System, 2-couplet (4-node) configuration (couplets 1 and 2 with internal communication switches and extra storage shelves, paired A/B and C/D)

Purpose of this guide

The purpose of this guide is to illustrate, through fully developed scenarios, how best to tune the HP B6200 Backup System for:

- Network and Fibre Channel connectivity
- Device creation and data segmentation
- Active/Passive and Active/Active replication performance
- Many-to-One replication performance
- Autonomic failover

Executive summary

Challenges in Enterprise Data Protection

Requirements for a modern Enterprise Data Protection solution have many drivers:

- Exponential growth of data
- Shrinking backup windows
- The need to design, plan and integrate a comprehensive Disaster Recovery capability
- The need for backup devices to be more available than ever before

The HP B6200 StoreOnce Backup System responds to all these requirements by providing:

- Deduplication to drive more efficient storage of data
- Large device scalability to ensure every backup has access to devices and so reduce queuing time
- In-built low bandwidth replication for cost-effective copies of data offsite as part of a Disaster Recovery plan
- HP Autonomic failover (with appropriate ISV software) to allow backups to continue, even if a node in an HP B6200 StoreOnce Backup System fails
- High scalability in terms of capacity, performance and replication to ensure the system grows as your business grows

Such capabilities need careful assessment before implementation. To get the best from the appliance, follow the guidelines in the following section.

A summary of HP B6200 Backup System best practices

IMPORTANT: Users familiar with the HP VLS System should be aware that the HP StoreOnce B6200 Backup System does not behave in the same way and needs a completely different approach when architecting and tuning for best performance. Do not assume that an HP B6200 Backup System can replace an HP VLS System without major re-evaluation of requirements and alignment with HP B6200 best practices.

1. Invest time in sizing your solution before purchase, taking care to include any replication requirements and sizing for failover requirements as early in the process as possible. Work with your HP Pre-Sales representative or Partner and perform a full sizing exercise using the HP Storage Backup Sizing Tool (example shown in Appendix C) prior to purchase to ensure the device is sized correctly for capacity, throughput and replication. Size adequately for predicted data growth and predicted replication windows. The Sizing tool also makes allowances for housekeeping activities. (See Appendix D for a glossary of terminology.)
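As a rough illustration of the kind of arithmetic the sizing exercise performs, the sketch below checks whether a full backup fits the backup window against the appliance's rated maximum throughput. The data volume and window are hypothetical figures; the real HP Storage Backup Sizing Tool models many more factors, such as deduplication ratios, housekeeping and replication.

```python
# Hypothetical back-of-envelope sizing check; the real HP Storage Backup
# Sizing Tool accounts for deduplication, housekeeping and replication too.

def required_throughput_tb_per_hour(full_backup_tb, window_hours):
    """Raw ingest rate needed to land a full backup inside the window."""
    return full_backup_tb / window_hours

B6200_MAX_TB_PER_HOUR = 28  # appliance maximum throughput from this guide

# Illustrative workload: 96 TB full backup in an 8-hour window
demand = required_throughput_tb_per_hour(full_backup_tb=96, window_hours=8)
print(f"Required: {demand:.1f} TB/hour, appliance maximum: {B6200_MAX_TB_PER_HOUR}")
print("Fits in window" if demand <= B6200_MAX_TB_PER_HOUR else "Window too small")
```

Remember that this ignores growth: sizing should be done against the data volume predicted for the end of the appliance's service life, not today's.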

2. Think carefully about the power options available from the B6200 power distribution units and what your particular site can supply. Power to the HP B6200 Backup System is configurable in one of two ways:

- Monitored PDUs (dual power inputs, which have an HP Power monitor and support losing power on one of the inputs). These are available in single-phase or 3-phase types. There are four per rack and they require between 32 A and 48 A feeds, depending on international standards.
- Modular PDUs. These are enclosed in the sides of the rack, are single-phase only and support a single source power loss. They have current requirements of between 32 A and 40 A depending on location. By choosing four PDUs instead of two, the current can be limited to 32 A supplies, which is all that is available in some locations.

For more details about these Monitored and Modular power options see Appendix F Power Distribution Unit Options.

3. Consider which of the four network template options available with the B6200 best suits your networking infrastructure and gateway requirements. Even if you start off with a small/midrange B6200 configuration, it is strongly advised to pre-allocate all the IP addresses needed for a full configuration IN ADVANCE to prevent re-assignment of IP addresses when future upgrades are applied. (Up to 25 IP addresses may be required at installation time.) See Scenario 1 - Choosing the correct network template.

4. Consider moving to a 10GbE infrastructure to get the best NAS share backup and replication capabilities from the HP B6200 Backup System.

5. The Fibre Channel VTL interface on the HP B6200 Backup System is 8 Gb with two ports per node. Support for NPIV (N_Port ID Virtualization) on your main SAN switches is essential if you wish to take advantage of HP Autonomic failover of virtual tape devices (VTLs) on the B6200. See Appendix A FC failover supported configurations for supported switch zoning configurations.

6. VTL medium changers (robots) can be presented to Port 1, Port 2, or Ports 1 & 2 on the B6200 nodes. By placing all configured robots on Ports 1 & 2 it is possible to build in external SAN redundancy (through the use of multiple fabrics).

7. Plan how devices will be used. The HP B6200 Backup System supports both VTL devices and NAS shares. In Enterprise environments it is expected that most implementations will use a majority of VTL devices configured on the FC ports of each node. NAS shares will be required as backup devices if specialist backup techniques are used that do not support tape media as such. Many virtualization backup software packages and in-built application backup tools only support backup to disk targets (NAS shares). An audit should be performed prior to implementation to decide what mixture of devices is required.

8. VTLs and NAS shares can be provisioned on the HP B6200 and presented to different backup applications, if required, allowing maximum flexibility. Always check the HP Enterprise Backup Solutions guide to confirm support of your software prior to purchase.

9. The HP B6200 Backup System provides up to 8 separate nodes in a single appliance with a single management GUI for all nodes, and failover capability across nodes within the same couplet as standard. The best practices for single-node D2D Backup Systems are well understood and documented. However, significant thought must be given to mapping

customer backup requirements and media servers across devices located on up to 8 separate nodes. See Scenario 2 - Configuring shares and libraries to align with backup job segmentation.

10. The preferred mapping approach is to segment customer data into different data types and then map the data types into different backup devices configured on the HP B6200 Backup System, so that each backup device is its own unique deduplication store. This approach also improves the deduplication ratio; similar data types mean more chance of redundant data. See Scenario 2 - Configuring shares and libraries to align with backup job segmentation.

11. The use of an excessive number of streams to a single device can impair performance. Whether the device is a VTL or a NAS share, no more than 16 streams should be running to it concurrently.

12. Do not send multiplexed data to StoreOnce B6200/D2D Backup Systems. Multiplexing data streams from different sources into a single stream in order to get higher throughput used to be a common best practice when using physical tape drives. This was a necessity in order to make the physical tape drive run in streaming mode, especially if the individual hosts could not supply data fast enough. But multiplexing is not required and is in fact a BAD practice when it comes to D2D or B6200 deduplication devices. See also Why multiplexing is a bad practice.

13. To allow predictable performance, customers should try to separate backup, replication and housekeeping activities into separate windows. Replication and housekeeping windows are configured in the GUI.

14. Plan ahead for replication, not only in sizing the day-to-day replication link but also in making sure sufficient bandwidth is available during the seeding process (first initialization of data for replication), because the seeding data for the HP B6200 Backup System may be of a considerable size. These techniques are discussed further in Scenario 3 - How to get the best out of B6200 Replication.

15. The preferred method of seeding with the HP B6200 Backup System is to use a temporarily increased size WAN link from your Telco provider. Alternatively, use co-location of the system racks with a 10GbE local link, break the replication link, ship one device to the required site, then re-establish the replication link. For cross-continent or Many-to-One replication seeding requirements, the use of a floating D2D4324 is recommended.

16. For replication between Data Center sites where HP B6200 Backup Systems are deployed on each site, it is probably best to allocate specific nodes at each site to be replication targets only. This is because the volume of replication traffic will be high and is probably best served by a dedicated node. See Scenario 3 - How to get the best out of B6200 Replication.

17. For large numbers of remote offices, the HP B6200 Backup System can offer a single high capacity replication target, giving the major benefits of a consolidated Disaster Recovery solution. Replication also generates housekeeping activity and, in the same way as backups, this replication load is best distributed across all available nodes. Regional considerations of when replication is likely to occur will also play a part in the design.

18. For Many-to-One replication using many small remote sites, the recommendation is to balance the replication load as equally as possible across multiple dedicated replication nodes. This optimizes resilience, so that all replication performance is not reduced if a node fails over. See Scenario 4 Configuring Many-to-One replication.
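To see why seeding over the day-to-day WAN link is often impractical, and why points 14 and 15 recommend a temporarily increased link or co-location, consider a rough estimate of first-pass seeding time. The seed volume and link efficiency below are illustrative assumptions, not measured values:

```python
# Hypothetical seeding-time estimate. The first replication pass sends
# essentially the whole deduplication store, so it sees little benefit
# from low bandwidth replication.

def seeding_days(data_tb, link_mbit_s, efficiency=0.7):
    """Days to push the initial seed over a WAN link.
    `efficiency` models protocol overhead and link contention (assumed)."""
    data_bits = data_tb * 1e12 * 8
    seconds = data_bits / (link_mbit_s * 1e6 * efficiency)
    return seconds / 86400

# 50 TB of seed data over the day-to-day 100 Mbit/s link
# versus a temporarily upgraded 1 Gbit/s link
print(f"100 Mbit/s: {seeding_days(50, 100):.0f} days")
print(f"  1 Gbit/s: {seeding_days(50, 1000):.0f} days")
```

At roughly two months versus one week, the case for a temporary link upgrade, co-location, or a floating D2D becomes clear.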

19. When HP B6200 Autonomic failover is deployed, IP failover is handled automatically because the device uses VIFs (virtual network interfaces). But for Fibre Channel failover to work, the customer SAN infrastructure MUST support NPIV (N_Port ID Virtualization) and the zoning must be done by using World Wide Names (WWNs). For more details see Appendix A FC failover supported configurations.

20. Whilst Autonomic failover is automatic, the failback process (after a node is repaired) is manual and performed from the GUI or the CLI. Once failed over, a single node is handling the load of two nodes and reduced performance may result. An essential part of the best practices is testing the failover and failback scenario to ensure the performance during a failed-over situation is adequate. The customer has a choice at sizing time. Either fully load each node with work; if failover occurs, the customer will see a reduction in throughput because one node is doing all the work. Or oversize the nodes to use only 50% of their throughput; if failover occurs, there will be no perceived reduction in performance.

21. HP Autonomic failover also has some ISV dependencies in the form of scripts that need to be integrated into the post-execution fields of the backup software. Ensure the necessary ISV scripts or utilities are loaded on the media servers or in the backup jobs to ensure B6200 Autonomic failover works successfully. Validation of the particular ISV scripts with the HP B6200 Backup System should be part of the commissioning process. These scripts and utilities can be made selective, so that only the most important backup jobs are required to run on the single remaining node when one node fails. This reduces the overall load on the single remaining node. See Scenario 5 - How to get the best from HP autonomic failover.

22. Regularly check for software updates at Software Upgrades and use the .rpm package installer process documented in the HP B6200 StoreOnce Backup System user guide to upgrade the software. Always read the release notes before upgrade; these contain installation instructions as well as information about any hardware and firmware component revisions. Sales of all HP B6200 Backup Systems come with a compulsory Install and Startup service and an optional Configuration service.
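The sizing trade-off in point 20 reduces to simple arithmetic: after failover, the surviving node carries both service sets. The workload percentages below are hypothetical:

```python
# Illustration of the failover sizing choice from best practice 20.

def failover_load_pct(normal_load_pct_per_node):
    """Utilization demanded of the surviving node when its couplet
    partner fails (both service sets then run on one node)."""
    return 2 * normal_load_pct_per_node

# Strategy A: fully load each node -> degraded throughput after failover
print(failover_load_pct(100))  # demand is 200% of one node: throughput halves
# Strategy B: size each node at 50% -> failover is transparent to users
print(failover_load_pct(50))   # demand is 100% of one node: no visible impact
```

Anything between the two strategies is also valid; the point is to decide the acceptable failed-over performance level at sizing time and test it during commissioning.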

Related documentation

These configuration guidelines assume the reader is familiar with the concepts and architecture of the HP B6200 StoreOnce Backup System. Supporting documentation includes:

- HP B6000 Series StoreOnce Backup System Installation Planning and Preparation Guide with Checklists (PDF): the site installation preparation and planning guide. It contains checklists that should be completed before HP service specialists arrive on site to install the product.
- HP B6000 Series StoreOnce Backup System User Guide (PDF and online help): describes how to use the GUI and common CLI commands.
- HP B6000 Series StoreOnce CLI Reference Guide (PDF): describes all supported CLI commands and how to use them.
- HP B6000 Series StoreOnce Backup System Capacity Upgrade booklet (PDF): describes how to install the capacity upgrade kits.
- Linux and UNIX Configuration Guide (PDF): contains information about configuring and using HP StoreOnce Backup Systems with Linux and UNIX.
- HP StoreOnce Backup System Concepts Guide (PDF): if you are new to the HP StoreOnce Backup System, it is a good idea to read this guide before you configure your system. It describes the StoreOnce technology.

These can be downloaded from B6200 Manuals.

At a node level many of the best practices are identical to those for single-node D2D models, and the following documentation is a good source of information:

- D2D Best Practices for VTL, NAS and Replication implementations. This can be downloaded from StoreOnce Manuals.

Concept Refresh

Figure 2: Basic concepts

The diagram above shows the basic concepts of the HP B6200 StoreOnce architecture; understanding the architecture is key to successful deployment.

- Node: the basic physical building block, consisting of an individual server (HP ProLiant server hardware).
- Couplet: two associated nodes; the core of the failover architecture. Each couplet has a common disk storage sub-system achieved by a dual-controller architecture and cross-coupled 6Gbps SAS interfaces. Each node has access to the storage subsystem of its partner node.
- Service set: a collection of software modules (logical building blocks) providing VTL/NAS and replication functions. Each service set can have Virtual Tape (VT), NAS and replication configurations.
- Management Console: a set of software agents, one running on each node, only one of which is active at any one time. The nodes communicate via an internal network. The Management Console provides a virtual IP address for the Management GUI and CLI. If node failure is detected, any agent may become active (the first one to respond). Only ONE active Management Console agent is allowed at any one time.
- Cluster: a collection of 1 to n couplets. For the initial B6200 Backup System, n=4. This means that a 4-couplet, 8-node configuration is the largest permitted configuration at product launch.

- Failover: occurs within a couplet. Service sets for VTL/NAS/replication will run on the remaining node in the couplet.
- Failback: a manual process to restart a node after recovery/repair.
- VIF: a virtual network interface. Network connections to the HP B6200 Backup System are to virtual IP addresses. The network ports of the nodes use bonded connections and each bonded interface (two ports presented as one entity) has one virtual IP address. This means that if a physical port fails, the other port in the bonded pair can be used as the data channel because the VIF is still valid. This architecture eliminates a single point of hardware failure. The architecture does not use LACP (Link Aggregation Control Protocol), so no specific network switch settings are required.
- Storage shelf: the P2000 master controller shelf (one per node) or a P2000 JBOD capacity upgrade. JBODs are purchased in pairs and up to three pairs may be added to each couplet. They use dual 6Gbps SAS connections for resilience.

In effect, up to 128 TB of storage is shared between the two nodes in a couplet. Depending on customer requirements and single-node best practices, it is possible to have, for example, a service set on node 1 that consumes 100 TB and a service set on node 2 that consumes 28 TB of the available 128 TB. (However, this is not best practice and not recommended.) This architecture scales substantially; the maximum configuration is shown below.

Figure 3: HP B6200 Backup System, maximum configuration

Note that data, replication and storage failover is always between nodes in the same couplet, but the Management Console (GUI and CLI) can fail over to any node in the whole cluster. In all cases the deployment should center around what devices and services need to be configured on each node.

In the following example, Node 2 has failed and Service Set 2 has failed over to Node 1. Both service sets are running on Node 1, but backup and replication performance will be reduced. The Management Console that was active on Node 2 has moved to Node 3, Couplet 2. This is not significant; the Management Console becomes active on the first node to respond.

Figure 4: Showing Node 2 service set failed over to Node 1

Scenario 1 - Choosing the correct network template

The very first deployment choice is how to use the external networking and Fibre Channel connections that the B6200 Enterprise StoreOnce Backup System presents.

- All GbE network connections are for NAS devices, replication and device management, and are bonded pairs to provide resiliency. There are 2 x 10GbE ports and 4 x 1GbE ports on each node.
- The Fibre Channel connections (2 x 8 Gb/s) are for VTL devices and MUST be connected to a Fibre Channel switch; direct FC connect is not supported. The switch MUST support NPIV for the FC failover process to work.

The following diagram illustrates network and Fibre Channel external connections to a two-rack system. See Appendix A FC failover supported configurations for more information about how the FC cabling should be connected to FC switches and fabrics to best support failover. See Gateway setup and network templates and the subsequent template examples for more information about ensuring the network cabling supports failover.

Figure 5: HP B6200 Backup System customer connections: Fibre Channel, 1GbE and 10GbE

The following diagram illustrates the physical ports on the rear of each node.

Figure 6: HP B6200 Backup System customer connections, Fibre Channel, 1GbE and 10GbE, to rear of node

1. 10GbE external network connections (2 per node), normally used for NAS and replication data
2. 1GbE external network connections (4 per node), used for management; may also be used for data
3. 8Gb FC connections (2 per node), used for VTL

Planning for FC connection

The physical FC connection to the HP B6200 Backup System is straightforward; there are two FC ports per node, as shown in Figure 6. However, care must be taken to ensure there is no single point of failure in switch or fabric zoning that would negate the autonomic failover capabilities of the HP B6200 Backup System. Please see Appendix A FC failover supported configurations.

Planning for network configuration

The HP B6200 Backup System is always installed by HP service specialists, but they need to know the required network configuration. The HP B6000 Series StoreOnce Backup System Installation Planning and Preparation Guide with Checklists (PDF) is available from the StoreOnce web pages to help customers prepare their site and define requirements. Customers are asked to read this document and complete the checklists before HP service specialists arrive to install the system. This guide also provides detailed information about network templates and is available from B6200 Manuals.

There are some network restrictions that the customer must follow:

- No network bonding between 1GbE ports and 10GbE ports.
- The IP address range allocated must be contiguous.
- Do not use the IP address range x.x because this is reserved for internal networking.
- The customer can have only one external IP address for configuring the Management Console and it should be the first or last IP in the allocated range.
- Network bonding gives the customer reliability because it provides high availability, but it will not increase the performance of the B6200 in general.
- The network configuration must be the same across all nodes in the same cluster.
- If separate subnets are used for data and management (templates 1 and 4) and multiple external switches are used, these switches must support a Multi-Chassis Link Aggregation protocol that is compatible with the rest of the customer's network switch infrastructure. For more information, see the white paper at
- No VTL support on the Ethernet ports.
- Only one gateway is supported.
- DHCP is not supported because it is not appropriate for the failover process, which relies on known virtual IP addresses.
- There must be enough physical connections available to meet the template requirements (see Physical Ethernet connection requirements).

Note: The iLO3 ports are not available for Systems Insight Manager monitoring; the iLO3 ports are an integral part of the autonomic failover architecture. They are pre-wired; do not attempt to change the wiring.

In a normal network connection and configuration, each network link or port is assigned a unique IP address so that traffic can be routed to and from it. The HP B6200 Backup System uses a High Availability (HA) solution that allows more than one link or port to have the same IP address. This is done by bonding the ports together to form a new virtual interface (VIF), which is totally transparent to the user. High availability (HA) with respect to networking is one of the most commonly used network configurations, for performance and/or redundancy purposes.

Understanding the IP address allocation

To understand the IP address allocation on supported configurations (called templates and described below), it is important to understand the difference between physical IP ports and virtual (VIF) addresses.

Physical IP ports

The physical ports are the ports that are used to connect the HP StoreOnce Backup System to the customer's network. Two 10GbE and four 1GbE ports are available on each node for connecting to the customer's Ethernet network(s). Physical ports are always bonded and a physical IP address is required for each external bonded Ethernet port. Once the HP service engineer has configured your network, the physical IP addresses are subsequently used for HP support purposes only.
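Since the allocated range must be contiguous and the Management Console address should be the first or last IP in it, a small planning sketch can help verify the reservation before the HP engineer arrives. The addresses below are placeholders (TEST-NET-1), not real appliance addresses:

```python
# Hypothetical pre-installation IP planning sketch. Addresses are
# placeholders; substitute your own reserved range.
import ipaddress

def plan_allocation(first_ip, count):
    """Return a contiguous run of `count` addresses starting at first_ip.
    The B6200 requires the allocated range to be contiguous, and up to
    25 addresses may be needed for a full configuration."""
    start = ipaddress.IPv4Address(first_ip)
    return [str(start + i) for i in range(count)]

addrs = plan_allocation("192.0.2.10", 25)  # pre-allocate for a full system
mgmt_vif = addrs[0]                        # Management Console: first or last IP
print(f"Reserve {addrs[0]} .. {addrs[-1]}; Management Console IP: {mgmt_vif}")
```

Pre-allocating the full 25-address run up front, even for a small initial configuration, avoids the IP re-assignment warned about in best practice 3.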

VIF addresses

The VIF addresses are key to ensuring continued performance and availability in the event of failover and are assigned as part of the network configuration process. There are two types of VIF address:

- The Management Console VIF: the B6000 Management Console uses the Management VIF address to access the Backup System from the customer's network for all manageability tasks. Because this Management VIF address is dynamic on the system, it can be active on the master node and passive on the other nodes; should the master node fail for any reason, the virtual Management Console simply moves to another node and can still be accessed using the same VIF address.
- Data Path VIFs: a Data Path VIF address is associated with a service set, which is the set of services (NAS, replication and so on) available for a node. Should the physical port fail, data will automatically be processed by the service set associated with the failover port using the Data Path VIF address. No change is needed to the VIF address of the service set, allowing hosts and the B6000 Backup System to function correctly. Each couplet has two nodes and, therefore, two service sets, which means that each couplet has two Data Path VIFs.

IMPORTANT: The VIF addresses are the IP addresses that the customer needs to access the B6000 Management Console (the Management VIF), and to configure NAS backup targets and replication configurations (the Data Path VIFs). These addresses are not known until the HP service engineer has configured the network. The HP service engineer will leave a record of these addresses after installation and they can also be displayed using the CLI (command line interface).

High availability and cabling

In the HP B6200 Backup System the network connections for redundancy are configured to support high availability (HA) port failure mode.
In this mode two or more Ethernet ports are linked in what is called a network bond on the HP B6200, and the network switch uses the physical links as if they were a single link. If there are two ports in this bond, only one physical link is active and carrying data at any one time; the second physical link in the bond is in a passive or standby mode. If the active link goes down for any reason, the passive link changes its operation mode to become the active link and carries all the traffic instead of the dead link. This ensures high availability for the HP B6200. The transition between the active and the passive links is done automatically without any interaction from the customer or the switch. The HP B6200 moves the Ethernet MAC addresses between its ports so that the active link's MAC address can appear on another port of the network switch that is connected to the B6200, and the switch can route network traffic through the newly active port instead of the previously failed connection.

However, incorrect cabling can cancel out the high availability infrastructure of the product. To avoid a single point of failure in the overall architecture, there should be two switches for each network to which you are connecting. For EACH bonded pair of cables, the first cable should be connected to Switch 1 and the other cable to Switch 2. If one of the switches connected to the HP B6200 goes down, the HP B6200 can still be accessed via the second switch without the need for rewiring or immediate replacements. Figures 9 to 12 illustrate the four templates with each network connected to two switches. Note that Template 2 uses mode 6 (adaptive load balancing).
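The active/passive bond behaviour described above can be sketched as a toy model. This is purely an illustration of the concept (the appliance's real implementation moves MAC addresses at the driver level); the interface and switch names are hypothetical:

```python
# Toy model of an active/backup bonded pair sharing one virtual IP (VIF).
# Illustrative only; the B6200 implements this in its network stack.

class Bond:
    def __init__(self, vif, ports):
        self.vif = vif             # the VIF survives any single link failure
        self.ports = list(ports)   # ports[0] is active, the rest are standby

    @property
    def active(self):
        return self.ports[0]

    def link_down(self, port):
        """A link fails: remove it; the standby link takes over, same VIF."""
        self.ports.remove(port)
        return self.active

# Each cable of the pair goes to a different switch, per the cabling advice
bond = Bond("10.0.0.5", ["eth2 (Switch 1)", "eth3 (Switch 2)"])
print("active:", bond.active)
print("after failure:", bond.link_down("eth2 (Switch 1)"), "- VIF still", bond.vif)
```

The model makes the cabling rule obvious: if both cables went to the same switch, a switch failure would remove every port from the bond and the VIF would become unreachable.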

Gateway setup and network templates

The HP B6200 Backup System supports four different network templates. This is in order to support various customer configurations, depending on whether a 10GbE infrastructure is available and whether customers want to be able to access the Management Console and/or data remotely. (By data, we mean NAS shares (devices) and replication traffic.) The HP B6200 Backup System supports only one gateway for the whole cluster. This is an important consideration when selecting a network template because customers must decide whether they want data and/or management to be available remotely.

1. Template 1: Management Console ONLY on a 2 x 1GbE network, data on a 2 x 10GbE network. (This template expects two IP address ranges, which can be in different subnets; only one network will be accessible remotely.)
2. Template 2: Management AND data on a 4 x 1GbE bonded network. This template assumes one subnet.
3. Template 3: Management and data on a 10GbE bonded network. This template assumes one subnet; the two 10GbE connections per node are bonded.
4. Template 4: Management on a 1GbE network (2 x 1GbE connections per node) and data on a 1GbE network (2 x 1GbE bonded connections per node). This configuration allows two separate 1GbE subnets. (This template expects two IP address ranges, which can be in different subnets; only one network will be accessible remotely.)

The template the customer requires will be configured as part of the Install and Startup service offered by HP, but it is important for customers to be aware of the specific features of each template and to tell the HP installation engineer which template they require. The worked scenarios later on use systems based on Template 1, since this is anticipated to be the most common usage in most Enterprise Data Centers.
Template 1 uses 1GbE for low-bandwidth management traffic and high-bandwidth 10GbE for backup and replication, which is optimal for all data traffic usage. Note: Figures 7 and 8 assume that Template 1 has been selected and show a 10GbE data network. The principle is exactly the same for Template 4, which has a 1GbE data network. Templates 2 and 3 support only one network, so these considerations do not apply.

In Figure 7, the gateway is configured in the Data/Replication network (Gateway Option 1). The customer can replicate or back up data across different subnets as long as the gateway is routed correctly, so all hosts locally and in Remote Sites 1 and 2 can be backed up, and all D2Ds locally and in Remote Sites 1 and 2 can be replicated. But the Management Console (GUI or CLI) can only be accessed from the local subnet. (If a customer wants to access the B6200 management subnet from a client on Remote Site 1 or 2, remote manageability access is still possible if the remote client has root access to one of the Admin machines in the manageability subnet on the local site.) Figure 7: Gateway is configured in the 10GbE Data/Replication network (X on the Remote Site Admin A machines indicates that they cannot be used for B6200 management)

In Figure 8, the gateway is in the management network (Gateway Option 2). The customer can access the Management Console (GUI or CLI) across different subnets as long as the gateway is routed correctly, but the only access path to the HP B6200 for data backup/restore jobs is within the local subnet. With this configuration none of the clients in Remote Sites 1 and 2 and none of the D2D units in Remote Sites 1 and 2 can contact the HP B6200 Backup System. Figure 8: Gateway is configured in the 1GbE Management network (X on the D2D appliances on the remote sites indicates they cannot replicate to the HP B6200 Backup System)

Template 1, 1GbE and 10GbE subnets Template 1 supports users who have a 10GbE network and a 1GbE network and wish to use separate subnets for data and management. The gateway must be in the same subnet as the network that is being used to connect to remote sites, normally the data subnet. This is shown as Option 1 on the diagram below. Each pair of bonded ports should be connected to separate switches to support high availability. Figure 9: 1GbE management network, 10GbE data network (includes replication), showing the options of connecting the gateway to either the management network or the data/replication network; best practice is to use dual switches

Template 2, 1GbE for data, replication and management Template 2 supports users who have a 1GbE network only. The same network is used for data and management. All IP addresses, including the gateway for replication, are on the same subnet. The 10GbE ports on each node are disabled. Again, two separate switches should be used to support high availability. Figure 10: 1GbE network only, adaptive load balancing The network connections for Template 2 use Mode 6 bonding. This mode enables Adaptive Load Balancing AND fault tolerance. If a network connection is disconnected, the other takes over. As each node has four 1GbE network connections, the load is shared over all the NICs. The bonding does NOT require any special network switch support, such as LACP. Template 2 will use only a single Gateway option because management, data and replication are all on the same subnet.

Template 3, 10GbE for data, replication and management Template 3 supports users who have a 10GbE network only. The same network is used for data and management. All IP addresses, including the gateway for replication, are on the same subnet. Again, two separate switches should be used to support high availability. Figure 11: 10GbE network only Template 3 will use only a single Gateway option because management, data and replication are all on the same subnet.

Template 4, two 1GbE subnets, one for management, the other for data Template 4 supports users who have two 1GbE networks. One 1GbE network is used for data; the other is used for management. The gateway is normally in the same subnet as the network that is being used to connect to remote sites. This is shown as Option 1 on the diagram below. Each pair of bonded ports should be connected to separate switches to support high availability. Figure 12: Two 1GbE networks, showing the options of connecting the gateway to either the management network or the data/replication network

Example of configuring the network The CLI command net set config templateX is entered to configure the network (where X is the template number). This task is normally carried out by HP support specialists at installation, based on the answers the customer provides on the Installation Preparation and Planning checklists. Note: Ideally, allocate ALL IP addresses for ALL nodes (even if all nodes are not present), because if additional IP addresses are added later, the HP B6200 Backup System will re-evaluate all the IP addresses and existing NAS shares may be re-allocated to a different virtual IP address. Hence, the best practice is to pre-allocate all IP addresses at initial configuration, irrespective of the number of physical nodes present. See VIF address requirements below. Note: IP address range x.x is not allowed because it conflicts with the internal HP B6200 network allocation IP range. At the end of the net set config process the Management GUI and CLI IP address will be allocated, as will the VIFs (Virtual IP addresses) for the NAS and replication data paths. Currently, all virtual device and replication configuration must take place from the GUI. IP address allocation after net set config The following table shows an example Virtual IP address allocation after the net set config routine has been run on a 4-node, 2-couplet configuration. Note: The Virtual IP addresses are the addresses that the customer needs to connect to the Management Console and to configure NAS shares and replication. Virtual IP addresses are a requirement for failover support. (Physical IP addresses are only used by HP Support after installation.)
Node # | 1 GbE physical | 1 GbE virtual (management) | 10 GbE physical | 10 GbE virtual (data and replication)
Table 1: Allocated Virtual and Physical IP Addresses for 4-node system
VIF address requirements The number of IP addresses that you require depends upon the template that you select (and its implementation of physical ports and VIF addresses) and the number of couplets that you have installed. It is strongly recommended that you allocate sufficient IP addresses to support a fully expanded two-rack system (with 8 nodes), as shown in the following table. This means there is no need to reconfigure the network if you start with a one-rack system and subsequently expand it. If you have to re-configure your network, you may also have to re-configure backup targets and replication configurations.

Template # | One couplet (2 nodes) | Two couplets (4 nodes) | Three couplets (6 nodes) | Four couplets (8 nodes)
1 | 7 (3 mgmt, 4 data) | 13 (5 mgmt, 8 data) | 19 (7 mgmt, 12 data) | 25 (9 mgmt, 16 data)
2 | 5 (1 mgmt, 4 data) | 9 (1 mgmt, 8 data) | 13 (1 mgmt, 12 data) | 17 (1 mgmt, 16 data)
3 | 5 (1 mgmt, 4 data) | 9 (1 mgmt, 8 data) | 13 (1 mgmt, 12 data) | 17 (1 mgmt, 16 data)
4 | 7 (3 mgmt, 4 data) | 13 (5 mgmt, 8 data) | 19 (7 mgmt, 12 data) | 25 (9 mgmt, 16 data)
Table 2: Number of Virtual IP addresses to be allocated (in advance)
Physical Ethernet connection requirements When using the 10GbE network (Templates 1 or 3), no SFPs are provided as part of the standard installation. It is the responsibility of the customer to purchase these separately, having decided whether to use copper or fibre connections.
1 GbE connection type | RJ45 CAT6 recommended (RJ45 CAT5 minimum)
10 GbE connection type | SFP+ either Copper 10GbE SFP+ cables to a maximum length of 7 metres (see the B6200 QuickSpecs for example HP cables) or Optical 10GbE SFP+ devices (HP P/N B21)
Table 3: Physical Ethernet connections
The number of physical connections required varies depending upon the template selected and the number of couplets in the system. This number should not be confused with the number of VIF addresses required.
Template 1 | 1GbE ports: 2 per node (Mgt); 10GbE ports: 2 per node (Data)
Template 2 | 1GbE ports: 4 per node (Mgt & Data); 10GbE ports: N/A
Template 3 | 1GbE ports: N/A; 10GbE ports: 2 per node (Mgt & Data)
Template 4 | 1GbE ports: 4 per node (2 Mgt, 2 Data): 8 total (4 Mgt, 4 Data) for one couplet, 16 total (8 Mgt, 8 Data) for two couplets, 24 total (12 Mgt, 12 Data) for three couplets, 32 total (16 Mgt, 16 Data) for four couplets; 10GbE ports: N/A
Table 4: Number of physical Ethernet connections required on the various B6200 configurations
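The VIF counts in Table 2 follow a simple pattern, which can be sketched as a small calculator. This is our own illustration of the table, not an HP tool; the function name is ours.

```python
# Sketch of the Table 2 VIF address counts. Pattern read from the table:
# - every couplet needs 4 data VIFs, on all templates
# - templates 1 and 4 need 2 management VIFs per couplet plus 1 cluster VIF
# - templates 2 and 3 need a single management VIF for the whole cluster

def vif_addresses(template, couplets):
    data = 4 * couplets
    mgmt = 2 * couplets + 1 if template in (1, 4) else 1
    return mgmt + data

print(vif_addresses(1, 4))  # 25 - fully expanded two-rack system, Template 1
print(vif_addresses(2, 1))  # 5  - single couplet, Template 2
```

Pre-allocating the four-couplet figure for your template, as recommended above, avoids network reconfiguration on expansion.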

Scenario 2 - Configuring shares and libraries to align with backup job segmentation Once all virtual IP addresses are allocated, the next step is to use each node in a way that best matches the backup environment. Each node in an HP B6200 Backup System is similar to an existing HP D2D4324 Backup System in terms of capacity, throughput and device support. The existing best practices for single-node StoreOnce devices therefore apply; these can be downloaded from the HP website. One important difference to note is that the Ethernet network is used only for NAS shares and replication. Virtual tape libraries are always configured for Fibre Channel. Generic best practices A summary of best practices is included below. Most are common to both single-node D2D Backup Systems and the HP B6200 Backup System.
1. Always use the Sizer tool to size for performance. The Sizer tool uses mature store performance data and information only available within the sizing tool to size replication and housekeeping.
2. Always ensure the HP B6200 Backup System has the latest firmware updates, because improvements are continually being integrated.
3. Understand that backup throughput depends on the number of concurrent backup jobs that can be configured to run simultaneously. If only single-stream backup is possible, tape may be faster. Re-educate customers into configuring multiple concurrent backups for best backup throughput. An HP StoreOnce B6200 Backup System device with 12 streams achieves 90% of maximum throughput.
4. Make allowances for housekeeping. Every time a tape backup is overwritten or a NAS share is re-opened, the deduplication store must be scanned and the hash code usage states updated. This is an I/O intensive operation and should be scheduled to occur in periods of low activity. On the HP B6200 there is a Housekeeping option in the Navigator that graphically displays the rate at which housekeeping jobs are being processed.
The processing rate should always be higher than the incoming housekeeping job rate. See Scenario 6 - Monitoring the HP B6200 Backup System.
5. Make allowances for replication windows. Backup, replication and housekeeping should all be allocated their own window in which to execute and should not overlap. Once these activities overlap, performance becomes unpredictable. Replication and housekeeping windows are configurable within the HP B6200 GUI.
6. Understand how to benefit from network bonding and Fibre Channel load balancing.

On the HP B6200 Backup System, network ports are always bonded together for resilience on Templates 1, 3 and 4. Template 2 uses adaptive load balancing (mode 6) to provide up to 4 Gb/sec across two nodes. For FC VTL, all libraries and drives created should be load balanced equally across the two FC ports. If higher resilience is required, the same library robot can be presented on BOTH FC ports and then presented to two separate fabrics. For supported FC failover configurations please see Appendix A.
7. Do not send multiplexed data to HP StoreOnce B6200/D2D Backup Systems. Multiplexing data streams from different sources into a single stream in order to get higher throughput used to be a common best practice when using physical tape drives. It was a necessity in order to make the physical tape drive run in streaming mode, especially if the individual hosts could not supply data fast enough. But multiplexing is not required and is in fact a BAD practice when used with HP StoreOnce D2D or B6200 deduplication devices.
Why multiplexing is a bad practice HP StoreOnce D2D and B6200 Backup Systems rely on very similar repetitive data streams in order to deduplicate data effectively. When multiplexing is deployed, the backup data streams are not guaranteed to be similar, since multiplexing can jumble up the data streams from one backup to the next in different ways, drastically reducing deduplication ratios. There is no need for multiplexing to get higher performance; quite the contrary, because the best way to get performance from HP StoreOnce D2D and B6200 Backup Systems is to send multiple streams in parallel. Sending only a single multiplexed stream actually reduces performance. Figure 13 shows a single backup job from multiple hosts, where the backup data is radically different from one backup job to the next. There is also only a single stream to the device on the D2D/B6200 Backup System.
This configuration produces slow performance and poor deduplication ratios. Figure 13: Multiplexing produces slow performance and poor dedupe ratios

Instead of multiplexing data into a single stream and sending it to the HP StoreOnce D2D or B6200 Backup System, you should re-specify the multiplexed backup job as either a single backup job using multiple devices or multiple jobs to separate devices. This will ensure better throughput and deduplication ratios. Finally, multiplexing creates SLOWER restores because data for an individual host has to be de-multiplexed from the data stored on the device. NOT using multiplexing will actually improve restore performance. Figure 14 shows the recommended configuration, where single or multiple jobs are streamed in parallel with little change between backup jobs. There are multiple streams to devices on the D2D/B6200, resulting in higher performance and good deduplication ratios. Figure 14: Recommended configuration using multiple streams
VTL best practices The HP B6200 Backup System supports three configurations when creating a virtual tape library:
1. Via FC ports 1 and 2
2. Via FC port 1
3. Via FC port 2
Option 1 is new to the HP B6200 architecture and is the recommended option to avoid single points of failure. It allows the robot to be presented to Port 1 and Port 2; thereafter, any virtual drives created are balanced equally across Port 1 and Port 2. By using this option the customer presents the Virtual Library robot to both of the associated node's FC ports, which means the robot can be presented to two separate fabrics. Should one of the FC connections fail in any node, the robots and at least 50% of the drives configured on this node can be accessed via the other FC port, which is on the other FC card, without any customer interaction needed to manually reconfigure those libraries.
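The "balanced equally across Port 1 and Port 2" behaviour can be pictured as a round-robin assignment of virtual drives to the node's two FC ports. This is an illustrative sketch of the idea only, not the appliance's algorithm; the function name is ours.

```python
# Round-robin assignment of virtual drives across the node's two FC ports,
# so that losing one port still leaves at least half the drives (and the
# robot, which is presented on both ports) reachable.

def assign_drives_to_ports(num_drives):
    ports = {1: [], 2: []}
    for drive in range(1, num_drives + 1):
        ports[1 if drive % 2 else 2].append(drive)
    return ports

layout = assign_drives_to_ports(8)
print(layout[1])  # [1, 3, 5, 7]
print(layout[2])  # [2, 4, 6, 8]
```

With any drive count the two port lists differ in length by at most one, which is the balance property the failover design relies on.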

If this balance is not required, the user can use the GUI to re-distribute the virtual drives onto Port 1 or Port 2 as desired. The customer can also change the port assignment after creating the library. The customer's usage model for backup and the infrastructure available determine which ports to use. The various options and trade-offs of FC port zoning are described in more detail in Appendix A. Other best practices are summarized in the following list.
1. Use a different VTL for different data types, such as Filesystem, Database, Other. Each VTL is a separate dedupe store; smaller dedupe stores give better performance, and a better dedupe ratio results from grouping similar data types in a single device.
2. Make use of HP flexible emulations. 1/8 autoloader and MSL emulations have strict geometry to match their physical equivalents, but if you choose EML, ESL and D2D Generic emulations you can construct your own geometry for the library to best match your backup environment requirements. You can configure more drives than you use, but optimal performance is with about 12 streams per VTL running concurrently. You can configure multiple VTLs on the same node, but it is best to balance VTLs across nodes. The D2DBS emulation type has an added benefit in that it is clearly identified in most backup applications as a virtual tape library (rather than a physical tape library) and so is easier to support.
HP B6200 StoreOnce Backup System VTL Emulations | Per node | Per couplet | Per maximum cluster
Number of Virtual Tape Libraries/NAS targets (combined)
Maximum number of cartridges emulated per virtual tape library
Maximum number of Virtual Tape Drives
Recommended max tape drives in concurrent use per VTL configured
Maximum number of tape cartridges emulated
Tape drive emulations: LTO-2/LTO-3/LTO-4/LTO-5
Library emulations: 1/8 Autoloader, MSL, EML, ESL, D2DBS generic
Table 5: Threshold limits for VTL emulations
3.
Create one media pool for all the incremental backup jobs and a separate media pool for all the full backup jobs. This is a best practice because it makes housekeeping more predictable. If incrementals and fulls are mixed on the same media, a full tape might be overwritten with a small incremental backup; because a full tape is being overwritten, a large housekeeping load is triggered when we don't expect it. The incremental media pool can utilize appends if required. Improvements in B6200 replication target performance mean that it is acceptable to implement tape appends at the source sites (for example on the incremental media pool); there is no detrimental effect on replication performance.
4. Use separate mirror backup jobs to back up direct to physical tape for best performance.

For best performance, run separate backup jobs direct to tape as well as to the HP B6200 Backup System at appropriate intervals. Reading data from the HP B6200 Backup System to copy to physical tape via backup software is the most popular method of offloading to physical tape, but transfers can only go via the media server. The more copy streams run in parallel, the faster the performance. Direct attached tape on the HP B6200 Backup System is not supported. HP StoreOnce replication may be used instead of physical tape to create a DR copy of the data.
NAS best practices
1. Never use HP B6200 NAS in implementations that require large-scale Write in Place functionality or produce large numbers of meta-files, because performance will be less than optimal. Remember this is a NAS target for backup (sequential data), not for NAS files (random data). Some examples of NAS backup types to avoid are shown below:
a. Do not create virtual synthetic fulls with HP Data Protector. (Synthetic fulls are OK.)
b. Do not use Backup Exec granular recovery of single messages in mailboxes.
c. Do not use large-scale NAS file type drag & drop.
d. Do not use NAS shares for snapshots that occur on a very regular basis (every 15-30 minutes), because this will cause a disproportionate amount of replication and housekeeping.
2. The threshold limits for B6200 NAS shares are shown below. There are some further restrictions on NFS; see Appendix B.
Max NAS container files (produced by the ISV application) per share: 25,000
Recommended Max Concurrent Backup Streams per couplet (only NAS shares configured): 128
Recommended Max Concurrent Backup Streams per Share: 12
Table 6: Threshold limits for NAS shares
3. NAS shares can have None, User defined or Active Directory validated permissions. Active Directory is recommended for the Enterprise class environments where the HP B6200 Backup System is positioned.
4.
The creation of NAS shares in the B6200 GUI is an easy process, but the shares then have to be linked to the ISV backup software, and different ISVs present NAS shares to the user in different ways. HP has prepared a series of NAS implementation guides for HP Data Protector, Symantec Backup Exec, Symantec NetBackup and CommVault. These can be downloaded from the D2D NAS Implementation guides.
5. By default, the Windows CIFS timeout is set low for NAS backup implementations. This can cause various error messages and conditions such as lost connection to the share, unrecoverable write errors, or timeout problems resulting in backup job failures. It is recommended to add or increase the SessTimeout value from the default of 45 seconds to 300 seconds (five minutes). For more details see Appendix E.

6. B6200 NAS shares can only authenticate against Active Directory domains that allow them write access.
Understanding the maximum number of devices supported per service set There is a maximum of 48 virtual devices per service set. This number may be split across VTL and NAS devices. The following table summarizes some possible configurations.
VTL | NAS | Total
48 | 0 | = 48 (maximum)
0 | 48 | = 48 (maximum)
| | = 48 (maximum)
1 | 47 | = 48 (maximum)
47 | 1 | = 48 (maximum)
| | = 50 (> maximum, not supported)
| | = 36 (< maximum)
4 | 5 | = 9 (< maximum)
Table 7: Maximum number of virtual devices per service set
When creating VTL devices, the initial FC port assignment for the media changer also affects the validity of the device configuration. In summary:
VTLs may be assigned to FC Ports 1 & 2 (recommended), FC Port 1 or FC Port 2.
The maximum number of devices per FC port is 120.
The maximum number of VTLs (media changers) is 48.
The maximum number of drives used across all the VTLs cannot exceed 192.
Drives cannot all be allocated to a single FC port; they must be shared across both ports. For optimal performance it is recommended that devices are shared evenly across both FC ports.
Per port, the number of media changer devices plus the number of drives must not exceed 120.
For a more detailed discussion about port assignment and the number of devices supported, refer to Fibre channel port presentations.
Worked Example Using the configuration parameters listed in Appendix B, let us use a worked example to illustrate how a customer's infrastructure and backup requirements should be mapped onto an HP B6200 Backup System. IMPORTANT: Transition to the HP B6200 should not be viewed as a simple mapping of existing media servers onto a series of new backup devices in the HP B6200 Backup System, but rather as an opportunity to map different data types onto the B6200 architecture.
This approach will require some level of backup job re-structuring in order to realize the benefits of increased deduplication ratios and optimal throughput.
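The device-count limits listed above lend themselves to a quick sanity check before committing a layout. This is our own sketch of a validator; the limits are from the text, but the function and parameter names are ours.

```python
# Validator for the service-set limits: at most 48 VTL+NAS devices per
# service set, at most 192 drives across all VTLs, and per FC port at most
# 120 devices (media changers plus drives).

MAX_DEVICES_PER_SET = 48
MAX_DRIVES_TOTAL = 192
MAX_DEVICES_PER_PORT = 120

def validate(vtl_count, nas_count, devices_per_port):
    """Return a list of rule violations; an empty list means supported."""
    errors = []
    if vtl_count + nas_count > MAX_DEVICES_PER_SET:
        errors.append("more than 48 VTL+NAS devices in the service set")
    if sum(devices_per_port.values()) > MAX_DRIVES_TOTAL:
        errors.append("more than 192 drives across all VTLs")
    for port, devices in devices_per_port.items():
        if devices > MAX_DEVICES_PER_PORT:
            errors.append(f"port {port}: more than 120 devices")
    return errors

print(validate(48, 0, {1: 60, 2: 60}))   # [] - a supported layout
print(validate(25, 25, {1: 60, 2: 60}))  # flags 50 devices, not supported
```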

Key considerations
There is a limit of 48 devices per node.
Mapping the same data type to the same device improves deduplication ratios, even if the data comes from multiple media servers.
More than one device per data type reduces housekeeping load and improves throughput. Don't be afraid to create multiple devices.
About 12 streams per device (VTL, NAS) is optimal for throughput. From a deduplication perspective, each device configured has its own dedupe store associated with it, and in terms of throughput 2 x 12-stream VTLs will give better throughput than 1 x VTL with 24 streams.
Ideally, aim to have about the same number of streams, capacity and devices (including replication target devices) on each node of the B6200 couplet.
Gathering the data type information from the media servers will be key to good planning and configuration. A tool such as HP Sizer Adviser may help in this area; alternatively, use ISV reporting capabilities.
NAS shares need to be created based on the number of files created during the backup, about 25,000 per share.
Working out and applying the mapping Imagine a fictitious company that has seven media servers, each with backup volumes that correspond to the data retention capacities per data type shown in the following tables. Let us also assume that these backup jobs need to be completed in 12 hours. This dictates a certain throughput for each data type, which is also dependent on the number of streams that the backup job is configured to deliver (assuming the host system is not the bottleneck to performance). These parameters are also illustrated in the tables below.
Media server # File System Data VTL (FSVTL) TB data retained on VTL Throughput required in TB/hr per backup job Maximum number of concurrent streams required N/A File System Data NAS (FSNAS) TB data retained on NAS Throughput required in TB/hr per backup job Maximum number of concurrent streams required N/A 8 N/A 4 N/A 12 N/A Table 8: Summary of data requirements per media server for filesystem (FS) data type

Media server # Database Data VTL (DBVTL) TB data retained on VTL Throughput required in TB/hr per backup job Maximum number of concurrent streams required 2 1 N/A N/A 1 N/A 16 Database Data NAS (DBNAS) TB data retained on NAS Throughput required in TB/hr per backup job Maximum number of concurrent streams required N/A 8 N/A N/A 5 1 N/A Table 9: Summary of data requirements per media server for database (DB) data type Media server # Other Data VTL (OTHVTL) TB data retained on VTL Throughput required in TB/hr per backup job Maximum number of concurrent streams required N/A N/A 5 N/A N/A N/A N/A Other Data NAS (OTHNAS) TB data retained on NAS Throughput required in TB/hr per backup job Maximum number of concurrent streams required N/A N/A N/A N/A 5 N/A N/A Table 10: Summary of data requirements per media server for other (OTH) data type

We can model all this information in the HP B6200 Sizing tool (see Appendix C for more details). The input screen to the Sizing tool will look as follows. Figure 15: HP B6200 Sizing tool with all data input for this configuration scenario Results from the Sizing tool When we ask the Sizing tool to size the solution it provides the following recommendations. Sizing Couplet Nodes Pairs1 Pairs2 Thrput Streams Capacity Performance Combined Where:
Sizing - Performance is based on backup window and scheduling
Couplet - Pair of nodes
Nodes - Total number of nodes across all couplets

Pairs1 - Pair of disk shelves using 2TB raw disk type
Pairs2 - Pair of disk shelves using 1TB raw disk type
Thrput - Maximum potential backup throughput of this multi-node configuration in MB/sec
Streams - Total number of streams across all nodes (an equal number of streams per node)
Note: The Sizing tool does multiple runs to work out requirements. The first pass sizes for capacity, the second pass sizes for performance; whichever requires the bigger configuration is chosen. Sometimes, to reach the performance numbers without greatly increasing capacity, the Sizing tool has to split a 2TB shelf into 2 x 1TB shelves. In working out the physical configuration to match our requirements, the Sizing tool has configured a 6-node, 208 TB unit.
Couplet 1 has 48 TB of storage per node (4 shelves) for maximum throughput.
Couplet 2 has 40 TB of storage per node (4 shelves) for maximum throughput.
Couplet 3 has 16 TB of storage per node (1 shelf).
We must now map our data segmentation requirements onto the sized infrastructure using the constraints/best practices listed in Key considerations. This will require a number of passes.
Pass 1 Pass 1 is a manual mapping process.
Naming conventions Each device type has been named appropriately: FS = filesystem, DB = database, OTH = other. For example: File System VTL1 = FSVTL1.
Color coding The color-coded areas show whether the stream counts per device need attention.
GREEN: 12 concurrent streams per device gives us the optimal throughput.
YELLOW: tolerable if it makes for easier segmentation.
RED: 19 concurrent streams per device is inadvisable, as it will slow down overall throughput and create a large amount of housekeeping.
Looking at the required storage capacity From our tables we can see that Media Server 4 has the highest volume of data: 40 TB of VTL filesystem data. In Pass 1 we assign this to Service Set 1, naming it FSVTL1.
Initially we assign the accumulated VTL database data (from all seven media servers) to Service Set 2 (DBVTL1). This totals 40 TB and balances the load on Service Set 1, but note that it gives us 20 streams in Service Set 2. That leaves 55 TB of VTL filesystem data from the other six media servers. We assign 35 TB to Service Set 3 (FSVTL2) and 20 TB to Service Set 4 (FSVTL3). To balance the load on Service Set 4 with that on Service Set 3, we assign VTL Other data to Service Set 4 (OTHVTL1), as well as NAS Other data (OTHNAS1). The NAS filesystem data (FSNAS1) and the NAS database data (DBNAS1) go to Service Sets 5 and 6, but there are problems with the number of streams on these service sets, as illustrated in the following table.

Service Set 1 Service Set 2 Service Set 3 Service Set 4 Service Set 5 Service Set 6 Node Capacity PASS 1 VTL Device #1 retained Capacity Drives (streams) Throughput Name FSVTL1 DBVTL1 FSVTL2 OTHVTL1 Split 1 VTL Device #2 retained Capacity 20 Drives (streams) 5 FSVTL3 NAS Device #1 retained Capacity Streams Name OTHNAS1 FSNAS1 DBNAS1 Total retained capacity Total # streams Note: For Service Set 4 we have created three devices: OTHVTL1 with 5 streams, FSVTL3 with 5 streams and OTHNAS1 with 4 streams. These are itemized separately in the Total # streams row(s). Note how the retained capacity used per node in the same couplet is fairly well balanced; however, there are problems with the number of streams in Service Sets 2, 5 and 6. Pass 2 To resolve this issue with streams we need to create a second DBVTL device and a second FSNAS device to turn the RED areas green. By creating the DBVTL2 and FSNAS2 devices, we avoid overloading any device with too many streams and maintain good balance across nodes.

Service Set 1 Service Set 2 Service Set 3 Service Set 4 Service Set 5 Service Set 6 Node Capacity PASS 2 VTL Device #1 retained Capacity Drives (streams) Throughput Name FSVTL1 DBVTL1 FSVTL2 OTHVTL1 Split 1 VTL Device #2 retained Capacity 10 Drives (streams) 4 DBVTL2 NAS Device #1 retained Capacity Streams Name OTHNAS1 FSNAS1 DBNAS1 Split1 NAS Device #2 Capacity 20 6 Streams 4 12 Name FSVTL3 FSNAS2 Total retained capacity Total # streams Device set Total # streams Device set Total # streams Device set 3 4 The only area of concern remaining is that we are at maximum capacity on Service Set 6, so we could transfer the 2 TB of DB on NAS from Media Server 6 onto Service Set 4 to provide the optimum solution. Pass 3 A further consideration for a Pass 3 might be to split FSNAS1, DBNAS1 and FSNAS2 into more than a single NAS share. This is because there is a limit of 25,000 NAS container files per NAS share. ISV backup applications can configure the size of the NAS container files anywhere between 2 GB and 16 GB. By making the NAS backup segment file equal to 16 GB you can store more data in a single NAS share. With the default NAS segment size of 4 GB, up to 100 TB can be stored in a single NAS share. Also note that the FSNAS1 and FSNAS2 devices, along with DBNAS1, do not require very high throughput, so these have been mapped to Service Set 5 and Service Set 6, which only have one shelf of drives and hence lower performance compared to the other service sets. DBVTL1 has a very high throughput requirement and must reside on a service set that has 4 disk shelves and gives throughput capable of supporting 3.2 TB/hour. Note: HP strongly recommends that no node exceeds 50% of the storage available to it, in order to maximize throughput performance.
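The container-file arithmetic behind Pass 3 is simple enough to check directly. This is our own worked check of the figures quoted above; the function name is ours.

```python
# Maximum data per NAS share = container-file limit x segment size.
# The share is capped at 25,000 container files; the ISV-configurable
# segment size ranges from 2 GB to 16 GB.

MAX_CONTAINER_FILES = 25000

def max_share_tb(segment_gb):
    assert 2 <= segment_gb <= 16, "ISV segment size is configurable 2-16 GB"
    return MAX_CONTAINER_FILES * segment_gb / 1000  # TB

print(max_share_tb(4))   # 100.0 - the 100 TB figure quoted above
print(max_share_tb(16))  # 400.0 - larger segments stretch the share further
```

This also shows why splitting high-volume shares (FSNAS1, DBNAS1, FSNAS2) matters: a share that would exceed the container-file limit must become two shares, or use larger segments.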

The final mapping is shown below for the three different data types. Filesystem (FS) data Figure 16: Data Segmentation by data type - file system data layout

Database (DB) data Figure 17: Data Segmentation by data type - database data layout

Other (OTH) data Figure 18: Data Segmentation by data type - Other data layout Room for growth From the above layout of devices it should be apparent that, if the system requires upgrading with more capacity or needs to be enhanced to support replication, the preferred service sets are Service Sets 5 and 6. So this is where extra capacity would be added for replication.

Scenario 3 - How to get the best out of B6200 Replication

A review of replication best practices

The replication best practices that apply to HP StoreOnce D2D Backup Systems are equally valid for the HP B6200 StoreOnce Backup System. A summary is provided below; for a more detailed discussion refer to EH , D2D Best Practices for VTL, NAS and Replication implementations.

1. Use the Sizing tool to size replication link speed and understand replication concurrency constraints.
2. It is important to understand that, whichever template you deploy for the HP B6200 Backup System, the Data channel is shared between NAS backup and replication. You must ensure that you have sufficient bandwidth for both NAS data and replication traffic.
3. The general rule of thumb for replication is 512 Kb/sec per replication job. For a list of the various replication parameters (fan-in, fan-out and replication concurrency) please see Appendix B B6200 Key Configuration Parameters.
4. The same seeding methods are available. However, the volume of seeding data that the HP B6200 Backup System needs to send over the WAN link is likely to be much higher than with previous StoreOnce D2D models. This is because the HP B6200 Backup System is a Data Center device with a large capacity and, hence, more likely to be a replication target. See also Seeding and the HP B6200 Backup System.
5. For best replication performance ensure no backups and no housekeeping jobs are running in the replication window.
6. Bandwidth limiting in the HP B6200 GUI is per device (VTL or NAS share created); throttling applies only to the source side (and on transmitted data only).
7. The use of concurrency control is not necessary if replication is run at separate times from backups and housekeeping. However, if the customer wants to run replication at the same time as backups or housekeeping, they can use concurrency control to limit the available bandwidth so as not to saturate the link with replication traffic.
8. In order to reduce replication workload and bandwidth needs, consider replicating weekly backups only. This can be done by configuring different media pools for daily and weekly backups and only replication-mapping the slots for the weekly media pool cartridges.
9. Mapping multiple source libraries into a single target library will generally give better dedupe but worse replication performance than if you map each source library to a separate target library.
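The 512 Kb/sec-per-job rule of thumb in point 3 can be turned into a rough link-sizing calculation. This is a hedged sketch using only the figures above; remember from point 2 that NAS backup traffic shares the same data channel, so the result is a floor, not a full link size.

```python
def replication_floor_mbit(concurrent_jobs: int, per_job_kbit: float = 512) -> float:
    """Minimum WAN bandwidth for replication alone, in Mbit/s,
    using the 512 Kb/sec-per-job rule of thumb."""
    return concurrent_jobs * per_job_kbit / 1000

# Example: a service set running its maximum of 48 incoming jobs.
print(replication_floor_mbit(48))  # -> 24.576 Mbit/s, before any NAS backup traffic
```

Use the Sizing tool (point 1) for real deployments; this only makes the rule of thumb concrete.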

Seeding and the HP B6200 Backup System

Seeding is the name given to the initial synchronization of the source and target HP B6200 Backup Systems, where a large volume of data has to be sent to the HP B6200 units. After seeding, the data volumes transferred are much less and are related to the daily change rate in the data stored on the source HP B6200 Backup System. The most likely methods for HP B6200 seeding are described below. Seeding options for large many-to-one replication scenarios are discussed in Scenario 4 Configuring Many-to-One replication.

Co-location

The racks are located at the source site and the 10GbE replication link is used to connect source to target, thereby ensuring a quick seeding time. After seeding, the target system is packed and shipped to the target site. Only the details (mainly IP address) of the target appliances that have been shipped need to be edited on the source appliance. This is because, when co-locating, the IP addresses for source and target would be in the same range but later, when the target system is shipped to its final home, its network range will change. See also Figure 19.

Temporary increased WAN link speed

If co-location is not practical (for example, the devices are on different continents), most enterprise customers using Dense Wave Division Multiplexing (DWDM) or Multiprotocol Label Switching (MPLS) links between sites can request a temporary increase in site-to-site bandwidth for a short period during which the seeding can take place. This technique has the added benefit that it is also well suited to Active/Active replication scenarios where replication is bi-directional. Many data centers have DWDM or MPLS links that make it easy to provision extra bandwidth for specific periods. See also Figure 19.
Floating D2D4324

Because the HP B6200 Backup System is replication compatible with earlier generation D2D appliances, a Floating D2D4324 can be used to transport seeding data between multiple HP B6200 systems. This may be more cost effective and flexible than co-location or a temporary increase in WAN link speed. See also Figure 20.

Copy to physical tape

A fourth but less likely option would be to copy data at the source HP B6200 to physical tape and transport the physical tapes to the target site, where the data can be copied onto the target HP B6200. This technique requires a physical tape infrastructure to be available at both sites and requires the ISV copy function. This is probably the slowest of all in total time required as well as manpower required. See also Figure 20.

Figure 19: HP B6200 to HP B6200 seeding process using co-location or temporary higher speed WAN link

Co-location (illustrated on the left of Figure 19)
1. Initial backup.
2. Local copy via replication over 10GbE at the source site.
3. Ship appliance to target site.
4. Re-establish replication.

Temporary high speed WAN link (illustrated on the right of Figure 19)
1. Initial backup.
2. Link provider increases link size for the seeding period.
3. Configure replication.
4. Reduce link size after seeding completes.

Figure 20: HP B6200 to HP B6200 seeding process using Floating D2D4324 or Physical Tape

Floating D2D4324 seeding (illustrated on the left of Figure 20)
1. Initial backup.
2. Local copy via replication over 10GbE at the source site to the HP D2D4324 target.
3. Ship appliance to target site and set up D2D4324 to HP B6200 replication (seeding).
4. Re-establish replication.

Physical tape seeding (illustrated on the right of Figure 20)
1. Initial backup.
2. Copy to tape library using media servers and ISV software. VTL and NAS shares may be copied to tape.
3. Ship tapes to target site and copy from tape to B6200 configured devices using ISV software and media servers.
4. Establish replication.

Implementing replication best practices with HP B6200

The main difference with the HP B6200 StoreOnce implementation is that replication is part of the service set. Each service set (associated with Node 1, Node 2, et cetera) can handle a maximum of 48 incoming concurrent replication jobs and can itself replicate OUT to up to 16 devices.

If failover occurs, the replication load falls on the remaining service set. The replication traffic will pause during the failover process and restart from the last checkpoint when failover has completed. This means that replication continues without the need for manual intervention, but performance may deteriorate. Possible ways to improve this situation are:

a) Dedicate a couplet as a replication target only (no backup targets). This allows more resources to be dedicated to replication in the event of failover.
b) Stagger the replication load across different nodes in different couplets. Try not to overload a couplet that is responsible for replication.

Figure 21 shows an ideal situation where:

- Site B nodes are acting as replication targets only. Performance is guaranteed and all we have to do is enable the replication windows and make allowances each day for housekeeping.
- The replication load at Site B is balanced across two nodes. In the event of failure of a node at Site B, replication performance will not be adversely affected, especially if the nodes at Site B are less than 50% loaded.

If there are local backups at Site B as well, to VTL7 and NAS7, the arrangement shown in Figure 22 would be the best practice. Figure 22 shows local backup devices VTL7 and NAS7 at Site B on Couplet 1. We are still dedicating nodes to act as replication targets, but they are now on Couplet 2 only. Because the load on the nodes in Couplet 2 is now increased, should a node fail in Couplet 2 on Site B there may be noticeable performance degradation on replication. This is because a single node has to handle a much larger load than in Figure 21. Careful sizing of the nodes in Couplet 2 on Site B to ensure they are less than 50% loaded will ensure that even in failover mode replication performance can be maintained.

In Figure 23 we deliberately provide one node on each couplet that is dedicated to replication. This simplifies the management, and the loading and performance are easier to predict. The way the couplets are balanced also means that wherever a node fails over we do not lose all our replication performance. In the failover scenario the remaining node can still handle backup in one time window and replication in another time window, so the overall impact of a failed node is not that damaging.
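The "less than 50% loaded" sizing guidance above can be expressed as a simple couplet check. This is an illustrative sketch, not an HP sizing tool; loads are expressed as percentages of a single node's rated throughput, and real sizing should use the Sizing tool.

```python
def couplet_survives_failover(node_a_pct: float, node_b_pct: float) -> bool:
    """True if the surviving node could absorb its partner's load.

    Loads are percentages of one node's rated throughput; keeping both
    nodes at or below 50% guarantees this check passes.
    """
    return (node_a_pct + node_b_pct) <= 100

print(couplet_survives_failover(45, 40))  # True: combined load fits on one node
print(couplet_survives_failover(60, 55))  # False: expect degraded SLAs in failover
```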

Using dedicated nodes for replication targets (Active/Passive replication)

Figure 21: Using dedicated nodes for replication targets at the target site

Adding local backups to replication target nodes

Figure 22: Using dedicated nodes for replication targets at the target site for Active/Passive, along with backup sources at Site B

Active/Active configuration

The diagram below shows a similar configuration for a full Active/Active configuration.

Figure 23: Using dedicated nodes for replication targets in an Active/Active configuration

Scenario 4 - Configuring Many-to-One replication

The other main usage model for the HP B6200 Backup System is in large scale remote office deployments, where a fan-in of up to 384 replication jobs to a maximum-configuration HP B6200 Backup System is possible (one stream per device). The sources (remote offices) are more likely to be HP StoreOnce D2D Backup Systems. For a large number of remote sites co-location is impractical; instead the Floating D2D option is recommended. Physical tape and seeding over a WAN link both have difficulties, as explained in the following table.

Table 11: Many-to-One seeding options

Floating D2D
- Best for: Many-to-one replication models with high fan-in ratios where the target must be seeded with several remote sites at once.
- Concerns: Careful control over the device creation and co-location replication at the target site is required.
- Comments: Recommended option. This is really co-location using a spare D2D. Using the floating D2D approach means the device is ready to be used again and again for future expansion where more remote sites might be added to the configuration.

Seed over the WAN link
- Best for: Many-to-one replication models with initial small volumes of backup data, or gradual migration of larger backup volumes/jobs to D2D over time. As an example, the first 500 GB full backup over a 5 Mbit link will take 5 days (120 hours) to seed from a D2D2502 Backup System. This type of seeding should be scheduled to occur over weekends where at all possible. Seeding time over WAN is calculated automatically when using the StorageWorks Backup Sizing tool for D2D.
- Concerns: This is time consuming unless WAN link sizes can be temporarily increased during the seeding process.
- Comments: It is perfectly acceptable for customers to ask their link providers for a higher link speed just for the period where seeding is to take place.

Copy to physical tape (VTL seeding)
- Concerns: This requires physical tape capability at a large number of sites.

Use of portable disk drives (backup application copy or drag and drop)
- Best for: USB portable disks, such as the HP RDX series, can be configured as Disk File Libraries within the backup application software and used for copies. Best used for D2D NAS deployments, but not well suited to VTL device seeding.
- Concerns: Multiple drives can be used; single drive maximum capacity is currently about 3 TB.
- Comments: USB disks are typically easier to integrate into systems than physical tape or SAS/FC disks. RDX ruggedized disks are well suited to shipment between sites and cost effective.
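The WAN seeding row in Table 11 rests on simple arithmetic: bits to transmit divided by link rate. The sketch below is illustrative only; the table's 120-hour figure for 500 GB over 5 Mbit/s is lower than the raw arithmetic suggests, presumably because some reduction in transmitted data is assumed, so a reduction-ratio parameter is included here as an explicit assumption.

```python
def seed_hours(data_gb: float, link_mbit: float, reduction_ratio: float = 1.0) -> float:
    """Hours to push data_gb over a link_mbit Mbit/s WAN link.

    reduction_ratio models any deduplication/compression of the data
    actually transmitted (1.0 = every byte is sent).
    """
    bits_to_send = data_gb * 1e9 * 8 / reduction_ratio
    return bits_to_send / (link_mbit * 1e6) / 3600

print(round(seed_hours(500, 5)))       # raw arithmetic: ~222 hours
print(round(seed_hours(500, 5, 2.0)))  # with an assumed 2:1 reduction: ~111 hours
```

For real deployments, rely on the StorageWorks Backup Sizing tool mentioned in the table rather than this back-of-envelope formula.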

Implementing floating D2D seeding

Figure 24: Floating D2D seeding model for Many-to-One replication models

This floating D2D method is more complex because, for large fan-in (many source sites replicating into a single target site), the initial replication setup on the floating D2D changes when it is transported to the Data Center, where the final replication mappings are configured. The sequence of events is as follows:

1. Plan the final master replication mappings from sources to target that are required and document them. Use an appropriate naming convention, such as SVTL1 (Source VTL1), SNASshare1, TVTL1 (Target VTL1), TNASshare1.
2. At each remote site perform a full system backup to the source D2D and then configure a 1:1 mapping relationship with the floating D2D device, such as SVTL1 on Remote Site A to FTVTL1 on the floating D2D, where FTVTL1 = floating target VTL1. Seeding times at Remote Site A will vary. If the D2D at site A is an HP D2D2500 Backup System, replication is over a 1GbE link and may take several hours. It will be faster if an HP D2D4312 or HP D2D4324 Backup System is used at the remote sites, since these have 10GbE replication links.

3. On the source D2D at the remote site, DELETE the replication mappings; this effectively isolates the data that is now on the floating D2D.
4. Repeat steps 1-3 at Remote Sites B and C.
5. When the floating D2D arrives at the central site, the floating D2D effectively becomes the source device to replicate INTO the HP B6200 Backup System at the Data Center site.
6. On the floating D2D we will have devices (previously named FTVTL1, FTNASshare1) that we can see from the Management Console (GUI). Using the same master naming convention as in step 1, set up replication; this will necessitate the creation of the necessary devices (VTL or NAS) on the B6200 at the Data Center site, e.g. TVTL1, TNASshare1.
7. This time when replication starts up, the contents of the floating D2D will be replicated to the Data Center B6200 over the 10GbE connection (if a D2D4312 or D2D4324 is used as the floating D2D) at the Data Center site and will take several hours. In this example, Remote Site A, B and C data will be replicated and seeded into the B6200. When this replication step is complete, DELETE the replication mappings on the floating D2D to isolate the data on the floating D2D, and then DELETE the actual devices on the floating D2D, so the device is ready for the next batch of remote sites.
8. Repeat steps 1-7 for the next series of remote sites until all the remote site data has been seeded into the HP B6200 Backup System.
9. Finally, set up the final replication mappings using the naming convention agreed in step 1. At the remote sites configure replication again to the Data Center site, but be careful to replicate to the correct target devices by using the agreed naming convention at the Data Center site, e.g. TVTL1, TNASshare1. This time when we set up replication, the B6200 at the target site presents a list of possible target replication devices available to Remote Site A. So, in this example, we would select TVTL1 or TNASshare1 from the list of available targets presented to Remote Site A when we are configuring the final replication mappings. This time when replication starts, almost all the necessary data is already seeded on the B6200 for Remote Site A and the synchronization process happens very quickly.

In some scenarios where a customer has larger remote storage locations, the floating D2D process can be used together with the smaller locations seeding over the WAN link. Another consideration is the physical logistics for some customers with 100+ locations, some being international. Here the floating D2D and co-location will be difficult, so the only option is to schedule the use of increased bandwidth connections along with their infrastructure needs. The schedule is used to perform seeding at timed, phased slots.

Balancing Many-to-One replication

For the many-to-one replication scenario, it is probably better to load balance the number of incoming replication sources across the available nodes, as shown in the diagram below. In Figure 25 we show the many-to-one replication scenario where we have grouped remote sites (VTL and NAS) together into bundles and have them replicating into multiple dedicated replication target devices. The current recommendation with the HP B6200 Backup System is to keep a 1:1 mapping between remote site VTLs and replication target VTLs. The deployment illustrated has the following benefits:

- Load balancing of remote sites: 40 sites are divided by 4 and then presented in bundles of 10 to the replication targets. As more remote sites come on line they are also balanced across the four replication target nodes.
- Site B backup devices can be managed and added to easily, and their loading on the node accurately monitored.
- Similarly, the replication target nodes have a single function (replication targets), which makes their behavior more predictable.
- In a failover situation, the performance impact on either backup or replication is likely to be lower because the backup load at Site B nodes and the replication load at Site B nodes are likely to run in separate windows at separate times.

Figure 25: Balancing Many-to-One replication sources across all available nodes

Replication and load balancing

The specification for a B6200 service set is that it can accept up to a maximum of 48 concurrent replication streams from external sources. If more than 48 streams are replicating into a B6200 node, some streams will be put on hold until a spare replication slot becomes available. Being a replication target is more demanding of resources than being a replication source, which is why we recommend allocating dedicated replication targets to specific nodes.

The example detailed in Figure 26 on the following page shows a full system approach and is a good overview of what is required. Note that:

- Each service set on each node is at roughly the same load (load balancing is a manual process).
- Each node has a single function: VTL backup target node, NAS backup target node, or replication target. This makes management and load assessment easier.
- FC SAN 1, with its larger number of hosts and capacities, is spread over four nodes, all with maximum storage capacity. There are at least eight streams per node to provide good throughput performance.
- All the NAS target backups have been grouped together on nodes 5 and 6 on 10GbE; these could be VMware backups, which generally require a backup-to-NAS target. Again, all NAS targets are balanced equally across nodes 5 and 6 and, in the event of failover, performance would be well balanced at around 50% of previous performance for the duration of the failed-over period.
- FC SAN 2 has smaller capacity hosts connected via FC. Nodes 7 and 8 are the least loaded, so this couplet is the obvious candidate for use as the replication target.

Keep it simple and easy to understand: that's the key.
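The bundling described above (remote sites divided evenly across replication target nodes, within the 48-stream cap) can be sketched as a simple round-robin assignment. Node and site names here are hypothetical, and the one-stream-per-site assumption is a simplification of real device mappings.

```python
from collections import defaultdict

MAX_INCOMING_STREAMS = 48  # per service set, per the specification above

def balance_sites(sites, target_nodes):
    """Round-robin remote sites across replication target nodes, warning
    if any node would exceed its concurrent-stream cap (this sketch
    assumes one replication stream per remote site)."""
    plan = defaultdict(list)
    for i, site in enumerate(sites):
        plan[target_nodes[i % len(target_nodes)]].append(site)
    for node, assigned in plan.items():
        if len(assigned) > MAX_INCOMING_STREAMS:
            print(f"warning: {node} exceeds {MAX_INCOMING_STREAMS} streams")
    return dict(plan)

# Hypothetical names: 40 remote sites onto four replication target nodes.
plan = balance_sites([f"remote{n:02d}" for n in range(1, 41)],
                     ["node5", "node6", "node7", "node8"])
print({node: len(assigned) for node, assigned in plan.items()})
```

With 40 sites and four nodes this yields the bundles of 10 described for Figure 25; streams held beyond the cap would queue until a slot frees, as noted above.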

Figure 26: Fully load balanced analysis of a typical implementation

Scenario 5 - How to get the best from HP autonomic failover

Autonomic failover is a unique enterprise class feature of the HP B6200 StoreOnce Backup System. When integrated with various backup applications, it makes it possible for the backup process to continue even if a node within a B6200 couplet fails. ISV scripts are usually required to complete this process. The failover process is best visualized by watching the video on:

What happens during autonomic failover?

At a logical level, all the virtual devices (VTL, NAS and replication) associated with the failing node are transferred by the B6200 operating system onto the paired healthy node of the couplet. The use of Virtual IP addresses for Ethernet and NPIV virtualization on the Fibre Channel ports are the key technology enablers that allow this to happen without manual intervention.

NAS target failover is via the Virtual IP system used in the HP B6200 Backup System: the service set simply presents the failed node's Virtual IP address on the remaining node. FC (VTL device) failover relies on the customer's fabric switches supporting NPIV, with NPIV enabled and the zones set up correctly. Here the situation is more complex, as several permutations are possible. For more details see Appendix A FC failover supported configurations.

Note: To prevent data corruption, the system must confirm that the failing node is shut down before the other node starts writing to disk. This can be seen in the video where the service set is stopping. At a hardware level, the active cluster manager sends a shutdown command via the dedicated iLO3 port on the failing node. Email alerts and SNMP traps are also sent on node failure.

The HP B6200 Backup System failover process can take approximately 15 minutes to complete. The following figure illustrates the failover timeline.

Figure 27: Failover timeline

Failover support with backup applications

Backup applications have no awareness of advanced features such as autonomic failover because they are designed for use with physical tape libraries and NAS storage. From the perspective of the backup application, when failover occurs the virtual tape libraries and the NAS shares on the HP B6200 Backup System go offline and, after a period of time, come back online again. This is similar to a scenario where the backup device has been powered off and powered on again.

Each backup application deals with backup devices going offline differently. In some cases, once a backup device goes offline the backup application will keep retrying until the target backup device comes back online and the backup job can be completed. In other cases, once a backup device goes offline it must be brought back online manually within the backup application before it can be used to retry the failed backups.

In this section we briefly describe three popular backup applications and their integration with the autonomic failover feature. Information for additional backup applications will be published on the B6200 support documentation pages when it is available.

HP Data Protector 6.21: Job retries are currently supported by using a post-exec script. Download from B6200 support documentation.

Symantec NetBackup 7.x: Job retries are automatic, but after a period without a response from the backup device the software marks the devices as down. Once failover has completed and the backup device is responding again, the software does not automatically mark the device as up. A script is available from HP that continually checks Symantec device status and ensures that backup devices are marked as up. With this script deployed on the NetBackup media server, HP B6200 Backup System failover works seamlessly. Download from B6200 support documentation. NetBackup can go back to the last checkpoint and carry on from there, if checkpointing has been enabled in the backup job. So, all the data backed up prior to failover is preserved and the job does not have to go right back to the beginning and start again.

EMC Networker 7.x: For VTL, job retries are automatically enabled for scheduled backup jobs. No additional scripts or configuration are required in order to achieve seamless integration with the HP B6200 Backup System. In the event of a failover scenario, the backup jobs are automatically retried once the HP B6200 Backup System has completed the failover process. EMC Networker also has a checkpoint facility that can be enabled; this allows failed backup jobs to be restarted from the most recent checkpoint. For NAS, the combination of Networker and NAS is not supported with autonomic failover and use could cause irrecoverable data loss.

It is strongly recommended that all backup jobs to all nodes be configured to restart (if any action to do this is required) because there is no guarantee which nodes are more likely to fail than others. It is best to cover all eventualities by ensuring all backups to all nodes have restart capability enabled, if required.

Whilst the failover process is autonomic, the failback process is manual because the replacement or repaired node must be brought back online before failback can happen. Failback can be implemented either from the CLI or the GUI.

Restores are generally a manual process, and restore jobs are typically not automatically retried because they are rarely scheduled.

Designing for failover

One node is effectively doing the work of two nodes in the failed-over condition. There is some performance degradation, but the backup jobs will continue after the autonomic failover. The following best practices apply when designing for autonomic failover support:

- The customer must choose whether SLAs will remain the same after failover as they did before failover. If they do, the solution must be sized in advance to only use up to 50% of the available performance. This ensures that there is sufficient headroom in system resources so that in the case of failover there is no appreciable degradation in performance and the SLAs are still met.
- For customers who are more price-conscious and where failover is an exception condition, the solution can be sized for cost effectiveness. Here most of the available throughput is utilized on the nodes. In this case, when failover happens there will be a degradation in performance. The amount of degradation observed will depend on the relative imbalance of throughput requirements between the two nodes. This is another reason for keeping both nodes in a couplet as evenly loaded as possible.
- Ensure the correct ISV patches/scripts are applied and do a dry run to test the solution. In some cases a post-execution script must be added to each and every backup job/policy.
- The customer can configure which jobs will retry in the event of failover (which is a temporary condition) in order to limit the load on the single remaining node in the couplet by:
  o Only adding the post-execution script to retry the job to the most urgent and important jobs, not all jobs. This is the method for HP Data Protector.
  o Modifying the "bring device back online" scripts to only apply to certain drives and robots, namely those used by the most urgent and important jobs. This is the method for Symantec NetBackup.
- Remember that replication is also considered a virtual device within a service set, and replication fails over as well as backup devices. For replication failover there are two scenarios:
  o Replication was not running, that is, failover occurred outside the replication window, in which case replication will start when the replication window next opens.
  o If replication was in progress when failover occurred, after failover has completed replication will start again from the last known good checkpoint (about every 10MB of replicated data).
- Failback (via CLI or GUI) is a manual process and should be scheduled to occur during a period of inactivity.
- Remember all failover-related events are recorded in the Event Logs.
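The checkpoint behavior described above (replication resumes from the last known good checkpoint, taken roughly every 10 MB) implies a small, bounded amount of re-sent data after failover. A minimal sketch of that arithmetic, assuming exact 10 MB intervals:

```python
CHECKPOINT_MB = 10  # approximate replication checkpoint interval noted above

def resume_offset_mb(transferred_mb: float) -> int:
    """Offset replication resumes from after failover: the last whole
    checkpoint before the interruption (illustrative model only)."""
    return int(transferred_mb // CHECKPOINT_MB) * CHECKPOINT_MB

print(resume_offset_mb(237.4))  # -> 230; at most ~10 MB is re-sent
```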

Scenario 6 - Monitoring the HP B6200 Backup System

This section explains the options for monitoring the HP B6200 Backup System via inbuilt reporting, SNMP and email alerts. Best practice is to monitor status and alerts on a regular basis to ensure that everything is operating correctly.

Events reporting

An Events summary is updated in the top left-hand corner of the GUI on a rolling 24-hour basis. Any red alerts should be investigated immediately. Select Events in the Navigator pane to display a history of all event logs that can be further analyzed. All events can be viewed, or a category of Alerts, Warnings or Information may be selected. In the example below, Info Only has been selected and reports that a virtual library has been deleted by a user.

Figure 28: Main event monitoring on the B6200 GUI

Events generated if couplet storage fills

Each node has its own allocated local storage within a couplet. Should this node's local storage fill up, it will utilize some of the other node's storage. Events are sent to warn you of the local node storage filling up. They are generated at 90%, 92%, 94%, 95%, 96%, 97% and 98% as a warning

and 100% as a critical event. The actual messages are shown by segment (an internal storage term), so, for each percentage figure, multiple events may be seen. When the storage is reaching capacity, for a VTL, an EWEOM (Early Warning End Of Media) flag is set on the cartridge; it is ISV dependent how the backup application responds to this check condition being returned.

Generally, storage filling up can be a result of:

- Undersizing the solution, in which case additional capacity can be added on a couplet by couplet basis.
- Poor deduplication ratio, as a result of the unique nature of the data being backed up, for example data that is already compressed prior to being sent to the StoreOnce device.
- Bad practices, some of which are:
  o Bad pooling of cartridges, creating excessive housekeeping (and thus sub-optimal data storage at the deduplication level).
  o Poor deduplication ratio:
    - as a result of pooling different data types into the same VTL / NAS share
    - as a result of multiplexing backup streams (we recommend setting the concurrency to 1); see also Why multiplexing is a bad practice.
  o Housekeeping being paused or not allowed sufficient time to run. This results in space not being reclaimed as fast as it should be.

If storage reaches capacity, the customer should take the following actions:

- Ensure that no further backups to VTL or NAS shares occur. If the device is being used as a replication target, pause replication jobs.
- Avoid using the StoreOnce B6200 in a failover mode unless necessary.
- Check the amount of housekeeping outstanding on the service sets and whether housekeeping is enabled and running. If there is outstanding housekeeping, allow this to complete.
- Check the trend of capacity usage (within the GUI) over the past 30 days: has it increased significantly? If required, purchase and schedule a storage capacity upgrade.
- Deleting old cartridges may not necessarily reclaim storage. Consider adding more storage.

At this time, to avoid creating a heavy housekeeping load, DO NOT:

- Delete many cartridges at one time.
- Reformat many cartridges at one time within the backup application.

Housekeeping load

Another worthwhile check that should be performed is on the overall housekeeping load for each service set. Housekeeping is the process whereby space is reclaimed; it is best scheduled to occur in quiet periods when no backup or replication is taking place. However, if insufficient time is allocated to housekeeping, there is a risk that housekeeping jobs will stack up, effectively hogging capacity.

To view overall housekeeping load and progress per service set, click Housekeeping in the Navigator, select the service set and the Overall tab in Housekeeping Statistics. This tab shows the housekeeping statistics for all devices configured on the service set. The screenshot below shows overall housekeeping statistics for Service Set 2. The top graph shows that the housekeeping load is low and all housekeeping jobs have been completed. The second graph shows the rate at which housekeeping jobs are being processed.
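The storage-fill event thresholds described earlier in this section (warnings from 90% to 98%, critical at 100%) can be mirrored in external monitoring scripts. A minimal illustrative classifier follows; it is not the appliance's actual implementation, just the same thresholds applied to a utilization figure you might collect via SNMP or reporting.

```python
WARNING_LEVELS = (90, 92, 94, 95, 96, 97, 98)  # warning events, per the guide
CRITICAL_LEVEL = 100                            # storage full: critical event

def storage_event(utilization_pct: float) -> str:
    """Classify node-local storage utilization using the event
    thresholds described in this section (illustrative only)."""
    if utilization_pct >= CRITICAL_LEVEL:
        return "critical"
    if utilization_pct >= WARNING_LEVELS[0]:
        return "warning"
    return "ok"

print(storage_event(85), storage_event(93), storage_event(100))
```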

Figure 29: Monitoring housekeeping on the B6200 GUI

Storage reporting

Another important parameter to measure is storage growth, which is also done per service set. Should the storage occupancy of a service set hit 90%, an automatic alert is generated. If no further action takes place, the whole device is made read-only when occupancy reaches 100%. To monitor the free space per service set and the overall deduplication ratio per service set, select Storage Report in the Navigator. This parameter can also be measured using HP Replication Manager 2.0 (see later in this section).

Figure 30: Storage reporting on the B6200 GUI - free space

Hardware Problem Report

Although issues over the past 24 hours will be logged in the general Events summary, the Hardware Problem Report should also be monitored for more detailed information. The following example shows that some network connections were down and a disk had gone down on MSA1 on couplet 1.

Figure 31: Hardware Problem reporting on the B6200 GUI - faulty disk

Select an item in the list and click Details to display more information about the problem. You can also go to the Hardware page, where additional buttons may be available. For example, for a physical disk you can turn on the beacon LED from this page to ensure the correct drive is replaced.

Figure 32: Hardware reporting - component details

Email alerts

Best practice is to configure email alerts via the GUI, so that event notifications are automatically emailed to the relevant people. A single event can generate a notification to multiple email addresses, and different sets of events can generate notifications to different addresses. Select Email from the Navigator to configure automatic email and provide the information needed to route the email (SMTP server). Select Email Events from the Navigator to set up the email addresses and associate events with destination addresses. The following example shows the Manage Email Notifications screen. Refer to the HP B6200 StoreOnce Backup System user guide for more information about configuring email alerts.
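To make the routing concrete, the sketch below composes the kind of event notification that would be handed to the configured SMTP server. The addresses, sender and event text are hypothetical examples; only the standard library is used, and actually sending would go through smtplib against your own SMTP server.

```python
# Illustrative composition of a B6200-style event notification. One event
# can notify multiple addresses, per the best practice above. All names and
# addresses here are hypothetical.
from email.message import EmailMessage

def build_event_email(event, severity, recipients):
    """Compose an event notification addressed to the relevant people."""
    msg = EmailMessage()
    msg["Subject"] = f"[B6200 {severity}] {event}"
    msg["From"] = "b6200-alerts@example.com"
    msg["To"] = ", ".join(recipients)  # a single event, many addresses
    msg.set_content(f"Severity: {severity}\nEvent: {event}")
    return msg

msg = build_event_email("Couplet 1 storage at 90%", "Warning",
                        ["storage-team@example.com", "oncall@example.com"])
print(msg["Subject"])
# Delivery would use smtplib.SMTP(<your SMTP server>).send_message(msg)
```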

Figure 33: Selective emails driven by Event severity

SNMP reporting

The HP B6200 Backup System can be configured to provide SNMP traps, which are the lowest common denominator in terms of event monitoring capabilities; many third-party network monitoring applications use this protocol. Currently, the HP B6200 Backup System only supports SNMP v1. The SNMP trap location is set on the HP B6200 Backup System via a CLI command. Once configured, SNMP traps can be used to report B6200 alarms to various system-wide monitoring software packages, such as HP Insight Remote Support and Microsoft SCOM.

Figure 34: Use CLI on HP B6200 to set the SNMP trap address

HP Insight Remote Support

HP Insight Remote Support provides a free phone home capability for the HP B6200 StoreOnce Backup System. It is a no-cost option to which customers can subscribe; it enables pro-active support of HP servers and storage by allowing a customer's equipment to be monitored remotely by HP. At the

first sign of trouble an HP Support engineer can be dispatched with a replacement component, sometimes before the customer even realizes there is a problem! The HP B6200 Backup System is supported in IRS V5.8 and the new platform IRS V7.05 (June 2012). More details can be found at HP Insight Remote Support.

Figure 35: Typical HP IRS console showing HP Storage devices monitored by HP

Microsoft SCOM (System Center Operations Manager)

The HP B6200 Backup System supports the Netcitizen MIB (Management Information Base). To obtain B6200 support, the customer must install SCOM v or later.

Figures 36 and 37: Microsoft SCOM monitoring a wide range of HP hardware including HP StoreOnce B6200

HP Replication Manager 2.0

Mainly targeted at solutions such as those shown in Scenario 4 (multiple remote offices), HP Replication Manager 2.0 is specifically designed, and highly recommended as a best practice, for single pane of glass management of large fan-in replication scenarios. Replication Manager 2.0 is available at no cost to anyone who purchases an HP B6200 Backup System replication license and is downloadable from the HP Software Kiosk. Replication Manager 2.0 provides the following functionality:
- Support for HP B6200 Backup Systems
- Basic trends analysis: capacity, replication up times, utilization analysis
- Enhanced topology view
- Support for up to 400 devices
- Launching of multiple StoreOnce GUIs within Replication Manager
- Monitoring and reporting of CIFS, NAS, VTL
- Real-time graphical views (auto refresh)
- Database export (CSV) and historical data trending up to 1 year
- Active Directory group setting (import AD groups)
- Email notification (digest)
- Windows 32- and 64-bit support
- Continued Gen 2 hardware support including the D2D2500 series
- StoreOnce software identification
- Enhanced Command Line Interface for Replication Manager

Figure 38: Storage Trend report in Replication Manager 2.0

Figure 39: Replication Trend report in Replication Manager 2.0

The Topology Viewer shows Device Status, Name and Replication Status between devices. A tool tip is available when you mouse over a device; it contains additional information about that device.

Click Show Legend to display the legend used in the Topology view and use the page navigation buttons to move on to other islands. Refer to the Replication Manager 2.0 documentation for more information about using the screens illustrated in this section.

Figure 40: Topology viewing using Replication Manager

Appendix A FC failover supported configurations

Key Failover FC zoning considerations

The same considerations apply when configuring Fibre Channel as when configuring the network. Care must be taken to ensure there is no single point of failure in switch or fabric zoning that would negate the autonomic failover capability of the HP B6200 Backup System. Conformance to the following rules will help to ensure successful failover:
- Fibre Channel switches used with HP StoreOnce must support NPIV. For a full list see
- Use WWPN zoning (rather than port-based zoning).
- In a single fabric configuration, ensure the equivalent FC ports from each B6200 node in a couplet are presented to the same FC switch, see Scenario 1.
- In a dual fabric configuration, ensure the equivalent FC ports from each B6200 node in a couplet are presented to the same fabric. However, they should present to separate switches within the fabric.
- Ensure the D2D diagnostic device WWNs (these will be seen in the switch name server and are associated with the physical ports) are not included in any fabric zones and, therefore, not presented to any hosts.

Fibre Channel port presentations

When you create a virtual tape library on a service set you specify whether the VTL should be presented to:
- Port 1 and 2
- Port 1
- Port 2

Port 1 and 2 is the recommended option to achieve efficient load balancing. Only the robotics (medium changer) part of the VTL is presented to both Port 1 and Port 2 initially, with the defined virtual tape drives presented 50% to Port 1 and 50% to Port 2. This also ensures that in the event of a fabric failure at least half of the drives will still be available to the hosts. (The initial 50/50 virtual tape drive allocation to ports can be edited later, if required.)
So, to create a library you need:
- 1 WWN for the robotics
- XX WWNs for your drives, depending on the required number of drives

Although the universal configuration rule is a maximum of 255 WWNs per port, the HP B6200 Backup System applies a maximum of 120 WWNs per port and up to 192 drives per library. This is to ensure fabric redundancy and to enable failover to work correctly. For example, should Port 1 fail in any of the selected configurations, the WWNs associated with its service set will not exceed 120 and can be failed over safely to Port 2. To summarize: to create a library on one port only, the maximum number of devices that you can have is 120, of which 1 WWN is required for the robotics, so the total number of drives available is 119.
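The WWN budgeting above can be expressed as a small rule of thumb. The limits (120 WWNs per port, 1 WWN for robotics, 96 drives per port on a dual-port library) are the figures stated in this appendix; the helper function is illustrative only.

```python
# Sketch of the B6200 WWN budgeting rules: at most 120 WWNs per FC port, so
# that one port's devices can always fail over safely onto the other port.

MAX_WWNS_PER_PORT = 120
ROBOTICS_WWNS = 1

def max_drives(single_port):
    """Maximum virtual drives for a library on one port vs. both ports."""
    if single_port:
        # 1 WWN is consumed by the robotics, leaving 119 for drives
        return MAX_WWNS_PER_PORT - ROBOTICS_WWNS
    # On Ports 1 and 2 the library limit is 96 drives per port
    return 96 * 2

print(max_drives(single_port=True))   # 119
print(max_drives(single_port=False))  # 192
```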

To create a library on Ports 1 and 2, the maximum number of drives is 96 per port (but this configuration is not recommended). This is a B6200 library limit and not a WWN limit. The following table illustrates various FC port configurations with VTL devices and the impact that the choice of FC ports has on the validity of the configuration.

Table 12: Port assignment VTL configuration examples, illustrating maximum number of devices supported

FC failover scenario 1, single fabric with dual switches, recommended

Figure 41 illustrates the logical connectivity between the hosts and the VTLs and their FC ports. The arrows illustrate accessibility, not data flow.

FC configuration
- Multiple switches within a single fabric
- All hosts can see the robots over two separate switches
- Zoning by WWPN
- Each zone to include a host and the required targets on the HP B6200 Backup System
- Equivalent ports from each node can see the same switch

B6200 VTL configuration
- Default library configuration is 50% of drives presented to Port 1, 50% to Port 2
- Up to 120 WWNs can be presented to Port 1 and Port 2
- On B6200 failover all WWNs of the failed node are automatically transferred to the corresponding port on the other node. This is transparent to the hosts.

Figure 41: VTLs presented to two ports and into dual switches in a single fabric, recommended configuration

If FC switch 1 fails, Host A and Host B lose access to their backup devices. Hosts C and D still have access to the media changers and to 50% of the drives on VTL2 and 50% of the drives on VTL1. B6200 failover between nodes remains enabled.

FC failover scenario 2, single fabric with dual switches, not advised

The FC configuration is the same in this scenario, but the VTLs are presented to a single port. This configuration is not advised because it compromises the B6200 autonomic failover facility.

FC configuration
- Multiple switches within a single fabric
- All hosts can see the robots over two separate switches
- Zoning by WWPN
- Each zone to include a host and the required targets on the HP B6200 Backup System
- Equivalent ports on each node see different switches

B6200 VTL configuration
- Library configuration is all drives presented entirely to a single port, either Port 1 or Port 2
- Up to 120 WWNs can be presented to the individual port
- Loss of a port or switch means that all access is lost to the VTLs that are dedicated to that port
- B6200 nodes will not fail over if an FC port is lost on the node

Figure 42: Separate VTLs presented to separate ports and into different switches, not recommended

If FC switch 1 fails, Host A and Host B lose access to their backup devices, even though B6200 failover is enabled, because the physical configuration provides a single point of failure. Hosts C and D still have access to the media changers and to 100% of the drives on VTL3 and VTL4.

FC failover scenario 3, dual fabric with dual switches, recommended

This FC configuration has added complexity because it has two fabrics. The arrows illustrate accessibility, not data flow.

FC configuration
- Dual fabrics
- Multiple switches within each fabric
- Zoning by WWPN
- Each zone to include a host and the required targets on the HP B6200 Backup System
- Equivalent ports from each node can see the same fabric, but are directed to different switches

B6200 VTL configuration
- Default library configuration is 50% of drives presented to Port 1, 50% to Port 2. The robot appears on Port 1 and Port 2
- Up to 120 WWNs can be presented to Port 1 and Port 2
- On B6200 failover all WWNs of the failed node are automatically transferred to the corresponding port on the other node, which still has access to both fabrics. This is transparent to the hosts.

Figure 43: Complex configuration with parts of different VTLs being presented to different fabrics, recommended configuration

What happens if a fabric fails?

If Fabric 1 fails in the previous configuration, all VTL libraries and nodes on the HP B6200 Backup System still have access to Fabric 2. As long as Hosts A, B and C also have access to Fabric 2, all backup devices remain available to Hosts A, B and C. The following diagram illustrates the remaining good paths after Fabric 1 fails.

Figure 44: Complex configuration with parts of different VTLs being presented to different fabrics, fabric 1 fails

Similarly, if Fabric 2 failed, all VTL libraries and nodes on the HP B6200 Backup System would still have access to Fabric 1. As long as Hosts D, E and F also have access to Fabric 1, all backup devices remain available to Hosts D, E and F. The following diagram illustrates the remaining good paths after Fabric 2 fails.

Figure 45: Complex configuration with parts of different VTLs being presented to different fabrics, fabric 2 fails

FC failover scenario 4, dual fabric with dual switches, not advised

The FC configuration is the same as scenario 3, but the VTLs are presented to a single port, which means they are tied to a single switch within a single fabric. This configuration is not advised because it compromises the B6200 autonomic failover facility.

FC configuration
- Dual fabrics
- Multiple switches within each fabric
- Zoning by WWPN
- Each zone to include a host and the required targets on the HP B6200 Backup System
- Equivalent ports from each node are connected to the same fabric, but are directed to different switches
- Each port is connected to only one switch within one fabric

B6200 VTL configuration
- Library configuration is all drives presented entirely to a single port, either Port 1 or Port 2
- Loss of a port, switch or fabric means that all access is lost to the VTLs that are dedicated to that port, switch or fabric

Figure 46: Complex configuration with VTLs being presented to only one port and to a single fabric, not advised

Other factors to consider

For each D2D FC port there is a Diagnostic Fibre Channel Device presented to the fabric. There is one per active FC physical port, which means there are 4 per couplet on an HP B6200 Backup System. The Diagnostic Fibre Channel Device can be identified by the following example text:

Symbolic Port Name "HP D2D S/N-CZ J99 HP D2DBS Diagnostic Fibre Channel S/N-MY H Port-1"
Symbolic Node Name "HP D2D S/N-CZ J99 HP D2DBS Diagnostic Fibre Channel S/N-MY H"

Where:
- S/N-CZ J99 is an example string for D2D Gen2; S/N-hpd8d385af is an example string for an HP B6200 device.
- If this is node Port 1, the Port Name string will end with Port-1 as above; if it is Port 2, the string will end with Port-2.
- Often the diagnostic device will be listed above the other virtual devices because it is logged in ahead of the virtual devices.
- The S/N-MY H string is an indication of the QLC HBA's serial number and not the serial number of an appliance/node.

These devices are part of the StoreOnce D2D VTL implementation and are not an error or fault condition. It is recommended that these devices be removed from any switch zone that is also used for virtual drives and loaders.

Dual fabrics can be implemented in a single switch using Cisco VSANs; in practice this might be used with a large Cisco Director class switch.

Some operating systems track resources through the FCID (N-Port ID) address instead of the WWPN, which has the potential to cause problems during failover. Examples are HP-UX 11.11 and, in legacy mode, AIX. You would need to ensure FCID persistence is used to maintain the path. HP-UX (non-legacy mode) introduces a new Agile Addressing scheme which tracks the WWN; AIX with dynamic tracking will bind to the target WWN rather than the FCID.
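Since the recommendation is to keep the diagnostic devices out of the zones used for virtual drives and loaders, a simple filter on the Symbolic Port Name format quoted above can separate them from the real devices. The name server entries below are examples, not output from a real switch.

```python
# Illustrative filter for keeping D2D diagnostic devices out of fabric
# zones, based on the "Diagnostic Fibre Channel" text that appears in the
# device's Symbolic Port/Node Name.
import re

DIAG_PATTERN = re.compile(r"Diagnostic Fibre Channel")

def is_diagnostic_device(symbolic_name):
    """True if a name server entry is a StoreOnce diagnostic device."""
    return bool(DIAG_PATTERN.search(symbolic_name))

entries = [
    'HP D2D S/N-CZ J99 HP D2DBS Diagnostic Fibre Channel S/N-MY H Port-1',
    'HP D2DBS virtual tape drive',   # hypothetical virtual drive entry
]
zone_members = [e for e in entries if not is_diagnostic_device(e)]
print(zone_members)   # only the virtual drive remains
```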

Appendix B B6200 Key Configuration Parameters

Table 13: Configuration parameters, for 1, 2, 3 and 4 couplets

Devices
- Max Addressable Disk Capacity (TB), assuming 2 TB drives: up to 128 / 256 / 384 / 512
- Max Number of Devices (VTL + NAS shares)
- Total maximum concurrent streams (backups/restores/inbound replication)

Replication
- Max VTL Library Rep Fan Out
- Max VTL Library Rep Fan In
- Max Rep Fan Out
- Max Rep Fan In
- Max Concurrent Rep Jobs as Source
- Max Concurrent Rep Jobs as Target

VTL
- Max VTL drives (384) and medium changers (96), combined
- Max VTL Drives
- Max Cartridge Size (TB)
- Max Slots Per Library (D2DBS, EML-E, ESL-E library types)
- Max Slots Per Library (MSL2024, MSL4048, MSL8096 library types): 24, 48 or 96 in all configurations
- Max virtual devices (drives and medium changers) configurable per FC port
- Recommended Max Concurrent Backup Streams (mix of VTL and NAS)
- Recommended Max Concurrent Backup Streams per Library

NAS
- Max files per share
- Max number of streams if only CIFS target shares configured (no VTL)
- Max number of streams if only NFS target shares configured (no VTL)
- Recommended Max Concurrent Backup Streams (mix of VTL and NAS)
- Recommended Max Concurrent Backup Streams per Share

Performance
- Max Aggregate Write Throughput (MB/s)
- Min streams required to achieve max aggregate throughput**

** Assumes no backup client performance limitations.

Appendix C B6200 Sizing Considerations

The HP Storage sizing tool can be downloaded from the HP website and provides a useful starting point for designing any solution involving an HP B6200 Backup System. In this section we will size an HP B6200 to HP B6200 Active/Passive replication solution. The two main outputs of the sizing tool are:
- A Bill of Materials (BOM), which indicates the likely cost of the solution
- A technical set of calculations (in HTML format), which shows how the solution was derived and also states other useful information, such as the link sizing and housekeeping allowances required.

The same tool is used for a variety of HP Storage technologies as well as a number of Nearline technologies under the Backup Calculators navigation tab (see above). When designing an HP B6200 Backup System into a replication solution (as opposed to a standalone device), the required Launch option is Design VLS/D2D Replication over WAN in the VTL Solution Calculators section.

Replication Designer wizard

1. Provide the Replication Designer wizard with the required inputs, such as:
- Replication configuration: Active/Passive, Active/Active or Many to One
- Replication Window (used to help size the replication link)
- Preferred type of replication target, e.g. HP B6200 (if no specific target device type is specified, the tool will determine which D2D model is best suited)

2. Click Launch D2D Calculators. You will notice that in our example this has brought up two devices in the left-hand navigation section, HP D2D Source #1 and HP D2D Target #2, which is in line with the replication environment that we specified in the Replication Designer wizard. The next step is to input the Job Spec.

3. For this simple example let us assume the following jobs at the source B6200:
- Job 1: Database backup of 20 TB with 16 data streams available, a backup window of 12 hours, an estimated daily data change rate of 5% and a retention period of 3 months using incrementals during the week and fulls on Saturdays. HP Data Protector software is used; the data is 2:1 compressible.
- Job 2: Filesystem backup of 5 TB with 5 data streams available, a backup window of 12 hours, a daily change rate of 2% and a retention period of 3 months using incrementals during the week and fulls on Saturdays. HP Data Protector software is used; the data is 1.5:1 compressible.

We expect all this data to be replicated in 12 hours, as previously specified in the Replication Designer wizard. The important thing to note here is that the backup performance required will dictate the number of nodes and shelves required in the solution. The screenshots below show the data inputs for Job 1. In our example we have chosen to force the sizing tool to size the source device as a B6200 and to use Sizer intelligence to optimize the disk types (1 TB or 2 TB) to get the correct performance and capacity requirement.
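The point that the backup window dictates node count can be checked with back-of-envelope arithmetic on the Job 1 inputs. The calculation below is illustrative (MB means 10^6 bytes here), not the sizer's exact algorithm.

```python
# Rough throughput requirement implied by a job spec: the data volume and
# backup window dictate the aggregate MB/s, and hence the hardware sizing.

def required_throughput_mb_s(data_tb, window_hours):
    """Aggregate MB/s needed to move data_tb terabytes in window_hours."""
    return data_tb * 1_000_000 / (window_hours * 3600)

agg = required_throughput_mb_s(20, 12)       # Job 1: 20 TB in 12 hours
print(f"aggregate: {agg:.0f} MB/s")          # ~463 MB/s
print(f"per stream: {agg / 16:.1f} MB/s")    # across 16 streams, ~28.9
```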

4. Repeat the process to add Job 2.

5. Then simply click the Solve/Submit button. A Bill of Materials and list pricing is produced (prices excluded below for commercial reasons). The B6200 Bill of Materials includes a rack, which is part of the EJ021A switch assembly part number. Note that the BOM has added a single-couplet replication license to the Replication Passive (target) B6200.

D2D Replication Active #1 components
- 1 x EJ022A HP B TB StoreOnce Backup System
- 1 x EJ022A 0D1 Factory integrated
- 1 x EJ021A HP B6000 Switch Assembly

D2D Replication Passive #2 components
- 1 x EJ022A HP B TB StoreOnce Backup System
- 1 x EJ022A 0D1 Factory integrated
- 1 x EJ021A HP B6000 Switch Assembly
- 1 x EJ026A HP B6200 StoreOnce Replication LTU

A technical assessment report is also output in HTML. This part of the report shows:
- Capacity requirements at source and target
- The WAN link required for replication (replicated volume/replication window), in this case a 186 Mbit/sec link
- The replication concurrency available at source and target
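The link figure can be sanity-checked with the formula the report states, replicated volume divided by replication window. Treating Job 1's daily change (5% of 20 TB, i.e. roughly 1 TB) as the replicated volume over the 12-hour window reproduces a figure close to the 186 Mbit/sec above; this is an illustrative assumption, not the sizer's exact algorithm.

```python
# Sketch of the WAN link calculation: Mbit/s = replicated volume over the
# replication window (decimal units, 1 TB = 10**12 bytes).

def wan_link_mbit_s(replicated_tb, window_hours):
    """Mbit/s needed to replicate replicated_tb terabytes in window_hours."""
    bits = replicated_tb * 1e12 * 8
    return bits / (window_hours * 3600) / 1e6

print(f"{wan_link_mbit_s(1.0, 12):.0f} Mbit/s")   # ~185, close to the report
```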


More information

Bosch Video Management System High Availability with Hyper-V

Bosch Video Management System High Availability with Hyper-V Bosch Video Management System High Availability with Hyper-V en Technical Service Note Bosch Video Management System Table of contents en 3 Table of contents 1 Introduction 4 1.1 General Requirements

More information

Cloud Optimize Your IT

Cloud Optimize Your IT Cloud Optimize Your IT Windows Server 2012 The information contained in this presentation relates to a pre-release product which may be substantially modified before it is commercially released. This pre-release

More information

Backup Exec Private Cloud Services. Planning and Deployment Guide

Backup Exec Private Cloud Services. Planning and Deployment Guide Backup Exec Private Cloud Services Planning and Deployment Guide Chapter 1 Introducing Backup Exec Private Cloud Services This chapter includes the following topics: About Backup Exec Private Cloud Services

More information

HP StorageWorks Data Protector Express white paper

HP StorageWorks Data Protector Express white paper HP StorageWorks Data Protector Express white paper Easy-to-use, easy-to-manage, backup and recovery software for smart office data protection Introduction... 2 Three-tier architecture for flexibility and

More information

Refreshing Your Data Protection Environment with Next-Generation Architectures

<Insert Picture Here> Refreshing Your Data Protection Environment with Next-Generation Architectures 1 Refreshing Your Data Protection Environment with Next-Generation Architectures Dale Rhine, Principal Sales Consultant Kelly Boeckman, Product Marketing Analyst Program Agenda Storage

More information

CA ARCserve Backup: Protecting heterogeneous NAS environments with NDMP

CA ARCserve Backup: Protecting heterogeneous NAS environments with NDMP WHITE PAPER: CA ARCserve Backup Network Data Management Protocol (NDMP) Network Attached Storage (NAS) Option: Integrated Protection for Heterogeneous NAS Environments CA ARCserve Backup: Protecting heterogeneous

More information

HP StorageWorks D2D Backup Systems and StoreOnce

HP StorageWorks D2D Backup Systems and StoreOnce AUtOMATEyour data protection. HP StorageWorks D2D Backup Systems and StoreOnce The combination that right-sizes your storage capacity. Solution brief Regardless of size and industry, many of today s organizations

More information

Get Success in Passing Your Certification Exam at first attempt!

Get Success in Passing Your Certification Exam at first attempt! Get Success in Passing Your Certification Exam at first attempt! Exam : E22-290 Title : EMC Data Domain Deduplication, Backup and Recovery Exam Version : DEMO 1.A customer has a Data Domain system with

More information

VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS

VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS Successfully configure all solution components Use VMS at the required bandwidth for NAS storage Meet the bandwidth demands of a 2,200

More information

EMC DATA PROTECTION. Backup ed Archivio su cui fare affidamento

EMC DATA PROTECTION. Backup ed Archivio su cui fare affidamento EMC DATA PROTECTION Backup ed Archivio su cui fare affidamento 1 Challenges with Traditional Tape Tightening backup windows Lengthy restores Reliability, security and management issues Inability to meet

More information

Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication Software

Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication Software Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication September 2002 IBM Storage Products Division Raleigh, NC http://www.storage.ibm.com Table of contents Introduction... 3 Key

More information

Citrix XenServer Design: Designing XenServer Network Configurations

Citrix XenServer Design: Designing XenServer Network Configurations Citrix XenServer Design: Designing XenServer Network Configurations www.citrix.com Contents About... 5 Audience... 5 Purpose of the Guide... 6 Finding Configuration Instructions... 6 Visual Legend... 7

More information

Veeam Cloud Connect. Version 8.0. Administrator Guide

Veeam Cloud Connect. Version 8.0. Administrator Guide Veeam Cloud Connect Version 8.0 Administrator Guide April, 2015 2015 Veeam Software. All rights reserved. All trademarks are the property of their respective owners. No part of this publication may be

More information

HP D2D NAS Integration with CommVault Simpana 9

HP D2D NAS Integration with CommVault Simpana 9 HP D2D NAS Integration with CommVault Simpana 9 Abstract This guide provides step by step instructions on how to configure and optimize CommVault Simpana 9 in order to back up to HP StorageWorks D2D devices

More information

VERITAS Storage Foundation 4.3 for Windows

VERITAS Storage Foundation 4.3 for Windows DATASHEET VERITAS Storage Foundation 4.3 for Windows Advanced Volume Management Technology for Windows In distributed client/server environments, users demand that databases, mission-critical applications

More information

CHAPTER 9. The Enhanced Backup and Recovery Solution

CHAPTER 9. The Enhanced Backup and Recovery Solution CHAPTER 9 The Enhanced Backup and Recovery Solution Based on the analysis performed on the existing backup and recovery environment in Chapter 8, this chapter covers the new backup and recovery solution.

More information

QuickStart Guide vcenter Server Heartbeat 5.5 Update 2

QuickStart Guide vcenter Server Heartbeat 5.5 Update 2 vcenter Server Heartbeat 5.5 Update 2 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent

More information

EMC Data Domain Boost for Oracle Recovery Manager (RMAN)

EMC Data Domain Boost for Oracle Recovery Manager (RMAN) White Paper EMC Data Domain Boost for Oracle Recovery Manager (RMAN) Abstract EMC delivers Database Administrators (DBAs) complete control of Oracle backup, recovery, and offsite disaster recovery with

More information

Enterprise Backup and Restore technology and solutions

Enterprise Backup and Restore technology and solutions Enterprise Backup and Restore technology and solutions LESSON VII Veselin Petrunov Backup and Restore team / Deep Technical Support HP Bulgaria Global Delivery Hub Global Operations Center November, 2013

More information

White Paper. Overland REO SERIES. Implementation Best Practices

White Paper. Overland REO SERIES. Implementation Best Practices White Paper Overland REO SERIES Implementation Best Practices Using REO to Enhance Your Backup Process Organizations of all sizes are faced with the challenge of protecting increasing amounts of critical

More information

Flexible backups to disk using HP StorageWorks Data Protector Express white paper

Flexible backups to disk using HP StorageWorks Data Protector Express white paper Flexible backups to disk using HP StorageWorks Data Protector Express white paper A powerful and simple way to combine the advantages of disk and tape backups to improve backup efficiency, reduce data

More information

Dell PowerVault MD3400 and MD3420 Series Storage Arrays Deployment Guide

Dell PowerVault MD3400 and MD3420 Series Storage Arrays Deployment Guide Dell PowerVault MD3400 and MD3420 Series Storage Arrays Deployment Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION:

More information

Quantum DXi6500 Family of Network-Attached Disk Backup Appliances with Deduplication

Quantum DXi6500 Family of Network-Attached Disk Backup Appliances with Deduplication PRODUCT BRIEF Quantum DXi6500 Family of Network-Attached Disk Backup Appliances with Deduplication NOTICE This Product Brief contains proprietary information protected by copyright. Information in this

More information

Deduplication has been around for several

Deduplication has been around for several Demystifying Deduplication By Joe Colucci Kay Benaroch Deduplication holds the promise of efficient storage and bandwidth utilization, accelerated backup and recovery, reduced costs, and more. Understanding

More information

High Availability and MetroCluster Configuration Guide For 7-Mode

High Availability and MetroCluster Configuration Guide For 7-Mode Data ONTAP 8.2 High Availability and MetroCluster Configuration Guide For 7-Mode NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1(408) 822-6000 Fax: +1(408) 822-4501 Support telephone:

More information

BlueArc unified network storage systems 7th TF-Storage Meeting. Scale Bigger, Store Smarter, Accelerate Everything

BlueArc unified network storage systems 7th TF-Storage Meeting. Scale Bigger, Store Smarter, Accelerate Everything BlueArc unified network storage systems 7th TF-Storage Meeting Scale Bigger, Store Smarter, Accelerate Everything BlueArc s Heritage Private Company, founded in 1998 Headquarters in San Jose, CA Highest

More information

CXS-203-1 Citrix XenServer 6.0 Administration

CXS-203-1 Citrix XenServer 6.0 Administration Page1 CXS-203-1 Citrix XenServer 6.0 Administration In the Citrix XenServer 6.0 classroom training course, students are provided with the foundation necessary to effectively install, configure, administer,

More information

Symantec Backup Appliances

Symantec Backup Appliances Symantec Backup Appliances End-to-end Protection for your backup environment Stefan Redtzer Sales Manager Backup Appliances, Nordics 1 Today s IT Challenges: Why Better Backup is needed? Accelerated Data

More information

Isilon IQ Network Configuration Guide

Isilon IQ Network Configuration Guide Isilon IQ Network Configuration Guide An Isilon Systems Best Practice Paper August 2008 ISILON SYSTEMS Table of Contents Cluster Networking Introduction...3 Assumptions...3 Cluster Networking Features...3

More information

ADVANCED NETWORK CONFIGURATION GUIDE

ADVANCED NETWORK CONFIGURATION GUIDE White Paper ADVANCED NETWORK CONFIGURATION GUIDE CONTENTS Introduction 1 Terminology 1 VLAN configuration 2 NIC Bonding configuration 3 Jumbo frame configuration 4 Other I/O high availability options 4

More information

Symantec NetBackup OpenStorage Solutions Guide for Disk

Symantec NetBackup OpenStorage Solutions Guide for Disk Symantec NetBackup OpenStorage Solutions Guide for Disk UNIX, Windows, Linux Release 7.6 Symantec NetBackup OpenStorage Solutions Guide for Disk The software described in this book is furnished under a

More information

STORAGE. Buying Guide: TARGET DATA DEDUPLICATION BACKUP SYSTEMS. inside

STORAGE. Buying Guide: TARGET DATA DEDUPLICATION BACKUP SYSTEMS. inside Managing the information that drives the enterprise STORAGE Buying Guide: DEDUPLICATION inside What you need to know about target data deduplication Special factors to consider One key difference among

More information

Post Production Video Editing Solution Guide with Apple Xsan File System AssuredSAN 4000

Post Production Video Editing Solution Guide with Apple Xsan File System AssuredSAN 4000 Post Production Video Editing Solution Guide with Apple Xsan File System AssuredSAN 4000 Dot Hill Systems introduction 1 INTRODUCTION Dot Hill Systems offers high performance network storage products that

More information

N5 NETWORKING BEST PRACTICES

N5 NETWORKING BEST PRACTICES N5 NETWORKING BEST PRACTICES Table of Contents Nexgen N5 Networking... 2 Overview of Storage Networking Best Practices... 2 Recommended Switch features for an iscsi Network... 2 Setting up the iscsi Network

More information

HP StoreOnce G2 Backup System user guide

HP StoreOnce G2 Backup System user guide HP StoreOnce G2 Backup System user guide Abstract This is the user guide for the following G2 and G2 E HP StoreOnce Backup Systems: HP D2D4300 Series: HP D2D4324 and HP D2D4312 HP D2D4100 Series: HP D2D4112

More information

Cisco Active Network Abstraction Gateway High Availability Solution

Cisco Active Network Abstraction Gateway High Availability Solution . Cisco Active Network Abstraction Gateway High Availability Solution White Paper This white paper describes the Cisco Active Network Abstraction (ANA) Gateway High Availability solution developed and

More information

High Availability Solutions & Technology for NetScreen s Security Systems

High Availability Solutions & Technology for NetScreen s Security Systems High Availability Solutions & Technology for NetScreen s Security Systems Features and Benefits A White Paper By NetScreen Technologies Inc. http://www.netscreen.com INTRODUCTION...3 RESILIENCE...3 SCALABLE

More information

WHITEPAPER: Understanding Pillar Axiom Data Protection Options

WHITEPAPER: Understanding Pillar Axiom Data Protection Options WHITEPAPER: Understanding Pillar Axiom Data Protection Options Introduction This document gives an overview of the Pillar Data System Axiom RAID protection schemas. It does not delve into corner cases

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information

Clustering ExtremeZ-IP 4.1

Clustering ExtremeZ-IP 4.1 Clustering ExtremeZ-IP 4.1 Installing and Configuring ExtremeZ-IP 4.x on a Cluster Version: 1.3 Date: 10/11/05 Product Version: 4.1 Introduction This document provides instructions and background information

More information

OPTIMIZING SERVER VIRTUALIZATION

OPTIMIZING SERVER VIRTUALIZATION OPTIMIZING SERVER VIRTUALIZATION HP MULTI-PORT SERVER ADAPTERS BASED ON INTEL ETHERNET TECHNOLOGY As enterprise-class server infrastructures adopt virtualization to improve total cost of ownership (TCO)

More information

Best Practice of Server Virtualization Using Qsan SAN Storage System. F300Q / F400Q / F600Q Series P300Q / P400Q / P500Q / P600Q Series

Best Practice of Server Virtualization Using Qsan SAN Storage System. F300Q / F400Q / F600Q Series P300Q / P400Q / P500Q / P600Q Series Best Practice of Server Virtualization Using Qsan SAN Storage System F300Q / F400Q / F600Q Series P300Q / P400Q / P500Q / P600Q Series Version 1.0 July 2011 Copyright Copyright@2011, Qsan Technology, Inc.

More information

HP Store Once. Backup to Disk Lösungen. Architektur, Neuigkeiten. rené Loser, Senior Technology Consultant HP Storage Switzerland

HP Store Once. Backup to Disk Lösungen. Architektur, Neuigkeiten. rené Loser, Senior Technology Consultant HP Storage Switzerland HP Store Once Backup to Disk Lösungen Architektur, Neuigkeiten rené Loser, Senior Technology Consultant HP Storage Switzerland Copy right 2012 Hewlett-Packard Dev elopment Company, L.P. The inf ormation

More information

Data Sheet Fujitsu ETERNUS CS High End V5.1 Data Protection Appliance

Data Sheet Fujitsu ETERNUS CS High End V5.1 Data Protection Appliance Data Sheet Fujitsu ETERNUS CS High End V5.1 Data Protection Appliance Radically simplifying data protection ETERNUS CS Data Protection Appliances The Fujitsu ETERNUS CS storage solutions, comprising ETERNUS

More information

EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS

EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS EMC Data Domain deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication.

More information

VERITAS Backup Exec 9.0 for Windows Servers

VERITAS Backup Exec 9.0 for Windows Servers WHITE PAPER Data Protection Solutions for Network Attached Storage VERITAS Backup Exec 9.0 for Windows Servers VERSION INCLUDES TABLE OF CONTENTS STYLES 1 TABLE OF CONTENTS Background...3 Why Use a NAS

More information

Whitepaper Continuous Availability Suite: Neverfail Solution Architecture

Whitepaper Continuous Availability Suite: Neverfail Solution Architecture Continuous Availability Suite: Neverfail s Continuous Availability Suite is at the core of every Neverfail solution. It provides a comprehensive software solution for High Availability (HA) and Disaster

More information

Agenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance.

Agenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance. Agenda Enterprise Performance Factors Overall Enterprise Performance Factors Best Practice for generic Enterprise Best Practice for 3-tiers Enterprise Hardware Load Balancer Basic Unix Tuning Performance

More information

Post-production Video Editing Solution Guide with Quantum StorNext File System AssuredSAN 4000

Post-production Video Editing Solution Guide with Quantum StorNext File System AssuredSAN 4000 Post-production Video Editing Solution Guide with Quantum StorNext File System AssuredSAN 4000 Dot Hill Systems introduction 1 INTRODUCTION Dot Hill Systems offers high performance network storage products

More information

Windows Server Performance Monitoring

Windows Server Performance Monitoring Spot server problems before they are noticed The system s really slow today! How often have you heard that? Finding the solution isn t so easy. The obvious questions to ask are why is it running slowly

More information

Configuring a Microsoft Windows Server 2012/R2 Failover Cluster with Storage Center

Configuring a Microsoft Windows Server 2012/R2 Failover Cluster with Storage Center Configuring a Microsoft Windows Server 2012/R2 Failover Cluster with Storage Center Dell Compellent Solution Guide Kris Piepho, Microsoft Product Specialist October, 2013 Revisions Date Description 1/4/2013

More information

Cloud Optimize Your IT

Cloud Optimize Your IT Cloud Optimize Your IT Windows Server 2012 Michael Faden Partner Technology Advisor Microsoft Schweiz 1 Beyond Virtualization virtualization The power of many servers, the simplicity of one Every app,

More information

High Availability with Windows Server 2012 Release Candidate

High Availability with Windows Server 2012 Release Candidate High Availability with Windows Server 2012 Release Candidate Windows Server 2012 Release Candidate (RC) delivers innovative new capabilities that enable you to build dynamic storage and availability solutions

More information

XenData Archive Series Software Technical Overview

XenData Archive Series Software Technical Overview XenData White Paper XenData Archive Series Software Technical Overview Advanced and Video Editions, Version 4.0 December 2006 XenData Archive Series software manages digital assets on data tape and magnetic

More information

ClearPath Storage Update Data Domain on ClearPath MCP

ClearPath Storage Update Data Domain on ClearPath MCP ClearPath Storage Update Data Domain on ClearPath MCP Ray Blanchette Unisys Storage Portfolio Management Jose Macias Unisys TCIS Engineering September 10, 2013 Agenda VNX Update Customer Challenges and

More information

Symantec NetBackup 5220

Symantec NetBackup 5220 A single-vendor enterprise backup appliance that installs in minutes Data Sheet: Data Protection Overview is a single-vendor enterprise backup appliance that installs in minutes, with expandable storage

More information

Backup Exec 9.1 for Windows Servers. SAN Shared Storage Option

Backup Exec 9.1 for Windows Servers. SAN Shared Storage Option WHITE PAPER Optimized Performance for SAN Environments Backup Exec 9.1 for Windows Servers SAN Shared Storage Option 11/20/2003 1 TABLE OF CONTENTS Executive Summary...3 Product Highlights...3 Approaches

More information

Achieving High Availability & Rapid Disaster Recovery in a Microsoft Exchange IP SAN April 2006

Achieving High Availability & Rapid Disaster Recovery in a Microsoft Exchange IP SAN April 2006 Achieving High Availability & Rapid Disaster Recovery in a Microsoft Exchange IP SAN April 2006 All trademark names are the property of their respective companies. This publication contains opinions of

More information

EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS

EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS EMC Data Domain deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication.

More information

Administrator Guide VMware vcenter Server Heartbeat 6.3 Update 1

Administrator Guide VMware vcenter Server Heartbeat 6.3 Update 1 Administrator Guide VMware vcenter Server Heartbeat 6.3 Update 1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition.

More information

NDMP Backup of Dell EqualLogic FS Series NAS using CommVault Simpana

NDMP Backup of Dell EqualLogic FS Series NAS using CommVault Simpana NDMP Backup of Dell EqualLogic FS Series NAS using CommVault Simpana A Dell EqualLogic Reference Architecture Dell Storage Engineering June 2013 Revisions Date January 2013 June 2013 Description Initial

More information

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V Installation Guide for Microsoft Hyper-V Egnyte Inc. 1890 N. Shoreline Blvd. Mountain View, CA 94043, USA Phone: 877-7EGNYTE (877-734-6983) www.egnyte.com 2013 by Egnyte Inc. All rights reserved. Revised

More information