Real Time Replication in the Real World
Richard E. Baum and C. Thomas Tyler
Perforce Software, Inc.
Doc Revision 1.1
Table of Contents

1 Overview
2 Definitions
3 Solutions Involving Replication
  3.1 High Availability Solutions
    High Availability Thinking
  3.2 Disaster Recovery Solutions
    Disaster Recovery Thinking
  3.3 Read-only Replicas
    Read-only Replica Thinking
4 Perforce Tools that Support Metadata Replication
  4.1 Journal Truncation
  4.2 p4jrep (deprecated)
  4.3 p4 replicate
    4.3.1 Replication to Journal File
    4.3.2 Replication to live DB files
    4.3.3 Filtering Example
  4.4 p4 export
    Report Generation from Scripts
    Report Generation from SQL Databases
    Full Replication (Metadata and Archive Files)
    Daily Security Reports
  4.5 Built-in Replication Tools - Summary
5 Tools that Support Archive File Replication
  5.1 Rsync / Robocopy
  5.2 Filesystem or Block-Level Replication
  5.3 Perfect Consistency vs. Minimum Data Loss
6 Putting it All Together
  6.1 Classic DR: Truncation + Commercial WAN Replicator
  6.2 The SDP Solution Using p4 replicate
    Failover
  6.3 A Read-only Replica for Continuous Integration
    Define how Users Will Connect to the Replica
    Use Filtered Replication
    Make Archive Files Accessible (read-only) to the Replica
7 Operational Issues with Replication
  7.1 Obliterates: Replication vs. Classic Journal Truncation
8 Summary
9 References & Resources

Copyright 2010 Perforce Software
1 Overview

There are myriad ways to configure a Perforce environment to allow for multiple, replicated servers. Configurations are chosen for a wide variety of reasons. Some provide high availability. Some provide disaster recovery. Some provide read-only replicas that take workload off of a main server. There are also combinations of these. What you should deploy depends on your business goals, the availability of hardware and network infrastructure, and a number of other factors.

A recent release of the Perforce Server provides built-in tools that allow for near-real-time replication of metadata. These tools make it much easier to implement both read-only replica servers and high-availability/disaster-recovery solutions. This paper discusses some of the most common replication configurations, the ways they are supported by the Perforce Server, and the characteristics of each.

2 Definitions

A number of terms are used throughout this document. They are defined as follows:

- HA (High Availability): A system design protocol and associated implementation that ensures a certain degree of operational continuity during a given measurement period, even in the event of certain failures of hardware or software components.
- DR (Disaster Recovery): The process, policies, and procedures related to preparing for recovery or continuation of technology infrastructure critical to an organization after a natural or human-induced disaster.
- RPO (Recovery Point Objective): An acceptable amount of data loss, measured in time.
- RTO (Recovery Time Objective): The duration of time, and a service level, within which service must be restored after a disaster.
- Metadata: Data contained in the Perforce database files (the db.* files in the P4ROOT).
- Archive Files: All revisions of all files submitted to the Perforce Server, plus currently shelved files.
- Read-only Replica: A Perforce Server instance that operates using a copy of the Perforce metadata.
- DRBD (Distributed Replicated Block Device): A distributed storage system available in the Linux kernel. DRBD runs over a network and works very much like RAID 1.

3 Solutions Involving Replication

3.1 High Availability Solutions

High Availability solutions keep Perforce servers available to users despite failures of hardware components. HA solutions are typically deployed in environments where there is little tolerance for unplanned downtime. Large sites with 24x7 uptime requirements due to globally distributed development strive for HA.

Perforce has excellent built-in journaling capabilities. It is fairly easy to implement a solution that is tolerant of faults, prevents data loss in any single-point-of-failure situation, and limits data loss in more significant failures. With HA, prevention of data loss for any single point of failure is assumed to be accounted for, and the focus is on limiting downtime.

HA solutions generally offer a short RTO and a low RPO. They offer the fastest recovery from a strict hardware failure scenario. They also cost more, as they involve additional standby hardware at the same location as the main server. Full metadata replication is also used, and the backup server is typically located on the same LAN as the main server. This results in good performance of replication processes. Due to the proximity of the primary and backup servers, however, these solutions do not offer much in the way of disaster recovery. A site-wide disaster or regional problem (earthquake, hurricane, etc.) can result in a total outage.

High Availability Thinking

The following are sample thoughts that lead to the deployment of HA solutions:

- We're willing to invest in a more sophisticated deployment architecture to reduce unplanned downtime.
- We will not accept data loss for any Single Point of Failure (SPOF).
- Downtime is extremely expensive for us.
- We are willing to spend a lot to reduce the likelihood of downtime, and to minimize it when it is unavoidable.

3.2 Disaster Recovery Solutions

In order to offer a true disaster recovery solution, a secondary server needs to be located at a site that is physically separate from the main server. Full metadata replication provides a reliable failover server in a geographically separate area. Thus, if one site becomes unavailable due to a natural disaster, another can take its place. As WAN connections are
often considerably slower than local area network connections, these solutions tend to have a higher RTO and a longer RPO than HA solutions.

Near real-time replication over the WAN is possible in some environments. Solutions that achieve this sometimes rely on commercial WAN replication to handle archive files, and on p4 replicate to keep metadata up to date.

Disaster Recovery Thinking

The following are sample thoughts that lead to the deployment of DR solutions:

- We're willing to invest in a more sophisticated deployment architecture to ensure business continuity in the event of a disaster.
- We need to ensure access to our intellectual property, even in the event of a sudden and total loss of one of our data centers.

3.3 Read-only Replicas

Read-only replicas are generally used to offload processing from live production servers. Tasks run against read-only replicas will not block read/write users from accessing the live production Perforce instance. One common use for this is with automated build farms, where large sync operations could otherwise cause users to wait to submit.

Read-only servers are often created from a subset of depot metadata. They usually do not need db.have data, for example. That database tells the server which client workspaces contain which files, information that is not needed for automated builds.

Building a read-only replica typically involves using shared storage (typically a SAN) for archive files, so that the archive files written on the primary server are mounted read-only at the same location on the replica. In some cases, read-only replicas are run on the same server machine as the primary server. In that configuration, the replicas run under a different login that does not have access to write to the archive files, ensuring they remain read-only.

To obscure the details of the deployment architecture from users and keep things simple for them, p4broker can be used.
When p4broker is used, humans and automated build processes set their P4PORT value to that of the broker rather than that of the real server. The broker implements heuristics to determine which requests need to be handled by the primary server and which can be sent to the replica. For example, the broker might forward all sync commands from the builder user to the replica, while the submit at the end of a successful build would go to the primary server.

Read-only Replica Thinking

The following are sample thoughts that lead to the deployment of read-only replica solutions:
- We have automation that interacts with Perforce, such as continuous integration build systems or reports, that impacts performance on our primary server.
- We are willing to invest in a more sophisticated deployment architecture to improve performance and increase our scalability.

4 Perforce Tools that Support Metadata Replication

In any replication scheme, both the depot metadata and the archive files must be addressed. Perforce supports a number of different ways to replicate depot metadata. Each has different uses, and some are better suited to certain types of operations.

4.1 Journal Truncation

The classic way to replicate Perforce depot metadata is to truncate the running journal file, which maintains a log of all metadata transactions, and ship it to the remote server, where it is replayed. This has several advantages. It brings the remote metadata up to a known point in time, and it allows the primary server to continue to run without any interruption. The truncated journal file can be copied via a variety of methods, including rsync/robocopy, FTP, block-level replication software, etc. Automating such tasks is generally not very difficult.

For systems where a low RPO is needed, however, the number of journal file fragments shipped over may make this approach impractical. Truncating the journal every few minutes can result in a huge number of journal files, and confusion in the event of a system problem. Extending the time between journal truncations, on the other hand, causes the servers to be out of sync for greater periods of time. This solution, therefore, tends to be more of a DR solution than an HA one.

4.2 p4jrep (deprecated)

An interim solution to the problem of repeated journal truncation was p4jrep. This utility, available from the Perforce public depot, provided a way to move journal data between servers in real time. Journal records were read and sent through a pipe from one server to another.
This required some synchronization, and the servers needed to be connected via a stable network connection. It was not available for Windows.

4.3 p4 replicate

While p4jrep demonstrated that near real-time replication was possible, it also showed the need for a built-in solution. Starting with this release of the Perforce Server, the p4 replicate command provides the same basic functionality. It works on all server platforms and is a fully supported component of Perforce.

p4 replicate is also much more robust than its predecessor. Transaction markers within the journal file are used to ensure transactional integrity of the replayed data, and the data is passed from the server by Perforce itself rather than by some outside process. Additionally, as the command runs via the Perforce command line client, the servers it joins can easily be located anywhere.
4.3.1 Replication to Journal File

Here is a sample p4 replicate command, wrapped in p4rep.sh:

    #!/bin/bash
    P4MASTERPORT=perforce.myco.com:1742
    CHECKPOINT_PREFIX=/p4servers/master/checkpoints/myco
    P4ROOT_REPLICA=/p4servers/replica/root
    REPSTATE=/p4servers/replica/root/rep.state

    p4 -p $P4MASTERPORT replicate \
        -s $REPSTATE \
        -J $CHECKPOINT_PREFIX \
        -o /p4servers/replica/logs/journal

4.3.2 Replication to live DB files

Here is a sample p4 replicate command, wrapped in p4rep.sh, piping journal records directly into p4d for replay:

    #!/bin/bash
    P4MASTERPORT=perforce.myco.com:1742
    CHECKPOINT_PREFIX=/p4servers/master/checkpoints/myco
    P4ROOT_REPLICA=/p4servers/replica/root
    REPSTATE=/p4servers/replica/root/rep.state

    p4 -p $P4MASTERPORT replicate \
        -s $REPSTATE \
        -J $CHECKPOINT_PREFIX -k \
        | p4d -r $P4ROOT_REPLICA -f -b 1 -jrc -

This example starts replication on the local host with P4ROOT=/p4servers/replica/root, pulling data from a (presumably remote) primary server at perforce.myco.com:1742. It runs constantly, polling every 2 seconds (the default).

Prior to the first execution of this program, some one-time setup is needed on the replica:

1. Load the most recent checkpoint.
2. Create a rep.state file, and store in it the journal counter number embedded in the name of the most recent checkpoint file.

After this initialization procedure, the contents of the rep.state file are maintained by the p4 replicate command, with the journal counter number and offset indicating the current state of replication. If replication is stopped and restarted hours later, it will know where to start and which journal records to replay.
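The one-time initialization above can be sketched as a script. The checkpoint naming convention (myco.ckp.N.gz, where N is the journal counter), the paths, and the single-counter rep.state format are illustrative assumptions, and the p4d checkpoint load is only echoed here so the sketch can run anywhere:

```shell
#!/bin/bash
# Sketch of one-time replica initialization (hypothetical names/paths).
# Assumes checkpoints are named myco.ckp.N.gz, where N is the journal
# counter in effect when the checkpoint was taken.

init_replica() {
    local ckp_dir=$1 repstate=$2

    # Find the newest checkpoint by its embedded counter number.
    local latest
    latest=$(cd "$ckp_dir" && ls myco.ckp.*.gz | sort -t. -k3 -n | tail -1)

    # Extract the journal counter from the checkpoint file name.
    local counter
    counter=$(echo "$latest" | sed -E 's/^myco\.ckp\.([0-9]+)\.gz$/\1/')

    # Step 1: load the checkpoint (echoed here rather than executed).
    echo "would run: p4d -r \$P4ROOT_REPLICA -z -jr $ckp_dir/$latest"

    # Step 2: record the starting journal counter; from here on,
    # p4 replicate maintains this file (counter and offset).
    echo "$counter" > "$repstate"
}

# Demonstrate against a scratch directory holding fake checkpoints.
dir=$(mktemp -d)
touch "$dir/myco.ckp.9.gz" "$dir/myco.ckp.10.gz"
init_replica "$dir" "$dir/rep.state"
cat "$dir/rep.state"    # prints 10, the counter of the newest checkpoint
```

In a real deployment the echoed line would be the actual p4d restore, run while the replica's p4d is stopped.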
4.3.3 Filtering Example

This example excludes db.have records from replication:

    #!/bin/bash
    P4MASTERPORT=perforce.myco.com:1742
    CHECKPOINT_PREFIX=/p4servers/master/checkpoints/myco
    P4ROOT_REPLICA=/p4servers/replica/root
    REPSTATE=/p4servers/replica/root/rep.state

    p4 -p $P4MASTERPORT replicate \
        -s $REPSTATE \
        -J $CHECKPOINT_PREFIX -k \
        | grep --line-buffered -v '@db\.have@' \
        | p4d -r $P4ROOT_REPLICA -f -b 1 -jrc -

4.4 p4 export

The p4 export command also provides a way to extract journal records from a live server. It extracts journal records in tagged format, which is more easily readable by humans. The tagged form also makes it easier to write scripts that parse the output, and the command has the ability to filter journal records.

There are a variety of uses for p4 export, including the following.

Report Generation from Scripts

Report generation of all kinds can be done using p4 export and tools like Perl, Python, and Ruby.

Report Generation from SQL Databases

Using the P4toDB utility, one can load Perforce metadata directly into an SQL database. A wide variety of reporting can be done from there. P4toDB is a tool that retrieves existing and updated metadata from the Perforce server and translates it into SQL statements for replay into a third-party SQL database. Under the covers, it uses p4 export.

Full Replication (Metadata and Archive Files)

To replicate the archive files in addition to the metadata, p4 export can be called with -F table=db.rev to learn about archive changes (or -F table=db.revsh to learn about shelved files). You can then schedule specific files for rsync or some other external copying. This metadata-driven archive file update process can be very fast indeed.

Daily Security Reports

If a daily security report of changes to protections or groups is desired, p4 export can facilitate this.
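The grep stage used in the filtering example can be exercised offline. The records below are simplified stand-ins for real journal entries, not actual server output:

```shell
# Feed fabricated journal-style records through the db.have filter.
# Lines mentioning @db.have@ are dropped; everything else passes through.
printf '%s\n' \
    '@pv@ 3 @db.have@ @//build1/foo.c@ @//depot/foo.c@' \
    '@pv@ 9 @db.rev@ @//depot/foo.c@ 1 0 0 1234' \
    '@pv@ 3 @db.have@ @//build1/bar.c@ @//depot/bar.c@' \
  | grep --line-buffered -v '@db\.have@'
# Only the @db.rev@ record is printed.
```

The --line-buffered flag matters in the real pipeline: the downstream p4d replays records as they arrive, and the paper notes that line buffering is needed to preserve transactional integrity.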
4.5 Built-in Replication Tools - Summary

The p4 replicate and p4 export commands provide good alternatives to journal tailing, as they provide access to the same data via a p4 command. One typical use case for p4 replicate is to duplicate metadata into a journal file on a client machine. Another is to feed the duplicated metadata into a replica server via a variant of the p4d -jr command. Tagged p4 export output allows culling of journal records by specifying filter patterns. It is also possible to filter journal records using p4 replicate, with a wrapper script handling the culling.

5 Tools that Support Archive File Replication

There are no built-in Perforce tools that replicate the archive files. Such operations involve keeping directory trees of files in sync with each other, and a number of tools are available to do this.

5.1 Rsync / Robocopy

Rsync (on Linux) and Robocopy (on Windows) provide a way to keep the archive files synced across multiple machines. They operate at the user level, and they are smart about how they copy data: they analyze the contents of the file trees, and of the files themselves, sending only the files and file parts necessary to synchronize things. For smaller Perforce deployments, or large ones with a relaxed RPO (e.g., 24 hours in a disaster scenario), these tools are generally up to the task of replicating the archive files between Perforce servers.

For a large tree of files, the computation involved and the time to copy the data can be quite long. Even if only a few dozen files have changed out of several hundred thousand, it may take some time to determine the extent of what changed. While the filesystem continues to be written to, rsync/robocopy is still figuring out which files to propagate. In some cases, parallelizing calls to rsync/robocopy can speed up the transfer process considerably, allowing these tools to be used with larger data sets.
5.2 Filesystem or Block-Level Replication

There are commercial software packages available that mirror filesystem data at a low (disk block) level. As long as there is a fast network connection between the machines, data is replicated as it is written, in real time. For Linux users, the open source DRBD system is an alternative to consider.

5.3 Perfect Consistency vs. Minimum Data Loss

The archive file transfer process tends to be the limiting factor in defining RPO. It takes time to guarantee that the archive files are updated on the remote machine. Even if few files change, it can take a while for the tools to verify that and transfer just the right set of files.
It's nice to know that the metadata and archive files are in sync, and that you have a clean recovery state. Thus, it is common for DR solutions to start at some point in time, create the truncated journal, and then hold off on replaying the journals until all corresponding archive files have been transferred. This approach favors consistency: users will not see any librarian errors after recovery.

If an HA or DR solution is architected so that metadata replays are done as quickly as possible, instead of waiting for the typically slower transfer of archive files, journal records will reference archive files that have not yet been transferred. This can cause users to see Perforce librarian errors after journal recovery. Some might consider this messy, but there are in fact advantages to this approach. The main advantage is that the metadata is as up to date as possible. More data is preserved: pending changelist descriptions, fixes for jobs, and changes to workspaces, users, and labels, all valuable information worth keeping. In the event of a disaster, the p4 verify errors provide extremely useful information. They point the administrator to the archive files that are missing, and submitted changelists point the administrator to the client workspaces where copies of those files may be found. As these file revisions are the most recent to be added to the repository, it is extremely likely that they will still exist in user workspaces.

Which of these approaches is better depends on your organization. Perforce administrators may be loath to restore files from build servers or user desktops; the need to do so complicates the recovery process and increases downtime. Taking the additional downtime to recover files from client machines, however, will decrease the data loss and ultimately simplify recovery for users.
For large data sets, the effect on RPO of maintaining perfect consistency is much greater with rsync/robocopy than it is with faster block-level replication systems. Block-level replication systems or metadata-driven archive file transfers can, on many data sets, deliver near-zero RPO even in a worst-case, sudden disaster scenario.

Running p4 verify regularly is a routine practice. Having known-clean archive files prior to a disaster can help in a recovery situation, since you can assume any new errors are the result of whatever fate befell the primary server. More importantly, verify errors sometimes provide an early warning, a harbinger of disk or RAM failures.

6 Putting it All Together

6.1 Classic DR: Truncation + Commercial WAN Replicator

In one enterprise customer deployment (running an earlier version of the Perforce Server), Perforce Consulting deployed a DR solution using the classic journal truncation methodology. A business decision was made that an 8 hour RPO was acceptable in a true disaster scenario.

This solution involved a commercial block-level replication product that transferred archive files from the primary server to the DR machine. Journals were deposited in the
same volume monitored by the commercial replication solution, so the journal files went along for the ride. The core approach was very straightforward:

1. Run p4d -jj every 8 hours on the primary server, depositing the journal files on the same volume as the archive files (gaining the benefit of free file transfer).
2. On the DR server, replay any outstanding journals using p4d -jr (using p4 counter journal to get the current journal counter and thus determine which journals are outstanding).
3. Keep the Perforce instance on the spare server up and running. Its daily job is running p4 verify.

The actual implementation addressed various complexities:

1. The DR machine might be shut down for a while and then restored to service. A scheduled task periodically replays only the outstanding journal files.
2. Human intervention was chosen over a fully automated failover solution. This increases RTO compared to fully automated solutions, since a human must be involved, but it also reduces the likelihood of dangerous scenarios where there is a lack of clarity among humans and systems about which server is currently the live production server.

6.2 The SDP Solution Using p4 replicate

Following this release, the Perforce Server Deployment Package (SDP) was upgraded to take advantage of the new p4 replicate command. The SDP solution is easy to set up and provides real-time metadata replication. It does not provide transfer of the archive files; actual deployments of the SDP use other means for transferring archive files from the primary to the failover machine.

The p4 replicate command is wrapped in either of two ways. For replication to a set of live db.* files, there is a p4_journal_backup_dr.sh script (.bat on Windows). For replication to a journal file on a spare machine (not replaying into db.* files), there is another script, p4_journal_backup_rep.sh.
Both wrappers provide the command line and recommended options, and define a standard location for the state information file maintained by p4 replicate.

Normally, if the primary server stops, replication also stops and must be restarted. The SDP wrapper provides a configurable window (10 minutes by default) in which replication will be automatically restarted, so that a simple server restart won't impact replication.
A script, p4_drrep1_init, is provided to start and stop replication. Once enabled, replication runs constantly, polling the server at a defined interval. The SDP scripts use the default polling interval of 2 seconds, which is effectively real-time.

Failover

The SDP solution assumes the presence of a trained Perforce administrator. It provides the digital assets needed for recovery; it is not an automated failover solution. The documentation describes how to handle those aspects of failover related to the SDP scripts. Environment-specific failover tasks, such as DNS switchover, mounting SAN volumes, cluster node switches, etc., are left to the local administrator.

6.3 A Read-only Replica for Continuous Integration

With Agile methodologies being all the rage, there is a proliferation of continuous integration build solutions used with Perforce. Many of these systems poll the Perforce server incessantly, always wanting to know if something new has been submitted to some area of the repository. Continuous integration builds are just one example of frequent read-only transactions that can keep a Perforce server machine extremely busy. A popular solution to performance woes caused by the various automated systems that interact with Perforce is to use a read-only replica server. The core technology for setting up a read-only replica is straightforward.

Define how Users Will Connect to the Replica

Either use multiple P4PORT values selected by users, or use a more sophisticated approach involving the p4broker.

Simple (for administrators): Modify build scripts to use appropriate P4PORT values, either for the live server or the replica, depending on whether they will be writing (either archive files or metadata) or doing purely read-only operations.
This works well in smaller environments, where the team administering Perforce often also owns the build scripts.

Simple (for end users): Use p4broker to route requests to the appropriate P4PORT, either to the live server or to the read-only replica. For example, route all requests from user builder that are not submit commands to the replica. End users always set their P4PORT to the broker. This approach is better suited to large enterprises with dedicated Perforce administrators; the fact that a replica is in use is largely transparent to end users.

Use Filtered Replication

Call p4 replicate from the replica machine while pointing at the live server. Use filtering to improve performance; filter out db.have and various other journal records. For a
simple case like db.have, whose entries are always single lines, grep can be used, but line buffering should be used to guarantee transactional integrity. A filtering stage might look like:

    grep --line-buffered -v '@db\.have@'

Some types of journal entries can span multiple lines. When working with those, you'll need a more advanced approach. For a more general case handling records that may span multiple lines, see this file in the Perforce public depot:

    //guest/michael_shields/src/p4jrep/awkfilter.sh

Make Archive Files Accessible (read-only) to the Replica

Make the archive files available to the replica, ensuring they are read-only to it:

- Use a SAN or other shared storage solution for archive files, and have the replica server access the same storage system, with archive file volumes mounted read-only on the replica. Thus, the archive files accessed by the read-only Perforce Server instance are the same physical archive files accessed by the live server.
- Alternately, if there are sufficient resources (CPU, RAM, multiple processor cores) on the primary server machine, the replica instance can be run on the same machine as the primary server. The replica instance runs under a different login account that cannot write to the archive files. This negates the need for a shared storage solution.
- Maintain a separate copy of the archive files on the replica, and scan journal entries harvested by p4 export to determine when individual archive files need to be pulled from the live server. Metadata-driven archive file updates can be faster than high-level rsync commands run at the root of an entire depot.

7 Operational Issues with Replication

There are risks to using a hot, real-time replication approach. Some failure scenarios can foil these systems. It is possible to propagate bad data, either because it is bad at the source or due to errors during transfer.
Happily, p4 replicate provides insulation and fails safely if metadata is garbled in transfer. There may still be cases where replication stops and needs to be restarted. To maintain a low RPO, it is important to detect quickly that replication has stopped so that it can be corrected.
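One simple detection approach, offered here as an assumption rather than anything the SDP ships, is to watch the age of the rep.state file, which p4 replicate rewrites as it makes progress:

```shell
# Warn when the replication state file has not been touched recently.
# Suitable for running from cron on the replica; the path and
# threshold are illustrative assumptions.
check_replication() {
    local repstate=$1 max_age_secs=$2
    local now mtime age
    now=$(date +%s)
    # GNU stat first, BSD stat as a fallback.
    mtime=$(stat -c %Y "$repstate" 2>/dev/null || stat -f %m "$repstate")
    age=$(( now - mtime ))
    if [ "$age" -gt "$max_age_secs" ]; then
        echo "WARNING: rep.state is ${age}s old; replication may be stalled"
        return 1
    fi
    echo "OK: rep.state is ${age}s old"
}

f=$(mktemp)                 # stand-in for the real rep.state path
check_replication "$f" 600  # a freshly created file reports OK
```

Note that a genuinely idle primary may also leave rep.state untouched, so a production check might additionally compare the replica's recorded counter against the primary's current journal counter.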
7.1 Obliterates: Replication vs. Classic Journal Truncation

Replication is robust. However, there are some human error scenarios that are better handled with classic journal truncation, even though periodic truncation does not keep the metadata on the standby machine as fresh as replication does. Perhaps the best solution is to use a combination of p4 replicate and classic journal truncation.

Frequent journal truncations make journal files available for use with alternative journal replay options using p4d -jr. These journal files might come in handy if, for example, a p4 obliterate was done accidentally. Working with Perforce Technical Support, the deleted metadata could be extracted from the journals prior to replaying them on a standby machine. This makes it possible to undo the effects of the obliteration in the metadata, while preserving the other metadata transactions in the journal. This process requires downtime, but metadata recovery is complete. So long as the archive files are propagated in a way that does not propagate file deletions (e.g., with the proper rsync or Robocopy options), the archive files can be restored as well, using p4 verify to identify the files to restore.

8 Summary

The p4 replicate and p4 export commands make advanced replication and journal-scanning solutions more reliable and easier to implement. They are supported, cross-platform, reliable, and robust. These new tools simplify a variety of replication-enabled solutions for Perforce, including High Availability, Disaster Recovery, and Read-only Replicas.

The solutions described herein are being implemented and used in the real world every day. They support a range of critical business tasks and can be tailored to almost every budget. The tools described here are all documented, and Perforce Technical Support can help you use them. Perforce also has related resources in its knowledge base.
Perforce Consulting is also available to implement its SDP and to provide hands-on design and implementation of a solution customized for your environment.
9 References & Resources

Wikipedia was used as a resource for common definitions and illustrations for HA, DR, and DRBD.

- Perforce Metadata Replication:
- Relevant Perforce documentation on replication:
- P4toSQL:
Your company relies on its databases. How are you protecting them? Protecting Microsoft SQL Server 2 Hudson Place suite 700 Hoboken, NJ 07030 Powered by 800-674-9495 www.nsisoftware.com Executive Summary
More informationPerforce Backup Strategy & Disaster Recovery at National Instruments
Perforce Backup Strategy & Disaster Recovery at National Instruments Steven Lysohir National Instruments Perforce User Conference April 2005-1 - Contents 1. Introduction 2. Development Environment 3. Architecture
More informationDistributed Software Development with Perforce Perforce Consulting Guide
Distributed Software Development with Perforce Perforce Consulting Guide Get an overview of Perforce s simple and scalable software version management solution for supporting distributed development teams.
More informationHow To Fix A Powerline From Disaster To Powerline
Perforce Backup Strategy & Disaster Recovery at National Instruments Steven Lysohir 1 Why This Topic? Case study on large Perforce installation Something for smaller sites to ponder as they grow Stress
More informationDisaster Recovery Solutions for Oracle Database Standard Edition RAC. A Dbvisit White Paper
Disaster Recovery Solutions for Oracle Database Standard Edition RAC A Dbvisit White Paper Copyright 2011-2012 Dbvisit Software Limited. All Rights Reserved v2, Mar 2012 Contents Executive Summary... 1
More informationMicrosoft SharePoint 2010 on VMware Availability and Recovery Options. Microsoft SharePoint 2010 on VMware Availability and Recovery Options
This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/download/patents.html. VMware
More informationReal-time Protection for Hyper-V
1-888-674-9495 www.doubletake.com Real-time Protection for Hyper-V Real-Time Protection for Hyper-V Computer virtualization has come a long way in a very short time, triggered primarily by the rapid rate
More informationHigh Availability for Citrix XenApp
WHITE PAPER Citrix XenApp High Availability for Citrix XenApp Enhancing XenApp Availability with NetScaler Reference Architecture www.citrix.com Contents Contents... 2 Introduction... 3 Desktop Availability...
More informationINUVIKA TECHNICAL GUIDE
--------------------------------------------------------------------------------------------------- INUVIKA TECHNICAL GUIDE FILE SERVER HIGH AVAILABILITY OVD Enterprise External Document Version 1.0 Published
More informationFive Secrets to SQL Server Availability
Five Secrets to SQL Server Availability EXECUTIVE SUMMARY Microsoft SQL Server has become the data management tool of choice for a wide range of business critical systems, from electronic commerce to online
More informationPerforce Disaster Recovery at Google. ! Google's mission is to organize the world's information and make it universally accessible and useful.
Perforce Disaster Recovery at Google Plans and Experiences Rick Wright Perforce Administrator Google, Inc. About Google! Google's mission is to organize the world's information and make it universally
More informationContents. SnapComms Data Protection Recommendations
Contents Abstract... 2 SnapComms Solution Environment... 2 Concepts... 3 What to Protect... 3 Database Failure Scenarios... 3 Physical Infrastructure Failures... 3 Logical Data Failures... 3 Service Recovery
More informationTraditionally, a typical SAN topology uses fibre channel switch wiring while a typical NAS topology uses TCP/IP protocol over common networking
Network Storage for Business Continuity and Disaster Recovery and Home Media White Paper Abstract Network storage is a complex IT discipline that includes a multitude of concepts and technologies, like
More informationStorage and Disaster Recovery
Storage and Disaster Recovery Matt Tavis Principal Solutions Architect The Business Continuity Continuum High Data Backup Disaster Recovery High, Storage Backup and Disaster Recovery form a continuum of
More informationContingency Planning and Disaster Recovery
Contingency Planning and Disaster Recovery Best Practices Guide Perceptive Content Version: 7.0.x Written by: Product Knowledge Date: October 2014 2014 Perceptive Software. All rights reserved Perceptive
More informationMicrosoft SQL Server 2008 R2 Enterprise Edition and Microsoft SharePoint Server 2010
Microsoft SQL Server 2008 R2 Enterprise Edition and Microsoft SharePoint Server 2010 Better Together Writer: Bill Baer, Technical Product Manager, SharePoint Product Group Technical Reviewers: Steve Peschka,
More informationCisco Active Network Abstraction Gateway High Availability Solution
. Cisco Active Network Abstraction Gateway High Availability Solution White Paper This white paper describes the Cisco Active Network Abstraction (ANA) Gateway High Availability solution developed and
More informationHA / DR Jargon Buster High Availability / Disaster Recovery
HA / DR Jargon Buster High Availability / Disaster Recovery Welcome to Maxava s Jargon Buster. Your quick reference guide to Maxava HA and industry technical terms related to High Availability and Disaster
More informationDeploy App Orchestration 2.6 for High Availability and Disaster Recovery
Deploy App Orchestration 2.6 for High Availability and Disaster Recovery Qiang Xu, Cloud Services Nanjing Team Last Updated: Mar 24, 2015 Contents Introduction... 2 Process Overview... 3 Before you begin...
More informationORACLE DATABASE 10G ENTERPRISE EDITION
ORACLE DATABASE 10G ENTERPRISE EDITION OVERVIEW Oracle Database 10g Enterprise Edition is ideal for enterprises that ENTERPRISE EDITION For enterprises of any size For databases up to 8 Exabytes in size.
More informationMirror File System for Cloud Computing
Mirror File System for Cloud Computing Twin Peaks Software Abstract The idea of the Mirror File System (MFS) is simple. When a user creates or updates a file, MFS creates or updates it in real time on
More informationOnline Transaction Processing in SQL Server 2008
Online Transaction Processing in SQL Server 2008 White Paper Published: August 2007 Updated: July 2008 Summary: Microsoft SQL Server 2008 provides a database platform that is optimized for today s applications,
More informationA SURVEY OF POPULAR CLUSTERING TECHNOLOGIES
A SURVEY OF POPULAR CLUSTERING TECHNOLOGIES By: Edward Whalen Performance Tuning Corporation INTRODUCTION There are a number of clustering products available on the market today, and clustering has become
More informationSQL Server Database Administrator s Guide
SQL Server Database Administrator s Guide Copyright 2011 Sophos Limited. All rights reserved. No part of this publication may be reproduced, stored in retrieval system, or transmitted, in any form or by
More informationVirtual Infrastructure Security
Virtual Infrastructure Security 2 The virtual server is a perfect alternative to using multiple physical servers: several virtual servers are hosted on one physical server and each of them functions both
More informationEliminating End User and Application Downtime:
Eliminating End User and Application Downtime: Architecting the Right Continuous Availability and Disaster Recovery Environment March 2010 Table of Contents Introduction 3 Where to Start 3 Moving to Continuous
More informationBackups and Maintenance
Backups and Maintenance Backups and Maintenance Objectives Learn how to create a backup strategy to suit your needs. Learn how to back up a database. Learn how to restore from a backup. Use the Database
More informationDell High Availability and Disaster Recovery Solutions Using Microsoft SQL Server 2012 AlwaysOn Availability Groups
Dell High Availability and Disaster Recovery Solutions Using Microsoft SQL Server 2012 AlwaysOn Availability Groups Dell servers and storage options available for AlwaysOn Availability Groups deployment.
More informationVess A2000 Series HA Surveillance with Milestone XProtect VMS Version 1.0
Vess A2000 Series HA Surveillance with Milestone XProtect VMS Version 1.0 2014 PROMISE Technology, Inc. All Rights Reserved. Contents Introduction 1 Purpose 1 Scope 1 Audience 1 What is High Availability?
More informationPROTECTING MICROSOFT SQL SERVER TM
WHITE PAPER PROTECTING MICROSOFT SQL SERVER TM Your company relies on its databases. How are you protecting them? Published: February 2006 Executive Summary Database Management Systems (DBMS) are the hidden
More informationSolution Brief Availability and Recovery Options: Microsoft Exchange Solutions on VMware
Introduction By leveraging the inherent benefits of a virtualization based platform, a Microsoft Exchange Server 2007 deployment on VMware Infrastructure 3 offers a variety of availability and recovery
More informationCA XOsoft Replication for Windows
CA XOsoft Replication for Windows Microsoft SQL Server Operation Guide r12.5 This documentation and any related computer software help programs (hereinafter referred to as the Documentation ) is for the
More informationPervasive PSQL Meets Critical Business Requirements
Pervasive PSQL Meets Critical Business Requirements Pervasive PSQL White Paper May 2012 Table of Contents Introduction... 3 Data Backup... 3 Pervasive Backup Agent... 3 Pervasive PSQL VSS Writer... 5 Pervasive
More informationFeature Comparison. Windows Server 2008 R2 Hyper-V and Windows Server 2012 Hyper-V
Comparison and Contents Introduction... 4 More Secure Multitenancy... 5 Flexible Infrastructure... 9 Scale, Performance, and Density... 13 High Availability... 18 Processor and Memory Support... 24 Network...
More informationRed Hat Enterprise linux 5 Continuous Availability
Red Hat Enterprise linux 5 Continuous Availability Businesses continuity needs to be at the heart of any enterprise IT deployment. Even a modest disruption in service is costly in terms of lost revenue
More informationAvid MediaCentral Platform Disaster Recovery Systems Version 1.0
Avid MediaCentral Platform Disaster Recovery s Version 1.0 Contents Overview... 2 Operational Risks... 2 Disaster Recovery Metrics... 2 DRS Configurations... 3 Active / Passive... 4 Active / Active...
More informationBackup with synchronization/ replication
Backup with synchronization/ replication Peer-to-peer synchronization and replication software can augment and simplify existing data backup and retrieval systems. BY PAUL MARSALA May, 2001 According to
More informationConnectivity. Alliance Access 7.0. Database Recovery. Information Paper
Connectivity Alliance 7.0 Recovery Information Paper Table of Contents Preface... 3 1 Overview... 4 2 Resiliency Concepts... 6 2.1 Loss Business Impact... 6 2.2 Recovery Tools... 8 3 Manual Recovery Method...
More informationHigh Availability Guide for Distributed Systems
Tivoli IBM Tivoli Monitoring Version 6.2.2 Fix Pack 2 (Revised May 2010) High Availability Guide for Distributed Systems SC23-9768-01 Tivoli IBM Tivoli Monitoring Version 6.2.2 Fix Pack 2 (Revised May
More informationConnectivity. Alliance Access 7.0. Database Recovery. Information Paper
Connectivity Alliance Access 7.0 Database Recovery Information Paper Table of Contents Preface... 3 1 Overview... 4 2 Resiliency Concepts... 6 2.1 Database Loss Business Impact... 6 2.2 Database Recovery
More informationSQL SERVER ADVANCED PROTECTION AND FAST RECOVERY WITH DELL EQUALLOGIC AUTO SNAPSHOT MANAGER
WHITE PAPER SQL SERVER ADVANCED PROTECTION AND FAST RECOVERY WITH DELL EQUALLOGIC AUTO SNAPSHOT MANAGER Business critical applications depend on Relational Database Management Systems (RMS) to store and
More informationThe Benefits of Continuous Data Protection (CDP) for IBM i and AIX Environments
The Benefits of Continuous Data Protection (CDP) for IBM i and AIX Environments New flexible technologies enable quick and easy recovery of data to any point in time. Introduction Downtime and data loss
More informationBusiness Continuity: Choosing the Right Technology Solution
Business Continuity: Choosing the Right Technology Solution Table of Contents Introduction 3 What are the Options? 3 How to Assess Solutions 6 What to Look for in a Solution 8 Final Thoughts 9 About Neverfail
More informationMcAfee VirusScan and epolicy Orchestrator Administration Course
McAfee VirusScan and epolicy Orchestrator Administration Course Intel Security Education Services Administration Course Training The McAfee VirusScan and epolicy Orchestrator Administration course from
More informationMaximizing Data Center Uptime with Business Continuity Planning Next to ensuring the safety of your employees, the most important business continuity
Maximizing Data Center Uptime with Business Continuity Planning Next to ensuring the safety of your employees, the most important business continuity task is resuming business critical operations. Having
More informationSanovi DRM for Oracle Database
Application Defined Continuity Sanovi DRM for Oracle Database White Paper Copyright@2012, Sanovi Technologies Table of Contents Executive Summary 3 Introduction 3 Audience 3 Oracle Protection Overview
More informationMySQL Enterprise Backup
MySQL Enterprise Backup Fast, Consistent, Online Backups A MySQL White Paper February, 2011 2011, Oracle Corporation and/or its affiliates Table of Contents Introduction... 3! Database Backup Terms...
More informationDB2 9 for LUW Advanced Database Recovery CL492; 4 days, Instructor-led
DB2 9 for LUW Advanced Database Recovery CL492; 4 days, Instructor-led Course Description Gain a deeper understanding of the advanced features of DB2 9 for Linux, UNIX, and Windows database environments
More informationEnd-to-End Availability for Microsoft SQL Server
WHITE PAPER VERITAS Storage Foundation HA for Windows End-to-End Availability for Microsoft SQL Server January 2005 1 Table of Contents Executive Summary... 1 Overview... 1 The VERITAS Solution for SQL
More informationBackup and Recovery. What Backup, Recovery, and Disaster Recovery Mean to Your SQL Anywhere Databases
Backup and Recovery What Backup, Recovery, and Disaster Recovery Mean to Your SQL Anywhere Databases CONTENTS Introduction 3 Terminology and concepts 3 Database files that make up a database 3 Client-side
More informationQuorum DR Report. Top 4 Types of Disasters: 55% Hardware Failure 22% Human Error 18% Software Failure 5% Natural Disasters
SAP High Availability in virtualized environments running on Windows Server 2012 Hyper-V Part 1: Overview Introduction Almost everyone is talking about virtualization and cloud computing these days. This
More informationVeritas Cluster Server from Symantec
Delivers high availability and disaster recovery for your critical applications Data Sheet: High Availability Overview protects your most important applications from planned and unplanned downtime. Cluster
More informationCreating A Highly Available Database Solution
WHITE PAPER Creating A Highly Available Database Solution Advantage Database Server and High Availability TABLE OF CONTENTS 1 Introduction 1 High Availability 2 High Availability Hardware Requirements
More informationFioranoMQ 9. High Availability Guide
FioranoMQ 9 High Availability Guide Copyright (c) 1999-2008, Fiorano Software Technologies Pvt. Ltd., Copyright (c) 2008-2009, Fiorano Software Pty. Ltd. All rights reserved. This software is the confidential
More informationPlanning and Implementing Disaster Recovery for DICOM Medical Images
Planning and Implementing Disaster Recovery for DICOM Medical Images A White Paper for Healthcare Imaging and IT Professionals I. Introduction It s a given - disaster will strike your medical imaging data
More informationBest practice: Simultaneously upgrade your Exchange and disaster recovery protection
Best practice: Simultaneously upgrade your Exchange and disaster recovery protection March 2006 1601 Trapelo Road Waltham, MA 02451 1.866.WANSync www.xosoft.com Contents The Value Proposition... 1 Description
More informationOVERVIEW. CEP Cluster Server is Ideal For: First-time users who want to make applications highly available
Phone: (603)883-7979 sales@cepoint.com Cepoint Cluster Server CEP Cluster Server turnkey system. ENTERPRISE HIGH AVAILABILITY, High performance and very reliable Super Computing Solution for heterogeneous
More informationData Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication Software
Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication September 2002 IBM Storage Products Division Raleigh, NC http://www.storage.ibm.com Table of contents Introduction... 3 Key
More informationStretching A Wolfpack Cluster Of Servers For Disaster Tolerance. Dick Wilkins Program Manager Hewlett-Packard Co. Redmond, WA dick_wilkins@hp.
Stretching A Wolfpack Cluster Of Servers For Disaster Tolerance Dick Wilkins Program Manager Hewlett-Packard Co. Redmond, WA dick_wilkins@hp.com Motivation WWW access has made many businesses 24 by 7 operations.
More informationDatabase Resilience at ISPs. High-Availability. White Paper
Database Resilience at ISPs High-Availability White Paper Internet Service Providers (ISPs) generally do their job very well. The commercial hosting market is segmented in a number of different ways but
More informationAvailability and Disaster Recovery: Basic Principles
Availability and Disaster Recovery: Basic Principles by Chuck Petch, WVS Senior Technical Writer At first glance availability and recovery may seem like opposites. Availability involves designing computer
More informationExplain how to prepare the hardware and other resources necessary to install SQL Server. Install SQL Server. Manage and configure SQL Server.
Course 6231A: Maintaining a Microsoft SQL Server 2008 Database About this Course Elements of this syllabus are subject to change. This five-day instructor-led course provides students with the knowledge
More informationSQL SERVER ADVANCED PROTECTION AND FAST RECOVERY WITH EQUALLOGIC AUTO-SNAPSHOT MANAGER
WHITE PAPER SQL SERVER ADVANCED PROTECTION AND FAST RECOVERY WITH EQUALLOGIC AUTO-SNAPSHOT MANAGER MANAGEMENT SERIES Business critical applications depend on Relational Database Management Systems (RDBMS)
More informationDeltaV Virtualization High Availability and Disaster Recovery
DeltaV Distributed Control System Whitepaper October 2014 DeltaV Virtualization High Availability and Disaster Recovery This document describes High Availiability and Disaster Recovery features supported
More informationAvailability Guide for Deploying SQL Server on VMware vsphere. August 2009
Availability Guide for Deploying SQL Server on VMware vsphere August 2009 Contents Introduction...1 SQL Server 2008 with vsphere and VMware HA/DRS...2 Log Shipping Availability Option...4 Database Mirroring...
More informationWindows Geo-Clustering: SQL Server
Windows Geo-Clustering: SQL Server Edwin Sarmiento, Microsoft SQL Server MVP, Microsoft Certified Master Contents Introduction... 3 The Business Need for Geo-Clustering... 3 Single-location Clustering
More informationDisaster Recovery Configuration Guide for CiscoWorks Network Compliance Manager 1.8
Disaster Recovery Configuration Guide for CiscoWorks Network Compliance Manager 1.8 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel:
More informationVeritas Cluster Server by Symantec
Veritas Cluster Server by Symantec Reduce application downtime Veritas Cluster Server is the industry s leading clustering solution for reducing both planned and unplanned downtime. By monitoring the status
More informationBusiness and IT Requirements for Continuous Data Protection. Protecting Information Assets with Enterprise Rewinder
Business and IT Requirements for Continuous Data Protection Protecting Information Assets with Enterprise Rewinder April 2006 Table of Contents Executive Summary... 3 Protecting Applications Today... 4
More informationStorage Backup and Disaster Recovery: Using New Technology to Develop Best Practices
Storage Backup and Disaster Recovery: Using New Technology to Develop Best Practices September 2008 Recent advances in data storage and data protection technology are nothing short of phenomenal. Today,
More informationMastering Disaster Recovery: Business Continuity and Virtualization Best Practices W H I T E P A P E R
Mastering Disaster Recovery: Business Continuity and Virtualization Best Practices W H I T E P A P E R Table of Contents Introduction.......................................................... 3 Challenges
More information126 SW 148 th Street Suite C-100, #105 Seattle, WA 98166 Tel: 877-795-9372 Fax: 866-417-6192 www.seattlepro.com
SharePoint 2010 Bootcamp This five-day course is designed to equip Systems Administrators, Integrators and Developers with a strong foundation for implementing solutions on Microsoft SharePoint 2010. Attendees
More informationBackup and Redundancy
Backup and Redundancy White Paper NEC s UC for Business Backup and Redundancy allow businesses to operate with confidence, providing security for themselves and their customers. When a server goes down
More informationEMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage
EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL
More informationAchieving High Availability & Rapid Disaster Recovery in a Microsoft Exchange IP SAN April 2006
Achieving High Availability & Rapid Disaster Recovery in a Microsoft Exchange IP SAN April 2006 All trademark names are the property of their respective companies. This publication contains opinions of
More informationHow To Manage The Sas Metadata Server With Ibm Director Multiplatform
Manage SAS Metadata Server Availability with IBM Technology A SAS White Paper Table of Contents The SAS and IBM Relationship... 1 Introduction...1 Fault Tolerance of the SAS Metadata Server... 1 Monitoring
More informationWHITE PAPER. Best Practices to Ensure SAP Availability. Software for Innovative Open Solutions. Abstract. What is high availability?
Best Practices to Ensure SAP Availability Abstract Ensuring the continuous availability of mission-critical systems is a high priority for corporate IT groups. This paper presents five best practices that
More informationActive-Active and High Availability
Active-Active and High Availability Advanced Design and Setup Guide Perceptive Content Version: 7.0.x Written by: Product Knowledge, R&D Date: July 2015 2015 Perceptive Software. All rights reserved. Lexmark
More informationIBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE
White Paper IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE Abstract This white paper focuses on recovery of an IBM Tivoli Storage Manager (TSM) server and explores
More informationFinding a Cure for Downtime
Finding a Cure for Downtime 7 Tips for Reducing Downtime in Healthcare Information Systems EXECUTIVE SUMMARY THE COST OF DOWNTIME IN HEALTHCARE According to research by Healthcare Informatics: Every minute
More informationEliminate SQL Server Downtime Even for maintenance
Eliminate SQL Server Downtime Even for maintenance Eliminate Outages Enable Continuous Availability of Data (zero downtime) Enable Geographic Disaster Recovery - NO crash recovery 2009 xkoto, Inc. All
More informationHRG Assessment: Stratus everrun Enterprise
HRG Assessment: Stratus everrun Enterprise Today IT executive decision makers and their technology recommenders are faced with escalating demands for more effective technology based solutions while at
More informationCA ARCserve Replication and High Availability for Windows
CA ARCserve Replication and High Availability for Windows Microsoft SQL Server Operation Guide r15 This documentation and any related computer software help programs (hereinafter referred to as the "Documentation")
More informationMaximum Availability Architecture
Oracle Data Guard: Disaster Recovery for Sun Oracle Database Machine Oracle Maximum Availability Architecture White Paper April 2010 Maximum Availability Architecture Oracle Best Practices For High Availability
More informationManaging your Domino Clusters
Managing your Domino Clusters Kathleen McGivney President and chief technologist, Sakura Consulting www.sakuraconsulting.com Paul Mooney Senior Technical Architect, Bluewave Technology www.bluewave.ie
More informationMaintaining a Microsoft SQL Server 2008 Database
Maintaining a Microsoft SQL Server 2008 Database Course 6231A: Five days; Instructor-Led Introduction Elements of this syllabus are subject to change. This five-day instructor-led course provides students
More information