CA ehealth High Availability and Disaster Recovery Administration Guide r6.1
This documentation and any related computer software help programs (hereinafter referred to as the "Documentation") is for the end user's informational purposes only and is subject to change or withdrawal by CA at any time.

This Documentation may not be copied, transferred, reproduced, disclosed, modified or duplicated, in whole or in part, without the prior written consent of CA. This Documentation is confidential and proprietary information of CA and protected by the copyright laws of the United States and international treaties.

Notwithstanding the foregoing, licensed users may print a reasonable number of copies of the Documentation for their own internal use, and may make one copy of the related software as reasonably required for back-up and disaster recovery purposes, provided that all CA copyright notices and legends are affixed to each reproduced copy. Only authorized employees, consultants, or agents of the user who are bound by the provisions of the license for the Product are permitted to have access to such copies.

The right to print copies of the Documentation and to make a copy of the related software is limited to the period during which the applicable license for the Product remains in full force and effect. Should the license terminate for any reason, it shall be the user's responsibility to certify in writing to CA that all copies and partial copies of the Documentation have been returned to CA or destroyed.

EXCEPT AS OTHERWISE STATED IN THE APPLICABLE LICENSE AGREEMENT, TO THE EXTENT PERMITTED BY APPLICABLE LAW, CA PROVIDES THIS DOCUMENTATION "AS IS" WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT. IN NO EVENT WILL CA BE LIABLE TO THE END USER OR ANY THIRD PARTY FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS DOCUMENTATION, INCLUDING WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, GOODWILL, OR LOST DATA, EVEN IF CA IS EXPRESSLY ADVISED OF SUCH LOSS OR DAMAGE.

The use of any product referenced in the Documentation is governed by the end user's applicable license agreement.

The manufacturer of this Documentation is CA.

Provided with Restricted Rights. Use, duplication or disclosure by the United States Government is subject to the restrictions set forth in FAR Sections , , and (c)(1) - (2) and DFARS Section (b)(3), as applicable, or their successors.

All trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.

Copyright 2008 CA. All rights reserved.
Contact CA

Contact Technical Support
For online technical assistance and a complete list of locations, primary service hours, and telephone numbers, contact Technical Support at

Provide Feedback
If you have comments or questions about CA product documentation, you can send a message to [email protected]. If you would like to provide feedback about CA product documentation, please complete our short customer survey, which is also available on the CA Support website.
CA Product References

This document may reference the following CA products:
- CA ehealth AdvantEDGE View
- CA ehealth Application Response
- CA ehealth Business Service Console (ehealth BSC)
- CA ehealth Distributed ehealth
- CA ehealth Fault Manager
- CA ehealth Live Health Application
- CA ehealth Response
- CA ehealth Service Availability
- CA ehealth SystemEDGE
- CA ehealth TrapEXPLODER
- CA ehealth Voice Quality Monitor (VQM)
- CA ehealth AIM for Apache
- CA ehealth AIM for Microsoft Exchange
- CA ehealth AIM for Microsoft IIS
- CA ehealth AIM for Microsoft SQL Server
- CA ehealth AIM for Oracle
- CA Insight AIM for CA ehealth
- CA Insight Database Performance Monitor for Distributed Databases (CA Insight DPM for Distributed Databases)
- CA ehealth Integration for Alcatel (ehealth - Alcatel)
- CA ehealth Integration for Cisco IP Solution Center (ehealth - Cisco ISC)
- CA ehealth Integration for Cisco WAN Manager (ehealth - Cisco WAN Manager)
- CA ehealth Integration for HP OpenView (ehealth - OpenView)
- CA ehealth Integration for Lucent (ehealth - Lucent)
- CA ehealth Integration for Netcool (ehealth - Netcool)
- CA ehealth Integration for Nortel Preside (ehealth - Nortel Preside)
- CA ehealth Integration for Nortel Shasta SCS GGSN (ehealth - Nortel GGSN)
- CA ehealth Integration for Psytechnics (ehealth - Psytechnics)
- CA ehealth Integration for Starent (ehealth - Starent)
- CA SPECTRUM
- CA Unicenter Network and Systems Management (Unicenter NSM)
- CA etrust Identity and Access Management (etrust IAM)
- CA Embedded Entitlements Manager (CA EEM)

Note: CA Embedded Entitlements Manager (CA EEM) is the new name for etrust IAM. This product will be rebranded throughout the documentation in a future release.
Contents

Chapter 1: Introduction to High Availability and Disaster Recovery
  Overview
  High Availability
    Clusterware
    High Availability Clusters
    High Availability Clusters and Distributed ehealth Clusters
    Failures
    Failover
  How ehealth Works with High Availability
  How Monitoring Works
    What Clusterware Monitors

Chapter 2: Sun Cluster High Availability Configuration
  How to Configure Sun Cluster with High Availability
    Step 1: Install and Configure Sun Cluster Software
    Step 2: Register Clusterware Resource Types
    Step 3: Create Resource Types and the Resource Group
    Step 4: Install ehealth
    Step 5: Configure ehealth to Run in the High Availability Cluster
    Step 6: (Optional) Install CA XOsoft Replication Software
    Step 7: Create, Install, and Register ehealth Resource Types
    Step 8: Create and Add Clusterware Resources to the Resource Group
    Step 9: Create and Add ehealth Resources to the Resource Group
    Step 10: Start the High Availability Service
  How to Configure Multiple Instances of ehealth
  High Availability Cluster Management Tasks
    Install a Patch, Service Pack, or Upgrade
    Upgrade the Sun Cluster High Availability Package
    Load the Database after High Availability Configuration
    Initiate a Planned Failover
  How to Enable or Disable SSL for Sun Cluster

Chapter 3: Veritas High Availability Configuration
  Critical Resource Functionality
    Critical Resources
    Non-critical Resources
  How to Configure Veritas with High Availability
    Step 1: Install and Configure VCS Software
    Step 2: Install ehealth
    Step 3: Configure ehealth to Run in the High Availability Cluster
    Step 4: Create a Service Group
    Step 5: (Optional) Install CA XOsoft Replication Software
    Step 6: Create and Install ehealth Resource Types
    Step 7: Start the High Availability Service
  Multiple Instances of ehealth in the HA Cluster
    How to Add Multiple Instances of ehealth
  High Availability Cluster Management Tasks
    Install a Patch or Service Pack
    Load the Database after High Availability Configuration
    Initiate a Planned Failover
  How to Enable or Disable SSL for Veritas

Chapter 4: Disaster Recovery
  How ehealth Works with Disaster Recovery
    System Guidelines
    System Terminology
  How CA XOsoft Replication Software Works
    Scenarios
    Replication
  Failover in a Disaster Recovery Environment
    Downtime and Data Loss
  Configure ehealth for Disaster Recovery
    Before You Configure ehealth for DR
    Windows Disaster Recovery Configuration
    Solaris Disaster Recovery Configuration
    Start the Failover Process
  How to Upgrade Your Disaster Recovery Environment
  Disaster Recovery Management Tasks
    Initiate Failover from the Standby (Active) System to the Primary System
    Install a Patch or Service Pack after Configuring Disaster Recovery

Appendix A: Disaster Recovery in a High Availability Cluster
  Configure ehealth for Disaster Recovery in a High Availability Cluster
  Failover from High Availability Cluster to Standby Disaster Recovery System
  Failover From Standby System to a System in a High Availability Cluster

Appendix B: Sun Cluster Commands

Appendix C: Veritas Cluster Service Commands

Index
Chapter 1: Introduction to High Availability and Disaster Recovery

This section contains the following topics:
- Overview
- High Availability
- How ehealth Works with High Availability
- How Monitoring Works

Overview

ehealth can be configured to run in the following environments:
- High availability (Solaris 2.9 or 2.10)
- Disaster recovery (Windows 2003 and Solaris 2.9 or 2.10)

Running ehealth in these environments helps minimize ehealth system downtime and data loss caused by internal system failures (such as hardware or software malfunctions) and external failures (such as a natural disaster or operator error).

High Availability
High availability (HA) is a system implementation based on levels of redundancy that helps ensure a system or application can quickly come back online in the event of a failure. Highly available systems are often characterized by the ability of their components to fail over to backup systems when a failure occurs. For a strong HA solution, many large system environments are configured into clusters using clusterware or other application support. These HA clusters consist of two or more nodes that work together to build hardware and software redundancy: if one system fails, another can become active and resume its duties. This functionality should not be confused with load-balancing (or web farm) clusters, where multiple machines run concurrently with identical hardware and software configurations. When one system fails, the others continue to function as before, picking up the load dropped by the failed machine.
Disaster Recovery
Disaster recovery (DR) is a network configuration that becomes vital when systems fail due to disasters like floods or fires. The DR service helps recover critical data after such events occur. DR includes the continual backup of data from a primary system to a standby system in a geographically separate location. After a catastrophic failure brings the primary system down, the standby system is started manually and resumes the first system's duties.

When planning your DR environment, keep in mind that to configure ehealth for DR, you must first install CA XOsoft Replication software. You can also configure an ehealth Solaris HA cluster for DR using either Sun Cluster software or Veritas Cluster Server software.

Note: In the DR environment, failover is a manual process.

High Availability

This section describes:
- HA clusterware and associated concepts, including clusters, failures, and failover.
- The process to implement HA in your infrastructure so ehealth can function as a highly available data service.

Clusterware
Clusterware is software that monitors hardware and connectivity issues and builds redundancy into a system cluster to eliminate single points of failure. Clusterware must be installed and configured on all machines in the environment before you install ehealth.

High Availability Clusters
Clusterware enables you to configure several similar systems that use shared storage into a highly available cluster, and to deploy instances of applications into that cluster. When a failure occurs, the clusterware helps ensure that the applications continue to run by restarting the instances on that system or by failing over to a backup system.
Build the High Availability Cluster
You begin building an HA cluster when you install clusterware on the first machine, or node. When you install clusterware on the second node, you tell it to join the group of which the first node is a member. Continue adding nodes to the group until your cluster is built. You need at least two nodes in a cluster to achieve redundancy.

HA clusters are considered failover pools: machines that operate, fail over, and share resources together. Each cluster node is a stand-alone server that runs its own processes. This HA cluster solution uses a shared-storage model so that all applications connected to the cluster have read and write access to the stored data.

High Availability Clusters and Distributed ehealth Clusters
ehealth systems are configured to run as stand-alone machines (a single workstation) or as part of a Distributed ehealth Cluster (multiple ehealth systems located centrally or across a geographic range). For Distributed ehealth members to be highly available, they can be configured as members of an HA cluster.

While they may work in the same environment, each of these cluster types has its own job, separate from the other. For example, the nodes of an HA cluster function normally (as described in High Availability Clusters) whether or not they share members with a Distributed ehealth Cluster. Likewise, a Distributed ehealth Cluster is not aware when its members become part of an HA cluster; it continues to poll and process data across multiple ehealth systems. Failover behavior does not change in a Distributed ehealth Cluster that shares membership with an HA cluster. Systems can be added to or removed from either cluster type without interrupting functionality.

Note: For more information about Distributed ehealth Clusters, see the Distributed ehealth Administration Guide.

Failures
No matter how well a system is designed or maintained, there will always be system failures. Various types of failures can affect the availability of ehealth servers and applications. These include hardware problems, such as a failed CPU, a corrupted disk, or bad memory, and software problems, such as a misconfigured operating system or incorrectly specified networking parameters.
ehealth failures can cause the following types of outages:
- ehealth is inaccessible, with ehealth administrators unable to access it through the OneClick for ehealth interface.
- ehealth is unable to poll or save data to the ehealth database. ehealth fails if all elements return a bad poll.
- ehealth reports run through the web are inaccessible.
- Live Health and reporting applications are inaccessible.
- Scheduled ehealth reports are not run. Reports being run at the time of failure are lost.

When you configure your ehealth environment to be highly available, ehealth servers and applications continue to run and provide access to data with minimal interruption, even when failures occur.

Failover
Failover is a service that initiates when a node or application failure is detected. To achieve failover, the clusterware performs an automatic series of steps to migrate ehealth resources and applications from a failing node to a backup node in the HA cluster, providing HA through redundancy. During failover in an ehealth environment, the clusterware either tries to restart the ehealth application, or starts and runs the application on the backup system, making the backup system active. After the failed application or system recovers, the backup system can be manually failed back to the primary system, and the primary system becomes active again.

Note: For information about failover in a DR environment, see Failover in a Disaster Recovery Environment.

High Availability Failover Configurations
ehealth continually monitors the status of the poller. Depending on a default time-out threshold, ehealth determines when to restart or fail over its monitored applications. In general, the clusterware you use determines which HA configurations you can have in your ehealth environment. Sun Cluster and Veritas software support the following cluster configurations:
- Two nodes (1 + 1): one primary and one backup.
- Multiple nodes (N + 1): several primaries with a single shared backup.
- Multiple nodes (N * N) with multiple backups.
The following graphic details how ehealth is configured in an HA cluster with two nodes. As shown, the cluster is running properly, with the backup node ready in the event of a failover.
Downtime and Data Loss
There may be some downtime and data loss when your ehealth system fails over.

Downtime
Downtime is a period of time when you cannot access the ehealth system. For example, you cannot run a report, or the system does not register a poll.

Note: If you are connected to an ehealth client application through a shared (virtual) hostname, such as OneClick for ehealth, after the failover you can use the session you were using before failover without having to log back in to the application.

Data Loss
Data loss is a period of time when the ehealth system does not collect data, or when the data that was collected is lost. After an ehealth system fails over, there will be a short data gap as the backup servers start and the poller initializes. If ehealth fails over while running a report, the process is interrupted and the data is lost. Scheduled jobs in the run queue are also lost during failover.

After Failover
After failover, the system on which the failure occurred is offline, and its processes have been migrated to a backup system and started there, making the backup system active. While the backup system is active, you should perform any necessary maintenance to bring the failed system back online. You can then keep the system as a backup system or plan to fail back to the original system.

Planned Failover
You can manually bring ehealth online and offline using the command line interface. This is helpful when you perform planned maintenance on a system running ehealth. You can essentially force a failover at a convenient time, allowing you to perform maintenance on the hardware or operating system without interrupting HA functionality. After the primary system is functional, bring the system back online and transfer operations back to the original system. When you bring ehealth up on the backup system, ehealth will experience missed polls, causing some downtime, often minimal, depending on the number of elements you are polling and your system configuration.

Note: When a system or application is considered online, it is turned on and active. When a system or application is considered offline, it is physically shut down.
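For example, with Sun Cluster you could force a planned failover by switching the ehealth resource group to the node you want to make active. This is only a sketch (the group name rgroup-1 and the host name backup-node are placeholders); Chapter 2 describes the actual procedure:

    > su root
    # scswitch -z -g rgroup-1 -h backup-node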
How ehealth Works with High Availability

ehealth can be configured with HA clusterware to run as a data service in a highly available cluster, where a single instance of the ehealth server and Oracle resides on a shared disk. In an HA configuration, a number of different machines can mount the disk and access ehealth. This disk can be configured as RAID 1: if one part of the disk fails, another takes over and continues the ehealth service without interruption. This adds another level of redundancy, buffering your system from potential downtime and data loss.

The ehealth HA cluster functionality includes:
- The nhHaSetup command, which allows an existing ehealth system to be managed by third-party clusterware.
- The ability for an HA cluster to join an ehealth system in a Distributed ehealth Cluster.
- The ability for the clusterware to communicate with ehealth, so that the clusterware can monitor the ehealth applications and ehealth can monitor polling status.
- The use of shared hostnames in the ehealth environment. During failover, the clusterware moves the shared hostname (or names) from the primary system to the backup system; this lets you continue accessing ehealth applications with very little interruption.
  Note: When you connect to the ehealth server, Oracle, or any ehealth applications in an HA cluster, you must use the shared hostname (or names) confirmed when you initially configured ehealth for HA using the nhHaSetup command.
- The use of one ehealth license across multiple systems in an HA cluster. When a failover occurs, you do not have to change or update any license files.
How Monitoring Works

After ehealth is configured for HA, all systems and applications in the HA cluster are monitored, helping ensure early detection of hardware and software failures. ehealth and the clusterware each have specific applications and systems that they monitor.

To communicate with the clusterware, all ehealth applications and services must be organized using the following:

Resource Type
For monitoring purposes, all ehealth applications and processes must be configured as resource types to be managed by the clusterware. A resource type is a group of parameters that helps define the application in an HA cluster, defining where it is and what to monitor. For example, when a resource fails, it can either be restarted on the same system, or all the resources in a group can be failed over, depending on the group's configuration.

Resource
A resource is one instance of a resource type defined across an HA cluster.

Resource/Service Group
A resource group or service group is a collection of resources that should all fail over at once. When all ehealth applications reside in a group, an association of dependence is created. For example, ehealth startup may depend on Oracle startup. Also, if a resource has to restart on a backup system, all resources in the group restart on that backup system.

What Clusterware Monitors
Clusterware monitors the HA cluster to determine whether all cluster nodes are functioning properly. It also monitors most ehealth resources.

High Availability Cluster Node Failure
Clusterware monitors hardware and network connectivity problems in the HA cluster. Each node contacts the other nodes with a heartbeat message to verify that it is connected and functioning properly. When the nodes get a response, the cluster continues functioning normally. If a node does not respond, each node continues to send a monitored heartbeat message to the other nodes, confirming that there is an unresponsive node. The clusterware then fails over the node and migrates its resources to a backup node.
Clusterware Resources
The HA clusterware monitors the following resources:

Note: Resource failures result in failover, unless otherwise noted.

Host (IP address)
The clusterware validates that the shared hostname that you specify is connected properly to the network. A failure occurs if the local machine determines that the IP address is no longer valid.

Disk
The ehealth and Oracle processes reside on one disk in the HA cluster. The clusterware confirms that the disk is correctly mounted. A failure occurs if your disk connection fails.

Oracle and Oracle Listener
The clusterware validates that Oracle and the Oracle Listener are running and the processes are up. A failure occurs if the Oracle processes stop running. See the clusterware documentation regarding any additional monitoring capability.

Apache Web Server
The clusterware validates that the Apache processes are up and running. A failure occurs if the Apache processes stop running. See the clusterware documentation regarding any additional monitoring capability.

WANSync (if the primary HA system is configured for DR)
The clusterware validates that the principal (ehealth data) scenario is running and that the WANSync daemons exist in the process table. A failure occurs if the principal scenario is stopped or in an otherwise non-running state, or if the daemons are not in place. A WANSync resource failure results in the following actions, depending on your clusterware:
- Veritas Cluster Service: The WANSync resource shows a fault but does not initiate failover.
- Sun Cluster: The WANSync resource initiates failover.

Note: When you use Sun Cluster software, the Host and Disk resources must be set up before ehealth is installed. The Oracle, Oracle Listener, and Apache resources must be set up after ehealth is installed.

Licensing
The clusterware validates that the license processes are running. A failure occurs if the processes shut down.

TrapEXPLODER
The clusterware validates that the trap processes are running. A failure occurs if these processes shut down. A TrapEXPLODER resource failure results in the following actions, depending on your clusterware:
- Veritas Cluster Service: The TrapEXPLODER resource shows a fault but does not initiate failover.
- Sun Cluster: The TrapEXPLODER resource initiates failover.
Servlet (Tomcat)
The clusterware validates that the Tomcat processes are running. A failure occurs if these processes stop. A Tomcat resource failure results in the following actions, depending on your clusterware:
- Veritas Cluster Service: The Tomcat resource shows a fault but does not initiate failover.
- Sun Cluster: The Tomcat resource initiates failover.

Report Center (if installed)
The clusterware confirms that Report Center has a valid connection. A failure occurs when the clusterware does not detect a connection. A Report Center resource failure results in the following actions, depending on your clusterware:
- Veritas Cluster Service: The Report Center resource shows a fault but does not initiate failover.
- Sun Cluster: The Report Center resource initiates failover.

What ehealth Monitors

The Poller Monitor
ehealth monitors itself for poller status, as well as the status of the ehealth startup server. The monitoring process reads a configuration file to determine what thresholds should be used to control monitoring. An integral part of the ehealth program is its ability to poll devices in your infrastructure and write data to the ehealth database. In an HA cluster, ehealth runs its monitoring process once every minute and can detect and validate serious polling errors.

Failure Thresholds
If the ehealth system fails to write data to the database through the statistics, traffic, or import poller within a certain time frame, the system restarts or fails over. For example, if the poller is configured to poll once every five minutes and has not written data to the database after fifteen minutes, the ehealth monitoring program returns a failed message.

Thresholds, the parameters by which failure is measured, are used as part of the monitoring process. When you configure ehealth for HA, ehealth loads threshold defaults for each application. The thresholds determine when an ehealth system or application has failed. Thresholds should only be adjusted by Technical Support.
Chapter 2: Sun Cluster High Availability Configuration

This section contains the following topics:
- How to Configure Sun Cluster with High Availability
- How to Configure Multiple Instances of ehealth
- High Availability Cluster Management Tasks

How to Configure Sun Cluster with High Availability

The following list outlines the process to configure ehealth for HA with Sun Cluster:
1. Install and configure Sun Cluster software.
2. Register clusterware resource types.
3. Create resource types and the resource group.
4. Install ehealth.
5. Configure ehealth to run in the HA cluster.
6. (Optional) Install CA XOsoft Replication software.
7. Create, install, and register ehealth resource types.
8. Create and add clusterware resources to the resource group.
9. Create and add ehealth resources to the resource group.
10. Start the high availability service.

Note: In the following procedures, rgroup-1 represents the name of the unique resource group.
Step 1: Install and Configure Sun Cluster Software
Before ehealth can be configured for HA, Sun Cluster software version 3.1 or 3.2 must be installed and configured, creating an HA cluster that can manage ehealth. When planning your HA cluster, follow these guidelines:

Configure the primary system to use the shared (or virtual) hostname. Add the shared hostname and IP address to the hosts files on the primary system by doing the following:
1. Log on to the primary system as the root user and open a command line interface.
2. In the /etc/inet/hosts and /etc/inet/ipnodes files, add the shared IP address and hostname. These files map IP addresses to hostnames. (A sample entry follows at the end of this step.)

Use the procedures in the Sun Cluster Software Installation Guide to do the following:
- Install the Sun Cluster software on all systems that will make up the HA cluster.
- Configure the HA cluster and set up disk sets for locally mounted, highly available storage.
- Configure IP Multipath Networking in the HA cluster so that shared hostnames can be used.
- Install the Sun Cluster Apache agent and the Sun Cluster Oracle (not Oracle RAC) agent on each node in the HA cluster.
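The hosts file entries added in Step 1 might look like the following sketch, where the address and shared hostname are hypothetical placeholders for your own values:

    # Added to /etc/inet/hosts and /etc/inet/ipnodes (example values only)
    192.168.10.50   ehealth-shared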
Step 2: Register Clusterware Resource Types
Before you install ehealth, you must register the resource types that the clusterware will monitor.

Note: Complete all the procedures in this chapter from any node in the HA cluster.

To register clusterware resource types
1. From any system in the HA cluster, open a CLI.
2. Enter the following commands to install the Sun Cluster resource types:
   > su root
   # scrgadm -a -t SUNW.apache
   # scrgadm -a -t SUNW.oracle_server
   # scrgadm -a -t SUNW.oracle_listener
   # scrgadm -a -t SUNW.HAStoragePlus
   The following resource types are installed and registered: Apache, Oracle, Oracle Listener, and Disk (HAStoragePlus).

Step 3: Create Resource Types and the Resource Group
When you create the resource group, it must point to the instance of ehealth you will be installing. You must supply the name of the directory where ehealth will be installed so that the resource group can indicate the location of support tools to the rest of the ehealth resources.

During this step, the following actions occur:
- The Host (IP) resource type is created.
- The Disk resource type is created. This will be a highly available local file system. Remember this location, as you will need to install ehealth on the path you specify for this resource.
- The ehealth resource types are created.
- The resource group is brought online so that the disks are accessible.

Choose a unique name for the resource group. If you are configuring multiple instances of ehealth to run in the HA cluster, each must reside in a separate resource group with a unique name. Also, each resource type you create must be given a unique name.
To create the resource types and resource group

Note: The file system should not already be mounted. The file system is mounted during this step when you run the Sun command scswitch.

1. Open a CLI.
2. Enter the following commands:
   > su root
   # scrgadm -a -g rgroup-1 -y PathPrefix=/mountpoint
   # scrgadm -a -j ehealth-host -g rgroup-1 -L -l sharedhost
   # scrgadm -a -j ehealth-storage -g rgroup-1 -t SUNW.HAStoragePlus \
     -x FilesystemMountPoints=/mountpoint \
     -x AffinityOn=True
   # scswitch -Z -g rgroup-1

   rgroup-1
   Represents the unique resource group name.

   mountpoint
   Represents the file system mountpoint.

   sharedhost
   Represents the shared hostname that must always be used whenever you access the ehealth server and applications.

Step 4: Install ehealth
Install ehealth using the ehealth Installation Guide and the following guidelines:
- Install ehealth into the storage directory you created in Step 3: Create Resource Types and the Resource Group.
- Confirm that you install ehealth and Oracle, and create the ehealth database, on the path specified in the Disk resource in Step 3. For example, install ehealth in /mountpoint/software, install Oracle in /mountpoint/oracle, and create the database in /mountpoint/database.
- Before you install the ehealth license file, license.dat, you need to configure the file to include all host IDs, including all possible backup systems, by doing the following:
  1. Send all host IDs to the ehealth product licensing team. You will receive back one license file.
  2. Copy and paste the license keys into the license.dat file located in the mountpoint/software/lmgr directory. It will reside on the disk to be accessed by all the ehealth systems in the HA cluster.
Note: If you are configuring ehealth for both HA and DR, to save time you can install the WANSync software concurrently with ehealth. Do not create the replication scenarios until ehealth has been configured for HA. For complete steps, see Configuring ehealth for Disaster Recovery in a High Availability Cluster (Solaris Only).

Step 5: Configure ehealth to Run in the High Availability Cluster
After you install ehealth, you must run the nhHaSetup command to configure ehealth to be started and managed by Sun Cluster.

To configure ehealth to run in the HA cluster
1. (Optional) If your system is already running as part of a Distributed ehealth Cluster, you must complete this step before you run nhHaSetup. From the primary system, run the following command:
   nhModifyClusterMember -cluster -name physicalhost -newname sharedhost

   physicalhost
   Represents the name of the primary HA system prior to being HA enabled.

   sharedhost
   Represents the shared hostname.

   This prepares the Distributed ehealth Cluster to allow the cluster members to communicate with the primary system using the shared hostname.

   Note: If your system is already running as part of a Distributed ehealth Cluster and you have already run nhHaSetup, you must enter the following command to establish communication between all ehealth cluster members, from a system in the Distributed ehealth Cluster on which nhHaSetup was not run:
   nhModifyClusterMember -all -name physicalhost -newname sharedhost
   You will receive an error from the Distributed ehealth Cluster member configured for HA because the other systems cannot communicate with it until the command finishes running.

2. Open a CLI and start the ehealth database by entering the following command:
   $NH_HOME/bin/nhStartDb
3. Enter the following commands to configure TrapEXPLODER to run in the HA cluster by moving /opt/trapx to a shared file system:
   > su root
   # mv /opt/trapx /mountpoint

   mountpoint
   Represents a shared file system mountpoint.

4. Enable and verify the shared hostname on the system:
   a. As root, in the command line interface, enter the following (where XXXX:n is a logical network interface, for example hme0:1):
      ifconfig XXXX:n plumb
      ifconfig XXXX:n ipaddress up
   b. Verify the shared hostname is properly set by entering the following:
      ifconfig -a
      The names of the system interfaces should appear in the output.

5. Enter the following command to configure ehealth to support shared hostnames:
   > $NH_HOME/bin/nhHaSetup -hostname sharedhost

6. Enter the following commands to remove the startup files, preventing ehealth from starting automatically at startup time:
   > su root
   # rm /etc/rc*.d/[SK]*httpd
   # rm /etc/rc*.d/[SK]*nethealth
   # rm /etc/rc*.d/[SK]*nhreportcenter
   # rm /etc/rc*.d/[SK]*trapexploder
   On WANSync (if installed):
   # rm /etc/rc*.d/[SK]*wansync

Step 6: (Optional) Install CA XOsoft Replication Software
Complete this step only if you want your HA cluster to be configured for DR. For the complete procedure, see Configuring ehealth for Disaster Recovery in a High Availability Cluster (Solaris Only).

Step 7: Create, Install, and Register ehealth Resource Types
The resource types necessary to monitor ehealth must be installed on every node in the HA cluster. They must also be registered.
To create, install, and register ehealth resource types
1. As the ehealth user, stop the ehealth processes by entering the following commands:
   $NH_HOME/bin/nhServer stop
   $NH_HOME/bin/nhReportCenter stop
   $NH_HOME/bin/nhHttpd stop
   $NH_HOME/bin/nhLmgr stop
   $NH_HOME/bin/nhStopDb
   The ehealth processes are stopped.
2. On the machine on which the resource group is currently online, enter the following commands to access the Solaris package:
   > su root
   # cd $NH_HOME/sys
   # pkgadd -d CAehealthHA.pkg
3. Switch to the next machine on which you want to install the resources by entering the following commands:
   > su root
   # scswitch -z -g rgroup-1 -h system_name

   system_name
   Represents the name of the machine on which you want to install the resource types.
4. Log on to the next machine and repeat Steps 2 and 3. Continue until the resource types have been installed on every node in the cluster.
5. On any machine, enter the following commands to register the ehealth resource types:
   > su root
   # scrgadm -a -t CA.ehealthLicense
   # scrgadm -a -t CA.ehealth
   # scrgadm -a -t CA.ehealthRpt
   # scrgadm -a -t CA.ehealthServlet
   # scrgadm -a -t CA.ehealthTrap
   # scrgadm -a -t CA.ehealthWansync
   CA.ehealthWansync is only needed if you are setting up HA and DR.
   The ehealth resource types are now created, installed, and registered.

   Note: To view all registered resource types, enter the following command:
   scrgadm -p
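To confirm that the package is present on a node before moving to the next one, you can query it with the standard Solaris package tools. This sketch assumes the installed package instance is named CAehealthHA, as the .pkg file name suggests:

    # pkginfo -l CAehealthHA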
Step 8: Create and Add Clusterware Resources to the Resource Group
The clusterware resources must be created. To do this, you create the Apache and Oracle resources and add them to the resource group.

To configure clusterware resources
1. If you have not done so, source the nethealthrc ehealth file as root for your shell environment using one of the following commands:
   Bourne: . ./nethealthrc.sh
   C: source nethealthrc.csh
   Korn: . ./nethealthrc.ksh
   The ehealth environment is set.
2. Enter the following command to create an Apache resource to monitor the web server:
   > su root
   # scrgadm -a -j ehealth-web -g rgroup-1 \
     -t SUNW.apache -y Network_resources_used=ehealth-host \
     -y Scalable=False -y Port_list=$NH_HTTP_PORT/tcp \
     -x Bin_dir=$NH_HOME/web/httpd/bin \
     -y resource_dependencies=ehealth-storage

   Note: If SSL has been enabled, create the ehealth-web resource by using the NH_HTTP_PORT value found in the $NH_HOME/web/webCfg/servers.properties file. For example, if the Port value for SSL is now 443, the command to create the resource should read scrgadm -a -j ehealth-web ... -y Port_list=443/tcp. If SSL is not enabled, continue to use Port_list=$NH_HTTP_PORT/tcp.

3. Enter the following command to create an Oracle resource to monitor the database:
   # scrgadm -a -j ehealth-db -g rgroup-1 -t SUNW.oracle_server \
     -x Connect_string=$NH_USER/$NH_USER -x ORACLE_SID=$ORACLE_SID \
     -x ORACLE_HOME=$ORACLE_HOME \
     -x Alert_log_file=$ORACLE_HOME/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log \
     -x Restart_type=RESOURCE_GROUP_RESTART \
     -y resource_dependencies=ehealth-storage

   When you create the Oracle resource, you may receive errors indicating that the ORACLE_HOME directory is not currently accessible on all of the nodes in the HA cluster. This is normal when using locally mounted, highly available file systems and can be ignored.
   Note: When you enter this command, you set the password for the Oracle database. If you plan to change this password at a future date, use the following command:
   scrgadm -c -j ehealth-db -x Connect_string=new_user/new_password

4. Enter the following command to create a Listener resource to monitor the Oracle Listener:
   # scrgadm -a -j ehealth-db-listener -g rgroup-1 \
     -t SUNW.oracle_listener \
     -x LISTENER_NAME=listener \
     -x ORACLE_HOME=$ORACLE_HOME \
     -y resource_dependencies=ehealth-storage

5. Enter the following command to bring the new resources online:
   # scswitch -Z -g rgroup-1

   The clusterware resources are created and added to the resource group.

Step 9: Create and Add ehealth Resources to the Resource Group
The ehealth resources must be created and added to the resource group so that when failover occurs, all ehealth processes, not just individual applications, start on the backup system. Adding resources to a group also establishes a dependency relationship between the associated ehealth processes that helps determine how they are managed by the clusterware.

The resources in this procedure are the default ehealth resources. For example, you can choose not to add the WANSync resource to the ehealth resource group at this time, but add it at a later date.

To add ehealth resources to the group
1. As root, specify the time zone environment variable (TZ) in your nethealthrc file. Example:
   Bourne: TZ="US/Eastern"; export TZ
   C: setenv TZ "US/Eastern"
   Korn: TZ="US/Eastern"; export TZ

   Note: Do not omit this step or errors will occur.
2. Enter the following commands to add the ehealth resources to the resource group:
   > su root
   # scrgadm -a -j ehealth-license -g rgroup-1 -t CA.ehealthLicense \
     -y Network_resources_used=ehealth-host \
     -x NH_HOME=$NH_HOME \
     -x NH_USER=$NH_USER \
     -y resource_dependencies=ehealth-storage
   # scrgadm -a -j ehealth -g rgroup-1 -t CA.ehealth \
     -y Network_resources_used=ehealth-host \
     -x NH_HOME=$NH_HOME \
     -x NH_USER=$NH_USER \
     -y resource_dependencies=ehealth-db,ehealth-db-listener,ehealth-license,ehealth-storage
   # scrgadm -a -j ehealth-rptctr -g rgroup-1 -t CA.ehealthRpt \
     -y Network_resources_used=ehealth-host \
     -x NH_HOME=$NH_HOME \
     -x NH_USER=$NH_USER \
     -y resource_dependencies=ehealth-db,ehealth-db-listener,ehealth-storage,ehealth-web
   # scrgadm -a -j ehealth-servlet -g rgroup-1 -t CA.ehealthServlet \
     -y Network_resources_used=ehealth-host \
     -x NH_HOME=$NH_HOME \
     -x NH_USER=$NH_USER \
     -y resource_dependencies=ehealth-db,ehealth-db-listener,ehealth-storage
   # scrgadm -a -j ehealth-trap -g rgroup-1 -t CA.ehealthTrap \
     -y Network_resources_used=ehealth-host \
     -x NH_HOME=$NH_HOME \
     -x Trap_dir=/mountpoint/trapx \
     -y resource_dependencies=ehealth-storage,ehealth-license

Step 10: Start the High Availability Service
To enable the HA cluster and bring it online, start the fully configured HA service by enabling the resource group and all resources in the group.

To start the HA service
Enter the following commands to bring the resource group online:
   > su root
   # scswitch -Z -g rgroup-1

Note: Running the scswitch command with the -Z argument brings the resource group online and enables every resource within the group, even if those resources were previously disabled using the -n argument.
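After you bring the group online, you can confirm that the resource group and all of its resources report an online state with the Sun Cluster status command (the exact output format varies by Sun Cluster version):

    # scstat -g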
Using the -z Argument
Use the following guidelines when using the -z (lowercase) argument:
- -z lets you choose the machine on which the resource group is brought online.
- -z starts only those resources currently enabled within the group, ignoring the resources that have been disabled using the -n argument.

How to Configure Multiple Instances of ehealth

If you have more than two nodes in your HA cluster, you can configure more than one instance of ehealth, creating a Distributed ehealth Cluster that is highly available. For example, if you have three nodes, you can run two instances of ehealth on two different primary nodes while one backup node sits in reserve, ready to take over active duties if either primary machine fails.

Note: When multiple instances of ehealth are running in the same HA cluster, you must confirm that no two instances of ehealth are active on the same node at the same time. ehealth depends on certain global environment settings; therefore, running multiple ehealth systems simultaneously on the same node will cause conflicts and crashes. Only one ehealth resource group should run at a time on a single machine.

You can control which resource groups (and therefore which instances of ehealth) run on a particular system. You do this by configuring a strong negative affinity between each resource group using the RG_affinities property. For example, if you have three ehealth resource groups (rgroup-1, rgroup-2, and rgroup-3), the correct affinities could be configured using the following commands:
   # scrgadm -c -g rgroup-1 -y RG_affinities=--rgroup-2,--rgroup-3
   # scrgadm -c -g rgroup-2 -y RG_affinities=--rgroup-1,--rgroup-3
   # scrgadm -c -g rgroup-3 -y RG_affinities=--rgroup-1,--rgroup-2

Following the above example:
- rgroup-2 and rgroup-3 can never be online at the same time on the same machine as rgroup-1.
- rgroup-1 and rgroup-3 can never be online at the same time on the same machine as rgroup-2.
- rgroup-1 and rgroup-2 can never be online at the same time on the same machine as rgroup-3.
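To review the affinities after setting them, you can print the properties of each resource group. This is a sketch; the verbose property listing differs between Sun Cluster releases:

    # scrgadm -pv -g rgroup-1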
High Availability Cluster Management Tasks

Use the following tasks to help manage ehealth in a Sun Cluster HA environment.

Install a Patch, Service Pack, or Upgrade
After ehealth is fully integrated into the HA cluster, you may have to run a patch or service pack installation program to update the ehealth and Oracle services. You must tell the clusterware that the system on which you want to install the patch will be shut down and monitoring stopped. If you do not, the clusterware will continue to monitor the system, misread system and application failures, and induce failover. You use clusterware commands to achieve this.

Note: An alternative to this procedure is to run the nhHaSetup -disable command to prevent resources from being monitored during upgrades and updates.

To install a patch or service pack
1. Enter the following commands, including the name of the service as appropriate:
   > su root
   # scswitch -n -j ehealth-license
   # scswitch -n -j ehealth
   # scswitch -n -j ehealth-rptctr
   # scswitch -n -j ehealth-servlet
   # scswitch -n -j ehealth-trap
   # scswitch -n -j ehealth-db
   # scswitch -n -j ehealth-db-listener
   All ehealth services are brought offline.
2. Start the ehealth server manually by entering the following command:
   nhServer start
3. Start all other ehealth services.
4. Run the patch or service pack installer as you normally would on a stand-alone system.
   Note: After the patch or service pack finishes installing, confirm that ehealth and all applications are working properly before restarting the HA service.
5. Shut down any services started by the patch or service pack installation program.
6. Start the HA service again. See Step 10: Start the High Availability Service.
   The patch or service pack is installed.

Upgrade the Sun Cluster High Availability Package
If you have already configured ehealth for HA with Sun Cluster software, this release of ehealth upgrades CAehealthHA, the Sun Cluster Solaris package that allows ehealth to be integrated with and managed by the Sun clusterware. The new version of this package includes new command attributes, and clusterware and ehealth resources.

Note: You must upgrade to ehealth r6.1 before you can upgrade CAehealthHA.

To upgrade the CAehealthHA cluster package
1. Enter the following commands to disable resource monitoring, including the name of the service as appropriate. For example:
   > su root
   # scswitch -M -n -j ehealth-license
   # scswitch -M -n -j ehealth
   # scswitch -M -n -j ehealth-web
   # scswitch -M -n -j ehealth-servlet
   # scswitch -M -n -j ehealth-trap
2. On the machine on which the resources are currently online, enter the following commands to access and upgrade the Solaris package:
   > su root
   # cd $NH_HOME/sys
   # pkgadd -d CAehealthHA.pkg
3. Re-enable monitoring for each resource that you disabled by entering the following commands, including the name of the service as appropriate:
   > su root
   # scswitch -M -e -j ehealth-license
   # scswitch -M -e -j ehealth
   # scswitch -M -e -j ehealth-web
   # scswitch -M -e -j ehealth-servlet
   # scswitch -M -e -j ehealth-trap

Note: If you have multiple instances of ehealth installed in your HA environment, upgrade the Solaris package on each system.
Load the Database after High Availability Configuration
After ehealth is fully integrated into the HA cluster, you may need to load a saved ehealth database. Before you do this, the clusterware on the system on which you want to load the database must be shut down and monitoring stopped. If you do not do this, the clusterware will continue to monitor the system, misread system and application failures, and induce failover. You use clusterware commands to achieve this.

To load the database
1. Shut down ehealth. As root, enter the following commands, including the name of the service as appropriate. For example:
   scswitch -n -j ehealth-license
   scswitch -n -j ehealth
   scswitch -n -j ehealth-rptctr
   scswitch -n -j ehealth-servlet
   scswitch -n -j ehealth-trap
   All ehealth services are brought offline.
2. Load the saved database.
   Note: For more information about the ehealth database and reasons for using the data restore feature, see the ehealth Administration Guide. When you start or stop servers in any ehealth database procedure on machines in an HA cluster environment, always shut down the clusterware and stop the monitoring first.
3. Start the ehealth server manually by entering the following command:
   nhServer start
4. Start all other ehealth services.
5. Start the HA service again. See Step 10: Start the High Availability Service.
   The database is loaded and the HA service is started.

Initiate a Planned Failover
When you initiate a planned failover, you manually migrate ehealth resources from one system to another.
To manually fail over
Enter the following command from the CLI of the target system:
   scswitch -z -g rgroup-1 -h hosttomigrateto

   hosttomigrateto
   Represents the system to which you want to migrate the ehealth resources.

How to Enable or Disable SSL for Sun Cluster
When you want to enable or disable SSL after ehealth and HA are already implemented and you decide to change HTTP/S modes, use the following process:
1. Disable all the resources for Sun Cluster by entering the following command for each resource:
   scswitch -n -j resource_name
2. Run nhWebProtocol.
3. Delete and recreate your web resources (see the sketch following this procedure).
4. Re-enable all the resources.
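The delete-and-recreate cycle for the web resource might look like the following sketch. It assumes the ehealth-web resource created in Step 8 and uses 443 as the new SSL port; substitute the port value from your servers.properties file:

   # scswitch -n -j ehealth-web
   # scrgadm -r -j ehealth-web
   (run nhWebProtocol, then recreate the resource with the new port)
   # scrgadm -a -j ehealth-web -g rgroup-1 \
     -t SUNW.apache -y Network_resources_used=ehealth-host \
     -y Scalable=False -y Port_list=443/tcp \
     -x Bin_dir=$NH_HOME/web/httpd/bin \
     -y resource_dependencies=ehealth-storage
   # scswitch -e -j ehealth-web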
Chapter 3: Veritas High Availability Configuration

This section contains the following topics:
- Critical Resource Functionality
- How to Configure Veritas with High Availability
- Multiple Instances of ehealth in the HA Cluster
- High Availability Cluster Management Tasks

Critical Resource Functionality

In a Veritas HA cluster, resources have two default classifications:
- Critical resources
- Non-critical resources

Critical Resources
A critical resource causes a failover when it experiences a failure. Failures can cause outages in ehealth accessibility, polling, and other functionality. The following resources are considered critical:
- ehealth
- Oracle
- Oracle Listener
- IP address
- Mount
- Disk Group (and Volume Manager, if applicable)
- License Manager
Non-critical Resources
A resource that is not designated as critical but belongs to a service group comes online or goes offline whenever the service group is taken online or offline. Though these resources may be necessary to provide the service, they do not fail over if they experience a fault. Instead, the resource appears as having experienced a fault and will need maintenance. The following resources are considered non-critical:
- TrapEXPLODER
- Report Center (if installed)
- Apache
- Servlet (Tomcat)
- WANSync (if installed)

Note: The critical and non-critical resource functionality can be customized in the VCS Java Console. You can also use the CLI to specify a resource as critical by entering the following command:
   hares -modify resource Critical 1

How to Configure Veritas with High Availability

The following list outlines the process you must follow to configure ehealth for HA with VCS:
1. Install and configure VCS software.
2. Install ehealth.
3. Configure ehealth to run in the HA cluster.
4. Create a service group.
5. (Optional) Install CA XOsoft Replication software.
6. Create and install ehealth resource types.
7. Start the HA service.
Step 1: Install and Configure VCS Software
Before ehealth can be configured for HA with Veritas, VCS software version 5.0 must be installed and configured, creating an HA cluster by which ehealth can be managed. When planning your HA cluster with Veritas, follow these guidelines:
- Configure the primary system to use the shared (virtual) hostname when prompted. The system maps the IP addresses to hostnames.
- Configure IP Multipath Networking in the HA cluster so that shared hostnames can be used.
- Use the procedures in the Veritas Cluster Server Installation Guide 5.0 for Solaris to do the following:
  - Install the VCS software on all systems that will make up the HA cluster.
  - Configure the HA cluster and set up disk sets for locally mounted, highly available storage.
- Confirm that you have the following installed on each system:
  - Veritas packages VRTScscw and VRTScsocw
  - Oracle agent and Oracle Listener agent

During installation, a ClusterService group is created, along with all the resources needed to support the HA Cluster Manager Console and HA Cluster Manager Java User Interface. You will be prompted to supply a shared IP address. This address is used to access the ClusterService group only.

Note: Supply a unique shared IP address for each service group created. This address will be used whenever you access client applications in the ehealth cluster, such as OneClick for ehealth and Live Health.
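After the VCS installation completes, you can verify that the cluster, its systems, and the ClusterService group are running with the standard VCS status command (output varies by VCS release):

    # hastatus -sum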
Step 2: Install ehealth
Install ehealth r6.1 following the procedures in the ehealth Installation Guide and these guidelines:
- Mount the shared storage disk manually and install ehealth and Oracle on the device. Use the following command to mount the storage disk:
   mount devicetomount mount_point

   devicetomount
   Represents the shared storage disk device.

   mount_point
   Represents the mount point on the system on which you are installing ehealth.

  For example, if you mount the device on /shared, ehealth may be installed under /shared/ehealth61, Oracle under /shared/oracle, and the Oracle database under /shared/oracledata.
- During installation, the Spool and Report directories are placed on the shared storage. You can move these to a separate, dedicated 73-GB disk.
- If you are configuring ehealth for both HA and DR, to save time you can install the WANSync software concurrently with ehealth. Do not create the replication scenarios until after ehealth has been configured for HA. For complete steps, see Configuring ehealth for Disaster Recovery in a High Availability Cluster (Solaris Only).
- Before you install the ehealth license file, license.dat, configure the file to include all host IDs, including all possible backup systems, by doing the following:
  a. Send all host IDs to the product licensing team. You will receive back one license file.
  b. Copy and paste the license keys into the license.dat file located in the mountpoint/lmgr directory. It will reside on the disk to be accessed by all the ehealth systems in the HA cluster.

Note: If you are installing multiple instances of ehealth in the HA cluster, see Multiple Instances of ehealth in the HA Cluster for guidelines.

Step 3: Configure ehealth to Run in the High Availability Cluster
After you install ehealth, you must run the nhHaSetup command to configure ehealth to be started and managed by VCS software.
To configure ehealth to run in the HA cluster
1. (Optional) If your system is already running as part of a Distributed ehealth Cluster, you must complete this step before you run nhHaSetup. From the primary system, run the following commands:
   nhServer start
   nhModifyClusterMember -cluster -name physicalhost -newname sharedhost

   physicalhost
   Represents the name of the primary HA system prior to being HA enabled.

   sharedhost
   Represents the shared hostname.

   This prepares the Distributed ehealth Cluster to allow the ehealth cluster members to communicate with the primary system using the shared hostname.

   Note: If your system is already running as part of a Distributed ehealth Cluster and you have run nhHaSetup, you must enter the following command to establish communication between all ehealth cluster members, from a system in the Distributed ehealth Cluster on which nhHaSetup was not run:
   nhModifyClusterMember -all -name physicalhost -newname sharedhost
   You will receive an error from the Distributed ehealth Cluster member configured for HA because the other systems cannot communicate with it until the command finishes running.

2. Enable and verify the shared hostname:
   a. As root, in the command line interface, enter the following (where XXXX:n is a logical network interface, for example hme0:1):
      ifconfig XXXX:n plumb
      ifconfig XXXX:n ipaddress up
   b. Verify the shared hostname is properly set by entering the following:
      ifconfig -a
      The names of the system interfaces should appear in the output.

3. Enter the following command to configure ehealth to support shared hostnames:
   > $NH_HOME/bin/nhHaSetup -hostname sharedhost

   sharedhost
   Specifies the name of the shared host where ehealth is running.
4. Configure TrapEXPLODER to run in the HA cluster by moving /opt/trapx to a shared file system. Enter the following commands:

     > su root
     # mv /opt/trapx /mountpoint

   mountpoint
       Represents a shared file system mountpoint.

5. Enter the following commands to remove the startup files, preventing ehealth from starting automatically at boot time:

     > su root
     # rm /etc/rc*.d/[SK]*httpd
     # rm /etc/rc*.d/[SK]*nethealth
     # rm /etc/rc*.d/[SK]*nhreportcenter
     # rm /etc/rc*.d/[SK]*trapexploder
     # rm /etc/rc*.d/[SK]*wansync   (if installed)
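For example, assuming an interface named hme0, a virtual interface number of 1, a shared IP address of 10.1.1.50, and a shared hostname of ehshared (all illustrative values), Steps 2 and 3 of this procedure would look like the following:

     # as root: plumb the virtual interface and bring up the shared IP
     ifconfig hme0:1 plumb
     ifconfig hme0:1 10.1.1.50 up
     # verify that hme0:1 appears in the interface list
     ifconfig -a
     # as the ehealth user: configure ehealth for the shared hostname
     $NH_HOME/bin/nhHaSetup -hostname ehshared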
Step 4: Create a Service Group

Veritas provides basic monitoring for the Oracle resources. Complete this step on each node in the HA cluster to create a service group that contains the following storage-related resources:

- Oracle
- Oracle Listener
- IP Address
- Network interface card (NIC)
- Disk

Depending on the type of HA storage devices in your environment, additional storage resources may be created, for example, DiskGroup and Volume Manager.

To create and configure the resource group

1. Open a CLI and start the ehealth database by entering the following command:

     $NH_HOME/bin/nhStartDb

2. If you have not already done so, install the VCS Enterprise Oracle Agent on each node in the HA cluster by entering the following commands:

     > su root
     pkgadd -d VRTSvcsor

3. Enter the following command to start the VCS Oracle Configuration Wizard, which you will use to configure Oracle:

     > hawizard oracle

   Note: If the VCS Oracle Configuration Wizard does not run, confirm that the VRTScscw and VRTScsocw packages are installed on each system. Also confirm that the 'had' daemon has started and that the Oracle server and Listener are running.

4. Create a service group in the VCS Oracle Configuration Wizard. This group will contain all the resources that are required to fail over ehealth. Enter a unique group name, for example, ehealth-sg-1. If you are configuring multiple instances of ehealth to run in the HA cluster, each must reside in a separate service group with a unique name. Also, each resource type you create must be given a unique name.

   Note: You can add these or additional resources to the group at a later time.

   The wizard discovers the running Oracle instance and Oracle Listener.

5. In the Instance Selection menu, select the Oracle instance and Oracle Listener as items to be configured.

6. Click Next.

7. Accept the defaults for the remaining menus, then close the wizard.

   The service group with Oracle, Oracle Listener, IP, NIC, and Mount resources is created and online.

   Note: To enable detail monitoring, follow the instructions in the Veritas High Availability Agent for Oracle Installation and Configuration Guide.
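After the wizard closes, you can confirm the state of the new group and its resources from the command line before moving on; for example, with the group name used above:

     # should report the group ONLINE on one node
     hagrp -state ehealth-sg-1
     # overall view of cluster, group, and resource status
     hastatus -summary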
Step 5: (Optional) Install CA XOsoft Replication Software

Complete this step only if you want your HA cluster to be configured for DR. For the complete procedure, see Configuring ehealth for Disaster Recovery in a High Availability Cluster (Solaris Only).

Step 6: Create and Install ehealth Resource Types

The ehealth resource types must be created and added to the resource group so that the clusterware monitors ehealth on every node in the HA cluster and so that, when failover occurs, all ehealth processes, not just individual applications, start on the backup system. Adding resource types to a group also establishes a dependency relationship between the associated ehealth processes that helps determine how they are managed by the clusterware.

To install the resource types

1. As root, specify the time zone environment variable (TZ) in your nethealthrc file. Examples:

     Bourne: TZ="US/Eastern"; export TZ
     C:      setenv TZ "US/Eastern"
     Korn:   TZ="US/Eastern"; export TZ

   Note: Do not omit this step or errors will occur.

2. As the ehealth user, stop the ehealth processes by entering the following commands:

     $NH_HOME/bin/nhServer stop
     $NH_HOME/bin/nhHttpd stop
     $NH_HOME/bin/nhLmgr stop

   If Report Center is installed, run:

     $NH_HOME/bin/nhReportCenter stop

   If WANSync is installed, run:

     /etc/init.d/wansync stop

   The ehealth processes are stopped.

3. As root, enter the following command to stop the TrapEXPLODER process:

     /etc/init.d/trapexploder stop

   TrapEXPLODER is stopped.

4. Confirm that the service group created in Step 4: Create a Service Group (see page 42) appears in the VCS Java Console.

5. On the machine on which the storage service group is currently online, enter the following commands to install the Veritas Solaris package:

     > su root
     # cd $NH_HOME/sys
     # pkgadd -d CAehealthVcsHA.pkg

   The resources are created.

   Note: A Report Center resource is created by default. If you are not installing Report Center, you can remove or disable this resource.
6. Change to the next machine on which you want to install the resources by entering the following commands (to help speed this process, take the Oracle and Oracle Listener resources offline):

     > su root
     # hagrp -switch ehealth-sg-1 -to system_name

   ehealth-sg-1
       Represents the unique name given to the service group.
   system_name
       Represents the name of the machine on which you want to install the resource types.

7. Log on to the next machine and repeat Steps 5 and 6. Continue until the resource types have been installed on every node in the HA cluster.

8. Change back to the system on which you first installed the ehealth resources.

9. If you have not done so, source the ehealth nethealthrc file as root for your shell environment using one of the following commands:

     Bourne: . ./nethealthrc.sh
     C:      source nethealthrc.csh
     Korn:   . ./nethealthrc.ksh

10. Open and make writable the VCS configuration file (main.cf) by entering the following command:

     # haconf -makerw

11. Verify that /opt/VRTSvcs/bin is in the PATH environment variable, then launch the nhaddvcsresources script to add the ehealth resources by entering the following commands:

     # cd /opt/CAehealthVcsHA/bin
     # ./nhaddvcsresources

    Note: To make WANSync highly available, execute the nhdrconfigscenario script before the nhaddvcsresources script. If you already ran nhaddvcsresources, you can run it again to configure the WANSync resource.

    The following occurs:

    - You are prompted to enter the name of the storage service group and some of the resources you have created, for example, the Oracle database resource, the IP resource, and the Mount resource. The script validates the entries.
    - As each resource is added, output appears showing the status of the installation.
    - The script terminates if it encounters a problem running the VCS commands. A message alerts you to the reasons for the termination. After you resolve the issues, re-run the script.
    - For the question about the trap exploder, you must enter the full path of the trap exploder binary file: /shared_mountpoint/trapx/bin.

12. Save the configuration by entering the following command:

     # haconf -dump -makero

    Note: If VCS restarts and you have not saved the configuration, repeat Step 11.

Step 7: Start the High Availability Service

To enable the HA cluster and bring it online, start the fully configured HA service by enabling the service group and all resources in the group.

To start the HA service

Enter the following commands on any system in the HA cluster to bring the service group online:

     > su root
     # hagrp -online ehealth-sg-1 -any

This brings up the service group on the system that has priority in the SystemList attribute.

Note: To bring a service group online on a particular HA cluster node, run the following command:

     # hagrp -online ehealth-sg-1 -sys system_name

Multiple Instances of ehealth in the HA Cluster

If you have more than two nodes in your HA cluster, you can configure more than one instance of ehealth, creating a Distributed ehealth Cluster that is highly available. For example, if you have three nodes, you can run two instances of ehealth on two different primary nodes while one backup node sits in reserve, ready to take over active duties if either primary machine fails.

Pre-Online Trigger - Only one ehealth service group should run at a time on a single machine. Veritas uses a pre-online trigger to control which resource groups (and therefore, which instances of ehealth) can run simultaneously on a single system. The trigger is a script called by the Veritas engine before it attempts to bring a service group online. The script checks whether any ehealth processes are already running. If so, it prevents a second service group from coming online.

Note: ehealth depends on certain global environment settings; therefore, running multiple ehealth systems simultaneously on the same node causes conflicts and crashes.
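The trigger itself is supplied and installed for you; conceptually, a VCS pre-online trigger is just a script along the following lines (a minimal sketch with an illustrative process check, not the actual CA-supplied script):

     #!/bin/sh
     # Minimal pre-online trigger sketch. VCS passes the system and
     # group names as arguments before bringing a group online.
     SYSTEM=$1
     GROUP=$2
     # If an ehealth server process is already running on this node
     # (the process pattern shown is illustrative), exit without
     # onlining: the group stays offline here.
     if pgrep -f nhServer >/dev/null 2>&1; then
         echo "preonline: ehealth already running on $SYSTEM; not onlining $GROUP"
         exit 0
     fi
     # Otherwise tell the engine to proceed with the online,
     # skipping the pre-online check this time.
     /opt/VRTSvcs/bin/hagrp -online "$GROUP" -sys "$SYSTEM" -nopre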
How to Add Multiple Instances of ehealth

For each additional instance of ehealth in an HA cluster, you create a service group, then create and add resources to it. The following process outlines the steps needed to help ensure your environment is correctly configured for ehealth on two or more HA nodes:

1. Confirm that the first service group has been created and all resources are created and installed.

2. On the second node, install ehealth and Oracle. See Step 2: Install ehealth (see page 40).

   Note: The VCS Oracle Configuration Wizard uses the Oracle SID and Listener name to generate the resource names for Oracle and Oracle Listener. Therefore, when installing an additional ehealth instance you must specify a SID that is different from that of the first ehealth instance.

3. Change the Oracle Listener name (see the procedure below).

4. Configure ehealth to run in an HA cluster. See Step 3: Configure ehealth to Run in the High Availability Cluster (see page 40).

5. If necessary, set the DISPLAY environment variable to a running X server by entering the following command:

     setenv DISPLAY hostname:0.0

6. Use the VCS Oracle Configuration Wizard to create the second service group and create and add the Oracle and storage-related resources. See Step 4: Create a Service Group (see page 42).

7. Use the CLI to create and install the ehealth resources and start the HA service. See Step 6: Create and Install ehealth Resource Types (see page 43) and Step 7: Start the High Availability Service (see page 46).
Use the following guidelines:

- All service groups created with the Veritas software must have unique names.
- All resources must be uniquely named as well, whether they are in the same service group or in different groups. Resource names are based on the name of the group they are in, so as long as the service group names are unique, the resource names will be unique.
- If you get a Disk Resource Type error when using the wizard, you can ignore the message.

Change the Oracle Listener Name

After ehealth and Oracle are installed, you must alter the listener name; you cannot have two instances of the Oracle Listener with the same name. You can do this anytime after Oracle is installed, but before the group is brought online and the HA service is started.

To change the Oracle Listener name

1. Start the Oracle database, if it is not running, by entering the following command:

     nhstartdb

2. Stop the current Oracle Listener process by entering the following command:

     nhconfigdbnet -stoplistener

3. Alter the Oracle Listener name by entering the following command:

     nhconfigdbnet -addlistener -listenername listenername

   listenername
       Represents the new name for the Oracle Listener.

4. Set NH_ORA_LISTENER_NAME to the new listener name in the following files:

     nethealthrc.sh.usr
     nethealthrc.csh.usr
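For example, to rename the listener for a second instance to LISTENER_EH2 (an illustrative name), the sequence would be:

     nhstartdb
     nhconfigdbnet -stoplistener
     nhconfigdbnet -addlistener -listenername LISTENER_EH2
     # then set NH_ORA_LISTENER_NAME to LISTENER_EH2 in
     # nethealthrc.sh.usr and nethealthrc.csh.usr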
High Availability Cluster Management Tasks

Use the following tasks to help manage ehealth in an HA cluster. For additional information about Veritas HA cluster commands, see Clusterware Commands Used to Manage ehealth and the VCS software documentation.

Install a Patch or Service Pack

After ehealth is fully integrated into the HA cluster, you may have to run a patch or service pack installation program to update the ehealth and Oracle services. To do this, you tell the clusterware that the system on which you want to install the patch is going to be shut down and that monitoring is stopped. If you do not, the clusterware will continue to monitor the system, misread system and application failures, and induce failover.

To install a patch or service pack

1. Shut down ehealth by entering the following commands:

     > su root
     hares -offline ehealth-license -sys system_name
     hares -offline ehealth -sys system_name
     hares -offline ehealth-rptctr -sys system_name
     hares -offline ehealth-servlet -sys system_name
     hares -offline ehealth-trap -sys system_name
     hares -offline ehealth-wansync -sys system_name
     hares -offline ehealth-web -sys system_name
     hares -offline ehealth-lmgr -sys system_name
     hares -offline ehealth-db -sys system_name
     hares -offline ehealth-db-listener -sys system_name

   system_name
       Represents the name of the system on which the resource is currently running.

   All ehealth services are brought offline.

2. Stop the Oracle resources by entering the following command:

     nhstopdb

3. Start the ehealth server manually by entering the following command:

     nhserver start

4. Start all other ehealth services.

5. Run the patch or service pack installer as you normally would on a stand-alone system.

   Note: After the patch or service pack finishes installing, confirm that ehealth and all applications are working properly before restarting the HA service.

6. Shut down any services started by the patch or service pack installation program.

7. Start the HA service again. See Step 7: Start the High Availability Service (see page 46).

   The patch or service pack is installed.
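The same set of hares -offline commands appears again in the next task; as a convenience, a small root shell loop such as the following sketch (using the resource names and the system_name placeholder from Step 1) achieves the same effect:

     # as root: take each ehealth resource offline in turn
     for r in ehealth-license ehealth ehealth-rptctr ehealth-servlet \
              ehealth-trap ehealth-wansync ehealth-web ehealth-lmgr \
              ehealth-db ehealth-db-listener
     do
         hares -offline $r -sys system_name
     done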
Load the Database after High Availability Configuration

After ehealth is fully integrated into the HA cluster, you may need to load a saved ehealth database. Before you load the database, the clusterware on the system on which you want to load the database must be shut down and the monitoring stopped. If you do not do this, the clusterware will continue to monitor the system, misread system and application failures, and induce failover. You use clusterware commands to achieve this.

To load the database

1. Shut down ehealth by entering the following commands:

     > su root
     hares -offline ehealth-license -sys system_name
     hares -offline ehealth -sys system_name
     hares -offline ehealth-rptctr -sys system_name
     hares -offline ehealth-servlet -sys system_name
     hares -offline ehealth-trap -sys system_name
     hares -offline ehealth-wansync -sys system_name
     hares -offline ehealth-web -sys system_name
     hares -offline ehealth-lmgr -sys system_name

   system_name
       Represents the name of the system on which the resource is currently running.

   All ehealth services are brought offline.

2. Load the saved database.

   Note: For more information about the ehealth database and reasons for using the data restore feature, see the ehealth Administration Guide. When you start or stop servers in any ehealth database procedure on machines in an HA cluster environment, always shut down the clusterware and stop the monitoring.

3. Start the ehealth server manually by entering the following command:

     nhserver start

4. Start all other ehealth services.

5. Start the HA service again. See Step 7: Start the High Availability Service (see page 46).

   The database is loaded and the HA service is started.
Initiate a Planned Failover

You use the same command whenever you initiate a planned failover or a manual migration, as they amount to the same action in the HA cluster: migrating the ehealth resources from one system to another.

To manually fail over

Enter the following command from the CLI of the target system:

     hagrp -switch ehealth-sg-1 -to hosttomigrateto

hosttomigrateto
    Represents the system to which you want to migrate the ehealth resources.

How to Enable or Disable SSL for Veritas

When you want to enable or disable SSL after ehealth and HA are already implemented and you decide to change HTTP/S modes, use the following process:

1. Run the following command:

     nhaddvcsresources -replace

2. If you are enabling SSL, bring the HA system online first.

3. Enable or disable SSL by running the following command (use -mode http to disable SSL):

     nhwebprotocol -mode https -hostname sharedhostname

4. Re-run /opt/CAehealthVcsHA/bin/nhaddvcsresources.
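Put together, enabling SSL might look like the following sequence, assuming the illustrative group name ehealth-sg-1 and shared hostname ehshared used elsewhere in this chapter:

     /opt/CAehealthVcsHA/bin/nhaddvcsresources -replace
     # bring the HA service up before enabling SSL
     hagrp -online ehealth-sg-1 -any
     nhwebprotocol -mode https -hostname ehshared
     /opt/CAehealthVcsHA/bin/nhaddvcsresources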
Chapter 4: Disaster Recovery

This section contains the following topics:

How ehealth Works with Disaster Recovery (see page 53)
System Guidelines (see page 54)
How CA XOsoft Replication Software Works (see page 55)
Failover in a Disaster Recovery Environment (see page 56)
Downtime and Data Loss (see page 56)
Configure ehealth for Disaster Recovery (see page 56)
Start the Failover Process (see page 67)
How to Upgrade Your Disaster Recovery Environment (see page 71)
Disaster Recovery Management Tasks (see page 72)

How ehealth Works with Disaster Recovery

You can configure your ehealth environment to integrate with the DR replication software, CA XOsoft Replication. This chapter assumes that you are installing ehealth for the first time. If you have already installed ehealth, you must upgrade to ehealth r6.1 and then configure your system for DR.

The replication software copies all ehealth, Oracle, and ehealth database files over the network from the active ehealth system to a standby system, often in another physical location. When you update a file or directory on the active system, the changes are automatically replicated to the standby system. When there is a critical failure (data file corruption) or a disaster (hurricane, earthquake) on the active system, you manually fail over to the standby system, which then becomes the active system. Your ehealth systems can be stand-alone or configured in a Distributed ehealth Cluster.

Note: For information about how to install and configure the CA XOsoft Replication software, see the WANSync User Guide, the Disaster Recovery and High Availability with WANSync Architectural Overview guide, and the WANSync Solaris Server Operations Guide, available from the CA XOsoft Replication product website. You will need these guides to install the software and help create replication scenarios.
System Guidelines

- For replication, an ehealth system that is polling and regularizing data for 50K elements while also running reports requires network bandwidth of 16 Mbits/sec. While there is no specific bandwidth requirement for the initial synchronization, the time it takes to complete depends on the volume of data to be transmitted and the available bandwidth.
- As a best practice, when you configure your ehealth environment for DR, keep the spool and report directories on a separate, dedicated disk.
- (Solaris only) Confirm that your /opt directory is at least 1 GB in size. The WANSync software stores log files in this directory.
- Confirm that the user ID (UID) and group identifier (GID) are the same on the primary and standby systems (see the quick check after the terminology list below).
- Synchronize the time on your primary and standby systems using time synchronization software such as NTP or XNTP. Time synchronization software should be installed with your operating system.
- You can install WANSync on Windows 2003 servers and Solaris 2.9 and 2.10 systems as follows:

  Windows: Install ehealth on both the active system and the standby system.
  Solaris: Install ehealth on only the active system.

  Note: DR scenarios cannot be configured from an active Windows system to a standby UNIX system, or conversely from an active UNIX system to a standby Windows system.

- Add the shared hostname and IP address to the Windows\system32\drivers\etc\hosts file to avoid error messages about no web server connection when using the Business Service Console (BSC) with DR enabled.

System Terminology

The following describes common terms for DR systems:

Primary
    The machine normally used as the active ehealth machine. Also known as the master machine.
Standby
    An idle machine that takes over for the primary machine in the event of a disaster. Also known as the replica machine.
Active
    The machine currently running ehealth. May be the primary machine (normal case) or the standby machine (in the event of a disaster).
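To check the UID and GID guideline, you can run the standard id command for the ehealth user on each system and compare the output (the user name is illustrative):

     # run on both the primary and standby systems
     id ehuser
     # the uid= and gid= values reported on the two systems
     # must be identical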
How CA XOsoft Replication Software Works

The CA XOsoft Replication software continuously replicates ehealth files from the primary active system to a remote standby system. In the event of a disaster, the standby system is manually started to become active, with a current configuration and a complete database.

The following WANSync components must be installed in your ehealth environment:

XOsoft Manager
    An interface you use to create replication scenarios and to monitor and manage the individual servers in the DR configuration. Install this software on a Windows system separate from your DR configuration but with a TCP/IP connection to the primary and standby systems.
XOsoft Engine
    Software used to help support the administration of the DR environment. Install this software on the primary and standby DR systems.

Note: For information about where to find the latest version of the XOsoft software, see the Release Notes.

Scenarios

Scenarios are definitions you create specifying which data to replicate. Scenarios contain information about which files and directories are to be copied, and where and when replication occurs. You create scenarios to synchronize and replicate the data that you want to copy from a primary system to the standby system. To create scenarios, use the WANSync Manager software after the WANSync Engine and ehealth have been installed and configured.

Replication

Replication is the copying of data from the primary (active) system to the standby system and is the basis of the ehealth DR service. Replication is continuous and helps ensure the standby system remains current. Any changes made on the primary system are automatically propagated to the standby system.
Failover in a Disaster Recovery Environment

Failover in the DR environment is a manual process that involves starting ehealth on the standby system when the primary system goes down. After failover, the standby system becomes the active system, running the ehealth services and collecting data. When the problems on the primary system have been resolved, another DR scenario can be created and replication can begin in the opposite direction: from the standby (active) system to the primary system.

Downtime and Data Loss

Downtime and data loss in the event of a disaster cannot be avoided. When a disaster brings the primary system offline, the extent of downtime and data loss you experience depends on how quickly you start the ehealth server, the Oracle database, and all ehealth applications on the standby system. After the standby system becomes active, ehealth resumes normal service, polling devices and writing data to the database.

Configure ehealth for Disaster Recovery

The procedures in this section must be followed precisely to successfully configure ehealth for DR on Windows and Solaris systems.

Before You Configure ehealth for DR

Before you install the WANSync software and ehealth, you must secure a shared (or virtual) IP address and a shared hostname, which you will use during the configuration process. After you complete the configuration, use the shared hostname and IP address whenever you access ehealth applications such as OneClick for ehealth and Report Center.

If the primary system goes down, you must enable the shared IP address on the standby system. This lets you continue to access ehealth applications. During failover, both the shared hostname and IP address may need to be changed, depending on your system configurations.

Note: In most DR configurations you will use the same shared hostname on both the primary and standby systems. Because of the typical physical distance between the systems, they will usually have different IP addresses. The configurations described in these procedures assume this scenario.
Windows Disaster Recovery Configuration

This procedure includes the tasks to configure ehealth for DR on Windows systems.

To configure disaster recovery on Windows

1. Install the WANSync Manager and WANSync Engine software. Use the following guidelines:

   - As a best practice, install the WANSync Manager software on a machine that will not have an ehealth server installed: the WANSync Manager software should be separate from ehealth so that in the event of a disaster you can access the WANSync user interface.
   - Install the WANSync Engine software on the primary and standby systems.
   - Do not create any DR scenarios until after ehealth is installed.

2. Add the shared IP address and subnet mask to the IP settings to configure the primary system to use the shared IP address:

   a. On the primary system, open Control Panel. Double-click Network Connections, Local Area Connection Properties.
   b. Select Internet Protocol (TCP/IP), Properties. The Internet Protocol (TCP/IP) Properties dialog opens.
   c. Click Advanced. The Advanced TCP/IP Settings dialog opens.
   d. Click the IP Settings tab. Click Add and enter the shared IP address and subnet mask. Click Add, then OK, and close the remaining dialogs.

   The primary system is configured to use the shared IP address.

3. Install ehealth r6.1. Use the following guidelines:

   - Install ehealth on the primary and standby machines. It is important to install ehealth on both machines so that the necessary registries, services, and supported software are configured correctly.
   - During installation, select the same options on both machines. For example, the ehealth, Oracle, and Oracle database directories must be located in the same directory on the standby system as on the primary system. The ehealth Administrator must be the same on both machines.
   - Before you install the ehealth license file, license.dat, configure the file to include all host IDs, including all possible standby systems, by doing the following:

     a. Send all host IDs to the ehealth product licensing team. You will receive back one license file.
     b. Copy and paste the license keys into the license.dat file in the NH_HOME/lmgr directory.

     Note: The Enter License interface is no longer used to install ehealth licenses on your ehealth system.

   - Modify the standby machine configuration:

     a. On the standby system, click Start, Control Panel; double-click Administrative Tools, Services. The Services window opens.
     b. Right-click the following services to stop them, then double-click each service and set the Startup type to Manual:

        ehealth
        ehealth httpd
        ehealth tomcat
        ehealth report center
        The following Oracle services: OracleEHORA10TNSListener and OracleServiceehealth
        Microsoft Distributed Transaction Coordinator
        FLEXlm license server
        TrapEXPLODER

   Note: For more information about installing and licensing ehealth, see the Installation Guide.
4. Configure ehealth to use a shared hostname:

   a. (Optional) If your system is already running as part of a Distributed ehealth Cluster, you must complete this step before you run nhdrsetup. From the primary system, run the following command:

        nhmodifyclustermember -cluster -name physicalhost -newname sharedhost

      physicalhost
          Represents the name of the primary system prior to being DR enabled.
      sharedhost
          Represents the shared hostname.

      This prepares the Distributed ehealth Cluster to allow the ehealth cluster members to communicate with the primary system using the shared hostname.

      Note: If your system is already running as part of a Distributed ehealth Cluster and you have already run nhdrsetup, you must enter the following command from a system in the Distributed ehealth Cluster on which nhdrsetup was not run, to establish communication between all ehealth cluster members:

        nhmodifyclustermember -all -name physicalhost -newname sharedhost

      You will receive an error from the Distributed ehealth Cluster member configured for DR because the other systems cannot communicate with it until the command finishes running. Ignore the message.

   b. On the primary system, start the ehealth database (if it is not already started), and stop the ehealth server and Report Center.

   c. Open a command prompt window and enter the following command:

        nhdrsetup -hostname sharedhostname

      sharedhostname
          Represents your shared hostname.

      Messages appear confirming when each ehealth interface has been changed to use the shared hostname.

   d. Start the ehealth server and all applications.

   e. (Optional) If your system will be part of a Distributed ehealth Cluster, run the nhjoinclustermember command and use the shared hostname.

5. Create DR scenarios using the WANSync Manager software. For ehealth to be properly configured for DR, you must create three scenarios:

   - A principal scenario that synchronizes and continuously replicates all of the ehealth data.
   - A secondary scenario that synchronizes the libraries and executables.
   - A third scenario that synchronizes only the TrapEXPLODER file.

   Use the following guidelines when creating scenarios:

   - To avoid confusion and help ensure a smooth failover process, directories must be replicated from the primary system to the same disk names and the same ehealth, ehealth database, and Oracle subdirectory names on the standby system.
   - In WANSync Manager, choose Create New Scenario and Next. On the Select Scenario Type screen, under the Tasks on Replica section, choose None.

   Create the Principal Scenario. To create the principal scenario, in WANSync Manager do the following:

   - Select the top-level ehealth directory. Expand this directory and deselect the following directories to exclude them from this scenario:

        $NH_HOME/bin
        $NH_HOME/jre
        $NH_HOME/lib
        $NH_HOME/lmgr/bin

   - Select the top-level Oracle software directory. Expand this directory and deselect the following directories to exclude them from this scenario:

        $ORACLE_HOME/bin
        $ORACLE_HOME/ctx
        $ORACLE_HOME/lib

   - Select the top-level Oracle database directory or directories: all the files in this directory (or directories) are included in this scenario and should be selected.

   - Select the following registry settings and all environment variables under the registry paths:

        /HKEY_LOCAL_MACHINE/SOFTWARE/Apache Software Foundation/Tomcat Service Manager
        /HKEY_LOCAL_MACHINE/SOFTWARE/Concord Communications/eHealth
        /HKEY_LOCAL_MACHINE/SOFTWARE/FirstSense
        /HKEY_LOCAL_MACHINE/SOFTWARE/ORACLE
        /HKEY_LOCAL_MACHINE/SYSTEM/ControlSet001/Control/Session Manager/Environment
        /HKEY_LOCAL_MACHINE/SYSTEM/ControlSet002/Control/Session Manager/Environment
        /HKEY_LOCAL_MACHINE/SYSTEM/CurrentControlSet/Control/Session Manager/Environment
   - In the Host Replication Tree, select the primary system. In the Property pane under Replication, select Automatic Synchronization Type and choose Block Synchronization.

   The principal scenario is created.

   Create the Secondary Scenario. To create the secondary scenario, in WANSync Manager do the following:

   - Select the top-level ehealth directory. Expand this directory and deselect all directories except the following:

        $NH_HOME/bin
        $NH_HOME/jre
        $NH_HOME/lib
        $NH_HOME/lmgr/bin

   - Select the top-level Oracle software directory, or directories if more than one, and deselect all directories except the following:

        $ORACLE_HOME/bin
        $ORACLE_HOME/ctx
        $ORACLE_HOME/lib

   - Set the Replication properties for the primary system by doing the following:

        Set the Replication Mode property value to Scheduling.
        Set the Attach to Open Files on Run property value to Off.

   The secondary scenario is created.

   Create the Third Scenario. To create the third scenario, in WANSync Manager do the following:

   - Select the C:\WINDOWS\system32\trapexploder.cf file.
   - Select the standby machine in the middle pane. In the right pane, under Properties, turn on the following options:

        Keep deleted files during synchronization
        Keep deleted files during replication

   All scenarios are created.

6. Use WANSync Manager to run the scenarios you created in Step 5, in the following order:

   a. Run the secondary scenario: select Tools, Synchronize, then select File Synchronization. This sets the secondary scenario to synchronize the directories in the scenario.
   b. Run the third scenario: select Tools, Synchronize, then select File Synchronization. This sets the third scenario to synchronize the file in the scenario.
   c. Run the principal scenario: select Tools, Run, then select Block Synchronization. This sets the principal scenario to synchronize and continually replicate its contents to the standby system.

   An initial synchronization and replication process begins. The duration of this process depends on the speed of the network between the primary and standby systems and the amount of data in your database.

Solaris Disaster Recovery Configuration

This procedure includes the tasks to configure ehealth for DR on Solaris systems.

To configure disaster recovery on Solaris

1. Install the WANSync Manager (on a separate Windows system) and WANSync Engine software. Use the following guidelines:

   - Install the WANSync Manager software on a machine that will not have an ehealth server installed: the WANSync Manager software should be separate from ehealth so that in the event of a disaster you can access the WANSync user interface and begin failover.
   - Install the WANSync Engine software on the primary and standby systems.
   - Do not create any DR scenarios until after ehealth is installed.

2. Configure the primary system to use the shared hostname by adding the shared hostname and IP address to the /etc/hosts file on the primary system:

   a. Log on to the primary system as the root user and open a command line interface.
   b. In the /etc/hosts file, add the shared IP address and hostname as the last entry in the file.

3. Create the following file:

     /etc/hostname.XXXX:n

   XXXX
       Represents the interface name.
   n
       Represents the number of the IP address (usually n=1, because counting starts at 0 and :0 is not shown).

4. Add the shared hostname as the file contents. To determine the interface name, use the following command:

     ls /etc/hostname*

   (For an illustrative example of the entries in Steps 2 through 4, see the sketch at the end of this procedure.)
5. Enable the shared hostname:

   a. As root, in the command line interface, enter the following:

        ifconfig XXXX:n plumb
        ifconfig XXXX:n ipaddress up

      XXXX
          Represents the interface name.
      n
          Represents the number of the IP address from Step 3.
      ipaddress
          Represents the shared IP address.

   b. Verify that the shared hostname is properly set up by entering the following:

        ifconfig -a

      The names of the system interfaces should appear in the output.

6. Install ehealth r6.1. Use the following guidelines:

   - Install ehealth on the primary system. You need only one instance of ehealth when configuring for DR on Solaris systems.
   - The mount points on both the primary and standby systems must be named the same.
   - Configure both the primary and standby systems to have the same kernel parameters.
   - Before you install the ehealth license file, license.dat, configure the file to include all host IDs, including all possible standby systems, by doing the following:

     a. Send all host IDs to the ehealth product licensing team. You will receive back one license file.
     b. Copy and paste the license keys into the license.dat file in the NH_HOME/lmgr directory.

     Note: The Enter License interface is no longer used to install ehealth licenses on your ehealth system. For more information about installing and licensing ehealth, see the ehealth Installation Guide.

7. Configure ehealth to use the shared hostname:

   a. (Optional) If your system is already running as part of a Distributed ehealth Cluster, you must complete this step before you run nhdrsetup. From the primary system, run the following command:

        nhmodifyclustermember -cluster -name physicalhost -newname sharedhost
      physicalhost
          Represents the name of the primary system prior to being DR enabled.
      sharedhost
          Represents the shared hostname.

      This prepares the Distributed ehealth Cluster to allow the ehealth cluster members to communicate with the primary system using the shared hostname.

      Note: If your system is already running as part of a Distributed ehealth Cluster and you have already run nhdrsetup, you must enter the following command from a system in the Distributed ehealth Cluster on which nhdrsetup was not run, to establish communication between all ehealth cluster members:

        nhmodifyclustermember -all -name physicalhost -newname sharedhost

      You will receive an error from the Distributed ehealth Cluster member configured for DR because the other systems cannot communicate with it until the command finishes running. Ignore the message.

   b. On the primary system, start the ehealth database (if it is not already started), and stop the ehealth server and Report Center.

   c. Enter the following command:

        nhdrsetup -hostname sharedhostname

      sharedhostname
          Represents the shared hostname.

      Messages appear confirming when each ehealth interface has been changed to use the shared hostname.

   d. Start the ehealth server and all applications.

8. Create DR scenarios using the WANSync Manager software. For ehealth to be properly configured for DR, you must create three scenarios:

   - A principal scenario that synchronizes and continuously replicates all of the ehealth data.
   - A secondary scenario that synchronizes the ehealth and Oracle libraries and executables.
   - A third scenario that synchronizes the /etc files only.
   Use the following guidelines when creating scenarios:

   - To avoid confusion and help ensure a smooth failover process, directories must be replicated from the primary system to the same disk names and the same ehealth, ehealth database, and Oracle subdirectory names on the standby system.
   - In WANSync Manager, choose Create New Scenario and Next. On the Select Scenario Type screen, under the Tasks on Replica section, choose None.

   Create the Principal Scenario. To create the principal scenario, in WANSync Manager do the following:

   - Select the top-level ehealth directory. Expand this directory and deselect the following directories to exclude them from this scenario:

        $NH_HOME/bin
        $NH_HOME/jre
        $NH_HOME/lib
        $NH_HOME/lmgr/bin

   - Select the top-level Oracle software directory. Expand this directory and deselect the following directories to exclude them from this scenario:

        $ORACLE_HOME/bin
        $ORACLE_HOME/ctx
        $ORACLE_HOME/lib

   - Select the top-level Oracle database directory or directories: all the files in this directory (or directories) are included in this scenario and should be selected.
   - In the Host Replication Tree, select the primary system. In the Property pane under Replication, select Automatic Synchronization Type and choose Block Synchronization.

   The principal scenario is created.

   Create the Secondary Scenario. To create the secondary scenario, in WANSync Manager do the following:

   - Select the top-level ehealth directory. Expand this directory and deselect all directories except the following:

        $NH_HOME/bin
        $NH_HOME/jre
        $NH_HOME/lib
        $NH_HOME/lmgr/bin

   - Select the contents of the following directory (all files should be selected):

        /opt/trapx
   - Select the top-level Oracle software directory, or directories if more than one, and deselect all directories except the following:

        $ORACLE_HOME/bin
        $ORACLE_HOME/ctx
        $ORACLE_HOME/lib

   - Set the Replication properties for the primary system by doing the following:

        Set Replication Mode to Scheduling.
        Set Attach to Open Files on Run to Off.

   The secondary scenario is created.

   Create the Third Scenario. To create the third scenario, in WANSync Manager do the following:

   - Expand the /etc/init.d directory. In the right pane, select the following files in the following order to add them to the scenario:

        httpd
        httpd.sh
        nethealth
        nethealth.sh
        nhreportcenter
        trapexploder

   - Expand the /etc directory. In the right pane, select the following files in the following order to add them to the scenario:

        /etc/nh.install.cfg
        /etc/trapexploder.cf

     These configuration files are used to automatically start ehealth and Oracle.

   - Edit the scenario by deleting the /etc/init.d subdirectory from the scenario list.
   - Select the standby machine in the middle pane. In the right pane, under Properties, turn on the following options:

        Keep deleted files during synchronization
        Keep deleted files during replication

   Note: If you rerun the nhdrsetup -disable or nhdrsetup -hostname commands for any reason, you must rerun this third scenario to synchronize the /etc files.
9. Use WANSync Manager to run the scenarios you created, in the following order:

   a. Run the secondary scenario: select Tools, Synchronize, then select File Synchronization. This sets the secondary scenario to synchronize the directories in the scenario.
   b. Run the third scenario: select Tools, Synchronize, then select File Synchronization. This sets the third scenario to synchronize the files in the scenario.
   c. Run the principal scenario: select Tools, Run, then select Block Synchronization. This sets the principal scenario to synchronize and continually replicate its contents to the standby system.

   An initial synchronization and replication process begins. The duration of this process depends on the speed of the network between the primary and standby systems and the amount of data in your database.
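To make the shared-hostname files concrete, here is what the entries from Steps 2 through 4 of this procedure might look like, using an illustrative interface hme0, shared IP address 10.1.1.50, and shared hostname ehshared:

     # last entry appended to /etc/hosts on the primary system:
     10.1.1.50   ehshared
     # /etc/hostname.hme0:1 contains only the shared hostname:
     ehshared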
Start the Failover Process

When your primary system goes offline because of a critical failure or disaster, perform the tasks in this section to manually bring the standby system online.

Note: If your primary system is part of an HA cluster, see Appendix A for the procedure to manually bring the standby system online.

As a best practice, test the manual failover procedure before a disaster occurs. This helps confirm the following:

- The ehealth servers and the database start without a problem.
- The replication scenarios are complete and all the necessary files are being copied. In WANSync Manager, you can turn on synchronization and replication scenario reports for the primary system. These reports list all replicated files.
- You can log on to the ehealth clients after failover.

To start failover on a Windows system

1. Stop the replication scenario in WANSync Manager, or confirm that it has been stopped on the primary and standby systems.

2. Stop the XOsoft Engine on the primary and standby systems as follows:

   a. Open Control Panel. In Administrative Tools, open the Services window.
   b. Right-click the XOsoft Engine service and stop it.
   c. Double-click the XOsoft Engine service and set the Startup type to Manual.
   d. Delete the config_25000 subdirectory from both the primary and standby systems, or the replication will start as soon as the system is restarted.

3. If the primary system is functional, stop the ehealth servers and applications, including FLEXlm, Oracle, Report Center, TrapEXPLODER, and the web server.

4. If the primary system is functional, do the following:

   a. Remove the shared IP address and subnet mask from the IP settings.
   b. In the system Services window, right-click the following services to stop them, then double-click each service and set the Startup type to Manual:

        ehealth
        ehealth httpd
        ehealth tomcat
        ehealth report center (if installed)
        The following Oracle services: OracleEHORA10TNSListener and OracleServiceehealth
        Microsoft Distributed Transaction Coordinator
        FLEXlm license server
        TrapEXPLODER

   Note: If the primary system is not functional, you must still perform this step after the system has been repaired but before it becomes the active system.
5. On the standby system, add the shared IP address and subnet mask to the IP settings to configure the system to use the shared IP address:

   a. Open Control Panel. Double-click Network Connections, Local Area Connection Properties.
   b. Select Internet Protocol (TCP/IP), Properties. The Internet Protocol (TCP/IP) Properties dialog opens.
   c. Click Advanced. The Advanced TCP/IP Settings dialog opens.
   d. Click the IP Settings tab. Click Add and enter the shared IP address and subnet mask. Click Add, then OK, and close the remaining dialogs.

   The standby system is configured to use the shared IP address.

6. On the standby system, start the ehealth database.

7. If the shared hostname is in a different subnet than the primary system, update the Domain Name System (DNS) entry for the shared hostname to the correct IP address.

8. Enter the following command on the standby system to configure it to run ehealth:

     nhdrsetup -configurestandby

9. On the standby system, start the ehealth servers, the web server, and the FLEXlm and TrapEXPLODER applications.

10. On the standby system, set the Startup type of all services to Automatic so that they will restart.

    The standby system is now online and failover is complete.

To start failover on a Solaris system

1. On the standby system, create a symbolic link from /opt/ehealth to the location to which ehealth has been replicated by entering the following command:

     ln -s $NH_HOME /opt/ehealth

2. If the primary system is functional, stop the ehealth servers and applications, including FLEXlm, Oracle, Report Center, TrapEXPLODER, and the web server.

3. Stop the replication scenario in WANSync Manager, or confirm that it has been stopped. To confirm that the scenario has stopped, enter the following command on the primary system (if it is functional) and on the standby system:

     /etc/init.d/wansync stop
4. If the primary system is functional, remove the shared hostname and IP address entries from the /etc/hosts file and remove the /etc/hostname.XXXX:n file.

5. If the primary system is functional, remove the symbolic links for the following files in the /etc/rc0.d, /etc/rc1.d, /etc/rc2.d, /etc/rc3.d, and /etc/rcS.d directories:

     nethealth
     nhreportcenter
     trapexploder
     httpd

   Note: If the primary system is not functional, you must still perform Steps 4 and 5 after the system has been repaired but before it becomes the active system.

6. On the primary system, as root, disable the shared hostname by entering the following commands:

     ifconfig XXXX:n ipaddress down
     ifconfig XXXX:n unplumb

   XXXX
       Represents the interface name.
   n
       Represents the number of the IP address (usually n=1, because counting starts at 0 and :0 is not shown).
   ipaddress
       Represents the shared IP address.

   Verify that the shared hostname is disabled by entering the following:

     ifconfig -a

   Confirm that the shared hostname is not in the output.

7. On the standby system, log on as the root user. In the /etc/hosts file, add the shared IP address and hostname.

8. On the standby system, create the following file:

     /etc/hostname.XXXX:n

   Add the shared hostname as the file contents. To determine the interface name, enter the following command:

     ls /etc/hostname*
9. On the standby system, enable and verify the shared hostname:

   a. As root, in the command line interface, enter the following:

        ifconfig XXXX:n plumb
        ifconfig XXXX:n ipaddress up

   b. Verify that the shared hostname is properly set by entering the following:

        ifconfig -a

      The names of the system interfaces should appear in the output.

10. On the standby system, start the ehealth database.

11. If the shared hostname is in a different subnet than the primary system, update the Domain Name System (DNS) entry for the shared hostname to the correct IP address.

12. On the standby system, enter the following command to configure the system to run ehealth:

     nhdrsetup -configurestandby

13. On the standby system, start the ehealth servers, the web server, and the FLEXlm and TrapEXPLODER applications.

How to Upgrade Your Disaster Recovery Environment

When you upgrade from an ehealth r6.0 service pack to ehealth r6.1, the WANSync scenario should be stopped before the upgrade or activation (if the system is part of an ehealth cluster). To upgrade DR systems to ehealth r6.1, follow this process:

1. Stop the WANSync scenario.

2. Stop the WANSync engine on both the primary and standby systems.

3. Delete the config_25000 subdirectory where WANSync was installed.

4. Upgrade the primary system, making certain that the primary system has been upgraded and activated properly.

5. If your Disaster Recovery systems are on Windows servers, perform the following steps:

   a. Uninstall ehealth on your standby system.
   b. Perform a fresh ehealth r6.1 installation on the standby machine.

6. Upgrade your XOsoft Replication software:

   a. On the primary and standby systems, upgrade the WANSync software to the appropriate release.
   b. Verify that the WANSync engine is running.
7. Upgrade the XOsoft Manager.

8. Modify the WANSync scenarios:

   a. Modify the scenarios to copy the files under the new $NH_HOME and Oracle 10g directories.
   b. Start the WANSync scenarios as specified when preparing a system for DR (see page 62). All three scenarios must be run after a system is upgraded.

Disaster Recovery Management Tasks

After you configure your ehealth environment for DR, you may need to perform these additional tasks.

Initiate Failover from the Standby (Active) System to the Primary System

After you fail over from the primary system to the standby system, the standby system becomes active, running the ehealth server and applications. During this time you can do the following:

1. Perform maintenance on the primary system to resolve problems related to the disaster instance and confirm that the primary system is running properly.

2. Create new scenarios to replicate data from the standby (active) system back to the primary system.

3. Fail over from the standby (active) system back to the primary system, thereby making the primary system the active system.

   Note: On primary Windows systems, ehealth must be reinstalled so that there are no conflicts when the standby system begins replication back to the primary system.

Install a Patch or Service Pack after Configuring Disaster Recovery

When you need to install an ehealth or Oracle patch or service pack in your DR environment, use the following guidelines:

- Install the patch or service pack on the primary system only. The WANSync replication software will propagate the changes to the standby system.
- If the standby system is currently the active system, it is recommended that you wait until the disaster instance on the primary system has been repaired and the system is active again before installing patches and service packs.
To install a patch or service pack

1. Stop ehealth, Oracle, the network connection, and all replication scenarios.

2. Stop the WANSync Engine software on both the primary and standby systems by doing the following:

   Windows: In the Services window, right-click the XOsoft Engine service and stop it. Delete the scenario subdirectory, generally stored in C:\Program Files\XOsoft\WANSync\config_25000.

   Solaris: As the root user, enter the following command:

     /etc/init.d/wansync stop

   Delete the scenario subdirectory /opt/WANSync/bin/config_25000.

3. Perform the patch or service pack installation.

4. Start the ehealth server and applications and confirm that they are running properly.

5. Start the network connection.

6. Start the WANSync Engine software on both the primary and standby systems by doing the following:

   Windows: In the Services window, right-click the XOsoft Engine service and start it.

   Solaris: As the root user, enter the following command:

     /etc/init.d/wansync start

7. Run the scenarios: start the WANSync scenarios as specified when preparing a system for DR (see page 62). All three scenarios must be run after a system is upgraded.
Appendix A: Disaster Recovery in a High Availability Cluster

This section contains the following topics:

Configure ehealth for Disaster Recovery in a High Availability Cluster (see page 75)
Failover from High Availability Cluster to Standby Disaster Recovery System (see page 78)
Failover From Standby System to a System in a High Availability Cluster (see page 80)

Configure ehealth for Disaster Recovery in a High Availability Cluster

This procedure includes the steps to configure the following:

- DR on a primary Solaris ehealth system that is part of an HA cluster.
- A standby DR system outside the HA cluster to receive data replicated from the HA cluster.

To configure disaster recovery on Solaris

1. Install and configure ehealth r6.1 in the HA cluster (if it is not already installed).

2. Install the WANSync Engine software. Use the following guidelines:

   - Install the WANSync Engine software on all primary systems in the HA cluster and on the backup system in the DR environment.
   - (Optional) Install the WANSync Manager software on a separate Windows system that is outside the HA cluster and will not have an ehealth server installed. Note that you use only the CLI to start, stop, and edit DR replication scenarios, unless you need to replicate data to an HA cluster after failing over to a DR cluster, in which case you use the WANSync Manager. See Failover From Standby System to a System in a High Availability Cluster (see page 80) for more information.
   - Do not start and stop WANSync processes as root. For security purposes, choose another user account.
   - Enter Yes at the following prompt: Create WANSync group?
   - Enter No at the following prompt: Enable Oracle Support?
3. Configure the primary and standby systems using the following guidelines:

   - Configure the standby DR system on a stand-alone machine outside of the HA cluster.
   - Configure both the primary HA and standby DR systems to have the same kernel parameters.
   - Confirm that the user ID (UID) and group identifier (GID) are the same on both the primary HA and standby DR systems.

4. Bring the Mount and IP resources online on the primary system by entering the following commands:

     > su root

   On Veritas:

     hares -online Mount-shared -sys system_name
     hares -online ipaddress -sys system_name

   shared
       Represents the name of the shared directory.
   system_name
       Represents the name of the primary system.
   ipaddress
       Represents the shared IP address.

   On Sun Cluster:

     # scswitch -z -g rgroup-1 -h system_name

   rgroup-1
       Represents the resource group name created earlier.
   system_name
       Represents the name of the machine in the HA cluster on which you want to install the WANSync software.

5. Install and license the WANSync package on all nodes in the HA cluster.

6. Edit the /etc/group file by appending the $NH_USER user to the end of the "wansync" entry.

   Example: If NH_USER is ehuser, the entry in /etc/group would be:

     wansync::60000:ehuser
7. Configure the WANSync DR replication scenarios for ehealth by entering the nhdrconfigscenario command. The command is located under one of the following directories, depending on your clusterware:

     Sun Cluster: /opt/CAehealthHA/bin
     Veritas: /opt/CAehealthVcsHA/bin

   Enter the WANSync user password for the user account that you chose in Step 2.

8. Run nhaddvcsresources.

   Note: If WANSync has been installed and you try to run nhaddvcsresources before running nhdrconfigscenario, the script exits with the following error: "Failed to determine the WANSync scenario directory. Please run nhdrconfigscenario before running nhaddvcsresources."

9. Enter the name of the DR standby system.

   The scenarios are created and placed on the shared storage area.

   Note: By default, the reports and spool directories are placed in the shared storage area. You can use the CLI to change this location, for example, to a dedicated disk on the standby system.

10. (Optional) If you plan to configure ehealth for DR in an HA cluster, enter the following command to add the WANSync resource to the resource group:

     # scrgadm -a -j ehealthwansync -g ehealthgrp-1 -t CA.ehealthWansync \
       -x WS_HOME=/opt/WANSync \
       -x NH_HOME=$NH_HOME -x NH_USER="$NH_USER" \
       -x NHDR_FILE="haWsUser.dat" -x NHDR_SCENARIO_P="ehealthWansync-Primary" \
       -x NHDR_SCENARIO_S="ehealthWansync-Secondary" \
       -x NHDR_SCENARIO_T="ehealthWansync-Third" \
       -y Network_resources_used=ehealth-host \
       -y resource_dependencies=ehealth-storage

    The ehealth resources now reside in the ehealth resource group.

11. Create a symbolic link for the Disaster Recovery scenarios directory in /opt/WANSync/bin to the Disaster Recovery directory (ws_scenarios) under the shared mount point. (See the illustrative sketch after this procedure.)

12. Stop WANSync from starting the ws_rep process by entering the following command on each node:

     /etc/init.d/wansync stop

13. Repeat Steps 4, 5, 10, and 11 for the remaining nodes in the HA cluster.

14. On Veritas, add the WANSync resource by running the nhaddvcsresources command.

    ehealth is configured for DR in an HA cluster.
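For Step 11, assuming the shared storage is mounted at /shared (an illustrative mount point), the link might be created as follows:

     # as root, on each node: point the WANSync scenarios directory
     # at the replicated ws_scenarios directory on shared storage
     ln -s /shared/ws_scenarios /opt/WANSync/bin/ws_scenarios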
13. Repeat Steps 4, 5, 10, and 11 for the remaining nodes in the HA cluster.

    ehealth is configured for DR in an HA cluster.

14. Add the WANSync resource on Veritas by entering the nhaddvcsresources command.

Failover from High Availability Cluster to Standby Disaster Recovery System

When the primary system in the HA cluster goes offline because of a critical failure or disaster, complete this procedure to manually bring the standby system in the DR environment online.

As a best practice, test the manual failover procedure before a disaster occurs. This helps confirm the following:

- The ehealth servers and the database start without a problem.
- The replication scenarios are complete and all the necessary files are being copied. You can turn on synchronization and replication scenario reports for the primary system; these reports list all replicated files.
- You can log on to the ehealth clients after failover.

To fail over from an HA system to a DR environment

1. On the standby DR system, create a symbolic link from /opt/ehealth to the location to which ehealth has been replicated by entering the following command:

   ln -s $NH_HOME /opt/ehealth

2. If the primary HA system is functional, stop the ehealth servers and applications (including FLEXlm, Oracle, Report Center, TrapEXPLODER, WANSync, and the web server) by entering one of the following commands, depending on your clusterware:

   Sun Cluster: scswitch -F -g rgroup-1
   Veritas: hagrp -offline ehealth-sg-1 -sys system_name

   Enter one of the following commands to verify that the resources are offline, depending on your clusterware:

   Sun Cluster: scstat
   Veritas: hagrp -state ehealth-sg-1

3. On the standby DR system, log on as the root user. In the /etc/hosts file, add the shared IP address and hostname.
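For example, a hypothetical /etc/hosts entry for Step 3, assuming the shared hostname is ehealth-shared and the shared IP address is 10.1.1.50 (both illustrative):

   10.1.1.50   ehealth-shared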
4. On the standby DR system, create the following file:

   /etc/hostname.XXXX:n

   XXXX
      Represents the interface name.
   n
      Represents the number of IP addresses.

   Note: n usually equals 1 because counting starts at 0 and :0 is not shown.

   Add the shared hostname as the file contents. To determine the interface name, enter the following command:

   ls /etc/hostname*

5. On the standby DR system, enable and verify the shared hostname:

   a. As root, in the command line interface, enter the following commands:

      ifconfig XXXX:n plumb
      ifconfig XXXX:n ipaddress up

   b. Verify that the shared hostname is properly set up by entering the following command:

      ifconfig -a

      The names of the system interfaces should appear in the output.

   A consolidated example of Steps 4 and 5 follows Step 11.

6. On the standby DR system, log in as $NH_USER and source the $NH_HOME/nethealthrc.csh file.

7. Start the ehealth database by entering the following command:

   nhstartdb

8. Update the Domain Name System (DNS) mapping of the shared hostname to the correct IP address if it is in a different subnet than the HA cluster. Your system administrator may need to complete this step for you.

9. On the standby DR system, enter the following commands to configure the standby system to run ehealth:

   nhhasetup -disable
   nhdrsetup -hostname shared_hostname
   nhdrsetup -configurestandby

10. On the standby DR system, start the ehealth servers by entering the nhserver start command.

11. Start the following applications on the standby DR system: FLEXlm (nhlmgr start), TrapEXPLODER (/etc/init.d/trapexploder start), and the web server (nhhttpd start). Do not start Report Center, if installed.
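As a consolidated sketch of Steps 4 and 5, assuming the interface is ce0, n is 1, and the hypothetical shared hostname and address from the earlier /etc/hosts example (all illustrative):

   # As root; interface name, hostname, and address are illustrative only
   echo "ehealth-shared" > /etc/hostname.ce0:1
   ifconfig ce0:1 plumb
   ifconfig ce0:1 10.1.1.50 up
   ifconfig -a | grep ce0:1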
12. (Optional) If Report Center is installed, rebuild the Report Center configuration file and start Report Center on the standby DR system by entering the following commands:

    nhrptctrconfig -action importconfig
    nhrptctrconfig -action updatedatabaseaccess

Failover From Standby System to a System in a High Availability Cluster

After failover from the primary HA system to the standby DR system, the standby DR system becomes active, running the ehealth server and applications. You may choose to keep this active DR system as the primary, or fail back to the standby system in the HA cluster after problems on the original system are resolved. Use the following procedure to manually fail over from the standby DR system back to a system in the HA cluster. You must fail back if you want to resume replicating data from a primary system in the HA cluster to the standby DR system.

To fail over from an active standby DR system to an HA system

1. Verify that all clusterware and ehealth resources are offline. If they are still online, enter one of the following commands, depending on your clusterware:

   Sun Cluster: scswitch -F -g rgroup-1
   Veritas: hagrp -offline ehealth-sg-1 -sys system_name

2. On any system in the HA cluster, as the root user, mount the directory or directories (for example, /export/share2) that contain the following:

   - $NH_HOME
   - $ORACLE_HOME
   - Oracle datafiles

3. Create DR scenarios using the WANSync Manager software. For ehealth to be properly configured for DR, you must create three scenarios:

   - A principal scenario that synchronizes and continuously replicates all of the ehealth data.
   - A secondary scenario that synchronizes the ehealth and Oracle libraries and executables.
   - A third scenario that synchronizes the /etc files only.

   Use the following guidelines when creating scenarios:

   - All scenarios must be configured using the HA cluster's shared (logical) hostname as the DR primary system and the physical node name as the DR standby system.
   - When you create the scenarios, you must use the physical node names (not the shared name) of the standby DR system and the primary HA system where you have mounted the shared file system.
   - To avoid confusion and help ensure a smooth failover process, directories must be replicated from the primary system to the same disk names and ehealth, ehealth database, and Oracle subdirectory names on the standby system.
   - When creating the scenarios, choose Create New Scenario and Next. On the Select Scenario Type screen, under the Tasks on Replica section, choose None.

   Create the Principal Scenario. To create the principal scenario, do the following in WANSync Manager:

   - Select the top-level ehealth directory. Expand this directory and deselect the following directories to exclude them from this scenario:
     $NH_HOME/bin
     $NH_HOME/jre
     $NH_HOME/lib
     $NH_HOME/lmgr/bin
   - Select the top-level Oracle software directory. Expand this directory and deselect the following directories to exclude them from this scenario:
     $ORACLE_HOME/bin
     $ORACLE_HOME/ctx
     $ORACLE_HOME/lib
   - Select the top-level Oracle database directory or directories; all the files in this directory (or directories) are included in this scenario and should be selected.
   - In the Host Replication Tree, select the primary system. In the Property pane under Replication, select Automatic Synchronization Type and choose Block Synchronization.
   - Define which directories to use on the HA cluster machine by selecting the HA machine in the middle pane (under Hosts - Replication Tree), and then selecting the directories to use in the right pane.

   The principal scenario is created.
   Create the Secondary Scenario. To create the secondary scenario, do the following in WANSync Manager:

   - Select the top-level ehealth directory. Expand this directory and deselect all the directories except the following:
     $NH_HOME/bin
     $NH_HOME/jre
     $NH_HOME/lib
     $NH_HOME/lmgr/bin
   - Select the contents of the trap exploder directory (all files should be selected).
   - Select the top-level Oracle software directory (or directories, if more than one) and deselect all the directories except the following:
     $ORACLE_HOME/bin
     $ORACLE_HOME/ctx
     $ORACLE_HOME/lib
   - Set the Replication property for the primary system by doing the following:
     Set Replication Mode to Scheduling.
     Set Attach to Open Files on Run to Off.
   - Define which directories to use on the HA cluster machine by selecting the HA machine in the middle pane (under Hosts - Replication Tree), and then selecting the directories to use in the right pane.

   The secondary scenario is created.

   Create the Third Scenario. To create the third scenario, do the following in WANSync Manager:

   - Expand the /etc/init.d directory. In the right pane, select the following files in the following order to add them to the scenario:
     httpd
     httpd.sh
     nethealth
     nethealth.sh
     nhreportcenter
     trapexploder
   - Expand the /etc directory. In the right pane, select the following files in the following order to add them to the scenario:
     /etc/nh.install.cfg
     /etc/trapexploder.cf

     These configuration files are used to automatically start ehealth and Oracle.
   - Right-click the /etc/init.d directory, and select Remove Directory from the right-hand pane.
   - Select the standby machine in the middle pane. In the right pane, under Properties, turn on the following options:
     Keep deleted files during synchronization
     Keep deleted files during replication
   - Define which directories to use on the HA cluster machine by selecting the HA machine in the middle pane (under Hosts - Replication Tree), and then selecting the directories to use in the right pane.

   Note: If you rerun the nhdrsetup -disable or nhdrsetup -hostname commands for any reason, you must also recreate this third scenario to synchronize the /etc files again.

   All scenarios are created.

4. Use WANSync Manager to run the scenarios you created, in the following order:

   - Run the secondary scenario: select Tools, Synchronize, then select File Synchronization. This sets the secondary scenario to synchronize the directories in the scenario.
   - Run the third scenario: select Tools, Synchronize, then select File Synchronization. This sets the third scenario to synchronize the files in the scenario.
   - Run the principal scenario: select Tools, Run, then select Block Synchronization. This sets the principal scenario to synchronize and continually replicate its contents to the standby system.

   An initial synchronization and replication process begins. The duration of this process depends on the speed of the network between the primary and standby systems and on how much data is in your database.

   On the HA cluster system, create a symbolic link from /opt/ehealth to the location to which ehealth has been replicated by entering the following command:

   ln -s $NH_HOME /opt/ehealth

5. If the standby DR system is functional, stop the ehealth servers by entering the nhserver stop command. Stop the following applications: FLEXlm (nhlmgr stop), Report Center (nhreportcenter stop), Oracle (nhstopdb), TrapEXPLODER (/etc/init.d/trapexploder stop), and the web server (nhhttpd stop).
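To confirm the /opt/ehealth symbolic link created above, a quick check (the target shown is illustrative; it depends on where ehealth was replicated):

   ls -l /opt/ehealth
   # Expected output resembles: /opt/ehealth -> /export/share2/ehealth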
6. Stop the replication scenario in WANSync Manager, or confirm that it has been stopped. To verify that WANSync has stopped, enter the following command on both the HA cluster system used in Step 2 and the standby DR system:

   /etc/init.d/wansync stop

   Delete the scenario subdirectory found in /opt/WANSync/bin/config_

   Note: Perform Steps 7 and 8 after the HA system has been repaired but before it becomes the active system.

7. On the standby DR system, remove the shared hostname and IP address entries from the /etc/hosts file and remove the /etc/hostname.XXXX:n file.

8. On the standby DR system, remove the symbolic links for the following files in the /etc/rc0.d, /etc/rc1.d, /etc/rc2.d, /etc/rc3.d, and /etc/rcS.d directories (a consolidated sketch of Steps 8 through 10 follows Step 10):

   nethealth
   nhreportcenter
   trapexploder
   httpd

9. On the standby DR system, disable the shared hostname, as root, by entering the following commands:

   ifconfig XXXX:n ipaddress down
   ifconfig XXXX:n unplumb

   XXXX
      Represents the interface name.
   n
      Represents the number of IP addresses.
   ipaddress
      Represents the shared IP address.

   Note: n usually equals 1 because counting starts at 0 and :0 is not shown.

10. Verify that the shared hostname is disabled by entering the following command:

    ifconfig -a

    Confirm that the shared hostname is not in the output.
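As a consolidated sketch of Steps 8 through 10, assuming the interface ce0:1 and the hypothetical address used earlier; rc link names typically carry S or K prefixes (for example, S99nethealth), so verify the glob matches before removing:

   # As root on the standby DR system; all names shown are illustrative
   for d in /etc/rc0.d /etc/rc1.d /etc/rc2.d /etc/rc3.d /etc/rcS.d; do
       rm -f $d/*nethealth* $d/*nhreportcenter* $d/*trapexploder* $d/*httpd*
   done
   ifconfig ce0:1 10.1.1.50 down
   ifconfig ce0:1 unplumb
   ifconfig -a | grep ce0:1     # should return nothing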
11. Unmount the directory (or directories) mounted in Step 2. For example, as root, enter the following command, depending on the directory:

    umount /export/share2

    If the umount command is not allowed because the file system is busy, enter the following command as root:

    fuser -ck /export/share2; umount -f /export/share2

12. On the HA cluster system, the following resources must be disabled before the ehealth resource group is started. To do this, enter the scswitch -n -j resource command for each resource in the following order:

    ehealth report center
    ehealth trap
    ehealth servlet
    ehealth
    ehealth license
    ehealth web (Apache)

    To determine the resource names, enter the scstat command.

13. On the HA cluster system, start the ehealth group by entering one of the following commands, depending on your clusterware:

    Sun Cluster: scswitch -z -g rgroup-1 -h hosttomigrateto
    Veritas: hagrp -online ehealth-sg-1 -sys system_name

    Enter one of the following commands to confirm that the resources are online:

    Sun Cluster: scstat
    Veritas: hagrp -state ehealth-sg-1

14. On the standby DR system, update the DNS system to map the hostname to the correct IP address if it is in a different subnet than the HA cluster. Your system administrator may need to complete this step for you.

15. From the HA cluster system, configure the standby DR system to run ehealth by entering the following commands:

    nhparameter -set highavailabilityenabled yes
    $NH_HOME/bin/sys/nhiHttpdCfg -user $NH_USER -grp group_of_NH_USER -nhdir $NH_HOME -outfile $NH_HOME/web/httpd/conf/httpd.conf
    nhwebutil -setuptomcat

    Note: Confirm that the active session has been sourced so that the sourced version of ehealth matches the /opt/ehealth link set in Step 5.
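For example, a hypothetical invocation of the second command in Step 15, assuming NH_USER is ehuser and that user's primary group is ehgroup (both illustrative):

    $NH_HOME/bin/sys/nhiHttpdCfg -user ehuser -grp ehgroup \
        -nhdir $NH_HOME -outfile $NH_HOME/web/httpd/conf/httpd.conf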
16. On the HA cluster system, re-enable the following resources. To do this, enter the scswitch -e -j resource command for each of the following resources in the following order:

    ehealth web (Apache)
    ehealth license
    ehealth servlet
    ehealth
    ehealth trap
    ehealth report center

    To determine the resource names, enter the scstat command.

    Failover to a primary HA system is complete.
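As an aside on determining the resource names referenced in Steps 12 and 16, a hedged sketch (assuming the registered resource names contain the string "ehealth"; adjust the filter to your naming convention):

    scstat -g | grep -i ehealth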
Appendix B: Sun Cluster Commands

The commands in the following table are supplied by the Sun Cluster software and are used by the HA administrator to manage ehealth in the HA cluster.

scswitch -z -g rgroup-1 -h hosttomigrateto
   Brings the resource group online on a particular cluster node, where hosttomigrateto is the target machine you want to bring online. Use this command to manually transfer operations from a backup system to the original primary system, or to initiate a planned failover for maintenance purposes.

scswitch -F -g rgroup-1
   Shuts down the resource group.

scswitch -S -h hosttovacate
   Moves all resource groups off of a cluster node, where hosttovacate is the name of the machine you are moving from.

scswitch -n -j resource
   Brings a resource in the resource group offline, leaving all other resources enabled. For example, use this command to shut down the servers for maintenance or to upgrade the Solaris package.

scswitch -n -M -j resource
   Disables monitoring of a resource in a resource group without bringing that resource offline.

scswitch -e -j resource
   Re-enables a resource in a resource group after it has been taken offline.

scswitch -e -M -j resource
   Re-enables monitoring of a resource in a resource group.

scstat
   Lists the current status of the HA cluster, including which nodes are part of the cluster and which applications are online and offline.

scrgadm -p
   Lists all registered resource types in the HA cluster.
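For example, a minimal sketch of a planned failover for maintenance, assuming a hypothetical target node named node2:

   scswitch -z -g rgroup-1 -h node2    # move the ehealth resource group to node2
   scstat                              # confirm which node now hosts the group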
Appendix C: Veritas Cluster Service Commands

The commands in the following table are supplied by the Veritas Cluster Service software and are used by the HA administrator to manage ehealth in the HA cluster. These actions can also be accomplished using the VCS Java Console.

hagrp -offline ehealth-sg-1 -sys system_name
   Brings a service group offline on a particular cluster node, where ehealth-sg-1 is the service group and system_name is the name of the system.

hagrp -online ehealth-sg-1 -sys system_name
   Brings a service group online on a particular cluster node.

hagrp -online ehealth-sg-1
   Brings a service group online on the system that has priority in the SystemList attribute.

haconf -makerw
   Opens the VCS configuration file. Use when adding resources to a service group.

haconf -dump -makero
   Saves the VCS configuration file with all resource information.

hagrp -state ehealth-sg-1
   Verifies the current status of the service group.

hares -modify resource Critical 1
   Specifies a resource as critical. A critical resource causes a failover when it experiences a failure.

hares -offline resource -sys system_name
   Brings a resource offline, leaving all other resources enabled. system_name is the name of the system on which the resource is currently running.

hares -online resource -sys system_name
   Brings a resource online. system_name is the name of the system on which the resource is currently running.

hagrp -switch ehealth-sg-1 -to hosttomigrateto
   Brings the resource group online on a particular cluster node, where hosttomigrateto is the target machine you want to bring online. Use this command to manually transfer operations from a backup system to the original primary system, or to initiate a planned failover for maintenance purposes.
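For example, a minimal sketch of a planned failover on Veritas, assuming a hypothetical target node named node2:

   hagrp -switch ehealth-sg-1 -to node2    # move the service group to node2
   hagrp -state ehealth-sg-1               # confirm the group is online on node2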
Index

C
Commands
   nhhasetup 17

D
Disaster Recovery
   configuration, in a high availability cluster 75
   failover from standby to primary 72
   introduction 11

E
ehealth
   configuration, disaster recovery in a high availability cluster 75
   installation guidelines, disaster recovery on Solaris 62
   installation guidelines, disaster recovery on Windows 57

F
Failover
   from standby to primary, disaster recovery 72
Failures
   thresholds 20

G
Guidelines
   ehealth installation on Solaris, disaster recovery 62
   ehealth installation on Windows, disaster recovery 57
   XOsoft Replication software installation, Solaris 62
   XOsoft Replication software installation, Windows 57

H
High Availability
   resource groups 17
   resource types 17
   resources 17

R
Resource Groups 17
Resource Types 17
Resources 17

T
Thresholds
   failure, high availability 20

X
XOsoft Replication software
   installation guidelines, Solaris 62
   installation guidelines, Windows 57