Configuring LiveCycle Application Server Clusters Using WebSphere 6.0
Adobe LiveCycle
June 2007
Version 7.2
2 2007 Adobe Systems Incorporated. All rights reserved. Adobe LiveCycle 7.2 Configuring LiveCycle Application Server Clusters Using WebSphere 6.0 for Microsoft Windows, Linux, and UNIX Edition 2.0, June 2007 If this guide is distributed with software that includes an end user agreement, this guide, as well as the software described in it, is furnished under license and may be used or copied only in accordance with the terms of such license. Except as permitted by any such license, no part of this guide may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, recording, or otherwise, without the prior written permission of Adobe Systems Incorporated. Please note that the content in this guide is protected under copyright law even if it is not distributed with software that includes an end user license agreement. The content of this guide is furnished for informational use only, is subject to change without notice, and should not be construed as a commitment by Adobe Systems Incorporated. Adobe Systems Incorporated assumes no responsibility or liability for any errors or inaccuracies that may appear in the informational content contained in this guide. Please remember that existing artwork or images that you may want to include in your project may be protected under copyright law. The unauthorized incorporation of such material into your new work could be a violation of the rights of the copyright owner. Please be sure to obtain any permission required from the copyright owner. Any references to company names and company logos in sample material or in the sample forms included in this software are for demonstration purposes only and are not intended to refer to any actual organization. Adobe, the Adobe logo, LiveCycle, and Reader are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. 
IBM, AIX, DB2, and WebSphere are trademarks of International Business Machines Corporation in the United States, other countries, or both. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. Oracle is a trademark of Oracle Corporation and may be registered in certain jurisdictions. SUSE is a trademark of Novell, Inc. in the United States and other countries. Sun, Java, and Solaris are trademarks or registered trademark of Sun Microsystems, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. UNIX is a registered trademark of The Open Group in the US and other countries. All other trademarks are the property of their respective owners. Adobe Systems Incorporated, 345 Park Avenue, San Jose, California 95110, USA. Notice to U.S. Government End Users. The Software and Documentation are Commercial Items, as that term is defined at 48 C.F.R , consisting of Commercial Computer Software and Commercial Computer Software Documentation, as such terms are used in 48 C.F.R or 48 C.F.R , as applicable. Consistent with 48 C.F.R or 48 C.F.R through , as applicable, the Commercial Computer Software and Commercial Computer Software Documentation are being licensed to U.S. Government end users (a) only as Commercial Items and (b) with only those rights as are granted to all other end users pursuant to the terms and conditions herein. Unpublished-rights reserved under the copyright laws of the United States. Adobe Systems Incorporated, 345 Park Avenue, San Jose, CA , USA. For U.S. 
Government End Users, Adobe agrees to comply with all applicable equal opportunity laws including, if appropriate, the provisions of Executive Order 11246, as amended, Section 402 of the Vietnam Era Veterans Readjustment Assistance Act of 1974 (38 USC 4212), and Section 503 of the Rehabilitation Act of 1973, as amended, and the regulations at 41 CFR Parts 60-1 through 60-60, , and The affirmative action clause and regulations contained in the preceding sentence shall be incorporated by reference.
Contents

Preface ... 5
    Who should read this guide? ... 5
    Versions ... 5
    Conventions used in this guide ... 5
    Related documentation ... 6
    Updated LiveCycle product information ... 7
1 Overview ... 8
    About clustering application servers ... 8
    Failover ... 9
    Load distribution and load balancing ... 9
    Web server load balancing ... 9
    Scalability ... 10
    WebSphere and distributed sessions ... 10
    Terminology ... 10
    Clustering LiveCycle products ... 11
    Supported topologies ... 12
        Combining the web, application, and database servers ... 12
        Combining the web and application servers ... 12
        Combining the application and database servers ... 12
        Separate web, application, and database servers ... 12
        Adding additional web servers ... 12
        Adding additional application servers ... 13
        Multiple JVMs ... 13
    Messaging topologies ... 13
    Unsupported topologies ... 14
2 Configuring the Application Servers ... 15
    Preparing to install ... 15
    Installing the Deployment Manager ... 15
    Installing WebSphere application server ... 15
    Installing the WebSphere FixPacks ... 15
    Shutting down and restarting nodes and clusters ... 16
    Creating profiles ... 17
        Creating a WebSphere Deployment Manager profile ... 17
        Creating a WebSphere Application Server profile ... 18
    Adding and removing nodes ... 19
        Adding nodes ... 19
        Removing nodes ... 20
    Deleting profiles ... 20
    Configuring WebSphere clusters ... 21
        Creating clusters by using the Deployment Manager
3 Configuring the Web Servers ... 23
    Preparing for installation ... 23
    Installing the web server ... 23
    Installing the web server plug-in
4 Post Installation
    Creating an endorsed directory ... 26
    Configuring the shared libraries ... 26
    Increasing the SOAP time-out
5 Configuring the Database
    Installing database drivers ... 29
    Configuring the databases ... 30
        Configuring Oracle databases ... 30
        Configuring DB2 databases ... 30
        Configuring a DB2 database for concurrent usage ... 32
        Configuring SQL Server databases ... 32
        Installing JTA stored procedures ... 33
        Enabling XA transactions for Windows Server
    Configuring datasources ... 34
        Configuring the Oracle data source ... 35
        Configuring the DB2 data source ... 36
        Configuring the SQL Server data source
6 Configuring Messaging and Security
    Configuring a default messaging provider for LiveCycle Policy Server ... 39
    Configuring a default messaging provider for other products ... 41
    Configuring security for LiveCycle Policy Server
7 Configuration Requirements
    Large document handling ... 50
    Common elements between LiveCycle products ... 50
    Configure WebSphere 6 under AIX to use IPv
    Configuring LiveCycle Workflow properties ... 51
        Scheduler property definition and values that can be added ... 51
        Cache configuration properties ... 52
        Adding property definitions receiver configuration ... 54
    LiveCycle Policy Server caching configuration setup ... 55
        Configuring caching properties for User Management ... 55
    Configuring JVM properties
8 Deploying LiveCycle Products
    Assembling LiveCycle.ear for LiveCycle PDF Generator in a cluster ... 57
    About deploying LiveCycle products to a cluster ... 57
    Summary of deployable components ... 57
    Deploying LiveCycle products ... 58
    Starting the application ... 62
    Verifying the LiveCycle Forms deployment ... 63
    Bootstrapping LiveCycle products ... 63
    Generating and propagating the web server plug-in ... 64
    Viewing log files
9 Issues with Clustering
    Failed to acquire job exception in WebSphere ... 66
    ORB threadpool for LiveCycle Policy Server and LiveCycle Document Security ... 66
    LiveCycle Forms preferences do not get propagated ... 66
    LiveCycle Reader Extensions UI is not supported in a cluster ... 67
Preface

This document explains how to deploy an Adobe LiveCycle server product in an IBM WebSphere Application Server 6.0 clustered environment.

Who should read this guide?

This guide provides information for administrators or developers responsible for installing, configuring, administering, or deploying LiveCycle products. The information provided is based on the assumption that anyone reading this guide is familiar with WebSphere Application Server; Red Hat Linux, SUSE Linux, Microsoft Windows, AIX, or Sun Solaris operating systems; MySQL, Oracle, DB2, or SQL Server database servers; and web environments.

Versions

This document describes clustering for LiveCycle 7.2.x.

Conventions used in this guide

This guide uses the following naming conventions for common file paths.

[LiveCycle root]
    Windows: C:\Adobe\LiveCycle\
    Linux and UNIX: /opt/adobe/livecycle/
    The installation directory that is used for all LiveCycle products. The installation directory contains subdirectories for Adobe Configuration Manager, product SDKs, and each LiveCycle product installed (along with the product documentation).

[appserver root]
    Windows: C:\Program Files\IBM\WebSphere\AppServer
    Linux and UNIX: /opt/ibm/websphere/appserver
    The home directory of the application server that runs the LiveCycle products.

[profiles root]
    Application Server: [appserver root]/appserver/profiles
    Deployment Manager: [appserver root]/deploymentmanager/profiles
    The directory location that stores profiles. The directory paths listed indicate the default locations; however, administrators may specify their own profiles directory location.
[dbserver root]
    Depends on the database type and your specification during installation.
    The location where the LiveCycle database server is installed.

[product root]
    Windows:
        C:\Adobe\LiveCycle\Assembler
        C:\Adobe\LiveCycle\pdfgenerator
        C:\Adobe\LiveCycle\Workflow
        C:\Adobe\LiveCycle\Forms
        C:\Adobe\LiveCycle\Print
        C:\Adobe\LiveCycle\Formmanager
    Linux and UNIX:
        /opt/adobe/livecycle/assembler
        /opt/adobe/livecycle/pdfgenerator
        /opt/adobe/livecycle/workflow
        /opt/adobe/livecycle/forms
        /opt/adobe/livecycle/print
        /opt/adobe/livecycle/formmanager
    The directories where product-specific directories and files (such as documentation, uninstall files, samples, and license information) are located.

Most of the information about directory locations in this guide is cross-platform (all file names and paths are case-sensitive on Linux). Any platform-specific information is indicated as required.

Related documentation

This document should be used in conjunction with the Installing and Configuring LiveCycle guide or the Installing and Configuring LiveCycle Security Products guide for your application server. Throughout this document, specific sections in these installing and configuring guides are listed when more detailed information is available.

The Installing and Configuring LiveCycle 7.2.x guides apply to the following products:
    Adobe LiveCycle Assembler
    Adobe LiveCycle Forms 7.2
    Adobe LiveCycle Form Manager
    Adobe LiveCycle PDF Generator and Adobe LiveCycle Print 7.2
    Adobe LiveCycle Workflow
    Adobe LiveCycle Workflow Designer
    Watched Folder 1.2

The Installing and Configuring LiveCycle Security Products guides apply to the following products:
    Adobe LiveCycle Document Security and Adobe LiveCycle Reader Extensions 7.2
    Adobe LiveCycle Policy Server 7.2
Updated LiveCycle product information

Adobe Systems has posted a Knowledge Center article to communicate any updated LiveCycle product information to customers. You can access the article at:
1 Overview

This section describes the benefits and issues associated with setting up clusters.

About clustering application servers

A cluster is a group of application server instances running simultaneously that act as a single system, enabling load distribution, load balancing, scalability, high availability, and failover. Within a cluster, multiple server instances can run on the same computer (known as a vertical cluster), be located on different computers (known as a horizontal cluster), or form a combination of horizontal and vertical clusters. With clustering, client work can be distributed across several nodes instead of being handled by a single application server. In a clustered configuration, application server instances are members of the cluster, all of which must have identical application components deployed on them. However, other than the configured applications, cluster members do not have to share any other configuration parameters. For example, you can cluster multiple server instances on one computer with a single instance on another computer, provided they are all running WebSphere.
By clustering, you can achieve one or more of the following benefits. How you implement clustering determines which benefits are achieved:

    Failover
    Load balancing
    Load distribution
    Scalability

Failover

Failover allows one application server instance to act as a backup to a failed application server instance and resume processing its tasks, thereby enabling processing to carry on. Some products, such as LiveCycle Workflow, can recover process state at the step and process levels.

Load distribution and load balancing

Load distribution is a static technique used to distribute client HTTP requests across the servers in a cluster based on a predefined algorithm, such as round-robin or random (even or weight-based), so that no single device is overwhelmed. If one server starts to become congested or overloaded, requests are forwarded to another server with more capacity. Load balancing is a dynamic or adaptive load distribution technique by which a cluster attempts to balance the load on all servers based on real-time variations between the member servers. Dynamic load balancing can be achieved by JMS request dequeuing; other techniques are beyond the scope of this guide.

Application server load balancing is useful for managing the load between application server tiers. Application servers can be configured to use a weighted round-robin routing policy that distributes requests based on the set of server weights assigned to the members of a cluster. Configuring all servers in the cluster with the same weight produces a distribution in which all servers receive the same number of requests; this configuration is recommended when all nodes in the cluster have similar hardware configurations. Weighting some servers more heavily sends more requests to those servers than to servers with lower weight values.
Preferred routing can also be configured to ensure, for example, that only cluster members on the local node are selected (using the weighted round-robin method) and that cluster members on remote nodes are selected only if a local server is not available. Application server load balancing is best used when balancing is needed between tiers.

Web server load balancing

Web server load balancing is useful for queuing and throttling requests. For IBM HTTP Server, the two most commonly used methods for load balancing and single-system imaging are Round-Robin DNS and the IBM eNetwork Dispatcher. Round-Robin DNS is a relatively simple method of load balancing in which a DNS (Domain Name System) server provides name-to-address resolution and is always involved when a host name is included in a URL. A Round-Robin DNS server can resolve a single host name into multiple IP addresses, so that requests for a single URL (containing a host name) actually reference different web servers. Clients request a name resolution for the host name but receive different IP addresses, thus spreading the load among the web servers. In a simple configuration, the Round-Robin DNS server cycles through the list of available servers.
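The weighted round-robin policy described above can be sketched as a small simulation. This is a naive model only: the real WebSphere plug-in interleaves selections and tracks server availability, and the server names and weights below are invented examples.

```shell
#!/bin/sh
# Toy model of weighted round-robin routing: each member name is emitted
# in proportion to its configured weight.
weighted_round_robin() {
    # $1 = number of requests to route; remaining args = name:weight pairs
    requests=$1; shift
    # Expand each member according to its weight to build the rotation.
    rotation=""
    for pair in "$@"; do
        name=${pair%%:*}; weight=${pair##*:}
        i=0
        while [ "$i" -lt "$weight" ]; do
            rotation="$rotation $name"
            i=$((i + 1))
        done
    done
    # Cycle through the rotation until all requests have been routed.
    routed=0
    while [ "$routed" -lt "$requests" ]; do
        for name in $rotation; do
            [ "$routed" -lt "$requests" ] || break
            echo "$name"
            routed=$((routed + 1))
        done
    done
}

# server1 (weight 2) receives twice as many requests as server2 (weight 1).
weighted_round_robin 6 server1:2 server2:1
```

With equal weights, every member receives the same share of the six requests; with the weights shown, server1 is selected four times and server2 twice.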
The IBM eNetwork Dispatcher provides load balancing services by routing TCP/IP session requests to different servers in a cluster, using advisor services to query and evaluate the load on each of the servers in the cluster. The manager service routes client requests to a web server based on the current load information received from these advisor services. The web servers process the requests and respond directly to the clients.

Scalability

Scalability in a cluster is the capability of an administrator to increase the capacity of the application dynamically to meet demand without interrupting or negatively affecting service. WebSphere clusters allow administrators to remove nodes from a cluster in order to upgrade components, such as memory, or to add nodes to the cluster, without bringing down the cluster itself.

WebSphere and distributed sessions

HTTP is a stateless protocol. However, an HTTP session provides the ability to track a series of requests and associate those requests with a specific user, thereby allowing applications to appear stateful. Because applications store user information and other session state, they need to track sessions in some way, such as through cookies or URL rewriting. In a cluster environment where multiple nodes can serve user requests, it is very important that user session information is available during all requests, by ensuring that up-to-date session data is available at all of the nodes. Several scenarios are used to achieve this requirement:

Persistence to database: In this scenario, all of the session data is stored in a database. If one server node fails, other servers can retrieve session data from the database and continue processing client requests. In this scenario, session data survives the crash of a server.
Memory to memory: In this scenario, all of the session data is stored in memory at each node, and a replication service replicates the session data among the nodes. Most application servers provide peer-to-peer replication. WebSphere provides three models of memory-to-memory replication:

    Buddy system with a single replica
    Dedicated replication server
    Peer-to-peer replication

The WebSphere Distributed Replication Service is used to replicate data among distributed processes in a cell; however, session replication is only scalable when used with relatively small objects or small numbers of objects.

Terminology

WebSphere uses specific terminology, which is defined here to avoid confusion:

Server: Represents an instance of a Java virtual machine (JVM).
Node: Represents a physical system running one or more WebSphere servers.
Cell: Represents a logical grouping of multiple nodes for administrative purposes.
Cluster: Represents a logical grouping of multiple application servers within a cell for administration, application deployment, load balancing, and failover purposes.
Federation: The process of joining a stand-alone WebSphere node to a WebSphere cell.
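The database-persistence scenario described above can be modeled in a few lines of shell. This is only a toy sketch: a temporary directory stands in for the session database, and the function and session names are invented.

```shell
#!/bin/sh
# Toy model of session "persistence to database": state written while one
# node handles a request survives that node's crash, because any surviving
# node can read it back from shared storage.
SESSION_DB=$(mktemp -d)                                  # stand-in for the database

put_session() { echo "$2" > "$SESSION_DB/$1"; }          # node stores session state
get_session() { cat "$SESSION_DB/$1" 2>/dev/null; }      # any node retrieves it

put_session user42 "cart=form1"   # request handled on node A
# ... node A crashes; node B picks up the next request for this session ...
get_session user42                 # node B recovers the same session state
```

The memory-to-memory models listed above achieve the same availability by copying the session table between node JVMs instead of through a shared store.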
Clustering LiveCycle products

If you install a LiveCycle product on an application server cluster, you must be aware of the following:

(All LiveCycle products except LiveCycle PDF Generator Professional and LiveCycle PDF Generator Elements) You are required to install the product on only one system, which does not necessarily need to be a server. After configuring the LiveCycle product, you can deploy it to the cluster.

(LiveCycle PDF Generator Professional and LiveCycle PDF Generator Elements) You must install the product on each node of the cluster.

LiveCycle must be clustered by using a homogeneous topology (all nodes in the cluster must be configured identically) on each application server it is deployed to. You can ensure that all modules are configured identically by configuring run-time properties in the single-installation staging area. The configuration is deployed using the single-entity approach; all nodes in a cluster are deployed to as if deploying to a single node.

Server clustering is supported by WebSphere Application Server ND and WebSphere Application Server Enterprise. Clustering is not supported by WebSphere Application Server Base or WebSphere Application Server Express.

For more information, see the appropriate chapter in this guide for the LiveCycle product you are clustering.

Caution: LiveCycle requires that all nodes in the cluster run the same operating system.

Clustering LiveCycle products involves the following tasks:

1. Installing the web servers
2. Installing instances of WebSphere Application Server
3. Installing WebSphere Application Server Network Deployment
4.
Creating the cluster:
    Creating a Deployment Manager profile and creating the Deployment Manager
    Creating application server profiles on all the nodes
    Starting all servers on all nodes that will become members of the cluster
    Federating the nodes to the Deployment Manager
    Creating the cluster
    Starting the cluster
    Configuring cluster resources
5. Deploying applications
6. Generating the WebSphere HTTP plug-in
7. Starting the HTTP server
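The cluster-creation steps above can be outlined as WebSphere 6.0 commands. This is a hedged sketch only: the Dmgr01 and AppSrv01 profile names, the host name, and the default Deployment Manager SOAP port 8879 are assumptions that may differ in your installation.

```shell
# On the Deployment Manager machine: start the cell.
[profiles root]/Dmgr01/bin/startManager.sh

# On each node: federate the stand-alone node into the Deployment Manager
# cell over its SOAP connector (host and port are example values).
[profiles root]/AppSrv01/bin/addNode.sh dmgr_host.example.com 8879

# After federation, the node agent on each node can be started directly.
[profiles root]/AppSrv01/bin/startNode.sh
```

With the node agents running, the cluster itself is then created and started from the Administrative Console, as described later in this guide.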
Supported topologies

The following sections discuss various topologies, both clustered and non-clustered, that can be employed.

Combining the web, application, and database servers

This topology consists of a web server, an application server, and a database server on the same node. This is the simplest topology and must be used for development only.

Combining the web and application servers

This topology can be considered for production in cases where the load on the user interface (including the web tier) is minimal, with a small number of users. Combining the web and application servers means that all Enterprise JavaBeans (EJB) look-ups are local, which reduces the overhead of doing a remote look-up. It also reduces the network overhead of a round trip between the web tier and the application tier. However, with both servers on the same node, if the web tier is compromised, both tiers are compromised, and if the web tier experiences a heavy load, the application server processing is affected, and vice versa. User response time is affected when users must wait a significant amount of time to get a page back because all server resources (CPU and/or memory) are being consumed by the application server. If the web tier has a large session size, the application could be deprived of the memory required to process messages off the Java Message Service (JMS) layer.

Combining the application and database servers

The simplest topology that should be considered for a production environment is a web server and a combined application server and database server. Use this topology only if you are sure that your database load will be minimal. In this scenario, the web server provides a redirection to the application server. The advantages of this topology are low cost, low complexity, and no need for load balancing.
The disadvantages of this topology are little redundancy, low scalability, the inability to perform updates and upgrades, and possibly low performance due to too many processes competing for the CPU.

Separate web, application, and database servers

This topology is the most common in production systems because it allows separate resources to be allocated to each of the tiers. In this case, the web server acts as a proxy to the web tier on the application server that hosts the web components. This level of indirection provides additional security by protecting the application server even if the web server is compromised.

Adding additional web servers

You can add web servers for scalability and failover. When using multiple web servers, the WebSphere HTTP plug-in configuration file must be applied to each web server. Failing to do so after introducing a new application will likely cause a 404 File Not Found error when a user tries to access the web application.
Adding additional application servers

This topology is used in most large-scale production systems, where the application servers are clustered to provide high availability and, based on the application server capabilities, failover and load balancing. Clustering application servers has these benefits:

    Allows you to use cheaper hardware configurations and still achieve higher performance
    Allows you to upgrade software on servers without downtime
    Provides higher availability (that is, if one server fails, the other nodes in the cluster pick up the processing)
    Provides the ability to leverage load distribution algorithms on the web server (by using load balancers) as well as on the EJB tier for load balancing requests
    Can provide faster average scalability and throughput

LiveCycle products are typically CPU-bound; as a result, performance gains are better achieved by adding more application servers than by adding more memory or disk space to an existing server.

Multiple JVMs

Vertical scaling with multiple JVMs offers the following advantages:

Better utilization of CPU resources: An instance of an application server runs in a single JVM process. However, the inherent concurrency limitations of a JVM process prevent it from fully utilizing the processing power of a machine. Creating additional JVM processes provides multiple thread pools, each corresponding to the JVM process associated with each application server process. This avoids the concurrency limitations and lets the application server use the full processing power of the machine.

Load balancing: Vertical scaling topologies can use the WebSphere Application Server workload management facility.

Process failover: A vertical scaling topology also provides failover support among application server cluster members.
If one application server instance goes offline, the other instances on the machine continue to process client requests. For more information on vertical clustering, go to the IBM website (topic fo/ae/ae/ctop_vertscale.html in the WebSphere Information Center).

Messaging topologies

You can configure embedded JMS in various messaging topologies:

    Multiple WebSphere Application Servers and a single JMS server within the cluster
    Multiple WebSphere Application Servers and a single JMS server that is not part of the cluster
    Multiple WebSphere Application Servers, each with its own messaging engine

Multiple WebSphere application servers and a single JMS server within the cluster: This topology is not recommended for any product because it does not provide scalability, load balancing, or load distribution. It provides failover only, for the application server and the JMS server. Only one server performs all processing, and the other server takes over if the primary server fails.
Multiple WebSphere application servers and a single JMS server that is not part of the cluster: This topology provides true load balancing as well as JMS failover, high availability, and load distribution.

Multiple WebSphere application servers, each with their own JMS server: This topology provides JMS failover, high availability, and load distribution, but not true JMS load balancing.

Unsupported topologies

The following topologies are not supported for LiveCycle.

Splitting the web container and EJB container: Splitting LiveCycle servers into presentation and business-logic tiers and running them on distributed computers is not supported.

Geographically distributed configuration: Many applications distribute their systems geographically to help distribute the load and provide an added level of redundancy. LiveCycle does not support this configuration because LiveCycle components cannot be pulled apart to run on different hosts; LiveCycle is deployed as a monolithic application.
2 Configuring the Application Servers

You must now configure your WebSphere Application Servers.

Preparing to install

Before installing WebSphere, the following configuration tasks must be performed:

Disk space: Ensure that the partition that will hold the application server has 10 GB of free disk space.

Note: IBM AIX maintenance-level packages are extracted to /usr/sys/ist.images and can be up to 1 GB in size.

IP address settings: All of the computers must have a fixed IP address and must be in the same DNS.

Installing the Deployment Manager

You must install the Deployment Manager (WebSphere Application Server Network Deployment) on an administrative system. For information on installing the Deployment Manager, see:

Installing WebSphere application server

You must install the WebSphere version required for running LiveCycle products. Install WebSphere on each node that will form the cluster, accepting all default configuration options, including default messaging.

Installing the WebSphere FixPacks

If you are running WebSphere Application Server 6.0.2, installing Fix Pack 15 upgrades your installation to version 6.0.2.15. You can obtain the Fix Pack from IBM at this location: www-1.ibm.com/support/docview.wss?rs=180&uid=swg #60215

You must also upgrade the Java JDK to SR4 or later; you can find the latest IBM JDK here:

Installing the CORBA FixPack

The following Fix Pack is required if you are installing WebSphere 6.0 with Fix Pack 15. If you install Fix Pack 17 or later, the CORBA Fix Pack is not required. To install this Fix Pack, you must have administrative rights in Windows, or be the actual root user in a UNIX environment, and be logged in with the same authority level when unpacking a fix, Fix Pack, or Refresh Pack.
To install the CORBA Fix Pack:

1. Download the WS-WAS-IFPK30872.pak Fix Pack file from the following link:
2. Unzip the Update Installer to the [appserver root]/updateinstaller directory and place the Fix Pack in the [appserver root]/updateinstaller/maintenance directory.
3. If WebSphere is currently started, stop it.
4. Open a command prompt, navigate to [appserver root], and run the following command:
   ./updateinstaller/update
   Note: The update command should be run from the [appserver root] directory to avoid errors.
5. Follow the wizard, accepting all default settings (if the wizard needs to upgrade the IBM JVM, follow the prompts to relaunch the wizard). Make sure to select the CORBA Fix Pack file as the maintenance pack to install.
6. Restart WebSphere.

Shutting down and restarting nodes and clusters

The procedures below detail how to shut down and restart the servers in a cluster, and should be referenced when you are instructed throughout this document to restart servers in the cluster.

To shut down a cluster:
1. In a web browser, type the URL of the Deployment Manager; for example, type:
2. In the left frame, select Servers and click the Clusters link.
3. Select the cluster and click Stop.

To shut down a node:
1. In a web browser, type the URL of the Deployment Manager; for example, type:
2. In the left frame, select System Administration and click Nodes.
3. Select the node and click Stop.

To restart the node agents from the command line:
1. Navigate to the [profile root]/bin directory of the application server on each node.
2. Run the startNode command:
   (Windows) startNode.bat
   (Linux, UNIX) startNode.sh
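The console-based shutdown steps above also have command-line equivalents, sketched here for reference. The server name server1 is the WebSphere default and may differ in your cell.

```shell
# Run from the [profile root]/bin directory on the node in question.

# Stop one application server instance (a cluster member).
./stopServer.sh server1

# Stop and restart the node agent itself.
./stopNode.sh
./startNode.sh
```

Stopping the node agent while cluster members are still running leaves those servers up but unmanaged by the Deployment Manager until the agent is restarted.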
Creating profiles

Profiles are an integral aspect of WebSphere 6 and provide a powerful means of creating complex clusters and diverse node management. Note that the concepts of profiles and nodes are directly correlated in WebSphere 6. To properly set up a cluster, you must first create a Deployment Manager profile that will run the Administrative Console; you can then add and federate the other nodes that will comprise your eventual cluster.

Creating a WebSphere Deployment Manager profile

On the node you have selected to host the Deployment Manager, you must create a Deployment Manager profile. This profile will contain the Administrative Console and will also host the cell to which the nodes (server-based profiles) that will comprise your cluster are federated.

To create a WebSphere Deployment Manager profile:

1. From a command prompt, change the current directory to [appserver root]/bin/profilecreator.
2. Start the WebSphere Profile Creation wizard by entering the following command:
   (Windows) pctwindows.exe
   (AIX) pctaix.bin
   (Solaris) pctsolaris.bin
   (Linux) pctlinux.bin
3. On the Welcome panel, click Next.
4. Select Create a Deployment Manager profile, and then click Next.
5. In the Profile Name box, type a name for the profile or accept the default, and click Next.
6. Click Next to accept the default directory in which to store the profile.
7. In the Node Name box, type a unique name to represent this profile's node. Typically, this name is in the form [DNS_Name]-[Node_Name].
8. In the Host Name box, type the full DNS name or IP address of this machine and click Next.
9. The Cell Name box defaults to a generated name such as [DNS_Name]CellXX, where XX represents the cell number. For example, if this is the first Deployment Manager created on this node, the number defaults to 01.
Type a new cell name, or accept the default, and click Next.
10. Record the port numbers used for the SOAP connector and the web container (HTTP and HTTPS), and then click Next.
Note: You will need these port numbers when you run Configuration Manager to configure the application server.
11. (Windows) Specify whether you want to run the server as a Windows service, specify the user account that will run the service, and then click Next.
12. Review the profile summary and click Next.
13. Click Finish.
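Besides the graphical wizard, WebSphere 6.0 also ships a wasprofile command in [appserver root]/bin that can script profile creation. The sketch below is a dry run that prints a candidate invocation; the flag names, template path, and all host/cell/node names are assumptions and should be verified against wasprofile -help on your installation:

```shell
#!/bin/sh
# Dry run: print a candidate wasprofile invocation for a Deployment Manager
# profile. All names, paths, and flags are assumptions -- verify with
# "wasprofile.sh -help" on your installation before running anything.
APPSERVER_ROOT=/opt/IBM/WebSphere/AppServer
CMD_CREATE="$APPSERVER_ROOT/bin/wasprofile.sh -create \
 -profileName Dmgr01 \
 -templatePath $APPSERVER_ROOT/profileTemplates/dmgr \
 -nodeName dmgrhost-Dmgr01 \
 -cellName dmgrhostCell01 \
 -hostName dmgrhost.example.com"
echo "$CMD_CREATE"
```

Scripted creation is useful when the same profile layout must be reproduced on several test or staging systems.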
Creating a WebSphere Application Server profile
When you install WebSphere Base, a default profile is created. You can create an additional profile to use in your cluster by following this procedure. WebSphere must be running when you perform this task.

To create a WebSphere Application Server profile:
1. From a command prompt, change the current directory to [appserver root]/bin/profilecreator.
2. Start the WebSphere Profile Creation wizard by entering the appropriate command:
(Windows) pctwindows.exe
(AIX) pctaix.bin
(Solaris) pctsolaris.bin
(Linux) pctlinux.bin
3. On the Welcome panel, click Next.
4. Select Create an Application Server profile, and then click Next.
5. In the Profile Name box, type a name for the profile or accept the default, and click Next.
6. Click Next to accept the default directory to store the profile.
7. In the Node Name box, type a unique name to represent this profile's node. Typically this name is in the form [DNS_Name]-[Node_Name]. In the Host Name box, type the full DNS name or IP address of this machine and click Next.
8. Record the port numbers used for the SOAP connector and the web container (HTTP and HTTPS), and then click Next.
Note: You will need these port numbers when you run Configuration Manager to configure the application server.
9. (Windows) Specify whether you want to run the server as a Windows service, specify the user account that will run the service, and then click Next.
10. Review the profile summary and click Next.
11. Click Finish.
Repeat steps 1 to 11 for each node in the cluster. For a vertical cluster, repeat these steps on the same machine where the first profile was created; otherwise, perform them on the other machines that make up the cluster.
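Because the same procedure is repeated on every node, the equivalent wasprofile invocations can be generated in a loop. As before, this is a dry run with placeholder host names and profile names; the flag names and "default" template path are assumptions to check against wasprofile -help:

```shell
#!/bin/sh
# Dry run: print one profile-creation command per cluster node.
# Host names, profile names, and the "default" template path are assumptions;
# verify flags with "wasprofile.sh -help" before running anything.
APPSERVER_ROOT=/opt/IBM/WebSphere/AppServer
NODES="node1.example.com node2.example.com"
COUNT=0
for HOST in $NODES; do
  COUNT=$((COUNT + 1))
  echo "$APPSERVER_ROOT/bin/wasprofile.sh -create \
 -profileName AppSrv0$COUNT \
 -templatePath $APPSERVER_ROOT/profileTemplates/default \
 -nodeName ${HOST%%.*}-AppSrv0$COUNT \
 -hostName $HOST"
done
```

For a vertical cluster, the generated commands would all run on one machine; for a horizontal cluster, run each command on its own node.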
Adding and removing nodes
Now that your profiles have been created, you must federate each of those profiles (nodes) into the Deployment Manager profile. The procedures below detail how to add a node to, or remove a node from, an existing WebSphere cell with multiple nodes.

Adding nodes
Before federating a node, ensure that the Deployment Manager is running. Check that you can ping the Deployment Manager from the new node by name, not just by IP address. Also ensure that the system clocks on the servers are synchronized to within 5 minutes of each other.

To federate a node:
1. From a command prompt, navigate to the [profiles root]/[profile name]/bin directory of the new node.
2. Run the addnode script (addnode.bat for Windows, addnode.sh for Linux/UNIX) with the Deployment Manager's host name and port as parameters; for example, type:
(Windows) addnode.bat [ND_ServerName] [ND_ServerPort]
(UNIX/Linux) addnode.sh [ND_ServerName] [ND_ServerPort]
In addition to federating the node to the cell, addnode also starts the node agent process. After the node is federated to a cell, the node agent is started with the startnode command, which is also located in the profile's bin directory. During this process, the node being federated communicates with the Deployment Manager using port 8879 by default.
The node agent pings all of the application servers on a node. When the node agent detects that an application server is not available, it tries to stop and restart that application server. On UNIX, it is a good idea to run the node agent as an operating system daemon process. On Windows, you can add the node agent as a service by using WASService, available in the bin directory of the base application server installation.
Alternatively, nodes can be federated from the Administrative Console of the Deployment Manager.
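As a concrete illustration of the addnode syntax above, the dry run below prints the federation command for a Linux node whose Deployment Manager listens on the default SOAP port 8879. The profile path and Deployment Manager host name are assumptions, not values from this guide:

```shell
#!/bin/sh
# Dry run: print the federation command for this node. The profile path,
# Deployment Manager host, and SOAP port 8879 (the default noted above)
# are assumptions -- substitute your own values.
PROFILE_BIN=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin
CMD_FEDERATE="cd $PROFILE_BIN && ./addnode.sh dmgrhost.example.com 8879"
echo "$CMD_FEDERATE"

# addnode starts the node agent itself; later restarts use startnode.
CMD_RESTART="cd $PROFILE_BIN && ./startnode.sh"
echo "$CMD_RESTART"
```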
To federate nodes from the Deployment Manager:
1. Navigate to System Administration > Nodes and click Add Node.
2. Select Managed Node and click Next.
3. Specify the following properties:
In the Hostname box, enter the host name or IP address of the node.
In the JMX connector port box, type the connector port to be used, or accept the default.
Deselect Include buses and Include applications.
Click OK.
4. Repeat steps 1 to 3 for each additional node to be added.
5. Log out of the Administrative Console, and then log back in. The federated nodes will be visible under System Administration > Nodes.
Removing nodes
Nodes can be removed either by using script files on each of the nodes or through the Administrative Console of the Deployment Manager.

To remove a node from a cluster using script files:
1. Verify that the Deployment Manager is running.
2. On each node, navigate to the bin directory of the profile running the node agent and run the appropriate removenode and cleanupnode scripts:
(Windows) removenode.bat
(Linux, UNIX) removenode.sh
(Windows) cleanupnode.bat
(Linux, UNIX) cleanupnode.sh

To remove a node from a cluster using the Administrative Console:
1. Verify that the Deployment Manager is running.
2. In a web browser, type the URL of the Deployment Manager Administrative Console.
3. In the left frame, select System Administration and click the Nodes link.
4. Select the node to be deleted and click Remove Node.
5. Verify that the node has been removed.

Deleting profiles
You can delete profiles that are no longer needed on your Deployment Manager and servers.

To delete a profile:
1. Open a command prompt and navigate to the [appserver root]/bin directory.
2. Run the following command from the console:
(Windows) wasprofile.bat -delete -profilename [profilename]
(UNIX/Linux) wasprofile.sh -delete -profilename [profilename]
Note: The deleted profile's directory and log files are not removed. If you attempt to create a new profile using the same name as the deleted profile, you will receive an error; you must manually delete the directory before creating the new profile.
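The script-based removal and deletion steps above can be summarized in one dry-run sketch. The install root and profile name are assumptions; note the final manual directory cleanup, which the Note above explains wasprofile -delete does not perform:

```shell
#!/bin/sh
# Dry run: print the node-removal and profile-deletion commands.
# The install root and profile name are assumptions -- adjust for your setup.
APPSERVER_ROOT=/opt/IBM/WebSphere/AppServer
PROFILE=AppSrv01
PROFILE_BIN=$APPSERVER_ROOT/profiles/$PROFILE/bin

echo "cd $PROFILE_BIN && ./removenode.sh"    # unfederate the node from the cell
echo "cd $PROFILE_BIN && ./cleanupnode.sh"   # clean up remaining node entries
CMD_DELETE="$APPSERVER_ROOT/bin/wasprofile.sh -delete -profilename $PROFILE"
echo "$CMD_DELETE"
# wasprofile does not remove the profile directory; delete it manually
# before reusing the profile name.
echo "rm -rf $APPSERVER_ROOT/profiles/$PROFILE"
```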