Configuring LiveCycle Application Server Clusters Using WebSphere 5.1
Adobe LiveCycle
Version 7.2
June 2007

© 2007 Adobe Systems Incorporated. All rights reserved.

Adobe LiveCycle 7.2 Configuring LiveCycle Application Server Clusters Using WebSphere 5.1 for Microsoft Windows, Linux, and UNIX
Edition 2.0, June 2007

If this guide is distributed with software that includes an end user agreement, this guide, as well as the software described in it, is furnished under license and may be used or copied only in accordance with the terms of such license. Except as permitted by any such license, no part of this guide may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, recording, or otherwise, without the prior written permission of Adobe Systems Incorporated. Please note that the content in this guide is protected under copyright law even if it is not distributed with software that includes an end user license agreement.

The content of this guide is furnished for informational use only, is subject to change without notice, and should not be construed as a commitment by Adobe Systems Incorporated. Adobe Systems Incorporated assumes no responsibility or liability for any errors or inaccuracies that may appear in the informational content contained in this guide.

Please remember that existing artwork or images that you may want to include in your project may be protected under copyright law. The unauthorized incorporation of such material into your new work could be a violation of the rights of the copyright owner. Please be sure to obtain any permission required from the copyright owner. Any references to company names and company logos in sample material or in the sample forms included in this software are for demonstration purposes only and are not intended to refer to any actual organization.

Adobe, the Adobe logo, LiveCycle, and Reader are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. IBM, AIX, DB2, and WebSphere are trademarks of International Business Machines Corporation in the United States, other countries, or both. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. Oracle is a trademark of Oracle Corporation and may be registered in certain jurisdictions. SUSE is a trademark of Novell, Inc. in the United States and other countries. Sun, Java, and Solaris are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. UNIX is a registered trademark of The Open Group in the US and other countries. All other trademarks are the property of their respective owners.

Adobe Systems Incorporated, 345 Park Avenue, San Jose, California 95110, USA.

Notice to U.S. Government End Users. The Software and Documentation are Commercial Items, as that term is defined at 48 C.F.R. §2.101, consisting of Commercial Computer Software and Commercial Computer Software Documentation, as such terms are used in 48 C.F.R. §12.212 or 48 C.F.R. §227.7202, as applicable. Consistent with 48 C.F.R. §12.212 or 48 C.F.R. §§227.7202-1 through 227.7202-4, as applicable, the Commercial Computer Software and Commercial Computer Software Documentation are being licensed to U.S. Government end users (a) only as Commercial Items and (b) with only those rights as are granted to all other end users pursuant to the terms and conditions herein. Unpublished-rights reserved under the copyright laws of the United States.
Adobe Systems Incorporated, 345 Park Avenue, San Jose, CA 95110-2704, USA. For U.S. Government End Users, Adobe agrees to comply with all applicable equal opportunity laws including, if appropriate, the provisions of Executive Order 11246, as amended, Section 402 of the Vietnam Era Veterans Readjustment Assistance Act of 1974 (38 USC 4212), and Section 503 of the Rehabilitation Act of 1973, as amended, and the regulations at 41 CFR Parts 60-1 through 60-60, 60-250, and 60-741. The affirmative action clause and regulations contained in the preceding sentence shall be incorporated by reference.

Contents

Preface ... 6
  Who should read this guide? ... 6
  Versions ... 6
  Conventions used in this guide ... 6
  Related documentation ... 7
  Updated LiveCycle product information ... 8
1 Overview ... 9
  About clustering application servers ... 9
  Failover ... 10
  Load balancing ... 10
  Web server load balancing ... 10
  Scalability ... 10
  WebSphere and distributed sessions ... 11
  Terminology ... 11
  Clustering LiveCycle products ... 11
  Supported topologies ... 12
  Combining the web, application, and database servers ... 12
  Combining the web and application servers ... 12
  Combining the application and database servers ... 13
  Separate web, application, and database servers ... 13
  Adding additional web servers ... 13
  Adding additional application servers ... 13
  Multiple JVMs ... 14
  Messaging topologies ... 14
  Embedded JMS ... 14
  Using WebSphere MQ ... 15
  Unsupported topologies ... 16
2 Configuring the Application Servers ... 17
  Preparing to configure ... 17
  Installing the Deployment Manager ... 17
  Installing WebSphere Application Server ... 17
  Installing the WebSphere FixPacks ... 19
  Creating an endorsed directory ... 19
  Configuring the shared libraries ... 19
  Increasing the SOAP time-out ... 21
  Preparing WebSphere MQ ... 21
  Installing the required software ... 22
  Configuring WebSphere MQ ... 25
  Setting the MQ_INSTALL_ROOT
3 Configuring the Web Servers ... 28
  Preparing for installation ... 28
  Installing the web server ... 28
  Generating and updating the IBM HTTP Server plug-in

4 Configuring WebSphere Clusters
  Federating all nodes ... 30
  Creating clusters by using the Deployment Manager ... 30
  Setting up clusters ... 31
  Configuring cluster members ... 32
  Starting clusters
5 Maintaining a WebSphere Cluster
  Shutting down and restarting nodes and clusters ... 33
  Adding and removing nodes ... 33
  Adding nodes ... 33
  Removing nodes
6 Configuring the Database
  Installing database drivers ... 36
  Configuring the databases ... 37
  Configuring Oracle databases ... 37
  Configuring DB2 databases ... 37
  Configuring a DB2 database for concurrent usage ... 39
  Configuring SQL Server databases ... 39
  Installing JTA stored procedures ... 40
  Enabling XA transactions for Windows Server
  Configuring datasources ... 41
  Configuring the Oracle data source ... 42
  Configuring the DB2 data source ... 43
  Configuring the SQL Server data source
7 Configuring Security for LiveCycle Policy Server
8 Configuring Messaging and Security
  Configuring JMS resources for LiveCycle Policy Server ... 50
  Configuring JMS resources with embedded messaging ... 50
  Configuring JMS resources for WebSphere MQ ... 52
  Configuring messaging for other LiveCycle products ... 55
  Configuring JMS resources with embedded messaging ... 55
  Configuring JMS resources for WebSphere MQ ... 60
  Configuring JMS resources ... 60
  Creating WebSphere MQ queue destinations ... 60
  Creating WebSphere MQ topic destinations ... 62
  Creating a queue connection factory ... 63
  Creating topic connection factories ... 64
  Configuring the listener ports ... 66
  Configuring WebSphere MQ JMS provider for Watched Folder ... 68
  Checking your WebSphere MQ configuration
9 Configuration Requirements
  Large document handling ... 69
  Common elements between LiveCycle products ... 69
  Configuring LiveCycle Workflow properties ... 69
  Scheduler property definition and values that can be added ... 69
  Cache configuration properties ... 71
  Adding property definitions
  Email receiver configuration ... 73

  LiveCycle Policy Server caching configuration setup ... 73
  Configuring caching properties for User Management ... 73
  Configuring JVM properties
10 Deploying LiveCycle Products ... 75
  Assembling LiveCycle.ear for LiveCycle PDF Generator in a cluster ... 75
  About deploying LiveCycle products to a cluster ... 75
  Summary of deployable components ... 75
  Deploying LiveCycle products ... 76
  Starting the application ... 80
  Verifying the LiveCycle Forms deployment ... 80
  Bootstrapping LiveCycle products ... 81
  Viewing log files
11 Issues with Clustering
  Failed to acquire job exception in WebSphere ... 83
  ORB threadpool for LiveCycle Policy Server and LiveCycle Document Security ... 83
  LiveCycle Forms preferences do not get propagated ... 83
  LiveCycle Reader Extensions UI is not supported in a cluster ... 84

Preface

This document explains how to deploy an Adobe LiveCycle server product in an IBM WebSphere Application Server 5.1 clustered environment.

Who should read this guide?

This guide provides information for administrators or developers responsible for installing, configuring, administering, or deploying LiveCycle products. The information provided is based on the assumption that anyone reading this guide is familiar with WebSphere Application Server; the Red Hat Linux, SUSE Linux, Microsoft Windows, AIX, or Sun Solaris operating systems; the MySQL, Oracle, DB2, or SQL Server database servers; and web environments.

Versions

This document describes clustering for LiveCycle 7.2.x.

Conventions used in this guide

This guide uses the following naming conventions for common file paths.

[LiveCycle root]
  Default value: (Windows) C:\Adobe\LiveCycle\; (Linux and UNIX) /opt/adobe/livecycle/
  Description: The installation directory that is used for all LiveCycle products. The installation directory contains subdirectories for Adobe Configuration Manager, product SDKs, and each LiveCycle product installed (along with the product documentation).

[appserver root]
  Default value: (Windows) C:\Program Files\WebSphere\AppServer; (Linux and UNIX) /opt/WebSphere/AppServer
  Description: The home directory of the application server that runs the LiveCycle products.

[dbserver root]
  Default value: Depends on the database type and your specification during installation.
  Description: The location where the LiveCycle database server is installed.

[product root]
  Default value: (Windows) C:\Adobe\LiveCycle\Assembler, C:\Adobe\LiveCycle\pdfgenerator, C:\Adobe\LiveCycle\Workflow, C:\Adobe\LiveCycle\Forms, C:\Adobe\LiveCycle\Print, C:\Adobe\LiveCycle\Formmanager; (Linux and UNIX) /opt/adobe/livecycle/assembler, /opt/adobe/livecycle/pdfgenerator, /opt/adobe/livecycle/workflow, /opt/adobe/livecycle/forms, /opt/adobe/livecycle/print, /opt/adobe/livecycle/formmanager
  Description: The directories where product-specific directories and files (such as documentation, uninstall files, samples, and license information) are located.

Most of the information about directory locations in this guide is cross-platform (all file names and paths are case-sensitive on Linux). Any platform-specific information is indicated as required.

Related documentation

This document should be used in conjunction with the Installing and Configuring LiveCycle guide or the Installing and Configuring LiveCycle Security Products guide for your application server. Throughout this document, specific sections in these installing and configuring guides are listed when more detailed information is available.

The Installing and Configuring LiveCycle 7.2.x guides apply to the following products:
- Adobe LiveCycle Assembler
- Adobe LiveCycle Forms 7.2
- Adobe LiveCycle Form Manager
- Adobe LiveCycle PDF Generator 7.2 and Adobe LiveCycle Print 7.2
- Adobe LiveCycle Workflow
- Adobe LiveCycle Workflow Designer
- Watched Folder 1.2

The Installing and Configuring LiveCycle Security Products guides apply to the following products:
- Adobe LiveCycle Document Security 7.2 and Adobe LiveCycle Reader Extensions 7.2
- Adobe LiveCycle Policy Server 7.2

Updated LiveCycle product information

Adobe Systems has posted a Knowledge Center article to communicate updated LiveCycle product information to customers. You can access the article at:

1 Overview

This section describes the benefits and issues associated with setting up clusters.

About clustering application servers

A cluster is a group of application server instances that run simultaneously and act as a single system, enabling load distribution, load balancing, scalability, high availability, and failover. Within a cluster, multiple server instances can run on the same computer (known as a vertical cluster), be located on different computers (known as a horizontal cluster), or form a combination of both. With clustering, client work can be distributed across several nodes instead of being handled by a single application server.

In a clustered configuration, application server instances are server members of the cluster, all of which must have identical application components deployed on them. Other than the configured applications, however, cluster members do not have to share any other configuration parameters. For example, you can cluster multiple server instances on one computer with a single instance on another computer, provided they are all running WebSphere 5.1.

By clustering, you can achieve one or more of the following benefits. How you implement clustering determines which benefits are achieved:
- Failover
- Load balancing
- Load distribution
- Scalability

Failover

Failover allows one application server instance to act as a backup for a failed application server instance and resume processing its tasks, so that processing continues despite the failure. Some products, such as LiveCycle Workflow, can recover process states at the step and process levels.

Load balancing

Load balancing is a technique used to distribute processing and communications activity evenly across a number of systems so that no single device is overwhelmed. If one server starts to become congested or overloaded, requests are forwarded to another server with more capacity. Application server load balancing is useful for managing the load between application server tiers.

Application servers can be configured to use a weighted round-robin routing policy that distributes requests according to the weights assigned to the members of a cluster. Configuring all servers in the cluster with the same weight produces a load distribution in which all servers receive the same number of requests; this configuration is recommended when all nodes in the cluster have similar hardware configurations. Weighting some servers more heavily sends more requests to them than to servers with lower weights. Preferred routing configurations can also be set up to ensure, for example, that only cluster members on a given node are selected (using the round-robin weight method) and that cluster members on remote nodes are selected only if a local server is not available. Application server load balancing is best used when balancing is needed between tiers.

Web server load balancing

Web server load balancing is useful for queuing and throttling requests. For IBM HTTP Server, the two most commonly used methods for load balancing and single-system imaging are round-robin DNS and the IBM eNetwork Dispatcher.

Round-robin DNS is a relatively simple method of load balancing, in which a DNS (Domain Name System) server provides name-to-address resolution and is always involved when a host name is included in a URL. A round-robin DNS server can resolve a single host name into multiple IP addresses, so that requests for a single URL (containing a host name) actually reference different web servers. The client requests a name resolution for the host name but receives different IP addresses, spreading the load among the web servers. In a simple configuration, the round-robin DNS server cycles through the list of available servers.

The IBM eNetwork Dispatcher provides load balancing services by routing TCP/IP session requests to different servers in a cluster, using advisor services to query and evaluate the load of each of the servers in the cluster. The manager service routes clients' requests to a web server based on the current load information received from these advisor services. The web servers process the requests and respond directly to the clients.

Scalability

Scalability in a cluster is the ability of an administrator to increase the capacity of the application dynamically to meet demand without interrupting or negatively affecting service. WebSphere clusters allow administrators to remove nodes from a cluster in order to upgrade components, such as memory, or to add nodes to the cluster, without bringing down the cluster itself.
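Cluster member weights for the weighted round-robin policy described above are normally set in the Administrative Console, but the same change can be scripted. The following is a hedged wsadmin (Jacl) sketch, not a procedure from this guide: the member names server1 and server2 and the Deployment Manager path are placeholders, and the weight attribute on ClusterMember objects should be verified against your fix level.

cat > /tmp/setWeights.jacl <<'EOF'
# Give server1 twice the share of requests that server2 receives.
set m1 [$AdminConfig getid /ClusterMember:server1/]
set m2 [$AdminConfig getid /ClusterMember:server2/]
$AdminConfig modify $m1 {{weight 4}}
$AdminConfig modify $m2 {{weight 2}}
$AdminConfig save
EOF
/opt/WebSphere/DeploymentManager/bin/wsadmin.sh -f /tmp/setWeights.jacl

Setting both weights to the same value reproduces the equal-distribution configuration recommended above for clusters whose nodes have similar hardware.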

WebSphere and distributed sessions

HTTP is a stateless protocol. However, an HTTP session provides the ability to track a series of requests and associate those requests with a specific user, allowing applications to appear stateful. Because applications store user information and other state, they need to track sessions in some way, such as through cookies or URL rewriting. In a cluster environment where multiple nodes can serve user requests, it is very important that user information is available during all requests, which means that up-to-date session data must be available at all of the nodes. Several scenarios are used to achieve this requirement:

- Persistence to database: All of the session data is stored in a database. If one server node fails, other servers can retrieve session data from the database and continue processing client requests. In this scenario, session data survives the crash of a server.
- Memory to memory: All of the session data is stored in memory at each node, and a replication service replicates the session data among nodes. Most application servers provide peer-to-peer replication. WebSphere provides three models of memory-to-memory replication:
  - Buddy system of single replica
  - Dedicated replication server
  - Peer-to-peer replication

The WebSphere Distributed Replication Service is used to replicate data among distributed processes in a cell; however, session replication is only scalable when used with relatively small objects or small numbers of objects.

Terminology

WebSphere uses specific terminology, which is defined here to avoid confusion:
- Server: An instance of a Java virtual machine (JVM).
- Node: A physical system running one or more WebSphere servers.
- Cell: A logical grouping of multiple nodes for administrative purposes.
- Cluster: A logical grouping of multiple application servers within a cell for administration, application deployment, load balancing, and failover purposes.
- Federation: The process of joining a stand-alone WebSphere node to a WebSphere cell.

Clustering LiveCycle products

If you install a LiveCycle product on an application server cluster, you must know the following:
- (All LiveCycle products except LiveCycle PDF Generator Professional and LiveCycle PDF Generator Elements) You are required to install the product on only one system, not necessarily a server. After configuring the LiveCycle product, you can deploy it to the cluster.
- (LiveCycle PDF Generator Professional and LiveCycle PDF Generator Elements) You must install the product on each node of the cluster.
- LiveCycle must be clustered by using a homogeneous topology (all nodes in the cluster must be configured identically) on each application server it is deployed to. You can ensure that all modules are configured identically by configuring run-time properties in the single-installation staging area.

- The configuration is deployed using the single entity approach; all nodes in a cluster are deployed as if deploying to a single node.
- Server clustering is supported by WebSphere Application Server ND and WebSphere Application Server Enterprise. Clustering is not supported by WebSphere Base or WebSphere Application Server Express.

For more information, see the appropriate chapter in this guide for the LiveCycle product you are clustering.

Caution: LiveCycle requires that all nodes in the cluster run the same operating system.

Clustering LiveCycle products involves the following tasks:
1. Installing the web servers
2. Installing instances of WebSphere Application Server
3. Installing the Deployment Manager
4. Creating the cluster:
   - Starting the Deployment Manager
   - Federating nodes into the cell
   - Creating the cluster using the Deployment Manager
   - Setting up the cluster
   - Configuring the cluster members
   - Starting the cluster
5. Deploying applications
6. Generating the WebSphere HTTP plug-in
7. Starting the HTTP server

Supported topologies

The following sections discuss various topologies, both clustered and non-clustered, that can be employed.

Combining the web, application, and database servers

This topology consists of a web server, an application server, and a database server on the same node. This is the simplest topology and must be used for development only.

Combining the web and application servers

This topology can be considered for production in cases where the load on the user interface (including the web tier) is minimal, with a small number of users. Combining the web and application servers means that all Enterprise JavaBeans (EJB) look-ups are local, which reduces the overhead of doing a remote look-up. It also reduces the network overhead of a round trip between the web tier and the application tier.

However, with both servers on the same node, if the web tier is compromised, both tiers are compromised; and if the web tier experiences a heavy load, application server processing is affected, and vice versa. User response time typically suffers when all server resources (CPU and/or memory) are consumed by the application server and users must wait a significant amount of time to get a page back. If the web tier has a large session size, the application could be deprived of the memory required to process messages off the Java Message Service (JMS) layer.

Combining the application and database servers

The simplest topology that should be considered for a production environment is a web server and a combined application server and database server. Use this topology only if you are sure that your database load will be minimal. In this scenario, the web server provides redirection to the application server. The advantages of this topology are low cost, low complexity, and no need for load balancing. The disadvantages are little redundancy, low scalability, inability to perform updates and upgrades, and possibly low performance due to too many CPU processes.

Separate web, application, and database servers

This topology is the most common in production systems because it allows separate resources to be allocated to each of the tiers. In this case, the web server acts as a proxy to the web tier on the application server that hosts the web components. This level of indirection provides additional security by protecting the application server even if the web server is compromised.

Adding additional web servers

You can add web servers for scalability and failover. When using multiple web servers, the WebSphere HTTP plug-in configuration file must be applied to each web server. Failure to do so after introducing a new application will likely cause a 404 File Not Found error when a user tries to access the web application.

Adding additional application servers

This topology is used in most large-scale production systems, where the application servers are clustered to provide high availability and, based on the application server capabilities, failover and load balancing. Clustering application servers has these benefits:
- Allows you to use cheaper hardware configurations and still achieve higher performance
- Allows you to upgrade software on servers without downtime
- Provides higher availability (if one server fails, the other nodes in the cluster pick up the processing)
- Provides the ability to leverage load-balancing algorithms on the web server (by using load balancers) as well as on the EJB tier for processing requests
- Can provide faster average scalability and throughput

LiveCycle products are typically CPU-bound; as a result, performance gains are better achieved by adding more application servers than by adding more memory or disk space to an existing server.
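As a sketch of the horizontal scaling just described, a new cluster member can be created on an additional node with wsadmin once that node has been federated into the cell (see "Configuring WebSphere Clusters"). This is an assumption-laden example, not a step from this guide: the names LCCluster, node2, and server2 and the Deployment Manager path are placeholders.

cat > /tmp/addMember.jacl <<'EOF'
# Create a new application server as a member of the existing cluster
# on a node that has already been federated into the cell.
set cluster [$AdminConfig getid /ServerCluster:LCCluster/]
set node    [$AdminConfig getid /Node:node2/]
$AdminConfig createClusterMember $cluster $node {{memberName server2}}
$AdminConfig save
EOF
/opt/WebSphere/DeploymentManager/bin/wsadmin.sh -f /tmp/addMember.jacl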

Multiple JVMs

Vertical scaling with multiple JVMs offers the following advantages:
- Better utilization of CPU resources: An instance of an application server runs in a single JVM process, but the inherent concurrency limitations of a JVM process prevent it from fully utilizing the processing power of a machine. Creating additional JVM processes provides multiple thread pools, each corresponding to the JVM process of an application server instance. This avoids the concurrency limitations and lets the application server use the full processing power of the machine.
- Load balancing: Vertical scaling topologies can use the WebSphere Application Server workload management facility.
- Process failover: A vertical scaling topology also provides failover support among application server cluster members. If one application server instance goes offline, the other instances on the machine continue to process client requests.

For more information on vertical clustering, go to the IBM website at fo/ae/ae/ctop_vertscale.html

Messaging topologies

The following sections discuss various messaging topologies that can be employed.

Embedded JMS

Embedded JMS is not recommended for WebSphere Application Server clusters. Although embedded JMS can function in a cluster, data cannot be effectively shared between embedded JMS nodes in a cluster. Also, because nodes are not handled as a single entity, deployment is unsatisfactory and failover is not supported. Finally, embedded JMS is not scalable. In a production environment, WebSphere MQ messaging is required.

You can configure embedded JMS in various messaging topologies:
- A single WebSphere Application Server with a single embedded JMS server
- Multiple WebSphere Application Servers with a single embedded JMS server
- Multiple WebSphere Application Servers, each with its own embedded JMS server

A single JMS server in a single-server or multiserver environment is simple to configure and does not require JMS server management. Because the data store (where messages are stored) is not available for tuning or management, the single JMS server cannot be tuned as it is started, nor can it be managed by the WebSphere Application Server.

Using a JMS server for each WebSphere Application Server is the preferred configuration for clustering WebSphere Application Servers, even though the JMS servers are not clustered. It provides redundancy with minimal additional configuration. However, all messages are node-locked; that is, if one JMS server on a WebSphere Application Server fails, its messages remain on the node and cannot be recovered until the server recovers. Also, you cannot load balance, because each JMS server is local to its WebSphere Application Server.

Depending on which topology is selected, connection factories and topics are configured differently:

- Single WebSphere Application Server and a single JMS server: Because this configuration has one embedded JMS server, the topics can be defined at the cell level.
- Multiple WebSphere Application Servers and a single JMS server: With only one embedded JMS server, the topics need to be defined at the node level on a server within the cluster running the JMS server, with both topic connection factories pointing to that JMS server. Some LiveCycle products, such as LiveCycle Forms and LiveCycle Form Manager, do not support this topology.
- Multiple WebSphere Application Servers, each with its own JMS server: In this configuration, the connection factories and topics need to be defined at the node level to leverage the individual JMS servers.

Using WebSphere MQ

In various topologies, you can configure these components:
- Multiple WebSphere MQ managers
- One WebSphere MQ manager per WebSphere Application Server cluster, with WebSphere MQ in parent-child mode

It is recommended that WebSphere MQ be installed on a separate server from any of the cluster nodes. The WebSphere MQ client and JMS are installed on each WebSphere Application Server node in the cluster.

Multiple WebSphere MQ managers

You can configure the WebSphere Application Server to connect to another WebSphere MQ server if the queue manager fails. To implement this configuration, all WebSphere MQ queue managers need to be configured. There are more processes to manage, because each WebSphere MQ installation needs the queue manager, queue broker, and message listener configured. Although multiple WebSphere MQ servers may be running, this configuration does not support failover, because a message delivered to one queue manager from one WebSphere Application Server can be processed only by that WebSphere MQ queue manager.

One WebSphere MQ manager per WebSphere Application Server cluster; WebSphere MQ in parent-child mode

When installing one WebSphere MQ manager per WebSphere Application Server cluster in parent-child mode, there is effectively one WebSphere MQ manager per cluster of WebSphere Application Servers. Therefore, the topics need to be defined at the node level, and the topic connection factory needs to be configured with Clone Support enabled and a specific Client ID set for the connection factory. If the parent queue manager fails, the child queue manager processes future messages without the WebSphere Application Servers having to be reconfigured. However, there is additional configuration overhead in setting up broker communication and broker topology.

Depending on which topology you select, connection factories and topics are configured differently:

- One WebSphere MQ per WebSphere Application Server: The topics need to be defined at the node level. Because there is a single WebSphere MQ manager per server, the connection factories need to be defined at the node level to leverage both WebSphere MQ queue managers.
- One WebSphere MQ per WebSphere Application Server cluster, with WebSphere MQ in parent-child mode: There is effectively one WebSphere MQ manager per cluster of WebSphere Application Servers, so the topics need to be defined at the node level, and the topic connection factory needs to be configured with Clone Support enabled and a Client ID set for the connection factory.

For details about the topology setup, see Chapter 2 of the Pub-Sub User guide at www-306.ibm.com/software/integration/mqfamily/library/manualsa/.

Unsupported topologies

The following topologies are not supported for LiveCycle.

Splitting the web container / EJB container

Splitting LiveCycle servers into presentation and business-logic tiers and running them on distributed computers is not supported.

Geographically distributed configuration

Many applications distribute their systems geographically to help spread the load and provide an added level of redundancy. LiveCycle does not support this configuration, because LiveCycle components cannot be pulled apart to run on different hosts; LiveCycle is deployed as a monolithic application.

2 Configuring the Application Servers

Install WebSphere Application Server Base on each node in the cluster. Install the Deployment Manager on a separate system, not on one of the clustered nodes.

Preparing to configure

Before installing WebSphere, the following configuration tasks must be performed:

- Disk space: Ensure that the partition that will hold the application server has 10 GB of free disk space.
  Note: IBM AIX maintenance-level packages are extracted to /usr/sys/inst.images and can be up to 1 GB.
- IP address settings: All of the computers must have a fixed IP address and must be in the same DNS.
- AIX port setting: If you are using AIX, the WebSphere default port 9090 conflicts with the AIX service called wsmserver. In this case, a transport bind exception is generated and the product does not deploy successfully. You must change the wsmserver port to a number other than 9090. To do so, edit the /etc/services file, search for the following line, and change 9090 to another port number, such as 9091 (a scripted example appears just before the installation procedure below):
  wsmserver 9090/tcp
- Swap space: You must increase the amount of swap space configured on your OS to 2 GB of swap space for each 8 MB of RAM installed on the computer. You must make this change on each computer in the cluster.

Installing the Deployment Manager

You must install the Deployment Manager (WebSphere Application Server Network Deployment) on an administrative system. For information on installing the Deployment Manager, see: c/info/welcome_nd.html

Installing WebSphere Application Server

You must install WebSphere Application Server version 5.1.1.5 for running LiveCycle products. The administrator user account that is used to perform the WebSphere installation must have a user name of up to 12 characters in length.

The WebSphere embedded messaging server is supported for use in development environments where server workloads are expected to be low. However, in production environments, you must install WebSphere MQ. The embedded messaging server option is not required when WebSphere MQ is installed.

The following steps detail how to install WebSphere Application Server. It is assumed that you have downloaded and unzipped the installation file to an installation directory and have opened a system terminal and navigated to that directory. Then, for each WebSphere Application Server node, perform the following procedure.
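Before running the installer, the preparation items above can be checked, and the AIX port change applied, from a shell. This is a minimal sketch under the assumptions noted in the comments; back up /etc/services before editing it.

# Confirm at least 10 GB free on the partition that will hold WebSphere
# (the /opt mount point is an example).
df -k /opt
# (AIX) Check the swap space currently configured and look for the
# conflicting wsmserver entry.
lsps -a
grep wsmserver /etc/services
# (AIX) Move wsmserver from port 9090 to 9091, assuming the stock entry.
cp /etc/services /etc/services.bak
sed 's|^wsmserver[[:space:]]*9090/tcp|wsmserver 9091/tcp|' /etc/services.bak > /etc/services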

To install WebSphere Application Server 5.1:
1. Run the installation program (install.exe, install.sh, or launchpad.sh) from the appropriate directory for your operating system.
2. Select Install the product.
3. Accept the License Agreement.
4. If you are installing WebSphere under AIX, errors may appear about missing filesets, such as X11.fnt.ucs.ttf* (AIX Windows Unicode TrueType fonts). You can ignore these error messages.
5. Select the Custom installation, and then select only the following features:
   - Application Server
   - Administration: Scripted Administration and Administrative Console
   - Ant and Deployment Tools: Deploy Tool and Ant Utilities
   - Web Server Plug-ins: Plug-in for IBM HTTP Server v2.0
   - Performance and Analysis Tools: Tivoli Performance Viewer, Dynamic Cache Monitor, Performance Servlet, and Log Analyzer
   Note: Be sure to deselect IBM HTTP Server and Plug-in for IBM HTTP Server, which are enabled by default, and select Plug-in for IBM HTTP Server v2.0.
6. Specify the directory where WebSphere will be installed. On some operating systems, you may have to specify the location of your HTTP server.
7. In the Node Name box, specify the name of the server and, in the Host Name or IP Address box, specify the fully qualified host name or the IP address of the server. The Install Wizard confirms the packages to be installed.
8. Click Next.
9. After the installation is complete, select whether to register the product.
10. Click Next, and then click Finish.
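The wizard steps above can also be run unattended across many nodes. WebSphere 5.1's InstallShield-based installer accepts a response file; the option names below are an assumption and should be verified against the responsefile template shipped on your installation media.

# Hypothetical silent install driven by a response file copied from the
# installation image and edited to select the features listed in step 5.
cp responsefile.txt /tmp/was51.resp
vi /tmp/was51.resp      # set the install location, node name, and features
./install.sh -options /tmp/was51.resp -silent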

Installing the WebSphere FixPacks

Installing Fix Pack 1 upgrades your installation of WebSphere Application Server 5.1 to version 5.1.1. You can obtain the Fix Pack from IBM at this location:
www-1.ibm.com/support/docview.wss?rs=180&context=sseqtp&uid=swg

Installing Cumulative Fix 5 upgrades version 5.1.1 to version 5.1.1.5. You can obtain Cumulative Fix 5 from IBM at this location:
www-1.ibm.com/support/docview.wss?rs=180&uid=swg

Creating an endorsed directory

If you have not installed LiveCycle, you must do so now before continuing. You must create an endorsed directory in the WebSphere directory. The endorsed directory must be placed in the same location, within the WebSphere directory, on each node that will be in the cluster and on the Deployment Manager.

To create an endorsed directory:
1. Navigate to the [appserver root]/java/jre/lib directory and create a directory called endorsed.
2. Copy the following files from the [LiveCycle root]/components/um/endorsed directory to the endorsed directory you just created:
   - dom3-xercesimpl.jar
   - dom3-xml-apis.jar
   - xalan.jar
3. (Solaris) Move the [appserver root]/java/jre/lib/xml.jar file to the /java/jre/lib directory.

Note: For more information, see the Adobe Knowledge Center article c4863, LiveCycle Forms issues with WebSphere.

Configuring the shared libraries

You must copy the DocumentServicesLibrary.jar file to the WebSphere directory on each base node in the cluster, and then configure WebSphere shared libraries. You also need to configure a new classloader that uses the shared library.

Note: All of the tasks in this section must be performed for each node in the cluster. You must also copy the Document Services Library file and configure the shared library files on the Deployment Manager. Ensure that you use the same directory path for all servers.

(All products except LiveCycle Policy Server) To configure the DocumentServicesLibrary file:
1. Copy the DocumentServicesLibrary.jar file from the [LiveCycle root]/components/csa/websphere/lib/adobe directory to the [appserver root]/optionalLibraries directory.
2. In the WebSphere Administrative Console navigation tree, select Environment > Shared Libraries and click New.

3. On the Configuration page, specify the following information:
   - Under Scope, select all nodes and servers and change the scope to Cell.
   - In the Name box, type DocumentServicesLibrary.
   - In the Classpath box, type [appserver root]/optionalLibraries/DocumentServicesLibrary.jar, and then click Apply.
4. Click OK, and then save your changes to the Master Configuration.

(LiveCycle Policy Server only) To configure the shared library files:
1. For each node in the cluster, copy the edc-server-spi.jar file from [LiveCycle root]/policyserver/sdk/spi-lib/edc-server-spi.jar to the [appserver root]/optionalLibraries directory.
2. In the WebSphere Administrative Console navigation tree, select Environment > Shared Libraries.
3. Click Browse Servers, select the server you are using for LiveCycle Policy Server, and then click OK.
4. Click Apply, and then click New.
5. On the Configuration page, type the following information:
   - Under Scope, select all nodes and servers and change the scope to Cell.
   - In the Name box, type EDCApplication.
   - In the Classpath box, type [appserver root]/optionalLibraries/edc-server-spi.jar, and then click Apply.
6. Click OK, and then click Save.
7. Save changes to the Master Configuration.

To configure the classloader:
1. In the WebSphere Administrative Console navigation tree, select Servers > Application Servers.
2. Click the server instance that you are configuring (for example, server1).
3. On the Configuration tab, under Additional Properties, click Classloader.
4. Click New. In the Class loader mode list, keep the default value Parent_First, and click OK.
5. On the page that appears, click the Classloader Id link of the newly created classloader instance.
6. On the page that appears, under Additional Properties, click Libraries, and then click Add.
7. In the Library Name list, select DocumentServicesLibrary and click OK.
8. (LiveCycle Policy Server only) In the Library Name list, select EDCApplication and click OK.
9. Save your changes to the Master Configuration.
10. Repeat steps 1-9 for each server in the cluster.
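The Administrative Console steps above are the documented route; when many nodes are involved, the same shared library can be created with a short wsadmin (Jacl) script run against the Deployment Manager. This is a hedged sketch: the cell name mycell and the paths are placeholders, and the Library attribute names should be verified against your WebSphere 5.1 configuration before use.

cat > /tmp/createSharedLib.jacl <<'EOF'
# Create the cell-scoped DocumentServicesLibrary shared library.
set cell [$AdminConfig getid /Cell:mycell/]
$AdminConfig create Library $cell {{name DocumentServicesLibrary} {classPath /opt/WebSphere/AppServer/optionalLibraries/DocumentServicesLibrary.jar}}
$AdminConfig save
EOF
/opt/WebSphere/DeploymentManager/bin/wsadmin.sh -f /tmp/createSharedLib.jacl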

(LiveCycle PDF Generator) To set the JVM properties:
1. In the WebSphere Administrative Console navigation tree, select Servers > Application Servers.
2. Click the server instance that you are configuring (for example, server1).
3. Click Process Definition and, under Additional Properties, select Java Virtual Machine.
4. Under Additional Properties, select Custom Properties and click New.
5. In the Name box, type java.io.tmpdir; in the Value box, type a shared directory location that is accessible to all servers in the cluster; and click OK.
6. Click New and, in the Name box, type com.ibm.websphere.ejbcontainer.poolSize.
7. In the Value box, type the appropriate value:
   - (LiveCycle PDF Generator for PostScript) pdfg-ps-all#adobe-pstopdfejb.jar#pstopdfdequeue=1,3:*=40,40
   - (LiveCycle PDF Generator Professional or LiveCycle PDF Generator Elements) pdfg-all#aaes-ejb.jar#dequeue=1,1:pdfg-all#adobe-pstopdfejb.jar#pstopdfdequeue=1,3:*=40,40
8. Click OK.

(LiveCycle Policy Server only) To update the qname.jar file:
1. Copy the jax-qname.jar file from the [LiveCycle root]/policyserver/sdk/lib/websphere directory to the [appserver root]/lib directory.
2. Delete the qname.jar file from the [appserver root]/lib directory. You must stop the application server to do this step.

Increasing the SOAP time-out

For each node in the cluster and on the Deployment Manager, you must increase the default SOAP time-out by editing the soap.client.props file, which is located here:
- [appserver root]/appserver/properties (for a stand-alone server)
- [appserver root]/deploymentmanager/properties (for managed servers)

The default value of the com.ibm.SOAP.requestTimeout property is 180 seconds. You must change the value to 1800, and then restart the server or the Deployment Manager and all of the nodes.

Preparing WebSphere MQ

This section applies only to LiveCycle Form Manager, LiveCycle Workflow, LiveCycle PDF Generator, and Watched Folder (for use with LiveCycle Workflow and LiveCycle Assembler).

If you are using WebSphere as your application server, you must install WebSphere MQ in production environments and in development environments where server workloads are expected to be high. In development environments where you expect low server workloads, you can use the WebSphere embedded messaging server.

This section describes how to configure WebSphere MQ in a non-clustered environment. In this guide, [MQ_root] indicates the location at which WebSphere MQ is installed. The default installation location varies depending on the operating system you are using:
- (Windows) C:\Program Files\IBM\WebSphere MQ
- (Solaris, Linux) /opt/mqm
- (AIX) /usr/mqm

Installing the required software

You must install all of the required software according to the following procedure before proceeding with the rest of the instructions in this section.

Note: WebSphere MQ and WebSphere Application Server do not need to be installed on the same computer.

To install the required software:
1. Ensure that the patch levels are correct for the operating system. Computers running AIX require version or later.
2. If you are installing WebSphere MQ on a computer that is running WebSphere Application Server, stop the application server before the installation.
3. If you are installing WebSphere MQ on a computer that has WebSphere Application Server installed, go to the [appserver root]\bin directory and run the setupcmdline script to ensure that the environment variables are set correctly.
4. If you are installing WebSphere MQ on a computer that does not have WebSphere Application Server installed, install WebSphere MQ Client and Java Messaging on the machine where WebSphere Application Server is installed.
5. Proceed to "To install WebSphere MQ on Windows:" on page 22 or "To install WebSphere MQ on UNIX:" on page 23.

To install WebSphere MQ on Windows:
1. Start the WebSphere MQ installer.
2. If you are installing WebSphere MQ on a computer that does not have WebSphere Application Server or Embedded Messaging installed, proceed to step 4. If you are installing WebSphere MQ on a computer that has WebSphere Application Server installed with Embedded Messaging, you are prompted to either modify or remove it. Select Remove, click Next and, when prompted to either keep or remove existing queues, select Keep.
3. After the removal of the Embedded Messaging server is complete, start the installer again and select a Custom installation.

4. On the Features panel of the installer, ensure that Server, Client, Java Messaging, and Development Toolkit are all selected for installation, and proceed with the installation.
   Note: If you have installed WebSphere MQ on a computer that does not have WebSphere Application Server installed, you must also install WebSphere MQ Client and Java Messaging on the computer running WebSphere Application Server.
   Note: For computers with both WebSphere Application Server and WebSphere MQ, the MQ client and Java Messaging must be installed on all nodes in the cluster and on the Deployment Manager. They must be installed in the same location on each server, and MQ_INSTALL_ROOT must be set at the node level for each node in the cluster as well as for the Deployment Manager.
5. When the installation is complete, the Prepare WebSphere MQ Wizard appears. Use the wizard to configure MQ for your environment.
6. Proceed to "To complete the post-installation tasks:" on page 23.

To install WebSphere MQ on UNIX:
1. Read and follow the WebSphere MQ Quick Beginnings document for the operating system on which you plan to install WebSphere MQ.
2. When given the option, install Server, Client, and Java Messaging.
3. If you have installed WebSphere MQ on a computer that does not have WebSphere Application Server installed, install WebSphere MQ Client and Java Messaging on the computer running WebSphere Application Server.
4. Proceed to "To complete the post-installation tasks:" on page 23.

To complete the post-installation tasks:
1. (Windows) Stop WebSphere MQ and use Task Manager to stop any tasks that begin with amq.
2. From a command prompt, install Fix Pack 10 (formerly CSD10). Fix Pack 10 is required for the installation of the Java Messaging support, which is required by LiveCycle products. You can obtain the Fix Pack from IBM at this location:
   D10&uid=swg &loc=en_US&cs=utf-8&lang=en
3. Define a local user and add the user to the mqm group, which was created during the WebSphere MQ installation. The user name must be less than 12 characters long. You will later select this user when configuring the JMS connection factories in WebSphere Application Server.
   On Windows, the WebSphere MQ installation creates both the mqm group and a user named MUSR_MQADMIN. Do not alter this user, because WebSphere MQ uses it to run some services. If you have altered this user, run the following command from the [MQ_root]\bin folder to reset it:
   amqmsrvn -regserver

4. (UNIX) Log on using the mqm user account you created and update the CLASSPATH, MQ_JAVA_INSTALL_PATH, MQ_JAVA_DATA_PATH, and LD_LIBRARY_PATH environment variables, as required, from a command prompt. For example, under Solaris, the paths could look like this:
   export CLASSPATH=/opt/mqm/java/lib/com.ibm.mq.jar:/opt/mqm/java/lib/com.ibm.mqjms.jar:/opt/mqm/java/lib/connector.jar:/opt/mqm/java/lib/jms.jar:/opt/mqm/java/lib/jndi.jar:/opt/mqm/java/lib/jta.jar:/opt/mqm/java
   export MQ_JAVA_INSTALL_PATH=/opt/mqm/java
   export MQ_JAVA_DATA_PATH=/var/mqm
   export LD_LIBRARY_PATH=/opt/mqm/java/lib
5. Copy the following scripts from the [LiveCycle root]/configurationmanager/scripts/mq_server directory to the [MQ_root]/bin directory:
   - configure_mq
   - LC_Queues.mqsc
   - setmqclasspath
   - start_mq
   Note: (UNIX) Copy the scripts using the mqm user account you created when you installed WebSphere MQ.
6. Copy the LC_JmsDefs.scp script from the [LiveCycle root]/configurationmanager/scripts/mq_server directory to the [MQ_root]/Java/bin directory.
   Note: (UNIX) Copy the script using the mqm user account you created when you installed WebSphere MQ.
7. From a command prompt, run the configuration script:
   - (Windows) From the directory containing the configure_mq.bat file, enter the following command:
     configure_mq.bat queue_manager_name
     where queue_manager_name is an appropriate queue manager name for the node.
   - (UNIX) From the directory containing the configure_mq.sh file, enter the following command:
     configure_mq.sh queue_manager_name MQ_root MQ_Java_data_path
     where queue_manager_name is an appropriate queue manager name for the node, MQ_root is the location at which WebSphere MQ is installed (by default, /usr/mqm), and MQ_Java_data_path is the data install path (by default, /var/mqm).
   Note: If you close or lose this prompt for any reason, you must run the setmqclasspath script by entering the following command in any subsequent command windows that you use to administer WebSphere MQ:
   - (Windows) setmqclasspath.bat
   - (UNIX) setmqclasspath.sh MQ_root MQ_Java_data_path
   Note: (UNIX) Run the script using the mqm user account you created when you installed WebSphere MQ.
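After configure_mq completes, a quick check that the queue manager and its queues exist can look like the following. The queue manager name LC_QM is a placeholder for whatever queue_manager_name you passed to the script, and on UNIX the commands should be run as the mqm user.

# List queue managers and confirm the new one is running.
dspmq
# Display the local queues created by the LC_Queues.mqsc script.
echo "DISPLAY QLOCAL(*)" | runmqsc LC_QM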