WebSphere Portal 8: Using GPFS file sharing in a Portal Farm

Test Configuration Owner: Mark E. Blondell
Configuration name: Test Infrastructure: WebSphere Portal 8 Using GPFS file sharing in a Portal Farm
Date of issue: May 01, 2012
Product Version: 8
Document Version: 1.1

Abstract

This document outlines the steps by which the WebSphere Portal System Verification Test (SVT) team installed, configured, and tested the file sharing of a Portal Farm in WebSphere Portal 8.0.

Content

Introduction

The environment included master (NSD) nodes and several farm client nodes. Portal content was served across the various farm nodes through the web server/TAM configuration. The software versions used in this test environment were as follows:

IBM WebSphere Application Server 8.0.0.3
IBM WebSphere Portal 8.0
IBM DB2 9.7 fp4
IBM HTTP Server 8
IBM Tivoli Directory Server (ITDS) 7
IBM Tivoli Access Manager 6.1
IBM General Parallel File System (GPFS) 3.5
IBM WebSphere eXtreme Scale 7.1 fp3
1. Configuration diagram

Stand-alone WebSphere Portal v8.0 testing environment (diagram not reproduced here).
2. Goals of test

The goal of this test was to use IBM GPFS file-sharing software to build multiple file-sharing images of Portal 8.0 and so create a portal farm environment. The environment used IBM GPFS file sharing across all the Portal images.

IBM's General Parallel File System (GPFS) provides file system services to parallel and serial applications. GPFS allows parallel applications simultaneous access to the same files, or different files, from any node that has the GPFS file system mounted, while maintaining a high level of control over all file system operations.

Two master nodes, called NSDs (network shared disks), were configured to allow application updates (WAS, Portal, fix packs, and so on) and to cover failover conditions. Both NSDs hold a copy of the existing Portal application. The NSDs have read/write (R/W) access to the GPFS file system, and the GPFS clients have read-only (R/O) access. The GPFS NSD component provides a method for cluster-wide disk naming and access. All the WebSphere Portal configurations are identical and share the common Customization, Community, and LikeMinds databases for user data. (A minimal sketch of the GPFS setup commands follows at the end of this section.)

The WP 8.0 build was installed on Master A (NSD1) and Master B (NSD2) in the /apps directory (this file system is R/W). The Portal farm clients mounted the /apps directory from the master NSD server as R/O. Tivoli WebSEAL with Tivoli Access Manager was used to provide SSO. HTTP server requests were routed to the various portal farm clients (not to the NSD masters).

After configuration and tuning, the following regression tests were run against the environment through an HTTP web server and TAM/WebSEAL:

A 3-day WCM rendering with CMIS (Content Management Interoperability Services) long run, plus a 1-day WCM authoring run.
A 1-day search run over Portal and WCM content.
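For reference, here is a minimal sketch of how the two-master GPFS cluster and the shared /apps file system might be created. The node names, descriptor file paths, and exact mmcrfs flags are illustrative assumptions (option syntax varies by GPFS release); only the mmmount invocations are taken from the machine details below.

  # Create a GPFS cluster with the two masters as primary/secondary
  # configuration servers (node file contents are illustrative).
  mmcrcluster -N /tmp/gpfs.nodes -p nsdmasterA -s nsdmasterB \
              -r /usr/bin/ssh -R /usr/bin/scp

  # Define the network shared disks from a disk descriptor file,
  # start GPFS on all nodes, and create the shared file system.
  mmcrnsd -F /tmp/gpfs.disks
  mmstartup -a
  mmcrfs /apps gpfs1nsd -F /tmp/gpfs.disks

  # Masters mount read/write; each farm client mounts read-only.
  mmmount /dev/gpfs1nsd /apps            # on the NSD masters (R/W)
  mmmount /dev/gpfs1nsd /apps -o ro      # on each farm client (R/O)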
3. Machine details

Farm master A (the Enable Farm mode task runs here; see the sketch after this table):
  mmmount /dev/gpfs1nsd /apps
  mmmount /dev/gpfs3nsd /profiles

Farm master B (used for the 24x7 test):
  mmmount /dev/gpfs2nsd /backup
  mmmount /dev/gpfs4nsd /profiles_bak

Read-only farm clients (four machines, each mounting the shared file system R/O):
  mmmount /dev/gpfs1nsd /apps -o ro

Maintenance/Search machine:
  mmmount /dev/gpfs1nsd /apps            ** mount as R/W
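The farm masters are where farm support is switched on. Below is a hedged sketch of the Enable Farm mode step and of verifying the GPFS mounts; the profile path and passwords are placeholders, and the exact enable-farm-mode parameters should be checked against the WebSphere Portal 8 farm documentation.

  # Enable farm mode on the master (run from the Portal profile's
  # ConfigEngine directory; path and passwords are placeholders).
  cd /apps/WebSphere/wp_profile/ConfigEngine
  ./ConfigEngine.sh enable-farm-mode -DWasPassword=waspwd -DPortalAdminPwd=wpspwd

  # Verify which nodes have the shared file system mounted, and confirm
  # that a farm client sees /apps read-only before starting Portal.
  mmlsmount gpfs1nsd -L
  mount | grep /apps        # expect "ro" among the options on client nodes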
3. Machine details (continued)

DB2 - database:   DB2 9.7 fp4 on an AIX 6.1 LPAR.
LDAP - auth:      ITDS 7 on Win2008; standalone LDAP with LDAP users.
TAM - 3rd party:  TAM 6.1 on Win2008; HTTP (80) and HTTPS (443) auth.
IHS - web server: IHS (IBM HTTP Server) 7 on an AIX 6.1 LPAR; port 80, with a custom http_plugin.xml for the farm (restart with ./apachectl stop and ./apachectl start; see the sketch below).
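The custom http_plugin.xml is what routes requests to the farm clients only, and not to the NSD masters. A small sketch of checking and reloading that configuration is shown below; the IHS installation path is an illustrative assumption.

  # Confirm which plug-in configuration file the web server loads
  # (the WebSpherePluginConfig directive in httpd.conf points at it).
  grep WebSpherePluginConfig /usr/IBM/HTTPServer/conf/httpd.conf

  # Restart IHS after editing the plug-in configuration.
  cd /usr/IBM/HTTPServer/bin
  ./apachectl stop
  ./apachectl start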
4. Configuration settings

WebSphere Portal 8 wiki main URL:
http://www-10.lotus.com/ldd/portalwiki.nsf/xpviewcategories.xsp?lookupname=product%20documentation

Refer to the topics listed in the steps below for more detailed instructions on installing and configuring the environment used for this test. The environment was installed with the following steps:

1. Install the DB2 server, using the topic "Planning for DB2 on Windows".
2. Install the TAM server, using the topic "Planning for external security managers".
3. Install the WebSphere Portal stand-alone server on AIX, using the topic "Installing WebSphere Portal on AIX":
   a. Install WebSphere Portal.
   b. Configure/transfer the remote database(s).
4. Install and configure the IBM HTTP web server, using the topic "Preparing a remote Web server on Windows".
5. Configure TAM security for the environment, using the topic "Configuring Tivoli Access Manager".
6. Tune all servers for Portal-related settings. Standalone security was enabled for the remote ITDS LDAP, and JMS messaging was enabled on the Portal master node. XML scripts were also used to populate the WCM and Pages and Portlets (PnP) pages/portlets used for testing (a command sketch follows this list).
7. Restart the environment and verify all settings.
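As an illustration of steps 3b and 6, the sketch below shows standard WebSphere Portal 8 ConfigEngine and XML access invocations that correspond to those steps. Passwords, host names, and file names are placeholders, and the relevant properties files must be filled in first, as described in the wiki topics above.

  # ConfigEngine tasks run from <wp_profile>/ConfigEngine on the Portal node.
  # Transfer the Portal databases to the remote DB2 server (step 3b);
  # connection values are set in wkplc_dbdomain.properties beforehand.
  ./ConfigEngine.sh database-transfer -DWasPassword=waspwd -DPortalAdminPwd=wpspwd

  # Enable standalone security against the remote ITDS LDAP (step 6),
  # after filling in the wp_security_ids helper properties file.
  ./ConfigEngine.sh wp-modify-ldap-security -DWasPassword=waspwd \
      -DparentProperties=config/helpers/wp_security_ids.properties \
      -DSaveParentProperties=true

  # Populate the test pages/portlets through the XML configuration
  # interface; xmlaccess.sh ships under <PortalServer_root>/bin, and
  # createTestPages.xml is a hypothetical input file.
  ./xmlaccess.sh -in createTestPages.xml -user wpsadmin -password wpspwd \
      -url http://portalmaster:10039/wps/config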
5. Test / User configuration

The following tests were conducted using a load generator tool (an open source tool similar to HP LoadRunner) to simulate multiple users performing their various tasks over specific periods of time.

Search test: 300 concurrent users logging into Portal over a 24-hour span and searching Portal and WCM content.

WCM rendering test: 400 concurrent users logging into Portal over a 72-hour span and accessing WCM pages with various portlets.
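The report does not name the load tool, so purely as an illustration of the two load profiles above, a non-GUI Apache JMeter run might be parameterized as follows. The .jmx test plans and property names are hypothetical, not artifacts from this test.

  # Hypothetical JMeter invocations matching the two load profiles.
  jmeter -n -t portal_search.jmx -Jusers=300 -Jduration_hours=24
  jmeter -n -t wcm_render.jmx   -Jusers=400 -Jduration_hours=72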