EMC VIPR SRM 3.7: GUIDELINES FOR CONFIGURING MULTIPLE FRONTEND SERVERS

ABSTRACT

This document describes how to deploy two frontend servers in an EMC ViPR SRM 3.7 installation. The steps presented in this document can be easily extended to larger installations that require more than two frontend servers.

October 2015

WHITE PAPER

To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store.

Copyright 2015 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.

Part Number H14594

TABLE OF CONTENTS

PURPOSE AND GOALS
AUDIENCE
INTRODUCTION
ARCHITECTURE
SLAVE FRONTEND SERVER DEPLOYMENT
    Installing the Slave Frontend Server
    Redirecting Web-Applications to the Primary Frontend
    Restarting Tomcat
CONFIGURING STORED REPORTS
CONFIGURING THE SCHEDULED REPORTS
EDITING THE IMPORT PROPERTY TASK
APPENDIX A: FAQ
    SolutionPacks Report Installation
    SolutionPacks Upload
    SolutionPacks Formula
    SolutionPacks Property-Mapping
APPENDIX B: HAPROXY LOAD BALANCER CONFIGURATION
APPENDIX C: F5 LOAD BALANCER CONFIGURATION
APPENDIX D: REDIRECTING REQUESTS TO THE FRONTEND SERVER
    F5 BIG-IP URL Redirection Using iRules
    HAProxy URL Redirection
APPENDIX E: NFS SHARE FOR STORED REPORTS
APPENDIX F: ELIMINATING MULTIPLE COPIES OF A SHARED REPORT

PURPOSE AND GOALS

This document describes how to deploy two frontend servers in an EMC ViPR SRM 3.7 installation. You can easily extend the procedures in this document to cover installations that require more than two frontend servers. EMC recommends that each frontend server serve no more than 10 concurrent and active users.

AUDIENCE

This document is intended for anyone planning to deploy multiple frontend servers in an SRM/SAS installation. It assumes that the master frontend server is already installed and running. The master frontend server (the first server to be installed) is used for management tasks. Refer to the EMC ViPR SRM 3.7 Installation and Configuration Guide if you need help with deploying the master frontend server.

INTRODUCTION

The performance and scalability guidelines recommend one frontend server for every 10 active and concurrent users. If you plan on having more than 10 users, you need to deploy more frontend servers. Using a load balancer to distribute processing among multiple frontend servers is highly recommended. Although this guide covers the F5 and HAProxy load balancers, they are provided only as examples; you can use the load balancer of your choice.

In a multiple frontend server environment, only one frontend server should handle the web-applications listed under the Administrator panel. These web-applications are:

- Alerting-Frontend
- APG-Web-Service
- Centralized-Management
- Compliance-Frontend
- Device-Discovery
- Mib-Browser

This frontend server is called the master or primary frontend server. The primary frontend server can serve the /APG/ section as well. All other frontend servers, which serve the /APG/ section only, are called slave frontend servers.
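As a quick way to tell which role a given server is playing, you can probe its web-application contexts over HTTP. This is only a sketch: the hostname is a placeholder, while port 58080 and the context paths are the ones used throughout this guide. On a slave configured as described in the following sections, only /APG/ should be served locally.

    #!/bin/sh
    # Probe a frontend server to see which web-applications it answers for.
    # HOST is a placeholder; replace it with the server you want to check.
    HOST=frontend-01
    for app in APG centralized-management alerting-frontend \
               device-discovery mib-browser compliance-frontend; do
        code=$(curl -s -o /dev/null -w '%{http_code}' "http://$HOST:58080/$app/")
        echo "$app -> HTTP $code"
    done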

ARCHITECTURE

SLAVE FRONTEND SERVER DEPLOYMENT

INSTALLING THE SLAVE FRONTEND SERVER

To install the slave frontend:

1. Ensure the frontend server is installed as described in the EMC ViPR SRM 3.7 Installation and Configuration Guide.

2. Copy the following files from the master frontend server to the matching locations on the slave frontend server. You can use the scp command to do this, as shown in the sketch after this list. EMC recommends that you back up the files on the slave frontend server before overwriting them:

- <APG>/Web-Servers/Tomcat/<instance-name>/conf/server.xml
- <APG>/Web-Servers/Tomcat/<instance-name>/conf/Catalina/localhost/APG.xml
- <APG>/Web-Servers/Tomcat/<instance-name>/conf/Catalina/localhost/APG-WS.xml
- <APG>/Tools/Frontend-Report-Generator/<instance-name>/conf/report-generation-config.xml
- <APG>/Tools/Administration-Tool/<instance-name>/conf/master-accessor-service-conf.xml
- <APG>/Tools/WhatIf-Scenario-CLI/<instance-name>/conf/whatif-scenario-cli-conf.xml
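For example, the copy can be scripted from the slave with scp. This is a minimal sketch: the master hostname, the /opt/APG install root, and the Tomcat instance name are assumptions you must adjust to your environment.

    #!/bin/sh
    # Run on the slave frontend. All values below are placeholders.
    MASTER=master-frontend.example.com
    APG=/opt/APG                # assumed install root
    INSTANCE=Default            # hypothetical instance name
    for f in \
        "Web-Servers/Tomcat/$INSTANCE/conf/server.xml" \
        "Web-Servers/Tomcat/$INSTANCE/conf/Catalina/localhost/APG.xml" \
        "Web-Servers/Tomcat/$INSTANCE/conf/Catalina/localhost/APG-WS.xml" \
        "Tools/Frontend-Report-Generator/$INSTANCE/conf/report-generation-config.xml" \
        "Tools/Administration-Tool/$INSTANCE/conf/master-accessor-service-conf.xml" \
        "Tools/WhatIf-Scenario-CLI/$INSTANCE/conf/whatif-scenario-cli-conf.xml"
    do
        cp "$APG/$f" "$APG/$f.bak"            # back up the local copy first
        scp "root@$MASTER:$APG/$f" "$APG/$f"  # pull the master's copy
    done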

REDIRECTING WEB-APPLICATIONS TO THE PRIMARY FRONTEND

Some web-applications store files locally on the frontend server. For example, if you add a server from Centralized Management (CM) running on the master frontend server, it is saved locally on that server and no other frontend server can access it. This causes a mismatch in the configuration across all frontend servers. To fix this problem, some web-applications need to run only on the master frontend server. If a user tries to access these web-applications from a slave frontend server, the user should be redirected to the master frontend server.

The following list shows the web-applications that need to be redirected, along with the file and line(s) that need to be edited to force the redirection.

Web-application: Alerting-Frontend
File to be edited: <APG>/Web-Applications/Alerting-Frontend/alerting-frontend/conf/common.properties
Line(s) to be edited: the apg.alerting-frontend.url property should be defined like this:
    apg.alerting-frontend.url=http://<FQDN-Master-Frontend-server>:58080/alerting-frontend/

Web-application: Centralized-Management
File to be edited: <APG>/Web-Applications/Centralized-Management/centralized-management/conf/common.properties
Line(s) to be edited: the apg.centralized-management.url property should be defined like this:
    apg.centralized-management.url=http://<FQDN-Master-Frontend-server>:58080/centralized-management/

Web-application: Device-Discovery
File to be edited: <APG>/Web-Applications/Device-Discovery/device-discovery/conf/common.properties
Line(s) to be edited: the apg.device-discovery.url property should be defined like this:
    apg.device-discovery.url=http://<FQDN-Master-Frontend-server>:58080/device-discovery/

Web-application: Mib-Browser & SNMP collector
File to be edited: <APG>/Web-Applications/Mib-Browser/mib-browser/conf/common.properties
Line(s) to be edited: the apg.mib-browser.url and apg.snmp-collector.url properties should be defined like this:
    apg.mib-browser.url=http://<FQDN-Master-Frontend-server>:58080/mib-browser/
    apg.snmp-collector.url=http://<FQDN-Master-Frontend-server>:58080/mib-browser/snmpconfig/

For the compliance frontend redirection, you need to run the following command:

<APG>/bin/administration-tool.sh update-module -module [ -name 'storage_compliance' -url 'http://<FQDN-Master-Frontend-server>:58080/compliance-frontend/' ]

Existing users must re-enter their credentials when they are redirected to the primary frontend server.

RESTARTING TOMCAT

Once the steps above are complete, restart the slave frontend service. You can do this from either Centralized Management, or from the command line interface (CLI) by running the following command:

<APG>/bin/manage-modules.sh service restart tomcat
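The edit-and-restart sequence above can also be scripted. This is a sketch only: the master FQDN and install root are placeholders, the property file and property name come from the list above, and sed -i assumes GNU sed (standard on the Linux distributions SRM runs on).

    #!/bin/sh
    # Run on a slave frontend. MASTER and APG are placeholders.
    MASTER=master-frontend.example.com
    APG=/opt/APG   # assumed install root
    # Point Alerting-Frontend at the master; repeat the same pattern for
    # the other properties listed above.
    sed -i "s|^apg.alerting-frontend.url=.*|apg.alerting-frontend.url=http://$MASTER:58080/alerting-frontend/|" \
        "$APG/Web-Applications/Alerting-Frontend/alerting-frontend/conf/common.properties"
    # Restart Tomcat so the change takes effect.
    "$APG/bin/manage-modules.sh" service restart tomcat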

CONFIGURING STORED REPORTS

Stored reports are stored locally on a designated frontend server; the other frontends do not have any access to them. As a consequence, if you store reports on the master frontend server, you will not be able to see those reports from a slave frontend server. EMC recommends creating a shared folder that each frontend server can access. You can use an NFS file server to do this, as explained in Appendix E.

CONFIGURING THE SCHEDULED REPORTS

By default, each frontend server has its own scheduling service. This causes a problem in a multiple frontend server environment, since a scheduled report runs multiple times. To remedy this problem, all frontend servers should point to one server running the scheduling service. This server can be the master frontend server or any other server, as long as it has access to the databases. If many scheduled reports will run, you should have a dedicated server for scheduling.

The file you need to edit to specify the scheduling server is located under <APG>/Custom/WebApps-Resources/<instance-name>/scheduling/scheduling-servers.xml. The line you need to edit is shown below. Note that you must edit this file on all frontend servers.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<scheduling-servers xmlns="http://www.watch4net.com/APG/Frontend/Scheduled-Reports/Scheduling-Servers">
    <server validatessl="false" password="changeme" username="admin" url="https://<FQDN-Scheduling-Server>:48443/"/>
</scheduling-servers>
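Because this file must be identical on every frontend, it is convenient to edit it once and push it out. A minimal sketch, where the host names, install root, and instance name are all placeholders:

    #!/bin/sh
    # Push the edited scheduling-servers.xml to every other frontend server.
    APG=/opt/APG            # assumed install root
    INSTANCE=Default        # hypothetical instance name
    FILE="$APG/Custom/WebApps-Resources/$INSTANCE/scheduling/scheduling-servers.xml"
    for host in frontend-02 frontend-03; do
        scp "$FILE" "root@$host:$FILE"
    done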

If you decide to store reports, they are saved locally on the scheduling server. Each frontend server then tries to access the stored-reports folder on the scheduling server to synchronize its local stored-reports folder. This does not cause a problem if you only have one frontend server running. In the case of multiple frontend servers, they all access the same file share (as described in the stored reports section). As a consequence, you might end up having multiple copies of the same report on the shared server. This is a problem, especially if the stored report is large. To get around this issue, follow the steps in Appendix F. Those steps assume that you have already run the steps listed in Appendix E.

EDITING THE IMPORT PROPERTY TASK

Each frontend server runs its own import property task. This operation can put a load on the database servers, especially if the databases are very large and the import tasks run simultaneously. As a consequence, you should edit the import property task and stagger the execution of the scripts with respect to each other. The delay needs to be inserted only if the import tasks take too long to complete.

Open the following file:

<APG>/Databases/APG-Property-Store/<instance-name>/conf/import-properties.task

and edit the following scheduling information.

Edit these lines for the master frontend server:

<!-- If the average of the last 5 executions takes < 10 minutes, schedule every 4 hours -->
<conditional condition="slidingFinishedAverageDuration < 3000000">
    <schedule cron="0 0,4,8,12,16,20 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
</conditional>

Edit these lines for each slave frontend server:

<!-- If the average of the last 5 executions takes < 10 minutes, schedule every 4 hours -->
<conditional condition="slidingFinishedAverageDuration < 3000000">
    <schedule cron="0 1,5,9,13,17,21 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
</conditional>
<!-- If the average of the last 5 executions takes < 1 hour, schedule at 5:00AM and 12:00PM -->
<conditional condition="slidingFinishedAverageDuration < 3600000">
    <schedule cron="0 7,14 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
</conditional>

Note that for each additional slave frontend server, you need to insert a further 1-hour offset in the first schedule and a further 2-hour offset in the second schedule. As a consequence, if you have a second slave frontend server, the configuration looks like this:

<!-- If the average of the last 5 executions takes < 10 minutes, schedule every 4 hours -->
<conditional condition="slidingFinishedAverageDuration < 3000000">
    <schedule cron="0 2,6,10,14,18,22 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
</conditional>
<!-- If the average of the last 5 executions takes < 1 hour, schedule at 5:00AM and 12:00PM -->
<conditional condition="slidingFinishedAverageDuration < 3600000">
    <schedule cron="0 9,16 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
</conditional>
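If you have more slave frontend servers, the same arithmetic continues: slave N shifts the 4-hour schedule by N hours and the twice-daily schedule by 2N hours. The following throwaway sketch (not part of the product) prints the staggered hour lists so you can paste them into each slave's import-properties.task:

    #!/bin/sh
    # Print the staggered cron hour lists for slave frontends 1..N.
    N=3
    n=1
    while [ "$n" -le "$N" ]; do
        four_hourly=$(seq $n 4 23 | paste -sd, -)       # e.g. 1,5,9,13,17,21
        twice_daily="$((5 + 2 * n)),$((12 + 2 * n))"    # e.g. 7,14
        echo "slave $n: cron=\"0 $four_hourly * * *\" and cron=\"0 $twice_daily * * *\""
        n=$((n + 1))
    done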

APPENDIX A: FAQ

SOLUTIONPACKS REPORT INSTALLATION

The interactive installation window requires a frontend server on which to install the reports. The report is transferred from the frontend server to the MySQL database and is visible on all frontend servers.

SOLUTIONPACKS UPLOAD

Since Centralized-Management performs the upload, the operation takes place on the master frontend server. The package file is saved locally and can only be installed from this server. If the master frontend server is in a high availability solution, the folder must be updated as well.

SOLUTIONPACKS FORMULA

A SolutionPack can have a local formula, built in a Java file, that is used in the reports. The formula is installed on the frontend server that is pointed to during installation. A SolutionPack that has a custom formula contains a Java file in the blocks\reports\templates\arp\_formulas folder. Currently only emc-vipr has such a formula.

SOLUTIONPACKS PROPERTY-MAPPING

If the SolutionPack uses events, the report has an XML file for property mapping that is saved only on the frontend server pointed to during installation.

APPENDIX B: HAPROXY LOAD BALANCER CONFIGURATION

HAProxy is open source software. The following configuration sample shows how to configure HAProxy to balance the load across all frontend servers.

global
    log /dev/log local0 info
    log /dev/log local0 notice
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 5000
    user haproxy
    group haproxy
    daemon
    # turn on stats unix socket
    stats socket /etc/haproxy/haproxy.sock level admin

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

frontend http-in
    bind *:80
    acl url_static path_beg -i /centralized-management
    acl url_static path_beg -i /alerting-frontend
    acl url_static path_beg -i /compliance-frontend
    acl url_static path_beg -i /device-discovery
    acl url_static path_beg -i /snmpconfig
    use_backend static if url_static
    default_backend app

backend static
    balance roundrobin
    option forwardfor
    option http-server-close
    appsession JSESSIONID len 52 timeout 14400000
    # Main admin server
    server m_frontend backend:58080 weight 256 check
    # HA admin server
    server s_frontend frontend:58080 weight 2 check

backend app
    balance roundrobin
    option forwardfor
    option http-server-close
    appsession JSESSIONID len 52 timeout 14400000
    # No.1 APG server
    server frontend backend:58080 check inter 5000
    # No.2 APG server
    server frontend2 frontend:58080 check inter 5000
    # No.3 APG server
    server frontend3 frontend2:58080 check inter 5000

listen stats
    bind *:88
    stats enable
    stats uri /
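Before putting the load balancer into service, it is worth validating the file and reloading HAProxy. A sketch; the configuration path is an assumption, and the reload command varies by init system (use systemctl reload haproxy on systemd hosts):

    # Check the configuration file for syntax errors.
    haproxy -c -f /etc/haproxy/haproxy.cfg
    # Reload the running instance with minimal disruption.
    service haproxy reload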

APPENDIX C: F5 LOAD BALANCER CONFIGURATION

Parameters that you need to tweak for an F5 load balancer configuration are:

Configuration name        Value
Server Pool               Hostname and IP of each frontend
Persistence type          Destination address affinity persistence (Sticky)
Load balancing method     Least Connections (member) and Least Connections (node)
Action on service down    Load balancing method
Health monitors           Associate with the pool; if not possible, with each member

APPENDIX D: REDIRECTING REQUESTS TO THE FRONTEND SERVER

The preceding guidelines show how to redirect some web-application requests to the master frontend server by editing the common.properties file on each server. You can also redirect requests to the master frontend server by configuring redirection at the network load balancer level, as shown below.

F5 BIG-IP URL REDIRECTION USING IRULES

when HTTP_REQUEST {
    if { [HTTP::uri] contains "centralized-management" } {
        HTTP::redirect "http://masterfrontend[HTTP::uri]"
    }
    if { [HTTP::uri] contains "alerting-frontend" } {
        HTTP::redirect "http://masterfrontend[HTTP::uri]"
    }
    if { [HTTP::uri] contains "mib-browser" } {
        HTTP::redirect "http://masterfrontend[HTTP::uri]"
    }
    if { [HTTP::uri] contains "compliance-frontend" } {
        HTTP::redirect "http://masterfrontend[HTTP::uri]"
    }
    if { [HTTP::uri] contains "device-discovery" } {
        HTTP::redirect "http://masterfrontend[HTTP::uri]"
    }
}

HAPROXY URL REDIRECTION

frontend http-in
    bind *:80
    acl url_static path_beg -i /centralized-management
    acl url_static path_beg -i /alerting-frontend
    acl url_static path_beg -i /compliance-frontend
    acl url_static path_beg -i /device-discovery
    acl url_static path_beg -i /snmpconfig
    use_backend static if url_static
    default_backend app
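Whichever load balancer you use, you can verify the behavior from any client. A sketch, with loadbalancer as a placeholder hostname:

    # Request a management context through the load balancer and inspect
    # the status line and any Location header returned.
    curl -sI http://loadbalancer/centralized-management/ | egrep -i '^(HTTP|Location)'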

APPENDIX E: NFS SHARE FOR STORED REPORTS

Use this procedure to create an NFS share for stored reports:

1. Select a frontend server to host the folder. Install nfs-kernel-server and nfs-common on it.

2. Run the following command, replacing <IP-Address-NFS-Server> and <subnet-mask> with the proper values:

echo "<APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports <IP-Address-NFS-Server>/<subnet-mask>(rw,no_root_squash,subtree_check)" >> /etc/exports

3. Reload the NFS server:

/etc/init.d/nfs-kernel-server reload

4. On each of the other frontend servers, log in as root and create the following folder (if the folder already exists, rename it):

<APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports

5. Mount the folder:

mount <host-ip>:<APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports <APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports

6. Add the following line to the /etc/fstab file to mount the share at boot time:

<IP-Address-NFS-Server>:<APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports <APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports nfs rw,sync,hard,intr 0 0

Refer to your NFS file server documentation (online or via the man command) for more information.

APPENDIX F: ELIMINATING MULTIPLE COPIES OF A SHARED REPORT

The following procedure assumes that you have already set up a share for the stored reports, as described in Appendix E.

1. Run the following command on the scheduling server, replacing <IP-Address-NFS-Server> and <subnet-mask> with the proper values:

echo "<APG>/Tools/Frontend-Report-Generator/<instance-name>/data/stored-reports <IP-Address-NFS-Server>/<subnet-mask>(rw,no_root_squash,subtree_check)" >> /etc/exports

2. Reload the NFS server:

/etc/init.d/nfs-kernel-server reload

3. Mount the folder:

mount <host-ip>:<APG>/Tools/Frontend-Report-Generator/<instance-name>/data/stored-reports <APG>/Tools/Frontend-Report-Generator/<instance-name>/data/stored-reports

4. To mount the share at boot time, add the following line to the /etc/fstab file:

<IP-Address-NFS-Server>:<APG>/Tools/Frontend-Report-Generator/<instance-name>/data/stored-reports <APG>/Tools/Frontend-Report-Generator/<instance-name>/data/stored-reports nfs rw,sync,hard,intr 0 0
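After completing either appendix, you can confirm that the export and the mount took effect. A short sketch using the standard NFS utilities installed with the packages named in step 1 of Appendix E:

    # On the NFS server: re-export /etc/exports and list what is shared.
    exportfs -ra
    showmount -e localhost
    # On each client frontend: confirm the folder is mounted over NFS.
    mount | grep apg-reports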