EMC VIPR SRM 3.7: GUIDELINES FOR CONFIGURING MULTIPLE FRONTEND SERVERS

ABSTRACT

This document describes how to deploy two frontend servers in an EMC ViPR SRM 3.7 installation. The steps presented in this document can be easily extended to larger installations that require more than two frontend servers.

October 2015

WHITE PAPER
To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store.

Copyright © 2015 EMC Corporation. All rights reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.

Part Number H14594
TABLE OF CONTENTS

PURPOSE AND GOALS
AUDIENCE
INTRODUCTION
ARCHITECTURE
SLAVE FRONTEND SERVER DEPLOYMENT
  Installing the Slave Frontend Server
  Redirecting Web-Applications to the Primary Frontend
  Restarting Tomcat
CONFIGURING STORED REPORTS
CONFIGURING THE SCHEDULED REPORTS
EDITING THE IMPORT PROPERTY TASK
APPENDIX A: FAQ
  SolutionPacks Report Installation
  SolutionPacks Upload
  SolutionPacks Formula
  SolutionPacks Property-Mapping
APPENDIX B: HAPROXY LOAD BALANCER CONFIGURATION
APPENDIX C: F5 LOAD BALANCER CONFIGURATION
APPENDIX D: REDIRECTING REQUESTS TO THE FRONTEND SERVER
  F5 BIG-IP URL Redirection Using iRules
  HAProxy URL Redirection
APPENDIX E: NFS SHARE FOR STORED REPORTS
APPENDIX F: ELIMINATING MULTIPLE COPIES OF A SHARED REPORT
PURPOSE AND GOALS

This document describes how to deploy two frontend servers in an EMC ViPR SRM 3.7 installation. You can easily extend the procedures in this document to cover installations that require more than two frontend servers. EMC recommends that each frontend server serve no more than 10 concurrent, active users.

AUDIENCE

This document is intended for anyone planning to deploy multiple frontend servers in an SRM/SAS installation. It assumes that the master frontend server is already installed and running. The master frontend server (the first server to be installed) is used for management tasks. Refer to the EMC ViPR SRM 3.7 Installation and Configuration Guide if you need help deploying the master frontend server.

INTRODUCTION

The performance and scalability guidelines recommend one frontend server for every 10 active, concurrent users. If you plan to support more than 10 users, you need to deploy additional frontend servers. Using a load balancer to distribute processing among the frontend servers is highly recommended. Although this guide covers the F5 and HAProxy load balancers, these two are provided only as examples; you can use the load balancer of your choice.

In a multiple frontend server environment, only one frontend server should handle the web-applications listed under the Administrator panel. These web-applications are:

- Alerting-Frontend
- APG-Web-Service
- Centralized-Management
- Compliance frontend
- Device-Discovery
- Mib-Browser

This frontend server is called the master or primary frontend server. The primary frontend server can serve the /APG/ section as well. All other frontend servers, which serve only the /APG/ section, are called slave frontend servers.
ARCHITECTURE

In this architecture, a load balancer distributes user sessions across the master frontend server and one or more slave frontend servers. Only the master frontend server hosts the administrative web-applications; the slave frontend servers serve the /APG/ section.

SLAVE FRONTEND SERVER DEPLOYMENT

INSTALLING THE SLAVE FRONTEND SERVER

To install the slave frontend:

1. Ensure the frontend server is installed as described in the EMC ViPR SRM 3.7 Installation and Configuration Guide.

2. Copy the following files from the master frontend server into the matching locations on the slave frontend server. You can use the scp command to do this, as shown after this list. EMC recommends that you back up the files on the slave frontend server before overwriting them:

   - <APG>/Web-Servers/Tomcat/<instance-name>/conf/server.xml
   - <APG>/Web-Servers/Tomcat/<instance-name>/conf/Catalina/localhost/APG.xml
   - <APG>/Web-Servers/Tomcat/<instance-name>/conf/Catalina/localhost/APG-WS.xml
   - <APG>/Tools/Frontend-Report-Generator/<instance-name>/conf/report-generation-config.xml
   - <APG>/Tools/Administration-Tool/<instance-name>/conf/master-accessor-service-conf.xml
   - <APG>/Tools/WhatIf-Scenario-CLI/<instance-name>/conf/whatif-scenario-cli-conf.xml
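For example, the following commands copy the Tomcat configuration files from the master to a slave. This is a minimal sketch: the host name master-frontend.example.com is a placeholder, and it assumes a default <APG> installation path of /opt/APG and a Tomcat instance named Default; substitute your own values, and repeat the pattern for the remaining files in the list.

    # Run on the slave frontend server as root
    cd /opt/APG/Web-Servers/Tomcat/Default/conf
    cp server.xml server.xml.bak     # back up the local copy before overwriting it
    scp root@master-frontend.example.com:/opt/APG/Web-Servers/Tomcat/Default/conf/server.xml .
    scp root@master-frontend.example.com:/opt/APG/Web-Servers/Tomcat/Default/conf/Catalina/localhost/APG.xml Catalina/localhost/
    scp root@master-frontend.example.com:/opt/APG/Web-Servers/Tomcat/Default/conf/Catalina/localhost/APG-WS.xml Catalina/localhost/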
REDIRECTING WEB-APPLICATIONS TO THE PRIMARY FRONTEND

Some web-applications store files locally on the frontend server. For example, if you add a server from Centralized Management (CM) running on the master frontend server, that entry is saved locally on the master and no other frontend server can access it. This causes a mismatch in the configuration across the frontend servers. To fix this problem, some web-applications need to run only on the master frontend server. If a user tries to access these web-applications from a slave frontend server, the user should be redirected to the master frontend server.

The following list shows the web-applications that need to be redirected, along with the file and line(s) that need to be edited to force the redirection.

Alerting-Frontend
File to be edited: <APG>/Web-Applications/Alerting-Frontend/alerting-frontend/conf/common.properties
Line(s) to be edited: the apg.alerting-frontend.url property should be defined like this:
    apg.alerting-frontend.url=http://<FQDN-Master-Frontend-server>:58080/alerting-frontend/

Centralized-Management
File to be edited: <APG>/Web-Applications/Centralized-Management/centralized-management/conf/common.properties
Line(s) to be edited: the apg.centralized-management.url property should be defined like this:
    apg.centralized-management.url=http://<FQDN-Master-Frontend-server>:58080/centralized-management/

Device-Discovery
File to be edited: <APG>/Web-Applications/Device-Discovery/device-discovery/conf/common.properties
Line(s) to be edited: the apg.device-discovery.url property should be defined like this:
    apg.device-discovery.url=http://<FQDN-Master-Frontend-server>:58080/device-discovery/

Mib-Browser & SNMP collector
File to be edited: <APG>/Web-Applications/Mib-Browser/mib-browser/conf/common.properties
Line(s) to be edited: the apg.mib-browser.url and apg.snmp-collector.url properties should be defined like this:
    apg.mib-browser.url=http://<FQDN-Master-Frontend-server>:58080/mib-browser/
    apg.snmp-collector.url=http://<FQDN-Master-Frontend-server>:58080/mib-browser/snmpconfig/

For the compliance frontend redirection, run the following command:

    <APG>/bin/administration-tool.sh update-module -module [ -name 'storage_compliance' -url 'http://<FQDN-Master-Frontend-server>:58080/compliance-frontend/' ]

Note: Existing users must re-enter their credentials when redirected to the primary frontend server.

RESTARTING TOMCAT

Once the steps above are complete, restart the slave frontend service. You can do this from Centralized Management, or from the command line interface (CLI) by running the following command:

    <APG>/bin/manage-modules.sh service restart tomcat
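As an illustration, the following commands apply the Alerting-Frontend redirection on a slave and restart Tomcat. This is a sketch under assumptions: the master's FQDN master-frontend.example.com is a placeholder, and <APG> is assumed to be /opt/APG; adjust the property name and file path for each web-application listed above.

    # Run on the slave frontend server; rewrites the redirection property in place
    sed -i 's|^apg.alerting-frontend.url=.*|apg.alerting-frontend.url=http://master-frontend.example.com:58080/alerting-frontend/|' \
        /opt/APG/Web-Applications/Alerting-Frontend/alerting-frontend/conf/common.properties
    # Restart the slave frontend service
    /opt/APG/bin/manage-modules.sh service restart tomcat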
CONFIGURING STORED REPORTS

Stored reports are saved locally on a designated frontend server; the other frontend servers have no access to them. As a consequence, if you store reports on the master frontend server, you cannot see those reports from a slave frontend server. EMC recommends creating a shared folder that each frontend server can access. You can use an NFS file server to do this, as explained in Appendix E.

CONFIGURING THE SCHEDULED REPORTS

By default, each frontend server runs its own scheduling service. This causes a problem in a multiple frontend server environment, since a scheduled report runs multiple times. To remedy this problem, all frontend servers should point to a single server that runs the scheduling service. This can be the master frontend server or any other server, as long as it has access to the databases. If many scheduled reports will run, you should dedicate a server to scheduling.

The file you need to edit to specify the scheduling server is located at <APG>/Custom/WebApps-Resources/<instance-name>/scheduling/scheduling-servers.xml. The line you need to edit is shown below. Note that you must edit this file on all frontend servers.

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <scheduling-servers xmlns="http://www.watch4net.com/APG/Frontend/Scheduled-Reports/Scheduling-Servers">
        <server validatessl="false" password="changeme" username="admin" url="https://<FQDN-Scheduling-Server>:48443/"/>
    </scheduling-servers>

If you decide to store reports, they are saved locally on the scheduling server. Each frontend server then tries to access the stored reports folder on the scheduling server to synchronize its local stored reports folder. This does not cause a problem if you have only one frontend server running. In the case of multiple frontend servers, they all access the same file share (as described in the stored reports section).
As a consequence, you might end up with multiple copies of the same report on the shared server. This is a problem, especially if the stored report is large. To get around this issue, follow the steps in Appendix F. Those steps assume that you have already completed the steps listed in Appendix E.

EDITING THE IMPORT PROPERTY TASK

Each frontend server has to run its own import property task. This operation can put a load on the database servers, especially if the databases are very large and the import tasks run simultaneously. As a consequence, you should edit the import property task and stagger the execution of the scripts with respect to each other. The delay needs to be inserted only if the import tasks take a long time to complete.

Open the following file:

    <APG>/Databases/APG-Property-Store/<instance-name>/conf/import-properties.task

and edit the following scheduling information.

Edit these lines for the master frontend server:

    <!-- If the average of the last 5 executions takes < 10 minutes, schedule every 4 hours -->
    <conditional condition="slidingFinishedAverageDuration < 3000000">
        <schedule cron="0 0,4,8,12,16,20 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
    </conditional>

Edit these lines for each slave frontend server:

    <!-- If the average of the last 5 executions takes < 10 minutes, schedule every 4 hours -->
    <conditional condition="slidingFinishedAverageDuration < 3000000">
        <schedule cron="0 1,5,9,13,17,21 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
    </conditional>
    <!-- If the average of the last 5 executions takes < 1 hour, schedule at 5:00AM and 12:00PM -->
    <conditional condition="slidingFinishedAverageDuration < 3600000">
        <schedule cron="0 7,14 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
    </conditional>

Note that for each additional slave frontend server, you need to add a 1-hour offset to the first schedule and a 2-hour offset to the second schedule. As a consequence, if you have a second slave frontend server, the configuration looks like this:

    <!-- If the average of the last 5 executions takes < 10 minutes, schedule every 4 hours -->
    <conditional condition="slidingFinishedAverageDuration < 3000000">
        <schedule cron="0 2,6,10,14,18,22 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
    </conditional>
    <!-- If the average of the last 5 executions takes < 1 hour, schedule at 5:00AM and 12:00PM -->
    <conditional condition="slidingFinishedAverageDuration < 3600000">
        <schedule cron="0 9,16 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
    </conditional>
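Following the same pattern, a third slave frontend server would shift the first schedule by 3 hours and the second by 6 hours. The configuration below is a sketch that extrapolates the offset rule stated above; it does not appear in the original guide:

    <!-- Third slave: 3-hour offset for the 4-hourly schedule -->
    <conditional condition="slidingFinishedAverageDuration < 3000000">
        <schedule cron="0 3,7,11,15,19,23 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
    </conditional>
    <!-- Third slave: 6-hour offset for the twice-daily schedule -->
    <conditional condition="slidingFinishedAverageDuration < 3600000">
        <schedule cron="0 11,18 * * *" xsi:type="schedule-repeated" disabled="false"></schedule>
    </conditional>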
APPENDIX A: FAQ

SOLUTIONPACKS REPORT INSTALLATION

The interactive installation window requires a frontend server on which to install the reports. The report is transferred from that frontend server to the MySQL database and is visible on all frontend servers.

SOLUTIONPACKS UPLOAD

Since Centralized-Management performs the upload, the operation takes place on the master frontend server. The package file is saved locally and can only be installed from this server. If the master frontend server is part of a high availability solution, the folder must be updated as well.

SOLUTIONPACKS FORMULA

A SolutionPack can include a local formula, built in a Java file, that is used in its reports. The formula is installed on the frontend server selected during installation. A SolutionPack that has a custom formula contains a Java file in the blocks/reports/templates/arp/_formulas folder. Currently, only the emc-vipr SolutionPack has such a formula.

SOLUTIONPACKS PROPERTY-MAPPING

If a SolutionPack uses events, its report has an XML file for property mapping that is saved only on the frontend server selected during installation.

APPENDIX B: HAPROXY LOAD BALANCER CONFIGURATION

HAProxy is open source software. The following configuration sample shows how to configure HAProxy to balance the load across all frontend servers.

    global
        log /dev/log local0 info
        log /dev/log local0 notice
        chroot /var/lib/haproxy
        pidfile /var/run/haproxy.pid
        maxconn 5000
        user haproxy
        group haproxy
        daemon
        # turn on stats unix socket
        stats socket /etc/haproxy/haproxy.sock level admin

    defaults
        mode http
        log global
        option httplog
        option dontlognull
        option http-server-close
        option forwardfor except 127.0.0.0/8
        option redispatch
        retries 3
        timeout http-request 10s
        timeout queue 1m
        timeout connect 10s
        timeout client 1m
        timeout server 1m
        timeout http-keep-alive 10s
        timeout check 10s
        maxconn 3000

    frontend http-in
        bind *:80
        acl url_static path_beg -i /centralized-management
        acl url_static path_beg -i /alerting-frontend
        acl url_static path_beg -i /compliance-frontend
        acl url_static path_beg -i /device-discovery
        acl url_static path_beg -i /snmpconfig
        use_backend static if url_static
        default_backend app

    backend static
        balance roundrobin
        option forwardfor
        option http-server-close
        appsession JSESSIONID len 52 timeout 14400000
        # Main admin server
        server m_frontend backend:58080 weight 256 check
        # HA admin server
        server s_frontend frontend:58080 weight 2 check

    backend app
        balance roundrobin
        option forwardfor
        option http-server-close
        appsession JSESSIONID len 52 timeout 14400000
        # No.1 APG server
        server frontend backend:58080 check inter 5000
        # No.2 APG server
        server frontend2 frontend:58080 check inter 5000
        # No.3 APG server
        server frontend3 frontend2:58080 check inter 5000

    listen stats
        bind *:88
        stats enable
        stats uri /
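After editing the configuration, you can verify the syntax before reloading the service. This assumes the configuration file lives at the common default path /etc/haproxy/haproxy.cfg; the -c flag runs HAProxy in check mode without starting the proxy.

    haproxy -c -f /etc/haproxy/haproxy.cfg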
APPENDIX C: F5 LOAD BALANCER CONFIGURATION

Parameters that you need to tweak for an F5 load balancer configuration are:

    Configuration name        Value
    Server Pool               Hostname and IP of each frontend
    Persistence type          Destination address affinity persistence (Sticky)
    Load balancing method     Least Connections (member) and Least Connections (node)
    Action on service down    Load balancing method
    Health monitors           Associate with the pool. If not possible, with each member.

APPENDIX D: REDIRECTING REQUESTS TO THE FRONTEND SERVER

The preceding guidelines show how to redirect some web-application requests to the master frontend server by editing the common.properties file on each server. You can also redirect requests to the master frontend server by configuring redirection at the network load balancer level, as shown below.

F5 BIG-IP CAN DO URL REDIRECTION USING IRULES

Note that the redirection decision is based on the request URI, which contains the web-application path:

    when HTTP_REQUEST {
        if { [HTTP::uri] contains "centralized-management" } {
            HTTP::redirect "http://masterfrontend[HTTP::uri]"
        }
        if { [HTTP::uri] contains "alerting-frontend" } {
            HTTP::redirect "http://masterfrontend[HTTP::uri]"
        }
        if { [HTTP::uri] contains "mib-browser" } {
            HTTP::redirect "http://masterfrontend[HTTP::uri]"
        }
        if { [HTTP::uri] contains "compliance-frontend" } {
            HTTP::redirect "http://masterfrontend[HTTP::uri]"
        }
        if { [HTTP::uri] contains "device-discovery" } {
            HTTP::redirect "http://masterfrontend[HTTP::uri]"
        }
    }

HAPROXY URL REDIRECTION

    frontend http-in
        bind *:80
        acl url_static path_beg -i /centralized-management
        acl url_static path_beg -i /alerting-frontend
        acl url_static path_beg -i /compliance-frontend
        acl url_static path_beg -i /device-discovery
        acl url_static path_beg -i /snmpconfig
        use_backend static if url_static
        default_backend app

With this configuration, requests whose path begins with one of the administrative web-application paths are routed to the static backend, which contains the master frontend server (see Appendix B).
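If you prefer an actual HTTP redirect rather than backend routing, HAProxy's redirect prefix directive can send clients directly to the master frontend server. This alternative is a sketch, not part of the original guide; <FQDN-Master-Frontend-server> is a placeholder to replace with your master's FQDN:

    frontend http-in
        bind *:80
        # Match the administrative web-application paths
        acl admin_app path_beg -i /centralized-management /alerting-frontend /compliance-frontend /device-discovery /snmpconfig
        # Redirect matching requests to the master frontend, preserving the path
        redirect prefix http://<FQDN-Master-Frontend-server>:58080 code 302 if admin_app
        default_backend app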
APPENDIX E: NFS SHARE FOR STORED REPORTS

Use this procedure to create an NFS share for stored reports:

1. Select a frontend server to host the folder. Install nfs-kernel-server and nfs-common on it.

2. Run the following command, replacing <IP-Address-NFS-Server> and <subnet-mask> with the proper values:

    echo "<APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports <IP-Address-NFS-Server>/<subnet-mask>(rw,no_root_squash,subtree_check)" >> /etc/exports

3. Reload the NFS server configuration:

    /etc/init.d/nfs-kernel-server reload

4. On each of the other frontend servers, log in as root and create the following folder. If the folder already exists, rename it:

    <APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports

5. Mount the folder:

    mount <host-ip>:<APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports <APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports

6. Add the following line to the /etc/fstab file to mount the share at boot time:

    <IP-Address-NFS-Server>:<APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports <APG>/Web-Servers/Tomcat/<instance-name>/temp/apg-reports nfs rw,sync,hard,intr 0 0

For more information, refer to your NFS file server documentation online or use the man command.

APPENDIX F: ELIMINATING MULTIPLE COPIES OF A SHARED REPORT

The following procedure assumes that you have already set up a share for the stored reports.

1. Run the following command on the scheduling server, replacing <IP-Address-NFS-Server> and <subnet-mask> with the proper values:

    echo "<APG>/Tools/Frontend-Report-Generator/<instance-name>/data/stored-reports <IP-Address-NFS-Server>/<subnet-mask>(rw,no_root_squash,subtree_check)" >> /etc/exports

2. Reload the NFS server configuration:

    /etc/init.d/nfs-kernel-server reload

3. Mount the folder:

    mount <host-ip>:<APG>/Tools/Frontend-Report-Generator/<instance-name>/data/stored-reports <APG>/Tools/Frontend-Report-Generator/<instance-name>/data/stored-reports

4. To mount the share at boot time, add the following line to the /etc/fstab file:

    <IP-Address-NFS-Server>:<APG>/Tools/Frontend-Report-Generator/<instance-name>/data/stored-reports <APG>/Tools/Frontend-Report-Generator/<instance-name>/data/stored-reports nfs rw,sync,hard,intr 0 0
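A quick way to confirm that the export and the mount are working from one of the other frontend servers is shown below. This is a sketch assuming the NFS host name nfs-frontend.example.com, which is a placeholder for the server you selected in step 1:

    # List the folders exported by the NFS server
    showmount -e nfs-frontend.example.com
    # Confirm the share is mounted locally
    mount | grep apg-reports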