ANSYS Remote Solve Manager User's Guide


ANSYS, Inc.
Southpointe
275 Technology Drive
Canonsburg, PA

ANSYS Release 15.0
November 2013

ANSYS, Inc. is certified to ISO 9001:2008.

Copyright and Trademark Information

© 2013 SAS IP, Inc. All rights reserved. Unauthorized use, distribution or duplication is prohibited.

ANSYS, ANSYS Workbench, Ansoft, AUTODYN, EKM, Engineering Knowledge Manager, CFX, FLUENT, HFSS and any and all ANSYS, Inc. brand, product, service and feature names, logos and slogans are registered trademarks or trademarks of ANSYS, Inc. or its subsidiaries in the United States or other countries. ICEM CFD is a trademark used by ANSYS, Inc. under license. CFX is a trademark of Sony Corporation in Japan. All other brand, product, service and feature names or trademarks are the property of their respective owners.

Disclaimer Notice

THIS ANSYS SOFTWARE PRODUCT AND PROGRAM DOCUMENTATION INCLUDE TRADE SECRETS AND ARE CONFIDENTIAL AND PROPRIETARY PRODUCTS OF ANSYS, INC., ITS SUBSIDIARIES, OR LICENSORS. The software products and documentation are furnished by ANSYS, Inc., its subsidiaries, or affiliates under a software license agreement that contains provisions concerning non-disclosure, copying, length and nature of use, compliance with exporting laws, warranties, disclaimers, limitations of liability, and remedies, and other provisions. The software products and documentation may be used, disclosed, transferred, or copied only in accordance with the terms and conditions of that software license agreement.

ANSYS, Inc. is certified to ISO 9001:2008.

U.S. Government Rights

For U.S. Government users, except as specifically granted by the ANSYS, Inc. software license agreement, the use, duplication, or disclosure by the United States Government is subject to restrictions stated in the ANSYS, Inc. software license agreement and FAR (for non-DoD licenses).

Third-Party Software

See the legal information in the product help files for the complete Legal Notice for ANSYS proprietary software and third-party software. If you are unable to access the Legal Notice, please contact ANSYS, Inc.

Published in the U.S.A.

Table of Contents

1. Overview
    1.1. RSM Roles and Terminology
    1.2. Typical RSM Workflows
    1.3. File Handling
    1.4. RSM Integration with ANSYS Client Applications
        RSM Supported Solvers
        RSM Integration with Workbench
2. Installation and Configuration
    2.1. Software Installation
        Installing a Standalone RSM Package
        Uninstalling RSM
    2.2. Using the ANSYS Remote Solve Manager Setup Wizard
    2.3. RSM Service Installation and Configuration
        Installing and Configuring RSM Services for Windows
            Installing RSM Services for Windows
        Installing and Configuring RSM Services for Linux
            Configuring RSM to Use a Remote Computing Mode for Linux
            Installing RSM Services for Linux
                Starting RSM Services Manually for Linux
                    Manually Running RSM Service Scripts for Linux
                    Manually Uninstalling RSM Services for Linux
                Starting RSM Services Automatically at Boot Time for Linux
                    Installing RSM Automatic Startup (Daemon) Services for Linux
                    Working with RSM Automatic Startup (Daemon) Services for Linux
                    Uninstalling RSM Automatic Startup (Daemon) Services for Linux
            Additional Linux Considerations
        Configuring a Multi-User Manager or Compute Server
        Configuring RSM for a Remote Computing Environment
            Adding a Remote Connection to a Manager
            Adding a Remote Connection to a Compute Server
            Configuring Computers with Multiple Network Interface Cards (NIC)
    2.4. Setting Up RSM File Transfers
        Operating System File Transfer Utilizing Network Shares
            Windows-to-Windows File Transfer
            Linux-to-Linux File Transfer
            Windows-to-Linux File Transfer
            Verifying OS Copy File Transfers
        Eliminating File Transfers by Utilizing a Common Network Share
        Native RSM File Transfer
        SSH File Transfer
        Custom Client Integration
    2.5. Accessing the RSM Configuration File
3. User Interface
    Main Window
    Menu Bar
    Toolbar
    Tree View
    List View
    Status Bar
    Job Log View

    Options Dialog Box
    Desktop Alert
    Accounts Dialog
    RSM Notification Icon and Context Menu
4. User Accounts and Passwords
    Adding a Primary Account
    Adding Alternate Accounts
    Working with Account Passwords
        Manually Running the Password Application
    Configuring Linux Accounts When Using SSH
5. Administration
    Automating Administrative Tasks with the RSM Setup Wizard
    Working with RSM Administration Scripts
    Creating a Queue
    Modifying Manager Properties
    Adding a Compute Server
        Compute Server Properties Dialog: General Tab
        Compute Server Properties Dialog: Cluster Tab
        Compute Server Properties Dialog: SSH Tab
    Testing a Compute Server
6. Customizing RSM
    Understanding RSM Custom Architecture
        Job Templates
        Code Templates
        Job Scripts
        HPC Commands File
7. Custom Cluster Integration Setup
    Customizing Server-Side Integration
        Configuring RSM to Use Cluster-Specific Code Template
        Creating Copies of Standard Cluster Code Using Custom Cluster Keyword
        Modifying Cluster-Specific Job Code Template to Use New Cluster Type
        Modifying Cluster-Specific HPC Commands File
    Customizing Client-Side Integration
        Configuring RSM to Use Cluster-Specific Code Template on the Client Machine
        Creating Copies of Sample Code Using Custom Client Keyword
        Modifying Cluster-Specific Job Code Template to Use New Cluster Type
        Modifying Cluster-Specific HPC Commands File
    Configuring File Transfer by OS Type and Network Share Availability
        Windows Client to Windows Cluster
            Windows-to-Windows, Staging Visible
            Windows-to-Windows, Staging Not Visible
        Windows Client to Linux Cluster
            Windows-to-Linux, Staging Visible
            Windows-to-Linux, Staging Not Visible
        Linux Client to Linux Cluster
            Linux-to-Linux, Staging Visible
            Linux-to-Linux, Staging Not Visible
    Writing Custom Code for RSM Integration
        Parsing of the Commands Output
            Commands Output in the RSM Job Log
        Error Handling
        Debugging

        Customizable Commands
            Submit Command
            Status Command
            Cancel Command
            Transfer Command
            Cleanup Command
        Custom Integration Environment Variables
            Environment Variables Set by Customer
            Environment Variables Set by RSM
        Providing Client Custom Information for Job Submission
            Defining the Environment Variable on the Client
            Passing the Environment Variable to the Compute Server
            Verify the Custom Information on the Cluster
8. Troubleshooting
A. ANSYS Inc. Remote Solve Manager Setup Wizard
    A.1. Overview of the RSM Setup Wizard
    A.2. Prerequisites for the RSM Setup Wizard
    A.3. Running the RSM Setup Wizard
        A.3.1. Step 1: Start RSM Services and Define RSM Privileges
        A.3.2. Step 2: Configure RSM
        A.3.3. Step 3: Test Your RSM Configuration
    A.4. Troubleshooting in the Wizard
B. Integrating Windows with Linux using SSH/SCP
    B.1. Configure PuTTY SSH
    B.2. Add a Compute Server
C. Integrating RSM with a Linux Platform LSF, PBS, or SGE (UGE) Cluster
    C.1. Add a Linux Submission Host as a Compute Server
    C.2. Complete the Configuration
    C.3. Additional Cluster Details
D. Integrating RSM with a Windows Platform LSF Cluster
    D.1. Add the LSF Submission Host as a Compute Server
    D.2. Complete the Configuration
    D.3. Additional Cluster Details
E. Integrating RSM with a Microsoft HPC Cluster
    E.1. Configure RSM on the HPC Head Node
    E.2. Add the HPC Head Node as a Compute Server
    E.3. Complete the Configuration
    E.4. Additional HPC Details
Glossary
Index


Chapter 1: RSM Overview

The Remote Solve Manager (RSM) is a job queuing system that distributes tasks requiring computing resources. RSM enables tasks to be run in background mode on the local machine, sent to a remote machine for processing, or broken into a series of jobs for parallel processing across a variety of computers.

Computers with RSM installed are configured to manage jobs using three primary services: the RSM Client service, the Solve Manager service (typically shortened to "Manager"), and the Compute Server service. You use the RSM Client interface to manage jobs. RSM Clients submit jobs to a queue, and the Manager dispatches these jobs to idle Compute Servers, which run the submitted jobs. These services and their capabilities are explained in RSM Roles and Terminology (p. 1).

The following topics are discussed in this overview:
1.1. RSM Roles and Terminology
1.2. Typical RSM Workflows
1.3. File Handling
1.4. RSM Integration with ANSYS Client Applications

1.1. RSM Roles and Terminology

The following terms are essential to understanding RSM uses and capabilities:

Job
A job consists of a job template, a job script, and a processing task submitted from a client application such as ANSYS Workbench. The job template is an XML file that specifies the input and output files of the client application. The job script runs an instance of the client application on the Compute Server(s) used to run the processing task.

Client Application
A client application is the ANSYS application used to submit jobs to RSM, and then solve those jobs as managed by RSM. Examples include ANSYS Workbench, ANSYS Fluent, and ANSYS CFX.

Queue
A queue is a list of Compute Servers available to run jobs. When a job is sent to a queue, the Manager selects an idle Compute Server in the list.

Compute Server
Compute Servers are the machines on which jobs are run. In most cases, the Compute Server is a remote machine, but it can also be your local machine ("localhost"). The Compute Server can be a Windows-based computer or a Linux system equipped with Mono, the open-source development platform based on the .NET Framework. The job script performs a processing task (such as running a finite element solver). If the job script requires a client application to complete that task, that client application must be installed on the Compute Server.

Once Compute Servers are configured, they are added to a queue (which can contain multiple Compute Servers). Jobs must specify a queue when they are submitted to a Manager.

RSM Manager
The RSM Manager (also called the "Solve Manager") is the central RSM service that dispatches jobs to computing resources. It contains a configuration of queues (lists of Compute Servers available to run jobs). RSM Clients submit jobs to one or more queues configured for the Manager, and their jobs are dispatched to Compute Servers as resources become available. The RSM administrator decides whether users should use the Manager on their local machine or a central Manager, depending on the number of users and compute resources.

RSM Client
The RSM Client is a computer that runs both RSM and a client application such as ANSYS Workbench. RSM enables this computer to off-load jobs to a selected queue.

Code Template
A code template is an XML file containing code files (for example, C#, VB, JScript), references, and support files required by a job. For more information on code templates, see Job Templates.

1.2. Typical RSM Workflows

Any computer with RSM installed can act as the RSM Client, Manager, Compute Server, or any simultaneous combination of these three functions. This section provides an overview of several configurations of these functions as they are typically seen in RSM workflows. For specific instructions regarding RSM configurations, refer to RSM Service Installation and Configuration (p. 10).

The most effective use of RSM is to designate one computer as the Manager for central management of compute resources. All RSM Clients submit jobs to one or more queues configured for that Manager, and the Manager dispatches jobs as compute resources become available on Compute Servers.

The following list shows several typical RSM usage workflows:

1. The RSM Client submits jobs using RSM (running locally) directly to itself, so that the job runs locally in background mode. Here, the RSM Client, the Manager, and the Compute Server are all on the local machine. This capability is available automatically when you install ANSYS Workbench.

2. The RSM Client submits jobs to the Manager running locally on the same machine. You can assign a remote Compute Server to run the job or split the job between multiple Compute Servers, optionally including your local machine. A remote Compute Server requires RSM and the client application to be installed (the client application is typically installed with ANSYS Workbench, which also includes RSM).

3. An RSM Client machine submits jobs to a Manager running on a remote machine (refer to Adding a Remote Connection to a Manager (p. 20)). The remote machine also acts as the Compute Server. This configuration is available automatically when both machines have ANSYS Workbench installed.

4. An RSM Client machine submits jobs to a Manager running on a remote machine. The Manager then assigns the job to one or more remote Compute Servers. The RSM Client and the Compute Servers must have ANSYS Workbench installed. You can install ANSYS Workbench on the Manager, or choose to install only the standalone RSM software, as described in Software Installation (p. 7).

1.3. File Handling

Input files are generally transferred from the RSM Client working directory to the Manager project directory, and then to the Compute Server working directory where the job is run. Output files generated by the job are immediately transferred back to the Manager's project storage when the job finishes. The files are stored there until the client application downloads the output files. This section provides more details about how RSM handles files.

Client Application
The location of files on the RSM Client machine is controlled by the client application (for example, ANSYS Workbench). When the RSM Client submits a job to a Manager, it specifies a directory where inputs are found and where output files are placed. Refer to the client application documentation to determine where input files are placed when submitting jobs to RSM. Input files are copied to the Manager immediately when the job is submitted.

RSM Manager
The RSM Manager creates a project directory as defined in the project directory input from the RSM UI. However, when the Manager is local to the client (that is, when it is on the same machine as the RSM Client), it ignores the RSM UI setting and creates the directory where the job is saved. The base project directory location is controlled with the Solve Manager Properties dialog (see Modifying Manager Properties (p. 54)). All job files are stored in this location until the RSM Client releases the job. Jobs can also be deleted manually in the RSM user interface.

Compute Server
If the Working Directory property on the General tab of the Compute Server Properties dialog is set to Automatically Determined, the Compute Server reuses the Manager's project directory as an optimization. Otherwise, the Compute Server creates a temporary directory in the location defined in the Working Directory property. If the Working Directory property is left blank, the system TMP variable is used. When the job is complete, output files are immediately copied back to the Manager's Project Directory. If the Delete Job Files in Working Directory check box of the Compute Server Properties dialog is selected (the default), the temporary directory is then deleted.

Linux SSH
When Windows-to-Linux SSH file transfer is required by security protocols, the Linux Working Directory property on the SSH tab of the Compute Server Properties dialog determines where files are located. If this field is empty, the account's home directory is used as the default location. In either case, a unique temporary directory is created.

Third-Party Schedulers
When using the RSM job scripts that integrate with third-party schedulers such as LSF, PBS, Microsoft HPC (previously known as Microsoft Compute Cluster), or SGE (UGE), the file handling rules listed in this section apply to the extent that RSM is involved. For more information on integrating RSM with various third-party schedulers, see:

Compute Server Properties Dialog: Cluster Tab
Appendix C
Appendix D
Appendix E

File Transfer Methods
ANSYS Remote Solve Manager offers several methods of transferring files. The preferred method is OS File Transfer, which uses existing network shares to copy the files with the built-in operating system copy commands. Other methods include native RSM file transfer, SSH file transfer, and complete custom integration. You can also reduce or eliminate file transfers by sharing a network save/storage location. For more information, see Setting Up RSM File Transfers (p. 21).

1.4. RSM Integration with ANSYS Client Applications

This section discusses RSM compatibility and integration topics related to ANSYS client applications. For client application-specific RSM instruction, integration, or configuration details, refer to the following resources:

Submitting Solutions for Local, Background, and Remote Solve Manager (RSM) Processes in the Workbench User's Guide

For tutorials featuring step-by-step instructions for specific configuration scenarios, go to the Downloads page of the ANSYS Customer Portal. For further information about tutorials and documentation, see the ANSYS Customer Portal.

The client application documentation

The following topics are discussed in this section:
RSM Supported Solvers
RSM Integration with Workbench

RSM Supported Solvers

RSM supports the following solvers:

CFX
Fluent
Mechanical (excluding the Samcef solver)
Mechanical APDL
Polyflow

RSM Integration with Workbench

Many ANSYS Workbench applications enable you to use RSM; however, the following considerations may apply:

Some applications may not always work with remote Compute Servers or Managers.

When a client application is restricted to the RSM Client machine, RSM enables the client application to run in the background.

When a client application can send jobs to remote Compute Servers, the job may be run completely on one Compute Server, or the job may be broken into pieces so that each piece can run in parallel on multiple Compute Servers (possibly including the RSM Client machine).

When a job is run in parallel on multiple machines, you need to ensure that the software that controls the parallel processing is supported on all of the Compute Servers.

Chapter 2: ANSYS Remote Solve Manager Installation and Configuration

This chapter presents a general overview of RSM installation and configuration. The following installation and configuration topics are discussed:
2.1. Software Installation
2.2. Using the ANSYS Remote Solve Manager Setup Wizard
2.3. RSM Service Installation and Configuration
2.4. Setting Up RSM File Transfers
2.5. Accessing the RSM Configuration File

For tutorials featuring step-by-step instructions for specific configuration scenarios, go to the Downloads page of the ANSYS Customer Portal. For further information about tutorials and documentation, see the ANSYS Customer Portal.

2.1. Software Installation

RSM is automatically installed with ANSYS Workbench products. You can also install RSM by itself if desired. For example, you may want to install RSM by itself on a computer that acts as a dedicated Manager; a Manager requires only an RSM installation for connectivity with remote RSM Clients and Compute Servers. RSM Clients and Compute Servers require ANSYS Workbench, the ANSYS applications you want to run, and RSM. Administrator privileges are not required to install or uninstall RSM on RSM Client machines.

The following RSM installation topics are discussed in this section:
Installing a Standalone RSM Package
Uninstalling RSM

Installing a Standalone RSM Package

In addition to the default method of installing Remote Solve Manager along with Workbench, it is possible to install a standalone RSM package (that is, to install everything necessary to run RSM services and the RSM interface, but without a full ANSYS Workbench installation that includes the ANSYS Mechanical, ANSYS Fluent, ANSYS CFX, and ANSYS Polyflow solvers, and so on).

You can install the standalone RSM package on either a Windows or a Linux machine via the ANSYS Product Installation Wizard, as follows:

1. Run the wizard as described in Installing ANSYS, Inc. Products.

2. On the Select the products to install page:
Under ANSYS Additional Tools, select the ANSYS Remote Solve Manager Standalone Services check box.
Deselect all the other check boxes.

3. Continue the installation process as directed.

Note that when you install a standalone RSM package, RSM services are not installed at the same time; you still need to install or start the necessary RSM services. For instructions, see Installing RSM Services for Windows or Installing RSM Services for Linux.

Uninstalling RSM

Uninstall RSM with Workbench
For a default RSM installation that was installed along with ANSYS Workbench, RSM is removed when you do a full uninstall of Workbench and ANSYS products. Run the ANSYS Product Uninstall wizard and click the Select All button to remove all products.

Uninstall a Standalone RSM Package
To uninstall a standalone RSM package, run the ANSYS Product Uninstall wizard and select only the ANSYS RSM check box.

Uninstall a Standalone RSM Package Manually
To uninstall a standalone RSM package manually, first uninstall all RSM services:

To uninstall RSM services for Windows, see Uninstalling RSM Services for Windows (p. 11).
To uninstall RSM services started manually for Linux, see Manually Uninstalling RSM Services for Linux (p. 15).
To uninstall RSM daemon services for Linux, see Uninstalling RSM Automatic Startup (Daemon) Services for Linux (p. 17).

After the services have been uninstalled, delete the RSM installation directory.

2.2. Using the ANSYS Remote Solve Manager Setup Wizard

The ANSYS Remote Solve Manager Setup Wizard is a new utility that guides you through the process of setting up and configuring Remote Solve Manager; instead of using the manual setup processes, you can launch the wizard and follow its instructions for each part of the setup. Depending on the RSM Layout you intend to use, you may need to run the wizard on multiple machines.

The wizard will walk you through the following setup tasks:

Start RSM services. (The creation of shared directories needed for use with a commercial cluster is performed as part of the wizard configuration.)

Note: To start RSM services when UAC is enabled on Windows 7, you must launch the wizard using the right-click Run as administrator menu option. For instructions on enabling or disabling UAC, see RSM Troubleshooting (p. 99).

Configure the machines to be included in your RSM Layout.

Perform various cluster configuration tasks.

Integrate RSM with the following third-party job schedulers (without requiring job script customization):
LSF (Windows and Linux)
PBS (Linux only)
Microsoft HPC
SGE (UGE)

Create and share RSM directories (Project Directory, Working Directory, and, where applicable, Shared Cluster Directory).

Define queues.

Create accounts.

Test the final RSM configuration.

To run the RSM Setup Wizard:

1. Log into the machine that will serve as the Manager. If you are configuring a cluster, this is the head node of the cluster.

For Windows, you must either have Windows administrative privileges on the Manager, have RSM administrative privileges (as a member of the RSM Admins user group), or launch the wizard via the right-click Run as administrator menu option.

For Linux, you must log in with root privileges or have non-root administrative privileges. ("Non-root administrative privileges" means that you are a member of the rsmadmins user group. Before you run the wizard, your IT department must create the rsmadmins user group and manually add any users who will be starting/running non-daemon services; see the sketch after these steps.)

2. Launch the wizard:

For Windows: Select Start > All Programs > ANSYS 15.0 > Remote Solve Manager > RSM Setup Wizard. Alternatively, you can navigate to the [RSMInstall]\bin directory and double-click Ans.Rsm.Wizard.exe.

For Linux: Open a terminal window in the [RSMInstall]/Config/tools/linux directory and run rsmwizard.

Note that the wizard requires different privileges for different parts of the RSM setup process. For details on necessary permissions, see Prerequisites for the RSM Setup Wizard (p. 105).
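If the rsmadmins group is managed locally (rather than through NIS), your IT department might create it with the standard shadow-utils commands shown below. This is a minimal sketch to be run as root; the user name jsmith is hypothetical.

# Create the rsmadmins group and add a user who will start/run non-daemon RSM services
groupadd rsmadmins
usermod -a -G rsmadmins jsmith
# Confirm the membership
id jsmith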

For detailed information on the wizard's requirements, prerequisites, and capabilities, see Appendix A (p. 103). For a quick-start guide on using the wizard, see the Readme file. To access this file:

For Windows: Select Start > All Programs > ANSYS 15.0 > Remote Solve Manager > Readme - RSM Setup Wizard.

For Linux: Navigate to the [RSMInstall]/Config/tools/linux directory and open rsm_wiz.pdf.

2.3. RSM Service Installation and Configuration

This section includes instructions for installing and configuring RSM services for Windows or Linux machines:

Installing and Configuring RSM Services for Windows
Installing and Configuring RSM Services for Linux
Configuring a Multi-User Manager or Compute Server
Configuring RSM for a Remote Computing Environment

Installing and Configuring RSM Services for Windows

The following RSM configuration topic for Windows is discussed in this section:

Installing RSM Services for Windows

Installing RSM Services for Windows

On a Windows machine, you can configure RSM services to start automatically at boot time by running the RSM startup script for Windows. You can also uninstall and restart the services by running the script with additional command line options.

Note: RSM services cannot be started from a network installation. It is recommended that you install RSM on a local machine.

For GPU requirements when RSM is installed as a service, see GPU Requirements in the Installation and Licensing Documentation.

RSM Command Line Options for Windows
By adding the following command line options to the end of an RSM service script, you can specify which service or services you wish to configure:

-mgr Command line option for applying the command to the Manager service.
-svr Command line option for applying the command to the Compute Server service.

If you use both options with the selected script, the script is applied to both services.

Configuring RSM Services to Start Automatically at Boot Time for Windows
To configure RSM services to start automatically at boot time, run the AnsConfigRSM.exe script:

1. Log into a Windows account with administrative privileges.
2. Ensure that Ans.Rsm.* processes are not running in the Windows Task Manager.
3. Open a command prompt in the [RSMInstall]\bin directory.
4. Enter the AnsConfigRSM.exe script into the command line, specifying the service by using the appropriate command line options. The examples below show how to configure both services, the Manager service only, or the Compute Server service only.

AnsConfigRSM.exe -mgr -svr
AnsConfigRSM.exe -mgr
AnsConfigRSM.exe -svr

5. Run the command. Windows 7 users may need to select the Run as administrator option.

If the RSM services have been removed, you can also use the above sequence of steps to reconfigure the services.

Uninstalling RSM Services for Windows
To unconfigure (remove) all RSM services, run the AnsUnconfigRSM.exe script:

1. Log into a Windows account with administrative privileges.
2. Ensure that Ans.Rsm.* processes are not running in the Windows Task Manager.
3. Open a command prompt in the [RSMInstall]\bin directory.
4. Enter the AnsUnconfigRSM.exe script into the command line.
5. Run the command. If you are using a Windows 7 operating system, you may need to select the Run as administrator option from the right-click context menu.

Note: The uninstaller can only stop services that were started by, and are owned by, the user performing the uninstall.
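Step 2 in each of the procedures above asks you to verify that no Ans.Rsm.* processes are running. As an alternative to the Task Manager, you can check this from the same command prompt. This is a minimal sketch; the filter simply matches the Ans.Rsm prefix used by the RSM service executables (for example, Ans.Rsm.JMHost.exe).

REM List any running RSM processes; no output means none are running
tasklist | findstr /I "Ans.Rsm"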

Installing and Configuring RSM Services for Linux

The following RSM configuration topics for Linux are discussed in this section:

Configuring RSM to Use a Remote Computing Mode for Linux
Installing RSM Services for Linux
Additional Linux Considerations

Configuring RSM to Use a Remote Computing Mode for Linux

When RSM is installed on a Linux-based platform, you can select either native communication mode or SSH communication mode for RSM to communicate with remote machines. The differences between these two modes are detailed below:

Native communication
Protocol Type: Uses the RSM application to execute commands and copy data to/from Compute Servers.
Installation Requirements: Requires RSM to be installed and running on the Compute Server (see Starting RSM Services Manually for Linux (p. 13)).
Data Transfer Efficiency: Most efficient data transfer for solution process launch and retrieval of results.
Platform Support: Supported on Windows and Linux only.

SSH communication
Protocol Type: Uses an external SSH application to execute commands and copy data to/from Compute Servers.
Installation Requirements: Requires installation of an SSH client (PuTTY SSH) on the RSM Client machines (see Appendix B).
Data Transfer Efficiency: Communication overhead slows solution process launch and retrieval of results.
Platform Support: Supported on all platforms.

ANSYS recommends that you use native communication where possible, and use SSH where platform support or IT policy requires it.

Configuring Native Cross-Platform Communications
In RSM, it is possible to configure a Linux machine for native mode communications. For performance reasons, native mode is the recommended method for cross-platform RSM communications; SSH should be used only if your IT department requires it. With native mode, a Linux Compute Server has RSM installed and running locally, so the SSH protocol isn't needed to provide communications between a Windows machine and a Linux Compute Server.

You can configure native mode communications by performing either of the following options on the Linux machine:

OPTION A: Run the ./rsmmanager and ./rsmserver scripts to manually start the Manager and Compute Server services. Refer to Starting RSM Services Manually for Linux (p. 13) for more information.

OPTION B: Configure RSM to start the Manager and Compute Server services at boot, as described in Starting RSM Services Automatically at Boot Time for Linux (p. 15).

Adding Common Job Environment Variables for Native Jobs
Before installing and starting the RSM services on Linux, you can edit the rsm_env_profile file under the [RSMInstall]/Config/tools/linux directory. In this file, you can add any common job environment variables needed for native jobs to run. For example, you can use this file to source environment variables specific to a batch-queueing system, or you can append a cluster-specific PATH.
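For example, the kinds of lines you might add to rsm_env_profile look like the following. This is a minimal sketch; the LSF profile path and the cluster bin directory are hypothetical and must be adjusted for your site.

#!/bin/sh
# Source the batch system's environment (site-specific path)
. /opt/lsf/conf/profile.lsf
# Append a cluster-specific directory to PATH
PATH=$PATH:/opt/cluster/bin
export PATH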

Once defined, the RSM services and native jobs should inherit this environment when any job is run. It is useful to be able to set common environment variables in a single place instead of having to set them in each job user's .cshrc or .profile file in the user's $HOME directory. The following shows the default content of the rsm_env_profile file:

#!/bin/sh
# The following examples show loading environment settings specific to a batch system (e.g., LSF, SGE/UGE).
# If defined, RSM services and jobs should then inherit this environment when a job is run.
#. /home/batch/lsf7.0/conf/profile.lsf
#. /home/batch/sge6.2u2/default/common/settings.sh

Installing RSM Services for Linux

The following related topics are discussed in this section:

Starting RSM Services Manually for Linux
Starting RSM Services Automatically at Boot Time for Linux

Starting RSM Services Manually for Linux

Manager and Compute Server machines must have RSM services running in order to manage or run jobs. If you are submitting jobs to a Manager or Compute Server on a remote machine, you can start RSM services manually by running the scripts detailed in this section. These scripts include:

rsmmanager Starts the Manager service.
rsmserver Starts the Compute Server service.
rsmxmlrpc Starts the XmlRpcServer service (required for EKM servers only).

These scripts are generated as part of the RSM installation process and are located in the WBInstallDir/RSM/Config/tools/linux directory. If these scripts were not generated during installation or are otherwise unavailable, you can generate them yourself. For instructions, see Generating RSM Service Startup Scripts for Linux (p. 99) in the RSM Troubleshooting (p. 99) section.

Important: When installing RSM services, you must determine whether you want to start the RSM services manually via the startup scripts or install the RSM services as daemons (that is, have the services start automatically when the machine is booted). Only one of these methods should be used.

Important: For security reasons, it is recommended that you do not start and run RSM service processes manually as the "root" user. If you want to install RSM on a multi-user Linux machine, the recommended practice is to install it as a daemon.

See Starting RSM Services Automatically at Boot Time for Linux (p. 15).

Note that when RSM services are started manually, they run as processes of the user who initiated the services. RSM services that were started manually are stopped each time the machine is rebooted; after a reboot, before you submit any jobs to RSM you must first restart the RSM services by running the appropriate startup scripts. If you'd prefer to start the services automatically when the machine is booted, you can configure daemons as described in Starting RSM Services Automatically at Boot Time for Linux (p. 15).

Manually Running RSM Service Scripts for Linux

You can run the RSM service scripts to manually start, stop, check the status of, and restart RSM services.

Starting an RSM Service Manually
You can start any of the three RSM services manually by running the appropriate service script with the command line option start. The examples below illustrate how to start each of the RSM services manually:

./rsmmanager start
./rsmserver start
./rsmxmlrpc start

Stopping an RSM Service Manually
You can stop any of the three RSM services manually by running the appropriate service script with the command line option stop. The examples below illustrate how to stop each of the RSM services manually:

./rsmmanager stop
./rsmserver stop
./rsmxmlrpc stop

Checking the Status of an RSM Service Manually
You can check the status of any of the three RSM services manually by running the appropriate service script with the command line option status. The examples below illustrate how to check the status of each of the RSM services manually:

./rsmmanager status
./rsmserver status
./rsmxmlrpc status

Restarting an RSM Service Manually
You can restart any of the three RSM services manually by running the appropriate service script with the command line option restart. The examples below illustrate how to restart each of the RSM services manually:

./rsmmanager restart
./rsmserver restart
./rsmxmlrpc restart

Manually Uninstalling RSM Services for Linux

1. Log into a Linux account with administrative privileges.
2. Ensure that Ans.Rsm.* processes are not running.
3. Open a terminal window in the RSM/Config/tools/linux directory.
4. Enter the rsmunconfig script into the command line, as shown below:

tools/linux#> ./rsmunconfig

5. Run the script.

Note: The uninstaller can only stop services that were started by, and are owned by, the user performing the uninstall.

Starting RSM Services Automatically at Boot Time for Linux

You can configure RSM services to start automatically when the machine is booted by configuring them as daemon services (if the services are not configured to start automatically, they must be started manually, as described in Starting RSM Services Manually for Linux (p. 13)). Daemon services are scripts or programs that run persistently in the background of the machine and are usually executed at startup by the defined runlevel.

The following related topics are discussed in this section:

Installing RSM Automatic Startup (Daemon) Services for Linux
Working with RSM Automatic Startup (Daemon) Services for Linux
Uninstalling RSM Automatic Startup (Daemon) Services for Linux

Installing RSM Automatic Startup (Daemon) Services for Linux

Security Requirements for Daemon Service Configuration
To install RSM services as daemons, you must have system administrative permissions (that is, you must be logged in and installing as a root user or sudoer). For security reasons, it is recommended that you do not run RSM services as the root user. Many Linux operating systems allow only root users to listen on specific ports, so the ports that are required by the RSM Solve Manager and Compute Server services may be blocked by system administration.

For these reasons, the RSM daemon service installation creates a non-root, no-logon user account called rsmadmin; the account is a member of the rsmadmins user group and has a home directory of /home/rsmadmin. The RSM daemon services are then run by the rsmadmin user.

The RSM daemon service installation only creates the rsmadmin user account if the account does not already exist. The same is true for the rsmadmins user group if the group does not already exist. The account/group is created locally on the computer on which the RSM service(s) will be run. If you want the account/group to be managed on the master server by Network Information Service (NIS), you need to ask your IT department to create the rsmadmin user account and rsmadmins group in NIS before running the RSM daemon service scripts.

When an RSM package is installed under a directory, make sure that all of its parent directories (not the files in the directory) have both read and execute permissions, so that the RSM service executable can be started by a non-root user.

Daemon Service Installation Methods
There are two ways to install RSM services as daemons: by running the rsmconfig script, or by running the install_daemon script. The difference between the two methods is that whereas the rsmconfig script always generates fresh service scripts before starting the service installation, the install_daemon script assumes that the service scripts are already available in the WBInstallDir/RSM/Config/tools/linux directory and uses the existing scripts for service installation, allowing the system administrator to perform advanced script customizations before the services are installed. Both scripts are located in the RSM/Config/tools/linux directory and have the same command line options.

tools/linux#> ./rsmconfig -help
Options:
-mgr: Install RSM Job Manager service.
-svr: Install RSM Compute Server service.
-xmlrpc: Install RSM XML-RPC Server.

tools/linux# ./install_daemon
Usage: ./install_daemon [-mgr] [-svr] [-xmlrpc]
Options:
-mgr: Install RSM Job Manager service.
-svr: Install RSM Compute Server service.
-xmlrpc: Install RSM XML-RPC Server.

Installing RSM Services as Daemons
To install RSM services as daemon services, run either the rsmconfig script or the install_daemon script, as follows:

1. Log into a Linux account with administrative privileges.
2. Ensure that Ans.Rsm.* processes are not running.
3. Open a terminal window in the RSM/Config/tools/linux directory.
4. Enter the script into the terminal window.
5. Add the appropriate command line options (-mgr, -svr, or -xmlrpc).
6. Run the command.

The two examples below show the command line used to configure the Manager and Compute Server service daemons via either the rsmconfig or the install_daemon script.

tools/linux#> ./rsmconfig -mgr -svr
tools/linux#> ./install_daemon -mgr -svr

Once a daemon service is installed, the RSM service is started automatically without rebooting. Each time the machine is subsequently rebooted, the installed RSM service is started automatically.

Verifying the RSM Daemon Installation
To verify that the automatic boot procedure is working correctly, reboot the system and check that the services are running by typing the appropriate ps command and looking for Ans.Rsm in the resulting display:

ps aux | grep Ans.Rsm

Working with RSM Automatic Startup (Daemon) Services for Linux

Once an RSM daemon service is configured, any user can check the status of the service. System administrators can also start or restart the service.

Stopping the Daemon Service
To stop the daemon service:

/etc/init.d/rsmmanager150 stop

Checking the Status of the Daemon Service
To check the status of the daemon service:

/etc/init.d/rsmmanager150 status

Restarting the Daemon Service
To restart the daemon service:

/etc/init.d/rsmmanager150 restart

Uninstalling RSM Automatic Startup (Daemon) Services for Linux

As with RSM daemon service installation, only a system administrator can uninstall the RSM daemon services. Also, the uninstaller can only stop services that were started by, and are owned by, the user performing the uninstall.

Uninstalling All RSM Daemon Services
To uninstall all RSM daemon services, run the rsmunconfig script (without command line options). The script is located in the WBInstallDir/RSM/Config/tools/linux directory. The example below shows the command line used to uninstall all RSM service daemons.

tools/linux#> ./rsmunconfig

Uninstalling Individual RSM Daemon Services
To uninstall RSM daemon services individually, run the uninstall_daemon script. The script is located in the WBInstallDir/RSM/Config/tools/linux directory. Specify the service by using command line options, as shown below:

tools/linux# ./uninstall_daemon
Usage: ./uninstall_daemon [-mgr] [-svr] [-xmlrpc] [-rmadmin]
Options:
-mgr: Uninstall RSM Job Manager service.
-svr: Uninstall RSM Compute Server service.
-xmlrpc: Uninstall RSM XML-RPC Server.
-rmadmin: Remove 'rsmadmin' user and 'rsmadmins' group service account.

The example below shows the command line used to uninstall the Solve Manager and Compute Server service daemons via the uninstall_daemon script.

tools/linux#> ./uninstall_daemon -mgr -svr

Removing the Administrative User Account and Service Group Manually
By default, the rsmunconfig script does not remove the rsmadmin user account and rsmadmins user group that were created when the service was configured. This allows the same account and user group to be reused for the next service installation and configuration, and also prevents the accidental deletion of important files from the rsmadmin home directory (/home/rsmadmin). However, if you decide that you do not want to keep the user account and user group, you can remove them manually by adding the -rmadmin command line option to the uninstall_daemon script.

tools/linux#> ./uninstall_daemon -rmadmin

Important: The service account and group cannot be deleted if one or more RSM services are still being run by that user account and service group. When no services are being run by these accounts, the above command prompts you to answer Yes or No before RSM deletes them.

Additional Linux Considerations

When running RSM on Linux, the following considerations apply:

Linux Path Configuration Requirements
The RSM job scripts that integrate with Linux using PuTTY SSH require you to set AWP_ROOT150 in the user's environment variables. If the job is not running properly, check the job log in the Job Log view for "Command not found". Remote command clients like PuTTY SSH use the remote account's default shell for running commands. For example, if the account's default shell is CSH, the following line needs to be added to the .cshrc file (the path may be different for your environment):

setenv AWP_ROOT150 /ansys_inc/v150

Note: The ~ (tilde) representation of the home directory is not supported for use in RSM paths (for example, the Working Directory in the Compute Server Properties dialog).

Different shells use different initialization files in the account's home directory and may have a different syntax than shown above. Refer to the Linux man page for the specific shell or consult the machine administrator.
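For example, if the account's default shell is Bash rather than CSH, the equivalent setting would go in the account's ~/.bashrc or ~/.profile. This is a sketch of the Bash syntax, assuming the same installation path as above:

# Bash equivalent of the CSH setenv line shown above
export AWP_ROOT150=/ansys_inc/v150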

RSH/SSH Settings for Inter/Intra-Node Communications
The Use SSH protocol for inter- and intra-node communication (Linux only) property, located on the General tab of the Compute Server Properties dialog, determines whether RSM and solvers use RSH or SSH for inter-node and intra-node communications on Linux machines. When Fluent, CFX, Mechanical, and Mechanical APDL are configured to send solves to RSM, their solvers use the same RSH/SSH settings as RSM.

Explicit Dynamics Systems
RSM does not support Linux connections for Explicit Dynamics systems. Only Windows-to-Windows connections are currently supported.

Configuring a Multi-User Manager or Compute Server

When configuring RSM on a single machine used by multiple users to submit RSM jobs, follow these guidelines:

All RSM users should have write access to the RSM working directory. The default working directory may not function properly if write permissions are not enabled for all applicable users.

All RSM users should cache their account password (refer to Working with Account Passwords (p. 48)). If all users do not cache their passwords, only the user that started RSM on the machine can submit jobs.

When installing RSM on a multi-user Linux machine, ANSYS strongly recommends that you set up RSM as a daemon (see Starting RSM Services Automatically at Boot Time for Linux (p. 15)). Running RSM as a daemon allows you to maintain consistent settings. If RSM is not run as a daemon, the settings vary depending on which user first starts the RSM processes.

If you are running ANSYS Workbench on a multi-user RSM machine, the My Computer, Background option that is available for ANSYS Mechanical (see Using Solve Process Settings in the ANSYS Mechanical User's Guide) will likely not function as expected with Rigid Dynamics or Explicit Dynamics due to write permissions for RSM working directories. As a workaround for this issue, follow these guidelines:

Ensure that Manager and Compute Server (ScriptHost) processes always run under the same user account. This will ensure consistent behavior.

Do not use the built-in My Computer or My Computer, Background solve process settings. Add a remote solve process setting that specifies the machine name, rather than localhost, as the Manager name. For more information, see Using Solve Process Settings in the ANSYS Mechanical User's Guide.

To run more than one job simultaneously, adjust the Max Running Jobs property in the Compute Server Properties dialog.

Configuring RSM for a Remote Computing Environment

You must configure RSM Clients to work with Managers and Compute Servers on remote computers. If RSM services are run across multiple computers, refer to the following RSM configuration procedures:

Adding a Remote Connection to a Manager
Adding a Remote Connection to a Compute Server
Configuring Computers with Multiple Network Interface Cards (NIC)

When communicating with a remote computer, whether RSM Client to Manager or Manager to Compute Server, RSM services must be installed on those computers.

Adding a Remote Connection to a Manager

RSM Clients can monitor and configure multiple Managers. The following steps describe how to add a remote connection to a Manager on a remote computer:

1. Launch RSM.
2. In the RSM main window, select Tools > Options. The Options dialog appears.
3. In the Name field, enter the name of a remote machine with the Manager service installed.
4. Select the Add button and then OK. The Manager and all of its queues and Compute Servers appear in the tree view.
5. Passwords are cached on the Manager machine, so you must set the password again. Refer to Working with Account Passwords (p. 48) for this procedure.

Adding a Remote Connection to a Compute Server

To use compute resources on a remote Compute Server, the Manager machine must add a new Compute Server as described in Adding a Compute Server (p. 55), and then configure remote Compute Server connections with the following considerations:

If the Compute Server is running Windows, only the machine name is required in the Display Name property on the General tab of the Compute Server Properties dialog.

If the Compute Server involves integration with a Linux machine or another job scheduler, refer to Appendix B for integration details.

Ensure that you have administrative privileges to the working directory of the new Compute Server.

Always test the configuration of a connection to a new remote Compute Server after it has been created, as described in Testing a Compute Server (p. 70).

Configuring Computers with Multiple Network Interface Cards (NIC)

When multiple NIC cards are used, RSM may require additional configuration to establish the desired communications between tiers (that is, the RSM Client, Manager, and Compute Server machines). The most likely scenario is that the issues originate with the Manager and/or Compute Server. First, try configuring the Manager and/or Compute Server machine(s):

1. In a text editor, open the Ans.Rsm.JMHost.exe.config file (Manager) and/or the Ans.Rsm.SHHost.exe.config file (Compute Server). These files are located in Program Files\ANSYS Inc\v150\RSM\bin.

2. In both files, add the machine's IP address to the TCP channel configuration. Substitute the machine's correct IP address for the value of machineName. The correct IP address is the address seen in the output of a ping from a remote machine to the Fully Qualified Domain Name (FQDN).

<channel ref="tcp" port="9150" secure="false" machineName="ip_address">

3. Save and close both files.

4. Restart the following services: ANSYS JobManager Service V15.0 and ANSYS ScriptHost Service V15.0.

For Windows: On your Administrative Tools or Administrative Services page, open the Services dialog. Restart each service by right-clicking it and selecting Restart.

For Linux: Log into a Linux account with administrative privileges and ensure that Ans.Rsm.* processes are not running. Open a terminal window in the [RSMInstall]/Config/tools/linux directory and run the following command:

./rsmmanager restart

If configuring the Manager and/or Compute Server does not resolve the problem, the RSM Client machine may have multiple NICs and require additional configuration. For example, a virtual NIC used for a VPN connection on an RSM Client machine can cause a conflict, even if not connected. If configuring the Manager and/or Compute Server machines doesn't work, configure the multi-NIC RSM Client machine:

1. Using a text editor, create a file named Ans.Rsm.ClientApi.dll.config in Program Files\ANSYS Inc\v150\RSM\bin. (If this file does not exist, RSM uses a default configuration.)

2. Copy and paste the text below into Ans.Rsm.ClientApi.dll.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.runtime.remoting>
    <application>
      <channels>
        <channel ref="tcp" port="0" secure="true" machineName="ip_address">
          <clientProviders>
            <formatter ref="binary" typeFilterLevel="Full"/>
          </clientProviders>
        </channel>
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>

3. Replace ip_address with a valid IP address.

4. Save and close the file.

2.4. Setting Up RSM File Transfers

ANSYS Remote Solve Manager offers several methods of transferring files. The preferred method is OS File Transfer, which uses existing network shares to copy files with the built-in operating system copy commands. Other methods include native RSM file transfer, SSH file transfer, and complete custom integration. You can also reduce or eliminate file transfers by sharing a network save/storage location. One of these methods is used whenever you submit a job to a remote machine. For details on each method, or on how to eliminate file transfers, see:

Operating System File Transfer Utilizing Network Shares
Eliminating File Transfers by Utilizing a Common Network Share
Native RSM File Transfer
SSH File Transfer
Custom Client Integration

Operating System File Transfer Utilizing Network Shares

RSM file transfer provides the ability to use the operating system (OS) copy operation. The OS Copy operation can be significantly (up to five times) faster than the native file transfer used in the RSM code (described in Native RSM File Transfer (p. 28)). OS Copy is a faster and more efficient method of file transfer because it utilizes standard OS commands and NFS shares. Typically, the client files are local to the Client machine and are only transferred to the remote machines for solving because of storage speed, capacity, and network congestion concerns.

No specific configuration is necessary within RSM itself. To enable the OS Copy operation, you must configure the directories that will be involved in the file transfer so that the target directory is both visible to and writable by the source machine. Generally, the target directories involved are:

The Project Directory on the Manager machine (as specified in the Solve Manager Properties dialog)
The Working Directory on the Compute Server machine (as specified in the Compute Server Properties dialog)

Once the configuration is complete, the RSM Client machine should be able to access the Project Directory on the Manager machine, and the Manager machine should be able to access the Working Directory on the remote Compute Server machine. The OS Copy operation is then used automatically for file transfers.

If two RSM services are on the same machine, no configuration is necessary for OS Copy to function between those two services. For example, in an RSM layout where the Manager and Compute Server are on the same machine and the Client is running on a separate machine, the Manager can access the Working Directory as long as the permissions are set to allow it. In this case, the only other configuration necessary is to ensure that the RSM Client can access the Manager's network-shared Project Directory on the remote machine.

The steps for configuring directories for the OS Copy operation, discussed in the following sections, differ between Linux and Windows. For the sake of general applicability, the configuration instructions in the following sections assume an RSM layout in which each service runs on a separate machine. In a typical environment, however, ANSYS suggests that the Manager and Compute Server be on the same machine.

Related Topics:
Windows-to-Windows File Transfer
Linux-to-Linux File Transfer
Windows-to-Linux File Transfer
Verifying OS Copy File Transfers

Windows-to-Windows File Transfer

System Administrator permissions are required to configure directories for Windows-to-Windows OS Copy file transfers.

For Windows-to-Windows file transfers, RSM uses predefined share names to locate and identify the target directories. You must perform the following setup tasks for each of the target directories:

Share the target directory out to the remote machine.
Provide full read-write permissions for the shared directory.

Perform these steps for each of the target directories:

1. In Windows Explorer, right-click the target directory. This is the directory you want to make visible for the OS Copy operations: either the Manager Project Directory or the Compute Server Working Directory.

2. Select the Sharing tab and click Share.

3. Click the Advanced Sharing button.

4. In the Advanced Settings dialog, click Share this Folder and enter the correct name for the share:

For the Project Directory on the Manager machine, enter RSM_Mgr. For example, the directory C:\Projects\ProjectFiles may have a share named \\winmachine06\RSM_Mgr.

For the Working Directory on the Compute Server machine, enter RSM_CS. For example, the directory D:\RSMWorkDir may have a share named \\winmachine2\RSM_CS.

5. Ensure that full read-write permissions are defined for the target directory.

Note: This naming requirement applies only to the network share for the target directory; the directory itself can have a different name. Once the target directory is shared, you can access it by typing the share path into Windows Explorer.

6. Perform these steps for the other target directory.
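If you prefer to create the shares from a command prompt rather than through Windows Explorer, the net share command can produce an equivalent result. This is a minimal sketch, assuming the example Manager directory above and a hypothetical user jsmith; your administrator may prefer more restrictive permissions.

REM Share the Manager's Project Directory under the required RSM_Mgr share name
net share RSM_Mgr=C:\Projects\ProjectFiles /GRANT:jsmith,FULL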

Linux-to-Linux File Transfer

Root permissions are required to configure directories for Linux-to-Linux OS Copy file transfers.

For Linux-to-Linux file transfers, RSM uses mount points to locate and identify the target directories. You must configure each of the target directories by performing the following setup tasks:

1. Ensure that the target directory belongs to a file system that is mounted, so that the target directory is visible to the machine on which the source directory is located. Use the full path for the target directory.

2. Provide full read-write privileges for the target directory.
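For example, the target directory might be exported over NFS and mounted on the source machine. This is a minimal sketch; the host name managerhost, the exported path, and the mount point are hypothetical, and your site may use the automounter or /etc/fstab instead.

# On the source machine (run as root): mount the Manager's exported project directory
mkdir -p /mnt/rsm_projects
mount -t nfs managerhost:/home/staff/rsm/projectdirectory /mnt/rsm_projects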

Windows-to-Linux File Transfer

Root permissions on the Linux machine are required to configure directories for Windows-to-Linux OS Copy file transfers. For Windows-to-Linux transfers (using Samba or a similar Linux utility), entries in the Samba configuration file map the actual physical location of the Linux target directories to the predefined Windows share names that RSM uses to locate and identify the target directories.

The following example shows how to configure a Samba share on Linux for the target directories RSM requires for the OS Copy operation. If you are unable to create the share, contact your IT System Administrator for assistance with this step.

Edit the smb.conf Samba configuration file to include definitions for each of the Linux target directories. The example below shows Samba's default values for the Linux target directories.

[RSM_Mgr]
path = /home/staff/rsm/projectdirectory
browseable = yes
writable = yes
create mode = 0664
directory mode = 0775
guest ok = no

[RSM_CS]
path = /home/staff/rsm/workingdirectory
browseable = yes
writable = yes
create mode = 0664
directory mode = 0775
guest ok = no

The path should point to the actual physical location of the existing target directories. The path for the Project Directory should match the Project Directory path defined in the Solve Manager Properties dialog. The path for the Working Directory should match the Working Directory path defined in the Compute Server Properties dialog.

After making your changes to smb.conf, restart the Samba server by running the following command:

/etc/init.d/smb restart

The locations of files and the method of restarting the Samba service may vary for different Linux versions.

Verify that the Samba shares are accessible by your Windows machine, indicating that they have been properly set up. Check this by using Windows Explorer and navigating to the locations shown below (using your specific machine name in place of linuxmachinename):
- \\linuxmachinename\RSM_Mgr for the Project Directory on the Manager machine
- \\linuxmachinename\RSM_CS for the Working Directory on the Compute Server machine
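The same check can be run from a Windows command prompt, which is convenient when scripting the verification. A sketch, reusing the linuxmachinename placeholder from above:

:: Connect to each Samba share; an error here means the share is not reachable
net use \\linuxmachinename\RSM_Mgr
net use \\linuxmachinename\RSM_CS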

Additional Windows-to-Linux Configuration When Using Alternate Accounts

A permissions issue can occur when an alternate account is used to run jobs on the Linux side. To resolve this issue, make sure that Samba (or a similar Linux utility) is correctly configured. The following code sample is from the Samba configuration file, smb.conf, showing a configuration for file sharing between three accounts:
- A Windows account mapped to a Linux account
- An alternate account
- An account that runs as the RSM service

[RSM_CS]
path = /lsf/wbtest
browseable = yes
writable = yes
create mode = 0666
directory mode = 0777
guest ok = no

create mode: The Samba default is 664, which corresponds to rw-rw-r--. If the alternate account is not in the same group as the owner of the file, the job cannot write to the file and an error occurs for files that are both inputs and outputs. To provide full read-write access for all the accounts, set create mode to 666, as shown above in the code sample. This sets the permissions for files that are copied from Windows to Linux to rw-rw-rw-, allowing all accounts to both read from and write to the file.

directory mode: The Samba default is 775. If the copy from Windows to the Samba share results in the creation of directories, a value of 775 prevents the job running under the alternate account from creating files in the newly copied subdirectories. To allow the job to create files in the new subdirectories, set directory mode to 777.

After making your changes to smb.conf, restart the Samba server as shown above. The locations of files and the method of restarting the Samba service may vary for different Linux versions.

Verifying OS Copy File Transfers

After configuring a target directory for sharing, you can run a Test Server operation. Information about the method used for file transfer is written to the job log in the RSM Job Log view and can be used to verify whether RSM files are being transferred via the OS Copy operation: in the job log, the messages "Manager network share is available" and "Compute Server network share is available" indicate that all necessary directories are visible and OS Copy is being used.

Eliminating File Transfers by Utilizing a Common Network Share

Even though Workbench projects are typically run locally, small projects, or larger models utilizing the exceptional networks and file systems that exist today, can allow Workbench projects to be saved to and opened from a network share. When using a shared Workbench storage location, this shared folder can be used to minimize file transfers. In particular, this can remove the necessity of transferring files between the Client machine and the remote machine(s); ideally, this storage would be directly attached to the Compute Server(s).

RSM places marker files in the RSM Client, Manager, and Compute Server directories to uniquely identify the job. If the Manager finds the RSM Client's marker in the project storage area (by recursively searching subfolders), it will use that folder rather than copying the files to a separate folder. Similarly, if the Compute Server finds the Manager's marker (by recursively searching subfolders), it will also use that location rather than copying files unnecessarily.

Remember that while this leverages drivers at the operating system level which are optimized for network file manipulation, files are still located on remote hard drives. As such, there will still be significant network traffic, e.g. when viewing results and opening and saving projects. Each customer will have to determine the RSM configuration that best utilizes network resources.

The Client must be able to access the Client Directory under the RSM Manager Project Directory. The Manager must have access to its sub-folders, including the RSM Client Directory and the shared Compute Server Working Directory.

One or both of these directories will be under the shared Manager Project Directory in this setup.

Example: You can set up RSM to use file shares in order to remove unnecessary file transfers. For example, you might have a Linux share /usr/user_name/MyProjectFiles/, and have that same folder shared via Samba or a similar method and mounted on the Windows Client machine as Z:\MyProjectFiles\. If you save your Workbench projects to this network location, you can set the Manager and Compute Server properties as follows in order to remove all file transfers and use the network share directly as the working directory:

Manager
- For a Linux-based Manager, set the Project Directory Location property to /usr/user_name/MyProjectFiles/.
- For a Windows-based Manager, set the Project Directory Location property to Z:\MyProjectFiles\.

Compute Server
- For a Linux-based Compute Server, set the Working Directory Location property to /usr/user_name/MyProjectFiles/.
- For a Windows-based Compute Server, set the Working Directory Location property to Z:\MyProjectFiles\.

In some cases, you might still want a separate Working Directory and/or Project Directory, and thus would not define the corresponding network file share(s) as described above. For example, if the jobs to be submitted will make heavy use of scratch space (as some Mechanical jobs do), you might wish to retain a separate Working Directory on a separate physical disk, and thus would not define the two Working Directories to be in the same location.

Native RSM File Transfer

Native RSM file transfer occurs automatically if the preferred OS Copy or Common Network Share setup is not found. Native transfer requires no special setup or considerations, but is usually slower than the preferred OS Copy setup. This method of file transfer uses the installed RSM services to start a service-to-service file copy using the standard Microsoft .NET libraries. RSM also includes some built-in compression features which can aid with copying over slow connections. For more information about these features, see Modifying Manager Properties (p. 54).

SSH File Transfer

SSH file transfer can be defined to transfer files between a Windows proxy Compute Server and a Linux machine, but is not supported in other configurations. SSH file transfer mode actually references an external PuTTY implementation and is not natively included with RSM; it is included as an option for customers who must use this protocol based on their specific IT security requirements. This method is also usually slower than the preferred OS Copy method, and thus is not recommended unless it is required. For more information on setting up SSH, see Appendix B (p. 113).

Custom Client Integration

RSM also provides a method for completely customizing the file handling of RSM, using client-side integration to suit any specialized customer needs by using customer-written scripts. For more information on custom integration techniques, see Customizing ANSYS Remote Solve Manager (p. 73).

Accessing the RSM Configuration File

RSM configuration data is stored in the RSM.Config file. It is not recommended that you edit this file, but you may want to locate it in order to create a backup copy of your RSM configurations. You can also manually load RSM configurations onto another machine by copying the file to the appropriate directory on that machine.

The location of the RSM.Config file depends on how the Manager service has been installed. To access the RSM.Config file:

- If the Manager service has been installed as a Windows service running as SYSTEM, the file is located in %ALLUSERSPROFILE%\ANSYS\v150\RSM\RSM.Config.
- If the Manager is run as a normal process on Windows, the file is located in %APPDATA%\ANSYS\v150\RSM\RSM.Config. For a user who can log on from different machines, the system must already be configured to use the Roaming profile.

- On Linux, the file is located in ~/.ansys/v150/rsm/rsm.config, where ~ is the home directory of the account under which the Manager is being run.
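Making the backup copy is then a simple file copy. A sketch for the two common cases (the destination paths are illustrative):

# Linux: back up the configuration of the account running the Manager
cp ~/.ansys/v150/rsm/rsm.config ~/rsm.config.backup

:: Windows (Manager installed as a SYSTEM service), from a command prompt
copy "%ALLUSERSPROFILE%\ANSYS\v150\RSM\RSM.Config" "%USERPROFILE%\RSM.Config.backup"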


Chapter 3: ANSYS Remote Solve Manager User Interface

This chapter describes the following features of the RSM user interface:
3.1. Main Window
3.2. Menu Bar
3.3. Toolbar
3.4. Tree View
3.5. List View
3.6. Status Bar
3.7. Job Log View
3.8. Options Dialog Box
3.9. Desktop Alert
3.10. Accounts Dialog
3.11. RSM Notification Icon and Context Menu

3.1. Main Window

To launch the RSM application main window:
- If you are using a Windows system, select Start > All Programs > ANSYS 15.0 > Remote Solve Manager > RSM.
- If you are using a Linux system, run the rsmadmin script.

The main window displays as shown below. The RSM main window interface elements are described in the table that follows.

- Menu Bar: Provides access to the following menus: File, View, Tools, and Help.
- Toolbar: Contains the following tools, from left to right: the Show drop-down and the Remove, All Owner Jobs, and Job Log icons.
- Tree View: Displays defined Solve Managers, along with the Queues and Compute Servers configured for each.
- List View: Displays a listing of current jobs. You can delete jobs from this area by selecting one or more jobs from the list and selecting Remove from the context menu.
- Job Log View: Displays the progress and log messages for the job selected in the List view.
- Status Bar: Displays an icon indicating the status of the active operation.

3.2. Menu Bar

The menu bar provides the following functions:

File menu:
- Minimize to System Tray: Hides the RSM main window. RSM continues to run in the system tray.
- Exit: Exits the RSM application. Alternatively, you can right-click the RSM icon in the notification area (or system tray) and select Exit from the context menu.

View menu:
- All Owner Jobs: Controls the display of jobs in the List view, allowing you to display or hide jobs according to ownership. Deselect to display only your own jobs. Select to display the jobs of all owners.
- Job Log: Displays or hides the Job Log view.
- Refresh Now: Forces the List view to update immediately, regardless of the update speed setting.
- Update Speed: Provides the following submenu selections:
  - High: updates the display automatically every 2 seconds.
  - Normal: updates the display automatically every 4 seconds.
  - Low: updates the display automatically every 8 seconds.
  - Paused: the display does not automatically update.

Tools menu:
- Always On Top: When this option is selected, the main RSM window remains in front of all other windows unless minimized.
- Hide When Minimized: When this option is selected, RSM will not appear in the task bar when minimized. An RSM icon will display in the notification area (or system tray).
- Desktop Alert: Enables/disables the desktop alert window.
- Remove: Deletes the job or jobs selected in the List view.
- Submit a Job: Displays the Submit Job dialog, which allows you to submit jobs manually.
- Options: Displays the Manager Options dialog, which allows you to define Managers and specify desktop alert settings.

Help menu:
- ANSYS Remote Solve Manager Help: Displays the Help system in the ANSYS Help Viewer.
- About ANSYS Remote Solve Manager: Provides information about the program.

3.3. Toolbar

The toolbar provides the following functions:

Show drop-down: selects which jobs display in the List view.
- All Jobs: When this menu item is selected from the drop-down, all jobs display in the List view.
- Queued: Only queued jobs display in the List view. These jobs display with a Status of Queued.
- Running: Only running jobs display in the List view. These jobs display with a Status of Running.
- Completed: Only completed jobs display in the List view. These jobs display with a Status of Finished.
- Failed: Only failed jobs display in the List view. These jobs display with a Status of Failed.
- Cancelled: Only cancelled jobs display in the List view. These jobs display with a Status of Cancelled.

The remaining toolbar items are icons:
- Remove (selection not applicable): Allows you to delete the currently selected job or jobs. It functions in the same way as the Remove option of the right-click context menu, the Tools > Remove option in the menu bar, or the Delete key.
- All Owner Jobs (selected or deselected): Allows you to display or hide jobs that belong to owners other than yourself. The function is the same as using the View > All Owner Jobs option in the menu bar.
- Job Log (selected or deselected): Allows you to display or hide the Job Log view. The function is the same as using the View > Job Log option in the menu bar.

3.4. Tree View

The Tree view contains a list of Compute Servers, Queues, and Managers. Compute Servers and queues that appear may be set up on either your local machine (shown as My Computer) or remotely on a Manager. The components in the list are summarized below:

- Each Manager node is a separate configuration, defined by the machine designated as the Manager. New Managers are added via the Options dialog, accessed by Tools > Options on the menu bar.
- The Queues node for a Manager contains all of the queues that have been defined for that Manager. You can expand a Queue to view the Queue Compute Servers associated with it; these are the Compute Servers that have been assigned to the Queue (i.e., the machines to which the Manager will send queued jobs for processing).
- The Compute Servers node contains all of the Compute Servers associated with the Manager; these are the machines that are available to be assigned to a Queue and to which jobs can be sent for processing.

If you disable a Manager, Queue, or Compute Server, it will be grayed out in the Tree view.
- For information on disabling Managers, see Options Dialog Box (p. 40).

- For information on disabling Queues, see Creating a Queue (p. 53).
- For information on disabling Compute Servers, see Adding a Compute Server (p. 55).

If a connection cannot be made with a Manager, the Manager will be preceded by a red X icon. For information on testing Managers, see Testing a Compute Server (p. 70).

Tree View Context Menus

- When a Manager node is selected, the Properties and Accounts options are available in the context menu. If you haven't yet cached your password with RSM, the Set Password option is also available.
- When a Queues node is selected, only the Add option is available in the context menu.
- When a queue is selected, the Properties and Delete options are available in the context menu.

- When a Compute Server is selected under a Queues node or under a Compute Servers node, the Properties and Test Server options are available. The Delete option becomes available if a Compute Server that is not assigned to any queue is selected under a Compute Servers node, as shown in the image on the right below.
- When a Compute Servers node is selected, only the Add option is available.

For more information on using the Tree view context menu options, see RSM Administration (p. 51).

3.5. List View

You can sort the displayed fields by clicking on the column by which you want to sort.

You can delete one or more jobs that belong to you by selecting the jobs and clicking the Remove button in the toolbar. Alternatively, you can select Remove from the context menu, select Remove from the Tools menu, or press the Delete key. When you delete a job, the job may not be removed from the List view immediately; it will be removed the next time that the List view is refreshed.

If a job is still running, you cannot remove it. Use either the Abort or the Interrupt option in the List view context menu. Once the job Status changes to either Finished or Canceled, you can click the Remove button to delete the job. The Interrupt command allows a job to clean up the processes it has spawned before termination; the Abort command terminates the job immediately.

There may also be a job stopping option in the client application that submitted the job (for example, the ANSYS Workbench Mechanical Stop Solution command). There may also be a disconnect option in the client application that submitted the job (for example, the ANSYS Workbench Mechanical Disconnect Job from RSM command).

The List view context menu provides the following options:

- Inquire: Inquire about a running job. This action depends on the type of job being run. Generally, the Inquire command will run some additional job script code to perform some action on a running job. It can also bring back intermediate output and progress files.
- Abort: Immediately terminates a running job. Enabled only if a running job is selected. Jobs terminated via this option will have a Status of Canceled in the List view.
- Interrupt: Terminates a running job. Enabled only if a running job is selected. Jobs terminated via this option will have a Status of Finished in the List view.
- Remove: Deletes the selected job or jobs from the List view. Enabled only if a completed job is selected. Cannot be used on a running job. It functions in the same way as the Tools > Remove option in the menu bar.
- Set Priority: Allows you to set the submission priority for the selected job or jobs. When jobs are submitted they have a default priority of Normal. Enabled only for jobs with a Status of Queued. The higher priority jobs in a queue run first. To change the priority of a Queued job, right-click on the job name, select Set Priority, and change the priority. Only RSM administrators can change a job priority to the highest level.

The status of each job displayed in the List view is indicated by the Status column and an icon. For jobs that have completed, the Status column and an icon indicate the final status of the job; the addition of an asterisk (*) to the final status icon indicates that the job has been released. The statuses are as follows:

- Input Pending: Job is being uploaded to RSM.

- Queued: Job has been placed in the Manager queue and is waiting to be run.
- Running: Job is running.
- Cancelled: Job has been terminated via the Abort option. Also applies to jobs that have been aborted because you exited a project without first performing one of the following actions: saving the project since the update was initiated, or saving results retrieved since your last save.
- Finished: Job has completed successfully. Also applies to jobs that have been terminated via the Interrupt option or for which you have saved results prior to exiting the project.
- Failed: Job has failed. May also be applied to jobs that cannot be cancelled due to fatal errors.

3.6. Status Bar

The Status bar indicates the status of the currently running operation by displaying either a Busy icon or a Ready icon in its bottom left corner.

3.7. Job Log View

The Job Log view provides log messages about the job. The log automatically scrolls to the bottom to keep the most recent messages in view. To stop automatic scrolling, move the vertical slider from its default bottom position to any other location. To resume automatic scrolling, either move the vertical slider back to the bottom or select End from the Job Log view context menu.

The right-click context menu provides the following options:

- Copy: Copy selected text in the Job Log view. Alternatively, you can use the Ctrl+C key combination.
- Select All: Select all of the text in the Job Log view. Alternatively, you can use the Ctrl+A key combination.
- Home: Go to the top of the Job Log view.
- End: Go to the bottom of the Job Log view.
- Save Job Report...: Generates a Job Report for the job item selected from the RSM List view. Enabled when the job has completed (i.e., has a final Status of Finished, Failed, or Cancelled). The Job Report will include job details and the contents of the job log shown in the Job Log view. When generating the report, you can specify the following report preferences:
  - Include Debug Messages: whether debugging messages are included in the Job Report
  - Include Log Time Stamp: whether a log time stamp is included in the Job Report
  - Include Line Numbering: whether line numbering will be displayed in the Job Report
  Click the Browse button to select the directory to which the report will be saved, type in the report filename (RSMJob.html by default), select the report format (HTML or text format), and click Save.
- Line Numbering: Enable or disable the display of line numbers in the Job Log view. Right-click inside the Job Log view and select or deselect Line Numbering from the context menu.
- Time Stamp: Enable or disable the display of the time stamp for each line in the Job Log view. Right-click inside the Job Log view and select or deselect Time Stamp from the context menu.
- Debug Messages: Enable or disable the display of debugging information. Right-click inside the Job Log view and select or deselect Debug Messages from the context menu to toggle between standard job log messages and debugging messages.

When making a support call concerning RSM functionality, send the RSM job report. The HTML-format job report uses color highlighting by row to distinguish the Job Log view contents from other information, which can be helpful for troubleshooting.

3.8. Options Dialog Box

From the menu bar, select Tools > Options to open the Options dialog. Use the Options dialog to configure Managers or set up desktop alert settings. The Options dialog contains the following functions:

The Managers pane lists available RSM Manager machines.
- To enable or disable a Manager, select or deselect the preceding check box. Disabled Managers will display as grayed out in the Tree view.
- To add a new Manager, type its name into the Name field and click the Add button.
- To remove a Manager, highlight it in the list and click the Delete button.
- To change the name of a Manager, highlight it in the list, edit the name in the Name field, and click the Change button.

The Desktop Alert Settings pane contains check boxes to configure the following desktop alerts:
- Show Running Jobs
- Show Pending Jobs
- Show Completed Jobs

3.9. Desktop Alert

The desktop alert automatically appears when jobs are active. It displays the running, queued, and completed jobs. The number of queued, running, and completed jobs is also displayed in the window title. If all jobs are finished, the desktop alert disappears automatically.

If you wish to hide the desktop alert window, turn it off via the menu options or the tray context menu (right-click on the RSM icon in the notification area or system tray). If you close the desktop alert, it will not remain hidden permanently; it will display again as long as jobs are active, unless the alert is turned off.

You can specify what jobs display on the desktop alert via the Options dialog. To access the Options dialog, select Options from the RSM icon context menu or select Tools > Options from the menu bar.

3.10. Accounts Dialog

The Accounts dialog allows you to add, edit, and delete primary and alternate accounts. You can also define Compute Servers and change the passwords for primary and alternate accounts. To access the Accounts dialog, right-click the Manager node in the Tree view and select Accounts from the context menu.

The Add Primary Account button allows you to define primary accounts for RSM. A primary account must be defined for your current user name in order for you to add new primary accounts. If a primary account is not defined for your current user name, the User Name field of the Adding Primary Account dialog defaults to your current user name.

When you right-click an existing account, the following context menu options are available:

- Add Alternate Account: Create an alternate account for the selected primary account. Available only when a primary account is selected.
- Change Password: Change the password for the selected account.
- Remove: Deletes the selected account. When a primary account is removed, any associated alternate accounts are also removed.

By default, the primary account can send jobs to all of the Compute Servers. If an alternate account is defined, the check boxes in the Compute Servers list allow you to specify which Compute Servers will use the alternate account. For details on working with accounts, see RSM User Accounts and Passwords (p. 45).

3.11. RSM Notification Icon and Context Menu

When RSM is minimized, it does not display in the task bar, but you can open the interface by double-clicking on the RSM icon in the notification area (also called the system tray for Windows XP or Linux GNOME). On a Windows system, the notification area or system tray is accessible from the desktop and the RSM icon is loaded to the notification area by default. On a Linux system, you may need to enable the notification area or system tray for the desktop.

To open the RSM interface, double-click the notification icon. The icon changes based on the status of jobs, and the tooltip on the icon displays the current status of the jobs (i.e., the status and how many of those jobs are running, queued, failed, etc.). The notification icon indicates one of the following job statuses:
- No jobs are running.
- At least one job is running.
- At least one job has failed.
- All jobs have completed.

Right-click the RSM icon to access its context menu. The context menu contains most of the options that are available on the RSM menu bar, as shown below:

- Options: Displays the Options dialog. Functions in the same way as Tools > Options on the menu bar.
- Help: Displays the Help system in another browser window.
- About: Provides information about the program.
- All Owner Jobs: Displays or hides jobs that belong to other owners. Works in conjunction with the View > All Owner Jobs option in the menu bar and the All Owner Jobs icon in the toolbar.
- Desktop Alert: Enables/disables the desktop alert window (see Desktop Alert (p. 40)). Works in conjunction with the Tools > Desktop Alert option in the menu bar.
- Open Job Status: Displays the RSM main window.
- Exit: Exits the RSM application. Functions in the same way as File > Exit on the menu bar.


Chapter 4: RSM User Accounts and Passwords

The Accounts dialog allows you to configure accounts and passwords for RSM users. The changes you are able to make depend on your RSM privileges, as follows:

RSM Administrative Privileges

If you are a member of the RSM Admins user group, you have administrative privileges for RSM. You can use the Accounts dialog to perform the following tasks:
- Create accounts
- Modify any account
- Change the password for any account
- Change the assignment of Compute Servers to any alternate account

To create the RSM Admins user group and add users:

1. Right-click on Computer and select Manage.
2. On the Computer Management dialog, expand Local Users and Groups.
3. Right-click on the Groups folder and select New Group.
4. On the New Group dialog, enter RSM Admins as the Group Name and add members by clicking the Add button.
5. On the Select Users, Computers, Service Accounts, or Groups dialog:
   - Type in user names.
   - Click the Check Names button to check and select each name.
6. Click the Create button to create the new group.
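On most Windows versions the same group can also be created from an elevated command prompt, which may be more convenient when configuring several Manager machines. A sketch (the account name is an example):

:: Create the local RSM Admins group
net localgroup "RSM Admins" /add

:: Add a user to the group (example account name)
net localgroup "RSM Admins" DOMAIN\johnd /add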

RSM Non-Administrative Privileges

If you are not a member of the RSM Admins user group, you do not have administrative privileges for RSM. You can use the Accounts dialog to perform the following tasks:
- Add or remove your own primary and alternate accounts
- Change the passwords for your own accounts
- Change the assignment of Compute Servers to your own alternate account

RSM configuration data, including user account and password information, is stored in the RSM.Config file. For details, see Accessing the RSM Configuration File (p. 28).

The following topics are discussed in this section:
4.1. Adding a Primary Account
4.2. Adding Alternate Accounts
4.3. Working with Account Passwords
4.4. Manually Running the Password Application
4.5. Configuring Linux Accounts When Using SSH

4.1. Adding a Primary Account

A primary account is the account that communicates with RSM, typically the account used with the client application (ANSYS Workbench) on the RSM Client machine. You must define a primary account for your current user name in order to add any other new accounts or edit existing ones.

To add a primary account:

1. Right-click the Manager node of the tree view and select Accounts from the context menu.
2. On the Accounts dialog, click the Add Primary Account button.
3. Specify account details in the Adding Primary Account dialog:
   - Enter a user name for the account. If a primary account has not yet been defined for the user name under which you've logged in, by default the User Name field will be populated with your current user name.
   - Cache the password with RSM by entering and verifying an account password. See Working with Account Passwords (p. 48) for details.
   - Click OK.

4.2. Adding Alternate Accounts

For each primary account, you can create and associate one or more alternate accounts. An alternate account allows you to send jobs from the primary account on the RSM Client machine to be run on a specific remote Compute Server under the alternate account. A primary account with one or more associated alternate accounts is called an owner account.

An alternate account is necessary if the remote Compute Server machine does not recognize the primary account used on the RSM Client machine. For example, an RSM Client running on Windows with the account name DOMAIN\johnd would need an alternate account to run jobs on a Linux machine acting as a Compute Server, because the Linux machine would be unable to recognize the RSM Client account name.

To add an alternate account:

1. In the Accounts list of the Accounts dialog, right-click the primary account and select Add Alternate Account from the context menu.
2. Specify alternate account details in the Adding Alternate Account dialog:
   - Enter a user name for the account.
   - Cache the password with RSM by entering and verifying an account password. See Working with Account Passwords (p. 48) for details.
   - Click OK.
3. Specify the Compute Servers to which the new alternate account will have access:
   - In the Alternates list, select the newly created alternate account.
   - In the Compute Servers list, select the check box for each Compute Server to which the account will send jobs.

Each alternate account can have access to a different combination of Compute Servers, but each Compute Server can only be associated with one alternate account at a time. You will receive an error if you attempt to assign more than one alternate account to a single Compute Server.

In the Accounts dialog, select an owner account to view the alternate accounts associated with it. Select an alternate account to view the Compute Servers to which it can send jobs.

It is also possible to add an alternate account by running the RSM password application manually, rather than via the Accounts dialog. For details, see Manually Running the Password Application (p. 49).

4.3. Working with Account Passwords

If you will be sending jobs from the RSM Client machine to a remote Manager machine, you must cache the account password with that Manager. By caching the password, you enable RSM to run jobs on a Compute Server on behalf of that account.

When you first configure RSM, if the RSM Client and the Manager are running on different machines, a Set Password reminder will be displayed on the tree view Manager node. It will also be displayed if the owner account is removed. This reminder indicates that you need to cache the password for the owner account.

Caching the Account Password

To cache an account password:

1. In the RSM tree view, right-click on My Computer [Set Password] and select Set Password.
2. On the Password Setting dialog, the User Name will be auto-populated with the Domain\username of the account under which you're currently logged in. Enter and verify the account password.
3. Click OK.
4. If the [Set Password] reminder is still displayed in the tree view, exit the RSM main window and relaunch it to refresh the indicator to the correct state.

It is not necessary to cache your password with the Manager if you are using RSM only for local background jobs.

When you create a new account via the Accounts dialog and define the password for it, the password is cached with RSM. It is encrypted and stored by the Manager, which maintains a list of registered accounts.

For security reasons, RSM will not allow any job to be run by the "root" user on Linux, including primary and alternate accounts. You should not need to cache the "root" account password in RSM.

Changing an Account Password

To change an account password:

1. In the RSM tree view, right-click on My Computer and select Accounts.
2. In the Accounts pane of the Accounts dialog, right-click the account and select Change Password.
3. On the Changing Account Password dialog, the User Name will be auto-populated with the Domain\username of the selected account. Enter and verify the password.
4. Click OK.
5. If the [Set Password] reminder is still displayed in the tree view, exit the RSM main window and relaunch it to refresh the indicator to the correct state.

Recaching the Account Password

Whenever a password is changed, you must recache the password by changing it in RSM. To do this, follow the same sequence of steps used for changing an account password. When you change the password, the Set Password reminder on the tree view Manager node may be redisplayed. Exit the RSM main window and relaunch it to refresh the indicator to the correct state.

It is also possible to cache a password by running the RSM password application manually, rather than from the Accounts dialog. For details, see Manually Running the Password Application (p. 49).

4.4. Manually Running the Password Application

It is usually unnecessary to manually run the password caching application; however, you may find it useful in certain circumstances. For example, it may be necessary to manually run the password application on a Linux machine if the terminal used to start the RSM user interface is not available. It is possible to stop and restart the RSM interface via the Ans.Rsm.Password.exe password application, located in the [RSMInstall]\bin directory. The instructions provided in this section are included in the event that a general solution is desired.

Windows: You can run the password application directly by locating Ans.Rsm.Password.exe in the [RSMInstall]\bin directory and double-clicking it.

Linux: You can open the password application by running the rsmpassword shell script, located in the [RSMInstall]\Config\tools\linux directory.

If you run the script with no command options, it displays the available options as below:

Usage: Ans.Rsm.Password.exe [-m manager][-a account][-o owner][-p password]
-m manager: RSM Manager machine (default = localhost).
-a account: Target account. If no -o owner, this is a primary account.
-o owner: Account owner. Setting password for an alternate account specified with -a.
-p password: Account password.
-? or -h: Show usage.

NOTES:
- If no -a or -p, this is normal interactive mode.
- Accounts can be entered as username or DOMAIN\username.
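As an illustration, caching a password for an alternate account without the interactive prompts might look like the following; per the usage text above, -a and -p supply the account and password directly. The machine name, account names, and password are examples only:

# cache the password for alternate account johndoe (owner DOMAIN\johnd)
# on the Manager machine mgrmachine; all names here are illustrative
rsmpassword -m mgrmachine -a johndoe -o 'DOMAIN\johnd' -p MyPassword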

The rsmpassword shell script depends on its relative location in the Workbench installation; it should not be moved.

Alternate accounts are typically added to the owner account via the Accounts dialog, but can also be manually added and edited by running the password application. In the example below, DOMAIN\johnd is the owner account and johndoe is an alternate account to be used on a Compute Server specified in the Accounts dialog.

Setting password for primary (default), alternate or new alternate account.
Existing alternate accounts: johndoe
Enter user name (DOMAIN\johnd): johndoe
Enter password for DOMAIN\johnd: ********
Re-enter password: ********
Password set for johndoe: Your password has been encrypted and stored.
It can only be decrypted and used to run jobs on behalf of DOMAIN\johnd.

4.5. Configuring Linux Accounts When Using SSH

If the Windows and Linux account names are the same (for example, DOMAIN\johnd on Windows and johnd on Linux), then no additional configuration is required. If the account name is different, specify the account in the Linux Account property on the SSH tab of the Compute Server Properties dialog. Client applications may also have a mechanism to specify an alternate account name. For example, you can specify a Linux account in the ANSYS Workbench Solve Process Settings Advanced dialog box.

Remember that SSH must be configured for password-less access (see Appendix B). RSM does not store Linux passwords for use with SSH.

Chapter 5: RSM Administration

Users with RSM administrator privileges can perform a variety of additional tasks. For instance, RSM administrators can create and modify Managers and Compute Servers, manage queues, set jobs to highest priority, and delete the jobs of any user.

RSM administrators must fulfill one of the following requirements:

Windows:
- The RSM administrator is a Windows administrator on the Manager machine (i.e., they are in the local or domain administrators group).
- The RSM administrator has been added as a member of the RSM Admins group on the Manager machine.

Linux:
- The RSM administrator is a root user.
- The RSM administrator has been added as a member of the rsmadmins group on the Manager machine.

In both of the above cases, the RSM services ANSYS JobManager Service V15.0 and ANSYS ScriptHost Service V15.0 may need to be restarted in order for administrator privileges to take effect.

RSM configuration data, including the configurations for the Manager, Compute Servers, and queues, is stored in the RSM.Config file. For details, see Accessing the RSM Configuration File (p. 28).

The following RSM administration tasks are discussed in this section:
5.1. Automating Administrative Tasks with the RSM Setup Wizard
5.2. Working with RSM Administration Scripts
5.3. Creating a Queue
5.4. Modifying Manager Properties
5.5. Adding a Compute Server
5.6. Testing a Compute Server

5.1. Automating Administrative Tasks with the RSM Setup Wizard

The ANSYS Remote Solve Manager Setup Wizard is a utility designed to guide you through the process of setting up and configuring Remote Solve Manager. The setup tasks addressed by the wizard include adding and managing Managers, Compute Servers, queues, and accounts. It also allows you to test the final configuration. For information on using the wizard, see Using the ANSYS Remote Solve Manager Setup Wizard (p. 8).

5.2. Working with RSM Administration Scripts

Sometimes it is more convenient to work with RSM manually, rather than via the user interface. In addition to allowing you to manually run the password application, RSM provides you with a way to manually open the main RSM window and start the RSM Utility application.

Opening the Main RSM Window Manually

For Windows, you can open the main RSM administration window directly by locating Ans.Rsm.Admin.exe in the [RSMInstall]\bin directory and double-clicking it.

For Linux, you can open the main RSM administration window by running the rsmadmin shell script, located in the [RSMInstall]\Config\tools\linux directory.

Starting the RSM Utility Application Manually

For Windows, you can start the RSM Utilities application by opening a command prompt in the [RSMInstall]\bin directory and running rsm.exe. The example below shows the available command line options.

The -s configfile command option can be used to create a backup file containing configuration information for each of the queues and Compute Servers you have defined. For example, in the event that you would need to rebuild a machine, you can run this script beforehand. The backup file, configfile, is created in the [RSMInstall]\bin directory and can be saved as a .txt file. Once the machine is rebuilt, you can then use the saved configuration file to reload all the previously defined queues and Compute Servers, rather than having to recreate them.

The -migrate vver command option allows you to migrate the existing RSM database into the newer release without having to set up your RSM queues and Compute Servers again. In order to use the -migrate vver command, you must first start the RSM Manager service or process. The migration can also be achieved by running the RSM Setup Wizard to set up RSM as a SYSTEM user and then running the rsm.exe -migrate vver command via the command prompt.

C:\Program Files\ANSYS Inc\v150\RSM\bin>rsm.exe
Usage: rsm.exe [-m manager machine][-clr] [-c configfile | -s configfile | -migrate vver]
               [-stop mgr|svr|xmlrpc|all [-cancel]][-status mgr|svr]
-m manager: RSM Manager machine (default = localhost).
-c configfile: File containing Queues and Servers.
-s configfile: File to save Queues and Servers.
-clr: Clear Queues and Servers. If used with -c, clears before configure.
-stop mgr|svr|xmlrpc|all: Stop RSM services, where: mgr = Manager, svr = ScriptHost, xmlrpc = XmlRpc Server, all = All three.
-cancel: Cancel all active Jobs. For use with -stop.
-status mgr|svr: Query Manager and ScriptHost on localhost or use -m option.
-migrate vver: Migrate database from previous version (ex. v145). Can be used with -clr.
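As an illustration, backing up the current configuration and later restoring it onto a rebuilt machine might look like the following (the file name is an example):

:: save the currently defined Queues and Compute Servers to a file
rsm.exe -s rsmconfig.txt

:: after rebuilding: clear any existing entries, then reload from the backup
rsm.exe -clr -c rsmconfig.txt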

For Linux, you can start the RSM Utilities application by running the rsmutils shell script, located in the [RSMInstall]\Config\tools\linux directory. The rsmutils shell script has the same command options noted above for Windows.

The Linux shell scripts are dependent on their relative location in the ANSYS Workbench installation, so they cannot be moved.

5.3. Creating a Queue

A queue is a list of Compute Servers available to run jobs. To create a queue:

1. In the tree view, right-click on the Queues node for a desired Manager.
2. Select Add. The Queue Properties dialog displays.
3. Configure the Queue Properties described below, then select OK.

The fields on the Queue Properties dialog are as follows:

- Name: This field should contain a descriptive name for the queue. Examples of queue names include Local Queue, Linux Servers, or HPC Cluster. If the Compute Server(s) in the queue has a Start/End Time specified, you may want to use a name that indicates this to users (e.g., "Night Time Only").

- Enabled: If True, the Manager dispatches queued jobs to available Compute Servers. If False, jobs remain in a Queued state until the queue is enabled.
- Priority: Possible values are Low, Below Normal, Normal, Above Normal, or High. When determining the next job to run, the Manager pulls jobs from the highest priority queue first. Priority settings are commonly used to create a separate, higher priority queue for smaller jobs, so that they are processed before running large jobs that tie up the computing resource for a long period of time.
- Assigned Servers: Select the check box for each Compute Server to be used in this queue. A queue can contain more than one Compute Server. A Compute Server can also be a member of more than one queue.

5.4. Modifying Manager Properties

To modify Manager properties:

1. In the tree view, right-click on a Manager node.
2. Select Properties. The Solve Manager Properties dialog appears.
3. Modify the Manager properties described below, and then select OK.

The editable fields on the Solve Manager Properties dialog are as follows:

- Job Cleanup Period: The length of time (in D.H:MM:SS format) that a job stays in the List view after it is released. Default value is 02:00:00 (2 hours). Acceptable values are as follows:
  - D (days) = integer indicating the number of days
  - H (hours) = 0-23
  - MM (minutes) = 0-59
  - SS (seconds) = 0-59

  You can enter only the number of days (without the zeros), only the hours/minutes/seconds, or both. Examples:
  - 1.00:00:00 or 1 = one day
  - 1.12:00:00 = 1.5 days
  - 02:30:00 = 2.5 hours
  - 00:15:00 = 15 minutes

- Project Directory: The base location where the Manager stores input and output files for a job. As jobs are created, a unique subdirectory for each job is created in the Project Directory. When defining the location of your Project Directory, you must enter an absolute path in this field; relative paths are not accepted. If you enter a path to a base location that does not exist on the Manager machine, RSM will automatically create the path on the machine. This location can be either on the local disk of the Manager machine or on a network share (for example, \\fileserver\rsmprojects).

- Compression Threshold: The threshold (in MB) at which files are compressed prior to transfer. There is always a trade-off between the time it takes to compress/decompress versus the time to transfer. The appropriate value depends on the specific network environment. Enter a value of 0 to disable compression. Example: If you set this value to 50, files greater than 50 MB will be compressed before being sent over the network.

5.5. Adding a Compute Server

A Compute Server is the machine on which the RSM Compute Server process runs. It executes the jobs that are submitted by the RSM Client and distributed by the Manager. You can add and configure a new Compute Server via the Compute Server Properties dialog. Once a Compute Server has been added, you can also use this dialog to edit its properties.

To add a new Compute Server:

1. In the Manager tree view, right-click on the Compute Servers node under the machine you are designating as the Manager.
2. Select Add from the context menu. The Compute Server Properties dialog displays.

The dialog includes three tabs: General, Cluster, and SSH.
- The General tab contains information that is used for all RSM configurations.
- The Cluster tab contains information that is necessary only if you are using a cluster computing configuration.
- The SSH tab contains information that is necessary only if you are using SSH functionality to connect the Compute Server Windows machine with a remote Linux machine.

To configure the new Compute Server, go to each tab relevant to your RSM configuration and enter or select property values. When finished, click the OK button. Each tab is described in detail in the following sections.

Editing Compute Server Properties

To edit the properties of an existing Compute Server:

1. In the Manager tree view, right-click on the Compute Server name, either under the Compute Servers node or under a queue name in the Queues node.
2. Select Properties from the context menu.
3. On the Compute Server Properties dialog, edit the properties.

4. When finished, click the OK button.

If you do not have permissions to a Compute Server machine (i.e., you have not set your account password in RSM for the Manager node), you cannot add the machine as a Compute Server or edit its properties. For instructions on setting your password, see Working with Account Passwords (p. 48).

Compute Server Properties Dialog: General Tab

The General tab of the Compute Server Properties dialog allows you to define general information that is necessary for all RSM configurations. By default, only the first three fields display. Click the More>> button to expand the rest of the fields and the <<Less button to collapse them.

The Quick Reference table below lists the properties that you can configure on the General tab. Click on the property name for more detailed configuration instructions.

- Display Name: Descriptive name for the Compute Server machine. Required field.

- Machine Name: Name (the hostname or IP address) of the Compute Server machine. Required field.
- Working Directory Location: Determines whether the location of the Working Directory is user-defined or auto-defined by the system.
- Working Directory: Directory on the Compute Server machine where jobs are run. Enabled when User Specified is selected for Working Directory Location. Required when enabled.
- Server Can Accept Jobs: Determines whether the Compute Server can accept jobs.
- Maximum Number of Jobs: The number of jobs that can be run at the same time on the Compute Server.
- Limit Times for Job Submission: Determines whether there will be limitations on the times when jobs can be submitted to the Compute Server.
- Start Time: Time at which the Compute Server becomes available to accept job submissions.
- End Time: Time at which the Compute Server is no longer available to accept job submissions.
- Save Job Logs to File: Determines if the job logs will be kept in a log file or discarded upon job completion.
- Delete Job Files in Working Directory: Determines whether the temporary job subdirectories created in the Compute Server Working Directory are deleted upon completion of the associated job.
- Use SSH protocol for inter- and intra-node communication (Linux only): Specifies that RSM and the solvers use SSH instead of RSH for inter-node and intra-node communications on Linux machines.

Display Name

The Display Name property requires that you enter a descriptive name for the Compute Server machine. It is an easy-to-remember alternative to the Machine Name, and is the name that is displayed in the Manager tree view. The Display Name defaults first to New Server and thereafter to New Server n to guarantee its uniqueness. Examples of default display names include New Server, New Server 1, and New Server 2. Examples of display names you might select are Bob's Computer and My Computer to Linux.

Machine Name

The Machine Name is the name (the hostname or IP address) of the Compute Server machine. Both RSM and the application being used (for example, ANSYS Mechanical or Fluent) must be installed on this machine. This is a required field.

If the Compute Server is the same physical machine as the Manager, enter localhost. For a remote machine, enter the network machine name. Examples of machine names for remote machines are comp1, comp1.win.domain.com, or an IP address.

Working Directory Location

The Working Directory Location property allows you to specify the location of the Working Directory, where the Compute Server will read and write job scripts and solution files. Available options are Automatically Determined and User Specified.

- Automatically Determined: This option is selected by default. Leave selected if you want the Working Directory location to be determined by the system. When this option is selected, the Compute Server will try to re-use files from the Manager Project Directory. If it cannot find the Project Directory, it will copy files to a temporary subdirectory in the Compute Server TEMP directory.
- User Specified: Select if you want to manually specify the location of the Working Directory on the Compute Server machine. When this option is selected, you must enter the directory path in the Working Directory field. The Compute Server will copy all files to a temporary subdirectory within the specified Working Directory.

With a native cluster configuration (i.e., you are not using SSH), your file management settings may cause this property to be restricted to Automatically Determined. See the descriptions of the Working Directory, Shared Cluster Directory, and File Management properties.

Working Directory

The Working Directory property becomes enabled when you select User Specified for the Working Directory Location property. When this property is enabled, it requires that a path is entered for the Working Directory on the Compute Server machine. This can be done in one of the following ways:

- You can enter the path here.
- Alternatively, if you will be using a native cluster configuration (i.e., will not be using SSH) and opt to use the Shared Cluster Directory to store temporary solver files, the Working Directory property will be auto-populated with whatever path you enter for the Shared Cluster Directory property. See the descriptions of the Shared Cluster Directory and File Management properties.

When the Compute Server and Manager are two different machines, for each job that runs, a temporary subdirectory is created in the Compute Server Working Directory. This subdirectory is where job-specific scripts, input files, and output files are stored. When the job completes, output files are then immediately transferred back into the Project Directory on the Manager machine.

Requirements:
- The Working Directory must be located on the Compute Server machine (the machine specified in the Machine Name field).
- All RSM users must have write access and full permissions to this directory.

- If you will be using a cluster configuration, the directory must be shared and writable to all of the nodes in the cluster. Note that in some cluster configurations, the Working Directory may also need to exist on each cluster node and/or may share the same physical space as the Shared Cluster Directory.

Examples of Working Directory paths are D:\RSMTemp and C:\RSMWorkDir.

In a configuration where the Compute Server and Manager are the same machine (i.e., the job is queued from and executed on the same machine), the job execution files are stored directly in the Project Directory on the Manager, rather than in the Working Directory on the Compute Server.

In a native cluster configuration (i.e., you are not using SSH), when you specify that you want to use the Shared Cluster Directory to store temporary solver files, essentially you are indicating that the Working Directory and the Shared Cluster Directory are the same location; as such, the Working Directory property is populated with the path entered for the Shared Cluster Directory property in the Cluster tab. See the descriptions of the Shared Cluster Directory and File Management properties.

Server Can Accept Jobs

The Server Can Accept Jobs property determines whether the Compute Server can accept jobs. Selected by default.

- Leave selected to indicate that the Compute Server can accept jobs.
- Deselect to prevent jobs from being run on this Compute Server. Primarily used when the server is offline for maintenance.

The Server Can Accept Jobs property can also be set on the client side (i.e., on the RSM Client machine via the Workbench Update Design Point Process properties). This can be done both in scenarios where the Manager runs locally on the same machine as the RSM Client, and in scenarios where the Manager is run remotely on a different machine. In either case, the Server Can Accept Jobs value set on the server side (i.e., on the remote Compute Server machine) takes precedence.

Maximum Number of Jobs

The Maximum Number of Jobs property allows you to specify the maximum number of jobs that can be run on the Compute Server at the same time. When this number is reached, the server is marked as Busy. The purpose of the Maximum Number of Jobs property is to prevent job collisions, which can occur because RSM cannot detect the number of cores on a machine. The ability to determine a maximum number of jobs is particularly useful when the job is simply forwarding the work to a third-party job scheduler (for example, to an LSF or PBS cluster). Default value is 1.

In a cluster configuration, this property refers to the maximum number of jobs at the server level, not at the node/CPU level.

The Maximum Number of Jobs property can also be set on the client side (i.e., on the RSM Client machine via the Workbench Update Design Point Process properties). This can be done both in scenarios where the Manager runs locally on the same machine as the RSM Client, and in scenarios where the Manager is run remotely on a different machine. In either case, the Maximum Number of Jobs value set on the server side (i.e., on the remote Compute Server machine) takes precedence.

When multiple versions of RSM are being run at the same time (for example, 13.0 and 14.0), this property applies only to the current instance of RSM. One version of RSM cannot detect the jobs being assigned to a Compute Server by other versions.

Limit Times for Job Submission

The Limit Times for Job Submission property allows you to specify whether there will be limitations on the times when jobs can be submitted to the Compute Server. Deselected by default.

- Leave the check box deselected for 24-hour availability of the Compute Server.
- Select the check box to specify submission time limitations in the Start Time and End Time fields.

This option is primarily used to limit jobs to low-load times or according to business workflow.

Start Time / End Time

The Start Time and End Time properties become enabled when the Limit Times for Job Submission check box is selected. They allow you to specify a time range during which the Compute Server is available to accept submitted jobs. The Start Time property determines when the Compute Server becomes available, and the End Time property determines when it becomes unavailable to accept job submissions.

A job cannot be run on a Compute Server if it is submitted outside of the Compute Server's range of availability. The job may still be queued to that Compute Server later, however, when the Compute Server again becomes available to accept it. Also, if there are multiple Compute Servers assigned to a queue, a queued job may still be submitted to another Compute Server that is available to accept submissions.

It can be useful to define an availability range when Compute Servers or application licenses are only available at certain times of the day. You can either enter the time (in 24-hour HH:MM:SS format) or select a previously entered time from the drop-down list.

Do not indicate 24-hour availability by entering identical values in the Start Time and End Time fields; doing so will cause an error. Instead, indicate unlimited availability by deselecting the Limit Times for Job Submission check box.

Save Job Logs to File

The Save Job Logs to File property allows you to specify whether job logs should be saved as files. Deselected by default.

Leave the check box deselected to specify that no job log files will be saved.

Select the check box to save job log files. When a job runs, a log file named RSM_<ServerDisplayName>.log is saved to the TEMP directory on the Manager machine unless the TMP environment variable has been defined, in which case job log files are saved to the location defined by the TMP environment variable. To access the default TMP directory, go to %TMP% in Windows Explorer on Windows or to /tmp on Linux. Selecting this option could potentially increase disk usage on the Manager.

Job log files are primarily used for troubleshooting. The log file for a job contains the same information displayed in the Job Log view when the job is selected in the List view of the main RSM application window. When this option is enabled, the log file on disk is updated/saved only once, when the job finishes running. You can always view the same live log in the RSM UI while the job is running.

Delete Job Files in Working Directory

The Delete Job Files in Working Directory property determines whether the temporary job subdirectories created in the Compute Server Working Directory are deleted upon completion of the associated job. Selected by default.

Leave the check box selected to delete temporary job subdirectories and their contents upon completion of the associated job.

Deselect the check box to save temporary job subdirectories and their contents after the completion of the associated job.

The job files in the Working Directory are primarily used for troubleshooting. When a submitted job fails, saved job-specific scripts and files can be helpful for testing and debugging. You can find these files by looking at the RSM log (either in the Job Log view of the main application window or in the job log file saved to the Working Directory on the Manager machine) and finding the line that specifies the Compute Server Working Directory.

This option does not control whether job-specific files in the Project Directory on the Manager machine are deleted. When the Compute Server and Manager are the same machine and job-specific files are stored in the

Manager Project Directory instead of the Compute Server Working Directory, job-specific files will not be deleted from the Project Directory until the job is released. When a job is stopped abruptly rather than released (for instance, via the Abort option in the right-click context menu of the List view) or is not released immediately, you may need to take additional steps to ensure that its files are deleted from the Project Directory on the Manager. You can ensure that job-specific files are deleted by one of the following two methods:

Remove the job from the List view. You can do this by right-clicking the job and selecting Remove from the context menu. Alternatively, you can highlight the job and either select the Tools > Remove option or press the Delete key.

Configure the system to remove the files automatically by setting the Job Cleanup property on the Solve Manager Properties dialog.

Use SSH protocol for inter- and intra-node communication (Linux only)

The Use SSH protocol for inter- and intra-node communication (Linux only) property determines whether RSM and solvers use RSH or SSH for inter-node and intra-node communications on Linux machines. Deselected by default.

Leave the check box deselected to use RSH.

Select the check box to use SSH.

This setting is applied to all Linux Compute Servers, not only those in clusters, allowing solvers to run in distributed parallel mode on a single machine. When ANSYS Fluent, ANSYS CFX, ANSYS Mechanical, and ANSYS Mechanical APDL are configured to send solves to RSM, their solvers use the same RSH/SSH settings as RSM.

Related Topics:
Adding a Compute Server (p. 55)
Compute Server Properties Dialog: Cluster Tab (p. 63)
Compute Server Properties Dialog: SSH Tab (p. 67)

Compute Server Properties Dialog: Cluster Tab

The Cluster tab of the Compute Server Properties dialog allows you to define the information necessary for a cluster computing configuration. By default, only the first three fields display. Click the More>> button to expand the rest of the fields and the <<Less button to collapse them.

The Quick Reference table below lists the properties that you can configure on the Cluster tab. Click on the property name for more detailed configuration instructions.

Cluster Type: Type of job scheduling system used to manage the cluster.

Shared Cluster Directory: Location of the Shared Cluster Directory, which primarily serves as a central file-staging location, but can also be used to store the temporary working files created by the application solver. Enabled only when a value other than None is selected for Cluster Type. Required when enabled.

File Management: Location of the temporary directory for temporary working files created by the application solver. Enabled only when a value other than None is selected for Cluster Type. Required when enabled.

Parallel Environment (PE) Names: Names of Shared Memory Parallel and Distributed Parallel environments. The environment(s) must have already been created by your cluster administrator. Enabled only when SGE is selected for Cluster Type. Required when enabled.

Job Submission Arguments: Arguments that will be added to the job submission command line of your third-party job scheduler. Enabled only when a value other than None is selected for Cluster Type. Optional.

Cluster Type

The Cluster Type property allows you to select the type of job scheduling system that will be used to manage the cluster. Select one of the following options:

None: Selected by default. Leave selected if you won't be using a job scheduler for cluster management. When this option is selected, the rest of the properties on the tab are disabled.

Windows HPC: Select to use MS HPC. When this option is selected, the SSH tab is disabled because SSH is applicable only to Linux cluster configurations.

LSF: Select to use LSF.

PBS: Select to use PBS.

SGE: Select to use SGE or UGE.

Custom: Select to customize RSM integration. See Customizing ANSYS Remote Solve Manager (p. 73).

Shared Cluster Directory

The Shared Cluster Directory property allows you to enter the path for the Shared Cluster Directory.

Requirements: This directory must be shared and writable to the entire cluster. Note that in some cluster configurations, the Shared Cluster Directory may also need to exist on each cluster node and/or may share the same physical location as the Working Directory.

Examples of Shared Cluster Directory paths are \\<MachineName>\Temp, /user/temp, and /staging. With a Shared Cluster Directory path of /staging, RSM might create a temporary job subdirectory of /staging/dh3h543j.djn.

The primary purpose of the Shared Cluster Directory is to serve as a central file-staging location for a cluster configuration. When the Manager queues a job for execution, a temporary job subdirectory is created for the job inside the Shared Cluster Directory. All job files are transferred to this subdirectory so they can be accessed by all of the execution nodes when needed.

A secondary (and optional) purpose for this directory is to store the temporary working files created by each application solver as a solution progresses. Depending on your configuration, the Shared Cluster Directory may share the same location as either the Working Directory or the Linux Working Directory. Implementation of this purpose is controlled by the File Management property on the Cluster tab.

File Management

As part of the solution process, each application solver produces temporary working files that are stored in a temporary directory. The File Management property allows you to specify where the temporary directory will be created, which in turn impacts solver and file transfer performance.

Select Reuse Shared Cluster Directory to store solver files in the central file-staging directory along with all the other RSM job files. This option is recommended when one or more of the following is true:

You are using a native cluster setup (i.e., you are not using SSH).

You have a fast network connection between the execution nodes and the Shared Cluster Directory.

You are using a solver that produces fewer, relatively small files as part of the solution and does not make heavy use of local scratch space (for example, the CFX or the Fluent solver).

When you select this option, you specify that the Shared Cluster Directory is being used for its secondary purpose: to store temporary working files created by the solver. Depending on your configuration, this indicates that the Shared Cluster Directory shares the same location with either the Working Directory or the Linux Working Directory. When the Reuse Shared Cluster Directory option is selected:

If you will be sending CFX jobs to a Microsoft HPC Compute Server, the Reuse Shared Cluster Directory option will always be used, regardless of the File Management property setting.

If you are using a native cluster (i.e., you have selected a Cluster Type and are not using SSH), the Working Directory property on the General tab is populated with the Shared Cluster Directory path.

If you are using SSH for Linux cluster communications, the Linux Working Directory property on the SSH tab is populated with the Shared Cluster Directory path.

Be careful when using slower NAS storage and running many concurrent jobs. For each specific disk setup there will be a specific upper limit on the number of jobs that can run concurrently without affecting each other. Refer to your disk supplier's analysis tools for verification.

Select Use Execution Node Local Disk to store solver files in a local directory on the Compute Server machine (also called using "scratch space"). This option is recommended to optimize performance when one or both of the following is true:

You have a slower network connection between the execution nodes and the Shared Cluster Directory.

You are using a solver that produces numerous, relatively large files as part of the solution and makes heavy use of local scratch space (for example, Mechanical solvers).

When you select this value, for performance reasons, it is recommended that you set the Working Directory to a path with a fast I/O rate. In this case, on the General tab you should set Working Directory Location to User Specified and then set

Working Directory to the path with the fastest I/O rate on each compute node of the cluster.

Select Custom Handling of Shared Cluster Directory to specify that file management for the cluster will be customized, so RSM will not copy or stage files to a Shared Cluster Directory. This option is used when the cluster's file staging area is not visible to the RSM Client machine via a network share, mapped drive, Samba share, or mounted directory.

For more information on solver working files, see Product File Management in the Workbench User's Guide.

Parallel Environment (PE) Names

Names of Shared Memory Parallel and Distributed Parallel environments. The environment(s) must have already been created by your cluster administrator. Defaults to pe_smp and pe_mpi. To use one of the default names, your cluster administrator must create a parallel environment with the same name. The default PE names can also be edited to match the names of your existing parallel environments. Enabled when SGE is selected for Cluster Type. Required when enabled.

Job Submission Arguments

The Job Submission Arguments property allows you to enter arguments that will be added to the job submission command line of your third-party job scheduler. For example, you can enter job submission arguments to specify the queue (LSF, PBS, SGE) or the node group (MS HPC) name. For valid entries, see the documentation for your job scheduler.

Related Topics:
Adding a Compute Server (p. 55)
Compute Server Properties Dialog: General Tab (p. 57)
Compute Server Properties Dialog: SSH Tab (p. 67)

Compute Server Properties Dialog: SSH Tab

The SSH tab of the Compute Server Properties dialog allows you to indicate whether you intend to use the SSH (secure shell) communications protocol to establish a connection between the Compute Server machine (specified in the Machine Name property of the General tab) and a remote Linux machine (specified in the Linux Machine property on this tab). If you are using SSH, you can specify details about the remote Linux machine that will serve as a proxy for the Compute Server.

Since the SSH protocol is applicable only to cross-platform communications (i.e., Windows-Linux), this tab is disabled if the Windows HPC option is selected in the Cluster Type drop-down of the Cluster tab.

SSH is not a recommended communication protocol and should be used only if it is required by your IT policy. For ease of configuration and enhanced performance, RSM native mode is the recommended communication protocol. For more information on using native mode for cross-platform communications, see Configuring RSM to Use a Remote Computing Mode for Linux (p. 12).

The Quick Reference table below lists the properties that you can configure on the SSH tab. Click on the property name for more detailed configuration instructions.

Use SSH: Determines whether the SSH communications protocol will be used.

Linux Machine: Name (the hostname or IP address) of the remote Linux machine. Displayed when the Use SSH check box is selected. Required when displayed.

Linux Working Directory: Location on the remote Linux machine where the temporary working files created by the application solver will be saved. Displayed when the Use SSH check box is selected. Required when displayed.

Linux Account: Name of the account being used to log in to the remote Linux machine. Displayed when the Use SSH check box is selected. Required when displayed.

Use SSH

The Use SSH check box allows you to specify whether you intend to use the SSH communications protocol. The SSH protocol may be used for communications either to a remote Linux machine or to the head node of a Linux cluster. Deselected by default.

Leave the check box deselected to indicate that you will not be using SSH. When this check box is deselected, the rest of the fields on the tab do not display.

Select the check box to allow the Compute Server (specified in the Machine Name field of the General tab) to submit jobs via SSH to a remote Linux machine (specified in the Linux Machine field of this tab) for execution.

Linux Machine

Displays when the Use SSH check box is selected. Enter the name (the hostname or IP address) of the remote Linux machine. This value is accessed by the Task.ProxyMachine property in the job script. Custom job scripts can use this value for any purpose. Examples of Linux Machine names include linuxmachine01 and lin05.win.abc.com. Required field if the Use SSH check box is selected.

Linux Working Directory

The Linux Working Directory property displays when the Use SSH check box is selected. When this property displays, it requires that a path is entered for the Working Directory on the Linux machine (the machine specified in the Linux Machine property). This can be done in one of the following ways:

If the File Management property on the Cluster tab is set to Use Execution Node Local Disk, set the Linux Working Directory path to a local disk path (e.g., /tmp). The full RSM-generated path (e.g., /tmp/abcdef.xyz) will exist on the machine specified on that tab, as well as on the node(s) that the cluster software selects to run the job.

If the File Management property is set to Reuse Shared Cluster Directory, the Linux Working Directory path is populated with the path specified for Shared Cluster Directory on the Cluster tab and cannot be edited. This is where the cluster job runs, as expected.

Requirements: All RSM users must have write access and full permissions to this directory. The Linux Working Directory must be shared and writable to the remote Linux machine. For a cluster configuration, it must be shared and writable to all of the nodes in the cluster. Note that in some cluster configurations, the Linux Working Directory may also need to exist on each cluster node and/or may share the same physical location as the Shared Cluster Directory.

This value is accessed by the Task.ProxyPath property in the job script. Custom job scripts can use this value for any purpose. Examples of Linux Working Directory paths are /scratch/josephuser, \\lsfclusterNode1\RSMTemp, and \\msccheadnode\rsm_temp.

In a Linux cluster configuration that uses the SSH protocol, specifying that you want to use the Shared Cluster Directory to store temporary solver files essentially indicates that the Linux Working Directory and the Shared Cluster Directory are the same location; as such, the Linux Working Directory property is populated with the path entered for the Shared Cluster Directory property on the Cluster tab. See the descriptions of the Shared Cluster Directory and File Management properties.

Linux Account

The Linux Account property displays when the Use SSH check box is selected. It requires that you enter the name of the account being used to log in to the remote Linux machine. This value is accessed by the Task.ProxyAccount property in the job script. Custom job scripts can use this value for any purpose.

For instructions on integrating Windows with Linux using SSH/SCP, see Appendix B.

Related Topics:
Adding a Compute Server (p. 55)
Compute Server Properties Dialog: General Tab (p. 57)
Compute Server Properties Dialog: Cluster Tab (p. 63)

5.6. Testing a Compute Server

To test a Compute Server configuration, right-click on the Compute Servers node in the tree view and select Test Server from the context menu that displays. This runs a test job using the settings provided. The Job Log view displays a log message indicating whether the test finished or failed. If the test finishes, you can successfully run jobs on the Compute Server.

If you do not have full permissions to the Compute Server working directory, Compute Server tests will fail.

If tests fail, try deselecting the Delete Job Files in Working Directory check box on the General tab of the Compute Server Properties dialog. You can then examine the contents of the temporary job directories for additional debugging information. When this option is deselected, RSM will keep the temporary directories on the server after the job is completed. You can find the location of these temporary directories by looking for the line that specifies the "Compute Server Working Directory" in the RSM log.

The Test Server job always keeps the temporary client working directory created by RSM on the client machine, regardless of the Delete Job Files in Working Directory setting. You can find the location of the temporary client working directory by looking for the line that specifies the "Client Directory" in the RSM log.
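For example, these lines can be located with a short script. The following is a minimal Python sketch, not part of RSM; the log file name RSM_MyServer.log is hypothetical (see Save Job Logs to File for where job logs are actually written):

# Hedged sketch: find the directory lines mentioned above in a saved RSM job log.
with open("RSM_MyServer.log") as log:
    for line in log:
        if "Compute Server Working Directory" in line or "Client Directory" in line:
            print(line.rstrip())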


Chapter 6: Customizing ANSYS Remote Solve Manager

This section discusses various methods of customizing ANSYS Remote Solve Manager. The following topics are addressed:
6.1. Understanding RSM Custom Architecture
6.2. Custom Cluster Integration Setup
6.3. Writing Custom Code for RSM Integration

6.1. Understanding RSM Custom Architecture

The [RSMInstall]\Config directory contains job templates, code templates, job scripts, and other files that are used to define and control RSM jobs. The RSM architecture allows the user to customize how jobs are executed on a cluster or Compute Server by providing a custom version of some of the files. This section briefly describes the types of files used in the customization.

This section addresses each file type in the RSM customization architecture:
Job Templates
Code Templates
Job Scripts
HPC Commands File

Job Templates

Job templates define the code template, inputs, and outputs of a job. RSM job templates are located in the [RSMInstall]\Config\xml directory. Examples of job templates in this directory are GenericJob.xml, Addin_ANSYSJob.xml, and Addin_CFXJob.xml. An example job template for a server test job is shown below:

<?xml version="1.0"?>
<JobTemplate>
  <script>servertestcode.xml</script>
  <debug>true</debug>
  <cleanup>true</cleanup>
  <inputs>
    <file type="ascii">*.in</file>
  </inputs>
  <outputs>
    <file type="ascii">*.out</file>
  </outputs>
</JobTemplate>

Code Templates

Code templates are used by the corresponding job template and determine which scripts will be used to run a specific job. Code templates contain sections for the actual code files (job scripts), referenced assemblies (.dlls), and support files. These code templates are chosen at runtime based upon the job template and cluster type selected to run the job.

RSM code templates are located in the [RSMInstall]\Config\xml directory. An example code template for a server test job is shown below:

<?xml version="1.0"?>
<codedefinition transfer="true" prebuilt="false" assemblyname="ans.rsm.test.dll">
  <codefiles>
    <codefile>test.cs</codefile>
  </codefiles>
  <references>
    <reference>ans.testdlls.dll</reference>
  </references>
  <supportfiles>
    <supportfile transfer="true" type="ascii">testingscript.py</supportfile>
  </supportfiles>
</codedefinition>

Job Scripts

The job scripts for a particular type of job are defined in the <codefiles> section of the code template. The term "script" refers generically to the code used for running the different types of RSM jobs, such as native jobs, SSH jobs, cluster jobs, etc. Depending on the Compute Server configuration, different sets of scripts may also be compiled dynamically during the run time of the job. Job scripts also include actual command scripts that you may provide to customize the cluster job behavior. These scripts are included in the <supportfiles> section. RSM job script files are located in the [RSMInstall]\Config\scripts directory.

Specialized job scripts for integrating RSM with third-party job schedulers are invoked based upon the Cluster Type property on the Cluster tab of the Compute Server Properties dialog. Your selection from the Cluster Type drop-down is appended to the name of the base code template. These files are generically in the format <BaseName>_<Keyword>.xml. For example, if the base code template named in the job template is TestCode.xml and you set Cluster Type to LSF, then LSF will be your keyword and RSM will look for the corresponding code template TestCode_LSF.xml. This code template then invokes the scripts necessary to run a test job on an LSF cluster. If you choose a Cluster Type of Custom, then Custom is not used as the keyword; you are required to provide a name for your Custom Cluster Type. That name becomes your keyword, allowing you to customize the cluster and modify these files without breaking any functionality.

HPC Commands File

The cluster-specific HPC commands file is the configuration file used to specify the commands or queries that will be used in the cluster integration. The file is in XML format and is located in the [RSMInstall]\Config\xml directory. By default, the file name is hpc_commands_<ClusterType>.xml. When using a custom cluster type, you will need to provide a copy of the HPC commands file that matches your custom cluster type name, in the format hpc_commands_<Keyword>.xml, as discussed in the setup sections of Custom Cluster Integration Setup (p. 75).

The commands inside the HPC commands file can point directly to cluster-software-specific commands (like bsub or qstat). When the operations are more complex, the commands can reference scripts or executables that call the cluster software functions internally. These scripts can be in any language that can be run by the Compute Server. The HPC commands file is described in greater detail in Custom Cluster Integration Setup (p. 75).
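As a minimal illustration of the keyword naming convention described above, the following hypothetical Python helper (not part of RSM) composes the two file names RSM will look for:

# Hedged sketch: compose the code template and HPC commands file names
# from a base template name and a Cluster Type keyword.
def template_names(base_template, keyword):
    base = base_template.rsplit(".xml", 1)[0]
    return ("%s_%s.xml" % (base, keyword),
            "hpc_commands_%s.xml" % keyword)

# template_names("TestCode.xml", "LSF") -> ("TestCode_LSF.xml", "hpc_commands_LSF.xml")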

6.2. Custom Cluster Integration Setup

ANSYS Remote Solve Manager (RSM) provides built-in functionality that allows Workbench jobs to be submitted to a commercial cluster. The built-in functionality includes the ability to transfer files automatically to/from the cluster from a remote client and the ability to submit, cancel, and monitor Workbench jobs. The currently supported commercial clusters are Windows LSF, Linux LSF, Linux PBS Pro, Linux UGE (SGE), and Microsoft HPC (MSCC).

RSM also provides a custom cluster integration mechanism that allows third parties to use custom scripts to perform the tasks needed to integrate Workbench with the cluster. The custom integration scenarios can be grouped into the following categories, in order of complexity:

Commercial clusters (listed above) for which customers need some additional operation to be performed as part of the RSM job execution. This is a type of server-side integration.

Unsupported clusters, not included in the list above, that customers want to use for executing a job via RSM. This is also a type of server-side integration.

Customers with specialized requirements that need to fully replace RSM functionality with third-party scripts handling all aspects of job submission, including file transfer. This is called client-side integration.

The terms server-side and client-side integration refer to the location (in the RSM architecture) where the custom script files are going to be located. In typical RSM usage with a (supported or unsupported) cluster, the head node of the cluster is typically configured as both RSM Manager and Compute Server. The cluster acts as a Compute Server with respect to the RSM Client from which the jobs are submitted; therefore the customization of RSM files on the cluster is referred to as server-side integration. For server-side integration, you must be able to set up the RSM services on the cluster head node, and file transfers cannot use SSH. The methods of file transfer discussed in Setting Up RSM File Transfers (p. 21) are available, except for SSH File Transfer (p. 28) and Custom Client Integration (p. 28).

Client-side integration refers to the case where the RSM functionality is completely replaced by third-party scripts. In this case, the RSM Manager and Compute Server are located on the Client machine. However, only a thin layer of the RSM architecture is involved, in order to provide the APIs for execution of the custom scripts, which are located on the Client machine. The RSM services are not installed on the cluster machine.

Please note that for supported clusters it is also possible to include additional job submission arguments in the command executed by the cluster. The addition of custom submission arguments does not require the creation of custom scripts. For more details, please refer to Compute Server Properties Dialog: Cluster Tab (p. 63).

The following sections describe the general steps for customization with server-side and client-side integration. The detailed instructions for writing the custom code are similar for the two cases; they are addressed in Writing Custom Code for RSM Integration (p. 89). The following topics are addressed:
Customizing Server-Side Integration
Customizing Client-Side Integration
Configuring File Transfer by OS Type and Network Share Availability

Customizing Server-Side Integration

RSM allows you to customize your integration with supported cluster types (LSF, PBS, HPC, and SGE) by starting with examples of production code for one of the standard cluster types and then changing command lines or adding custom code where necessary. If an unsupported cluster is being used, the recommended procedure is still to start from the example files for one of the supported clusters. This section walks through the process of integrating such changes into RSM.

On the server-side RSM installation, you will need to log into the remote cluster (RSM Manager) machine to perform all the tasks (steps 1 through 5). To override or modify selective cluster commands, you must:

1. Configure RSM to use a cluster-specific code template.
2. Create copies of existing code and rename the files using your new Custom Cluster Type keyword.
3. Edit the cluster-specific code template to use your new cluster type.
4. Edit the cluster-specific hpc_commands_<Keyword> file to reference the code you want to execute.
5. Provide a cluster-specific script/code/command that performs the custom action and returns the required RSM output.

The following sections discuss the steps needed for customizing your cluster integration:
Configuring RSM to Use Cluster-Specific Code Template
Creating Copies of Standard Cluster Code Using Custom Cluster Keyword
Modifying Cluster-Specific Job Code Template to Use New Cluster Type
Modifying Cluster-Specific HPC Commands File

Configuring RSM to Use Cluster-Specific Code Template

On the server-side RSM installation, you will need to log into the remote cluster (RSM Manager) machine to perform all the tasks (steps 1 through 5) in Customizing Server-Side Integration (p. 76). After creating a new Compute Server, set up the Compute Server Properties dialog box under the Cluster tab. You must set Cluster Type to Custom and enter a short phrase/word in the Custom Cluster Type field as the custom cluster name. The name is arbitrary, but you should make it simple enough to append to file names. This name will be referred to as the keyword from now on.

For supported clusters, you can include the original cluster name in the new custom name, for clarity. For example, if your cluster is actually an LSF or PBS cluster but you need to customize the RSM interaction with it, you might use the keyword CUS_LSF or CUS_PBS. If the underlying cluster is not a supported platform, it could be called CUSTOM or any other arbitrary name. The names are in capital letters for simplicity, but the only requirement is that the capitalization is the same in all places where the keyword is referenced.

A full example of a typical cluster setup using the remote RSM Manager and custom properties is shown below.

(Screenshot: Compute Server Properties dialog showing a typical custom cluster setup with the remote RSM Manager.)

The Working Directory you choose must be readable and writable by all users of RSM and also by rsmadmins. The Working Directory should not be shared between nodes, but it should have the same name (and parent directories) on all nodes.

The Shared Cluster Directory you choose must be readable and writable by all users of RSM and also by rsmadmins. The Shared Cluster Directory must be shared between all nodes of the cluster.

Please refer to Adding a Compute Server (p. 55) in the Remote Solve Manager User's Guide for information about the properties in the General and Cluster tabs of the Compute Server Properties dialog.

Creating Copies of Standard Cluster Code Using Custom Cluster Keyword

As part of the setup, you must create a custom copy of the cluster-specific code template and hpc_commands files and modify them to load the custom job code for your custom integration. You must also create a custom copy of the XML file that contains the definition of the HPC commands to be used for the job execution. The starting point for the code template and command files can be created by copying them from existing RSM files, as shown below. Please note that all the actions listed below should be performed on the cluster installation.

Locate the directory [ANSYS V15 Install]/RSM/Config/xml.

Locate the GenericJobCode file that pertains to your cluster type (for instance, if you are starting from PBS, the file is GenericJobCode_PBS.xml). You cannot use the SSH versions of these files.

Copy the content of the GenericJobCode_PBS.xml code template into a new code template GenericJobCode_<YOURKEYWORD>.xml. If your keyword for the custom cluster was CUS_PBS, as in the example in Configuring RSM to Use Cluster-Specific Code Template (p. 76), the new file should be called GenericJobCode_CUS_PBS.xml.

Locate the commands file that pertains to your cluster type (for instance, if you are using PBS, the file is hpc_commands_pbs.xml).

Copy the content of the hpc_commands_pbs.xml file into a new file hpc_commands_<YOURKEYWORD>.xml. If your keyword for the custom cluster was CUS_PBS, as in the example in Configuring RSM to Use Cluster-Specific Code Template (p. 76), the new file should be called hpc_commands_cus_pbs.xml. A sketch of these copy operations appears below.
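For example, the copies could be scripted as follows. This is a minimal sketch only: the install path is hypothetical, and the file names assume the PBS starting point and CUS_PBS keyword used above (match the casing of the files actually present in your installation):

# Hedged sketch: create custom copies of the PBS code template and
# HPC commands file in the RSM Config/xml directory.
import os
import shutil

xml_dir = "/ansys_inc/v150/RSM/Config/xml"  # hypothetical install location
copies = [
    ("GenericJobCode_PBS.xml", "GenericJobCode_CUS_PBS.xml"),
    ("hpc_commands_pbs.xml", "hpc_commands_cus_pbs.xml"),
]
for src, dst in copies:
    shutil.copyfile(os.path.join(xml_dir, src), os.path.join(xml_dir, dst))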

In order to use the native RSM cluster functionality (i.e., to use a fully supported cluster type such as LSF or PBS in your setup), you must not change the file names or contents of the corresponding cluster-specific templates provided by RSM. Doing so can cause those standard cluster setups to fail and will make it harder to start over if you need to change something later on. Here we have created a custom cluster type but started from copies of a standard template; this is the recommended method.

Modifying Cluster-Specific Job Code Template to Use New Cluster Type

The code sample pasted below provides an example of a modified job code file, GenericJobCode_CUS_PBS.xml. The modified/added portions are described in the numbered list that follows the sample; you will need similar edits for any cluster type:

<?xml version="1.0"?>
<codedefinition transfer="true" prebuilt="false" assemblyname="ans.rsm.genericjob_pbs.dll">
  <codefiles>
    <codefile>genericjobtp.cs</codefile>
    <codefile>genericjobbase.cs</codefile>
    <codefile>genericcommand.cs</codefile>
    <codefile>iproxyscheduler.cs</codefile>
    <codefile>pbsutilities.cs</codefile>
    <codefile>schedulerbase.cs</codefile>
    <codefile>clustercustomization.cs</codefile>
    <codefile>utilities.cs</codefile>
  </codefiles>
  <references>
    <reference>ans.rsm.scriptapi.dll</reference>
    <reference>ans.rsm.utilities.dll</reference>
  </references>
  <supportfiles>
    <supportfile transfer="true" type="ascii">clusterjobs.py</supportfile>
    <supportfile transfer="true" type="ascii">clusterjobcustomization.xml</supportfile>
    <supportfile transfer="true" type="ascii">hpc_commands_cus_pbs.xml</supportfile>
    <supportfile transfer="true" type="ascii">submit_pbs_example.py</supportfile>
  </supportfiles>
</codedefinition>

The following modifications were made to the original file:

1. The hpc_commands file was changed from the cluster-specific hpc_commands_pbs.xml to the custom file we created: hpc_commands_cus_pbs.xml. All cluster types will need a similar change to this file, using <YOURKEYWORD>.

2. A custom script, submit_pbs_example.py, was added as a new support file. Adding support files in the job code file tells RSM to find these files in the scripts (or xml, respectively) directory and transfer them to the working directory for use when the job runs. All custom scripts used by RSM should be referenced here.

The submit_pbs_example.py script is provided in the directory [RSMInstall]/RSM/Config/scripts/EXAMPLES and can be used as a starting point for a customized Submit command. The script should be copied into the scripts directory, [RSMInstall]/RSM/Config/scripts, or a full path to the script must be provided along with the name.

Modifying Cluster-Specific HPC Commands File

The command file prior to the modification is pasted below. While a detailed description of the commands is beyond the scope of this documentation, note that the command file provides the information on how actions related to job execution (submitting a job, canceling a job, getting the job status) are executed. The file also refers to a number of environment variables.

<?xml version="1.0" encoding="utf-8"?>
<jobcommands version="2" name="Custom Cluster Commands">
  <environment>
    <env name="RSM_HPC_PARSE">PBS</env>
  </environment>
  <command name="submit">
    <application>
      <app>qsub</app>
    </application>
    <arguments>
      <arg>
        <value>-q %RSM_HPC_QUEUE%</value>
        <env name="RSM_HPC_QUEUE">any_value</env>
      </arg>
      <arg>
        <value>-l select=%RSM_HPC_CORES%:ncpus=1:mpiprocs=1</value>
        <env name="RSM_HPC_DISTRIBUTED">true</env>
      </arg>
      <arg>
        <value>-l select=1:ncpus=%RSM_HPC_CORES%:mpiprocs=%RSM_HPC_CORES%</value>
        <env name="RSM_HPC_DISTRIBUTED">false</env>
      </arg>
      <arg>
        <value>%RSM_HPC_NATIVEOPTIONS% -V -o %RSM_HPC_STDOUTFILE% -e %RSM_HPC_STDERRFILE%</value>
      </arg>
      <arg>
        <value>-- %RSM_HPC_COMMAND%</value>
        <env name="RSM_HPC_USEWRAPPER">false</env>
      </arg>
      <arg>
        <value>%RSM_HPC_COMMAND%</value>
        <env name="RSM_HPC_USEWRAPPER">true</env>
      </arg>
    </arguments>
  </command>
  <command name="cancel">

    <application>
      <app>qdel</app>
    </application>
    <arguments>
      <arg>%RSM_HPC_JOBID%</arg>
    </arguments>
  </command>
  <command name="status">
    <application>
      <app>qstat</app>
    </application>
    <arguments>
      <arg>%RSM_HPC_JOBID%</arg>
    </arguments>
  </command>
  <command name="queues">
    <application>
      <app>qstat</app>
    </application>
    <arguments>
      <arg>-q</arg>
    </arguments>
  </command>
</jobcommands>

The Submit command section is the one we want to customize in this example. In the original version, the Submit command invokes the cluster qsub command with arguments determined via environment variables. The actual executable that is submitted to the cluster is determined by RSM at runtime and can be specified via an environment variable named RSM_HPC_COMMAND. For details, see Submit Command (p. 91).

The example below shows the same section after it is customized to execute the Python file submit_pbs_example.py. In this example, we defined the type of application to execute (runpython, accessed from the ANSYS installation) and the name of the Python file to be executed (submit_pbs_example.py).

<command name="submit">
  <application>
    <app>%AWP_ROOT150%/commonfiles/cpython/linx64/python/runpython</app>
  </application>
  <arguments>
    <arg>
      <value>submit_pbs_example.py</value>
    </arg>
  </arguments>
</command>

The custom Submit command appears much simpler than the original one. However, the details of the submission are handled inside the Python file, which contains the same arguments used in the original section. The Python file will also contain any custom code to be executed as part of the submission.

Other commands or queries can be overridden using the same procedure. You can find the command name in the cluster-specific hpc_commands file and replace the application that needs to be executed and the arguments needed by the application. Details on how to provide custom commands, as well as a description of the environment variables, are provided in Writing Custom Code for RSM Integration (p. 89).

Customizing Client-Side Integration

The mechanism and operations for custom client-side integration are very similar to the ones for custom server-side integration. However, the underlying architecture is different. In server-side integration, the customization affects the scripts used for RSM execution on the server/cluster side. In client-side integration, only a thin layer of RSM on the client side is involved. The layer provides the APIs

for the execution of the custom scripts, which are located on the Client machine. RSM is not installed on the server/cluster. It is the responsibility of the custom scripts to handle all aspects of the job execution, including transfer of files to and from the server. The RSM installation provides some prototype code for client integration that can be tailored and modified to meet specific customization needs.

As indicated above, the steps needed for client-side integration are very similar to those for server-side integration. On the client-side RSM installation, you will be using the local Client machine (and Manager) to perform all the tasks (steps 1 through 5), as follows:

1. Configure RSM to use a cluster-specific code template.
2. Create copies of the prototype code for the custom cluster type.
3. Edit the cluster-specific code template to use your new cluster type.
4. Edit the cluster-specific hpc_commands_<Keyword> file to reference the custom commands.
5. Provide cluster-specific scripts/code/commands that perform the custom actions and return the required RSM output.

The following sections discuss the steps to customize your integration:
Configuring RSM to Use Cluster-Specific Code Template on the Client Machine
Creating Copies of Sample Code Using Custom Client Keyword
Modifying Cluster-Specific Job Code Template to Use New Cluster Type
Modifying Cluster-Specific HPC Commands File

Configuring RSM to Use Cluster-Specific Code Template on the Client Machine

On the client-side RSM installation, you will be using the local Client machine (and Manager) to perform all the tasks (steps 1 through 5) in Customizing Client-Side Integration (p. 81). After creating a new Compute Server, set up the Compute Server Properties dialog under the Cluster tab. You must set Cluster Type to Custom and then enter a short phrase/word in the Custom Cluster Type property as the custom cluster name. The name is arbitrary, but you should make it simple enough to append to file names. This name will be referred to as the keyword from now on. For supported clusters, you can include the original cluster name in the new custom name, for clarity. For example, if your cluster is actually an LSF or PBS cluster but you need to customize the RSM interaction with it, you might use the keyword CUS_LSF or CUS_PBS. If the underlying cluster is not a supported one, it could be called CUSTOM or any other arbitrary name. The names are in capital letters for simplicity, but the only requirement is that the capitalization is the same in all places where the keyword is referenced.

For the Cluster tab File Management property, select Custom Handling of Shared Cluster Directory. This option turns off any attempt to copy files to the shared cluster directory. The custom scripts are responsible for getting files to and from the appropriate location on the cluster side. See Configuring File Transfer by OS Type and Network Share Availability (p. 86) for more details on the file transfer scenarios by OS and network share availability.

A full example of a typical cluster setup using the local Manager and custom client definition is shown below.

(Screenshot: Compute Server Properties dialog for a typical custom client-side cluster setup using the local Manager.)

Creating Copies of Sample Code Using Custom Client Keyword

As part of the setup, you must create a custom copy of the cluster-specific code template and hpc_commands files and modify them to load the custom job code for your custom integration. The starting point for the code template and command files can be created by copying them from sample files that are provided in the RSM installation. The sample files are marked with the suffix CIS (Client Integration Sample) and provide an example of LSF-based integration.

1. Using the RSM installation on your Client machine, locate the directory [RSMInstall]\Config\xml. Please note that all the actions listed below should be performed on the Client machine.

2. Locate the sample code template file GenericJobCode_CIS.xml.

3. Copy the content of the GenericJobCode_CIS.xml code template into a new code template GenericJobCode_<YOURKEYWORD>.xml. If your keyword for the custom cluster was CUS_LSF, as in the example in Configuring RSM to Use Cluster-Specific Code Template on the Client Machine (p. 82), the new file should be called GenericJobCode_CUS_LSF.xml.

4. Locate the sample command file hpc_commands_cis.xml.

5. Copy the content of the hpc_commands_cis.xml command file into a new command file hpc_commands_<YOURKEYWORD>.xml. If your keyword for the custom cluster was CUS_LSF, as in the example in Configuring RSM to Use Cluster-Specific Code Template on the Client Machine (p. 82), the new file should be called hpc_commands_cus_lsf.xml.

Client-side integration requires a custom implementation to be provided for all the commands to be executed on the cluster. The standard RSM installation includes sample scripts for all these commands, which should be used as a starting point for the customization. The sample scripts are named submit_cis.py, cancel_cis.py, status_cis.py, transfer_cis.py, and cleanup_cis.py. They are located in the [RSMInstall]\RSM\Config\scripts directory. While it is not absolutely necessary to copy and rename the scripts, we have done so for consistency; in the rest of the example, it is assumed that they have been copied and renamed to add the same keyword chosen for the custom cluster (e.g., submit_cus_lsf.py, cancel_cus_lsf.py, status_cus_lsf.py, transfer_cus_lsf.py, and cleanup_cus_lsf.py). These scripts will have to be included in the custom job template, as shown in the following section, Modifying Cluster-Specific Job Code Template to Use New Cluster Type (p. 84).

These CIS scripts are sample scripts that demonstrate a fully custom client integration on a standard LSF cluster, for example purposes only. Generally, custom client integrations do not use standard cluster types, and thus there are no samples for custom client integrations on other cluster types.

Any additional custom code that you want to provide as part of the customization should also be located in the [RSMInstall]\RSM\Config\scripts directory corresponding to your local (client) installation. Alternatively, a full path to the script must be provided along with the name.

Modifying Cluster-Specific Job Code Template to Use New Cluster Type

The code sample pasted below provides an example of a modified GenericJobCode_CUS_LSF.xml.

<?xml version="1.0"?>
<codedefinition transfer="false" prebuilt="false" assemblyname="ans.rsm.customcluster.dll">
  <codefiles>
    <codefile>genericjobtp.cs</codefile>
    <codefile>genericjobbase.cs</codefile>
    <codefile>genericcommand.cs</codefile>
    <codefile>iproxyscheduler.cs</codefile>
    <codefile>schedulerbase.cs</codefile>
    <codefile>utilities.cs</codefile>
    <codefile>clustercustomization.cs</codefile>
    <codefile>genericcluster.cs</codefile> <!-- Generic class loads hpc commands file below -->
  </codefiles>
  <references>
    <reference>ans.rsm.scriptapi.dll</reference> <!-- Required. Interface to RSM -->
    <reference>ans.rsm.utilities.dll</reference>
  </references>
  <supportfiles>
    <!-- Files to copy to the RSM job working directory. If no path is given,
         *.xml files are assumed to be in RSM/Config/xml; all others in RSM/Config/scripts. -->
    <supportfile transfer="true" type="ascii">clusterjobs.py</supportfile>
    <!-- Compute Server keyword is CUS_LSF. -->
    <supportfile transfer="true" type="ascii">hpc_commands_cus_lsf.xml</supportfile>
    <!-- In this example, these are the customer-provided external scripts. -->
    <supportfile transfer="true" type="ascii">submit_cus_lsf.py</supportfile>
    <supportfile transfer="true" type="ascii">status_cus_lsf.py</supportfile>
    <supportfile transfer="true" type="ascii">cancel_cus_lsf.py</supportfile>
    <supportfile transfer="true" type="ascii">transfer_cus_lsf.py</supportfile>
    <supportfile transfer="true" type="ascii">cleanup_cus_lsf.py</supportfile>
  </supportfiles>
</codedefinition>

The following modifications were made to the original file:

1. The hpc_commands file was changed from the sample name hpc_commands_cis.xml to your custom file name hpc_commands_cus_lsf.xml. All cluster types will need a similar change to this file, using <YOURKEYWORD>.

2. The custom scripts described in the previous section are added as new support files. All the scripts listed should be in the directory [RSMInstall]\RSM\Config\scripts, or a full path to the scripts must be provided along with the name.

Modifying Cluster-Specific HPC Commands File

The cluster-specific HPC commands file is the configuration file used to specify the commands that will be used in the cluster integration. The file is in XML format and is located in the [RSMInstall]\RSM\Config\xml directory. This section provides an example of a modified file, hpc_commands_cus_lsf.xml. The cluster commands are provided by the CIS sample scripts referred to in the previous section. These scripts have been copied from the samples provided in the RSM installation and renamed to match the keyword chosen for the custom cluster.

The hpc_commands file provides the information on how commands or queries related to job execution are executed. The file can also refer to a number of environment variables. Details on how to provide

custom commands, as well as a description of the environment variables, are provided in Writing Custom Code for RSM Integration (p. 89).

<jobcommands version="2" name="Custom Cluster Commands">
  <environment>
    <env name="RSM_HPC_PARSE">CUSTOM</env>
    <env name="RSM_HPC_PARSE_MARKER">start</env>
  </environment>
  <command name="submit">
    <application>
      <app>%AWP_ROOT150%/commonfiles/cpython/winx64/python/python.exe</app>
    </application>
    <arguments>
      <arg>submit_cus_lsf.py</arg>
    </arguments>
  </command>
  <command name="status">
    <application>
      <app>%AWP_ROOT150%/commonfiles/cpython/winx64/python/python.exe</app>
    </application>
    <arguments>
      <arg>status_cus_lsf.py</arg>
    </arguments>
  </command>
  <command name="cancel">
    <application>
      <app>%AWP_ROOT150%/commonfiles/cpython/winx64/python/python.exe</app>
    </application>
    <arguments>
      <arg>cancel_cus_lsf.py</arg>
    </arguments>
  </command>
  <command name="transfer">
    <application>
      <app>%AWP_ROOT150%/commonfiles/cpython/winx64/python/python.exe</app>
    </application>
    <arguments>
      <arg>transfer_cus_lsf.py</arg>
    </arguments>
  </command>
  <command name="cleanup">
    <application>
      <app>%AWP_ROOT150%/commonfiles/cpython/winx64/python/python.exe</app>
    </application>
    <arguments>
      <arg>cleanup_cus_lsf.py</arg>
    </arguments>
  </command>
</jobcommands>

Configuring File Transfer by OS Type and Network Share Availability

Remote job execution on a cluster usually requires the transfer of files to and from a cluster directory. With client-side custom integration, the cluster job file management can vary according to whether the cluster staging area is visible to the RSM Client machine. The Compute Server settings are used to specify information about the cluster staging area and local scratch directory. For client-side custom integration, the Compute Server settings can also be overridden through environment variables. The environment variables (such as RSM_HPC_PLATFORM, RSM_HPC_SCRATCH, and RSM_HPC_STAGING) are set on the Client machine where the RSM job will run. For details on these variables, see Environment Variables Set by Customer (p. 94).

The following sections contain example configuration settings for different scenarios:
Windows Client to Windows Cluster
Windows Client to Linux Cluster
Linux Client to Linux Cluster

For each scenario, the Shared Cluster Directory (also called the staging directory) can be:

Visible to the RSM Client machine via a network share, Samba share, or mapped drive. In these cases, RSM handles the copying of files to and from the cluster staging area.

Not visible to the RSM Client machine. In these cases, file management is handled by external HPC commands/scripts. RSM is not involved in the copying of files to/from the cluster.

Windows Client to Windows Cluster

In the following two scenarios, a Windows Client machine is integrated with a Windows cluster.

Windows-to-Windows, Staging Visible

In this scenario, the Windows Client can see the Windows cluster staging area via a network share or mapped drive.

1. On the Compute Server Properties dialog General tab, set Working Directory to Automatic.

2. On the Cluster tab:
Set Cluster Shared Directory to the path of the actual shared directory on the cluster. RSM will copy jobs to and from this location.
Set File Management to either Use Execution Node Local Disk or Reuse Shared Cluster Directory.

3. Set the RSM_HPC_SCRATCH environment variable:
If using local scratch, set it to the path of the desired local scratch space on the cluster.
If you are doing your own scratch management, set it to CUSTOM.

Windows-to-Windows, Staging Not Visible

In this scenario, the Windows Client cannot see the Windows cluster staging area.

1. On the Compute Server Properties dialog General tab, set Working Directory to Automatic.

2. On the Cluster tab, set File Management to Custom Handling of Shared Cluster Directory. This option prevents RSM from copying files on the client side.

3. Set the RSM_HPC_SCRATCH environment variable:
If using local scratch, set it to the path of the desired local scratch space on the cluster.
If you are doing your own scratch management, set it to CUSTOM.

4. Set the RSM_HPC_STAGING environment variable to the path of the staging directory for the cluster.

Windows Client to Linux Cluster

In the following two scenarios, a Windows Client machine is integrated with a Linux cluster.

Windows-to-Linux, Staging Visible

In this scenario, the Windows Client can see the Linux cluster staging area via a Samba UNC path or mapped drive.

1. On the Compute Server Properties dialog General tab, set Working Directory to Automatic.

2. On the Cluster tab:
Set Cluster Shared Directory to the path of the actual Windows-side shared directory on the cluster.
Set File Management to either Use Execution Node Local Disk or Reuse Shared Cluster Directory.

3. Set the RSM_HPC_SCRATCH environment variable:
If using local scratch, set it to the path of the desired local scratch space on the cluster.
If you are doing your own scratch management, set it to CUSTOM.

4. Set the RSM_HPC_STAGING environment variable to the path of the Linux-side staging directory. Once a unique directory for the job is created on the Windows side (for instance, \\machine\stagingshare\abcdef.xyz), RSM_HPC_STAGING is internally updated by RSM to include the unique directory name (for instance, /staging/abcdef.xyz).

5. Set the RSM_HPC_PLATFORM environment variable to linx64.

Windows-to-Linux, Staging Not Visible

In this scenario, the Windows Client cannot see the Linux cluster staging area.

1. On the Compute Server Properties dialog General tab, set Working Directory to Automatic.

2. On the Cluster tab, set File Management to Custom Handling of Shared Cluster Directory.

3. Set the RSM_HPC_SCRATCH environment variable:
If using local scratch, set it to the path of the desired local scratch space on the cluster.
If you are doing your own scratch management, set it to CUSTOM.

4. Set the RSM_HPC_STAGING environment variable to the path of the staging directory on the cluster. Once a unique directory for the job is created on the Windows side (for instance, \\machine\stagingshare\abcdef.xyz), RSM_HPC_STAGING is internally updated by RSM to include the unique directory name (for instance, /staging/abcdef.xyz).

5. Set the RSM_HPC_PLATFORM environment variable to linx64. A sketch of how a custom script might consume these variables follows.
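The following minimal Python sketch (hypothetical; not one of the RSM sample scripts) shows how a custom client-side script, such as a transfer or submit script, might read these environment variables. The default values shown are placeholders only:

# Hedged sketch: read the file-transfer environment variables described above
# from within a custom client-side script.
import os

staging = os.environ.get("RSM_HPC_STAGING", "/staging")  # cluster-side staging path
scratch = os.environ.get("RSM_HPC_SCRATCH", "CUSTOM")    # local scratch path, or CUSTOM
platform = os.environ.get("RSM_HPC_PLATFORM", "linx64")  # target cluster platform

# Report what was found as a debug message in the RSM job log.
print("RSM_HPC_DEBUG=staging=%s scratch=%s platform=%s" % (staging, scratch, platform))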

Linux Client to Linux Cluster

In the following two scenarios, a Linux Client machine is integrated with a Linux cluster.

Linux-to-Linux, Staging Visible

In this scenario, the Linux Client can see the Linux cluster staging area because the staging area is mounted on the Client machines.

1. On the Compute Server Properties dialog General tab, set Working Directory to Automatic.

2. On the Cluster tab:
Set Cluster Shared Directory to the path of the actual shared directory on the cluster.
Set File Management to either Use Execution Node Local Disk or Reuse Shared Cluster Directory.

3. Set the RSM_HPC_SCRATCH environment variable:
If using local scratch, set it to the path of the desired local scratch space on the cluster.
If you are doing your own scratch management, set it to CUSTOM.

Linux-to-Linux, Staging Not Visible

In this scenario, the Linux Client cannot see the Linux cluster staging area.

1. On the Compute Server Properties dialog General tab, set Working Directory to Automatic.

2. On the Cluster tab, set File Management to Custom Handling of Shared Cluster Directory.

3. Set the RSM_HPC_SCRATCH environment variable:
If using local scratch, set it to the path of the desired local scratch space on the cluster.
If you are doing your own scratch management, set it to CUSTOM.

4. Set the RSM_HPC_STAGING environment variable to the path of the staging directory on the cluster. Once a unique directory for the job is created on the client side (for instance, /tmp/abcdef.xyz), RSM_HPC_STAGING is internally updated by RSM to include the unique directory name (for instance, /staging/abcdef.xyz).

6.3. Writing Custom Code for RSM Integration

This section provides detailed information about the code that should be provided for custom integration with RSM. The custom code can be in any form convenient to you, typically in the form of scripts or executables. Generally, scripts are used to wrap the underlying cluster software (for example, LSF) commands. You can review sample Python scripts in the [RSMInstall]\Config\scripts directory.

The scripts have access to environment variables that can be set to override default RSM behavior, as well as to environment variables that RSM sets dynamically to provide job-related information. A detailed description of the environment variables that the scripts can access is given in Custom Integration Environment Variables (p. 93).

This section discusses the following topics:
Parsing of the Commands Output
Customizable Commands
Custom Integration Environment Variables
Providing Client Custom Information for Job Submission

Parsing of the Commands Output

Since some of the commands used for custom integration are wrappers around cluster-specific commands, it is necessary to parse the output of the cluster commands. The output of a cluster command provides information such as the cluster Job ID or job status. It can also be used to report error and debugging messages.

Commands Output in the RSM Job Log

The output of all cluster command scripts should be sent directly to stdout or stderr. The contents of stdout are added to the RSM job log as standard messages. This content is also searched in order to parse the information produced by the command execution. The handling of the command output depends upon the value of the environment variable RSM_HPC_PARSE, which defines what type of output RSM should expect from the command.

If the underlying cluster used for the integration is one of the supported types (LSF/PBS/SGE/MSCC), set the value of RSM_HPC_PARSE to the corresponding type. Printing the output of the command then allows RSM to extract the appropriate information. For example, if the LSF option is used, RSM expects the output of the Submit command to contain the output of the LSF bsub command.

If your cluster is not one of the supported types, set RSM_HPC_PARSE to CUSTOM. In this case, it is your responsibility to parse the output of the commands and provide RSM with a variable containing the result. An optional RSM_HPC_PARSE_MARKER variable can be set to a marker string of an output line in order to indicate the line after which parsing should start. If no start marker is found, RSM parses all of the output as if the start marker were at the beginning of the output.

Error Handling

Error messages and warnings are written to stdout as necessary. If they are properly labeled as shown below, they appear in the RSM job log in orange for warnings and bold red for errors.

Output format:
RSM_HPC_ERROR=<errormessage>
RSM_HPC_WARN=<warning>

Example Python snippet:
print("RSM_HPC_WARN=This is what a warning displays like")
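As a minimal sketch of these output conventions, a custom script might centralize its labeled messages in small helper functions like the following. This assumes only standard Python; the function names are illustrative, not part of RSM:

import sys

def rsm_warn(message):
    # Labeled warnings appear in orange in the RSM job log.
    print("RSM_HPC_WARN=%s" % message)

def rsm_error(message):
    # Labeled errors appear in bold red in the RSM job log.
    print("RSM_HPC_ERROR=%s" % message)

def rsm_debug(message):
    # Shown only when Debug Messages is enabled via the job log context menu.
    print("RSM_HPC_DEBUG=%s" % message)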

Debugging

Debugging information, typically used for troubleshooting purposes, is shown in the RSM job log only if the Debug Messages option is selected from the job log context menu. (To access this option, right-click anywhere inside the job log pane of the RSM application main window.)

Output format:
RSM_HPC_DEBUG=<debugmessage>

Customizable Commands

RSM invokes a custom implementation for the following commands:
Submit Command
Status Command
Cancel Command
Transfer Command
Cleanup Command

Submit Command

The Submit command is invoked to submit a job to the cluster. The command should return as soon as the queuing system has taken ownership of the job and a unique Job ID is available. If using CUSTOM parsing, the command must write a unique Job ID in the following format: RSM_HPC_JOBID=<jobid>. If using LSF/PBS/SGE/MSCC parsing, the script only needs to send the direct output of the submission command (bsub/qsub/job submit).

The custom integration infrastructure provides the Python script ClusterJobs.py in the [RSMInstall]\Config\scripts directory. The script serves as a layer of abstraction that allows a user-selected operation (such as a component update for one or more of the applications, or a design point update) to be invoked without needing to know the command line arguments and options required for the appropriate submission of the job. In the Submit command, the ClusterJobs.py script should be invoked rather than executing the individual applications. This Python script should be considered a layer that builds the appropriate command line and sets the appropriate environment variables for the remote execution. Using application-specific command lines in the Submit script is strongly discouraged and cannot be properly supported in a general way. For convenience, the complete Python command that contains the job to be executed by the Submit command (for instance, by LSF bsub) is provided through the environment variable RSM_HPC_COMMAND (a sketch follows the examples list below).

Examples:
Custom server examples for LSF, PBS, SGE, and MSCC are located in the [RSMInstall]\Config\scripts\EXAMPLES directory.
A custom client example (for LSF) is provided in the file submit_cis.py, located in the [RSMInstall]\Config\scripts directory.
More examples may be available on the ANSYS Customer Portal, along with related tutorials and documentation.
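To make the flow concrete, the following sketch shows the general shape a CUSTOM-parsing Submit wrapper might take. The mysubmit command, its "Job <id>" output format, and the regular expression are assumptions to adapt to your own scheduler; for a supported scheduler such as LSF you would instead run bsub and print its output unmodified:

import os
import re
import subprocess
import sys

# RSM provides the fully built job command line in RSM_HPC_COMMAND.
job_command = os.environ["RSM_HPC_COMMAND"]
queue = os.environ.get("RSM_HPC_QUEUE", "")

# Hypothetical scheduler call; replace with your site's submit command.
output = subprocess.check_output(
    ["mysubmit", "-q", queue, job_command],
    stderr=subprocess.STDOUT).decode("utf-8")

# Echo the raw output so it lands in the RSM job log.
print(output)

# Assume the scheduler prints something like "Job <1234> submitted".
match = re.search(r"Job <(\d+)>", output)
if match:
    # With RSM_HPC_PARSE=CUSTOM, the script itself must report the ID.
    print("RSM_HPC_JOBID=%s" % match.group(1))
else:
    print("RSM_HPC_ERROR=Could not parse job ID from submit output")
    sys.exit(1)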

Status Command

The Status command has access to the Job ID through the environment variable RSM_HPC_JOBID. Given a Job ID, the command should query the cluster for the status of the job and return that status in string format. If using CUSTOM parsing, the output should be parsed in order to provide the status information in the format RSM_HPC_STATUS=<jobstatus>, where jobstatus is one of (see the sketch following the Cancel command examples):
CANCELED
FAILED
FINISHED
QUEUED
RUNNING

Examples:
Custom server examples are not provided for this command.
A custom client example (for LSF) is provided in the file status_cis.py, located in the [RSMInstall]\Config\scripts directory.
More examples may be available on the ANSYS Customer Portal, along with related tutorials and documentation.

Cancel Command

The Cancel command has access to the Job ID through the environment variable RSM_HPC_JOBID. Given a Job ID, the command should invoke the cluster command to cancel the job. No output is required from the Cancel command; however, an output statement should be given for verification in the RSM job log.

Examples:
Custom server examples are not provided for this command.
A custom client example (for LSF) is provided in the file cancel_cis.py, located in the [RSMInstall]\Config\scripts directory.
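Like Submit, the Status and Cancel wrappers are typically thin. The following CUSTOM-parsing Status sketch maps a scheduler's state strings onto the five RSM statuses; the hypothetical mystat command and the state names in the mapping table are placeholders for your scheduler's vocabulary (for example, bjobs or qstat output):

import os
import subprocess

job_id = os.environ["RSM_HPC_JOBID"]

# Hypothetical query command; replace with your cluster's status command.
raw_state = subprocess.check_output(
    ["mystat", job_id]).decode("utf-8").strip()

# Map scheduler-specific states onto the statuses RSM understands.
STATE_MAP = {
    "PEND": "QUEUED",
    "RUN": "RUNNING",
    "DONE": "FINISHED",
    "EXIT": "FAILED",
    "KILLED": "CANCELED",
}

status = STATE_MAP.get(raw_state, "FAILED")
print("RSM_HPC_STATUS=%s" % status)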

Transfer Command

The Transfer command is invoked in order to transfer files to and from the cluster. No output is required from the Transfer command; however, it is suggested that you output the names of the files being copied for verification in the RSM job log. The Transfer command can check whether the environment variable RSM_HPC_FILEDIRECTION equals UPLOAD or DOWNLOAD to detect whether files should be uploaded to the cluster or downloaded from it. (A sketch appears at the end of this section.)

The Transfer command is invoked to upload files to and retrieve files from the cluster, as follows:

Uploading of files is invoked for input files and also when the user interrupts an application. (Applications typically look for an interrupt file in a specified location.)

Retrieving of files is invoked for output files once the job is completed. It is also invoked for inquiring (downloading) files during the execution of the job. Inquiring of files is typically invoked from Workbench for small files (such as convergence information).

The list of files to be uploaded or downloaded is provided through a semicolon-delimited list in the environment variable RSM_HPC_FILELIST. File names can contain wildcards (e.g., *.out). The files are located in the current Working Directory in which the script is invoked (i.e., the RSM job Working Directory). The command can also access the environment variable RSM_HPC_FILECONTEXT, which is set to INPUTS (beginning of job), OUTPUTS (end of job), CANCEL (cancelling a job), or INQUIRE (request for files while the job is running). This information is especially useful in the case of INQUIRE, when extra processing may be required to locate files for a running job.

Examples:
Custom server integrations do not use this command.
A custom client example is provided in the file transfer_cis.py, located in the [RSMInstall]\Config\scripts directory.
More examples may be available on the ANSYS Customer Portal, along with related tutorials and documentation.

Cleanup Command

The Cleanup command is called at the very end of the execution, when all the other actions have been completed. It can be used to perform clean-up operations or other actions that are needed at the end of a job. No output is required from the Cleanup command; however, an output statement should be given for verification in the RSM job log.

Examples:
Custom server examples are not provided for this command.
A custom client example (for LSF) is provided in the file cleanup_cis.py, located in the [RSMInstall]\Config\scripts directory.
More examples may be available on the ANSYS Customer Portal, along with related tutorials and documentation.
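Under the same assumptions as the earlier sketches (plain Python, CUSTOM integration), the following illustrates the typical shape of a Transfer implementation: read the direction and file list, expand wildcards, and copy files between the RSM job Working Directory and an assumed staging path. The reliance on RSM_HPC_STAGING being reachable from this machine and the simple shutil copy are placeholders for your own transfer mechanism:

import glob
import os
import shutil

direction = os.environ["RSM_HPC_FILEDIRECTION"]      # UPLOAD or DOWNLOAD
context = os.environ.get("RSM_HPC_FILECONTEXT", "")  # INPUTS/OUTPUTS/CANCEL/INQUIRE
file_list = os.environ["RSM_HPC_FILELIST"].split(";")
staging = os.environ["RSM_HPC_STAGING"]              # assumed reachable here

for pattern in file_list:
    if not pattern:
        continue
    # For uploads, patterns are matched in the RSM job Working Directory
    # (the current directory); for downloads, in the staging area.
    source_dir = os.getcwd() if direction == "UPLOAD" else staging
    target_dir = staging if direction == "UPLOAD" else os.getcwd()
    for path in glob.glob(os.path.join(source_dir, pattern)):
        # Print each copied file so the transfer is visible in the job log.
        print("Copying %s (%s, context %s)" % (path, direction, context))
        shutil.copy(path, target_dir)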

Custom Integration Environment Variables

Workbench/RSM makes job settings available to custom commands via environment variables. Some environment variables are set automatically by RSM at runtime, providing necessary information to the custom scripts or executables in the HPC commands file. Other environment variables can be set by your RSM administrator, if appropriate to your job management process.

Environment Variables Set by Customer

The following optional environment variables can be set by your RSM administrator on the Compute Server side; they are passed to the Compute Server as environment variables to be used in scripting. Additionally, the user can set any number of client-side variables, as described in Providing Client Custom Information for Job Submission (p. 96). For client-side custom integration, the RSM Compute Server is running on the Client machine.

RSM_HPC_CONFIG
Optional. Specifies the file name of the HPC commands file. Set this variable only if the file name to be used is different than the default file name, hpc_commands.xml or hpc_commands_keyword.xml, where keyword (Custom Cluster Type) is defined on the Cluster tab of the RSM Compute Server Properties dialog.
Example: RSM_HPC_CONFIG=hpc_commands_TEST.xml

RSM_HPC_PARSE
Specifies what type of output RSM should expect from the custom commands: LSF/PBS/SGE/MSCC or CUSTOM. If the underlying cluster used for the integration is one of the supported types (LSF/PBS/SGE/MSCC), set the value of RSM_HPC_PARSE to the corresponding type; for these supported types, RSM can extract the relevant information from the output of the command. For unsupported types, set RSM_HPC_PARSE to CUSTOM and see Customizable Commands (p. 91) for the variables that must be set by each command.

RSM_HPC_PARSE_MARKER
Optional. Specifies a marker string of an output line. The marker string is used to indicate the line after which parsing should start.

RSM_HPC_PLATFORM
Optional. Specifies the cluster platform being used. Set this variable only if the cluster platform is different than the machine from which the RSM job is submitted; for example, when the job is submitted from a Windows client to a Linux cluster via client custom integration.
Example: RSM_HPC_PLATFORM=linx64

RSM_HPC_SCRATCH
Optional. Path for the cluster's job scratch directory (i.e., solver files are stored in a local directory on the Compute Server machine). Set this variable only if the path is different than the one specified in the Working Directory property on the General tab of the Compute Server Properties dialog.
Example: RSM_HPC_SCRATCH=/tmp

Note: Specifying a value of CUSTOM for RSM_HPC_SCRATCH instructs the code that is executed on the cluster side NOT to create any scratch directories. The directory where the scripts are executed is considered to be the scratch directory for the job.

RSM_HPC_STAGING
Optional. Path for the cluster's central staging area for job files. Typically needed when the client and cluster platforms are different. Set this variable only if the path is different than the one specified in the Shared Cluster Directory property on the Cluster tab of the Compute Server Properties dialog. Set by both the administrator and RSM: Workbench/RSM modifies the original administrator-entered path specified here so that a unique subdirectory is added to the end.
Example: RSM_HPC_STAGING=/staging

Environment Variables Set by RSM

RSM sets the following environment variables at runtime, communicating job-specific data to the HPC commands. These variables need to be used in your scripts to do the job handling.

RSM_HPC_CORES
The number of cores requested by the user for the job.

RSM_HPC_DISTRIBUTED
Indicates whether a distributed (multi-node) cluster job is allowed. Set to TRUE if the target solver (specified in RSM_HPC_JOBTYPE) supports distributed execution. Set to FALSE if cores can be used on only one node.

RSM_HPC_FILECONTEXT
Used only by the Transfer command/script. Specifies the context in which files are being transferred, in case any special handling is required. Possible values are CANCEL, INPUTS, INQUIRE, and OUTPUTS.

RSM_HPC_FILEDIRECTION
Used only by the Transfer command/script. Specifies the direction of file transfers. Possible values are UPLOAD (which moves files from the client to the cluster) and DOWNLOAD (which moves files from the cluster to the client).

RSM_HPC_FILELIST
Used only by the Transfer command/script. Semicolon-delimited list of files to transfer for the job submission or status request. Dynamically generated, because the list can depend on the job type or the specific UI action. May contain wildcards.

RSM_HPC_JOBID
Identifier for the cluster job, returned by a successful Submit command. RSM sets this variable so it is available to subsequent commands.

RSM_HPC_JOBTYPE
The solver being used for the job. Possible values are ANSYS, Addin_ANSYS, Addin_CFX, Addin_FLUENT, Addin_POLYFLOW, AUTODYN, Contact, FrameworkUpdateDPs, and RBD. The job types with the Addin prefix are jobs executed from within Workbench as part of a component update. FrameworkUpdateDPs is the job type corresponding to the execution of the Workbench Update Design Points operation. The other job types correspond to jobs submitted through RSM without Workbench mediation.

RSM_HPC_NATIVEOPTIONS
Value(s) of the Job Submission Arguments property on the Cluster tab of the Compute Server Properties dialog. Workbench/RSM does not define or manipulate these administrator-specified options.

RSM_HPC_QUEUE
The queue requested by the user for the job. The list of available queues is defined by the Workbench/RSM administrator.

RSM_HPC_STAGING
Path in the Shared Cluster Directory property on the Cluster tab of the Compute Server Properties dialog. Set by both the administrator and RSM: Workbench/RSM modifies the original administrator-entered path so that a unique subdirectory is added to the end.

RSM_HPC_STDERRFILE
A request that cluster job stderr be redirected into the named file. The contents of this file are added to the RSM job log.

RSM_HPC_STDOUTFILE
A request that cluster job stdout be redirected into the named file. The contents of this file are added to the RSM job log.

Providing Client Custom Information for Job Submission

When executing a job, you can provide custom information from the client side that allows you to perform custom actions prior to the submission of a job to the Compute Server or cluster. Custom information that you define on the RSM Client machine can be picked up by RSM and then passed to the Compute Server or cluster machine where the job is being executed. For a custom client integration, the Compute Server is the Client machine, so the information is made available to the custom scripts on the Client machine. In this case, the environment variables are also passed to the cluster machine on the remote side. Examples of custom information that can be provided to the cluster are:

The username of the submitter (which, for instance, provides the ability to monitor jobs submitted by a particular user for accounting purposes), and

The license necessary to execute the job, which can be used to integrate with cluster resource management to check ANSYS license availability before a job starts running. For more information on how to integrate licensing with cluster software, please contact your cluster administrator or ANSYS customer support.

As an example, we'll pass the submitter's username from the client to a PBS cluster. The following sections detail the steps for providing custom information for job submissions to clusters:
Defining the Environment Variable on the Client
Passing the Environment Variable to the Compute Server
Verify the Custom Information on the Cluster

Defining the Environment Variable on the Client

First, you must define the information on the RSM Client machine by creating an environment variable. The environment variable must begin with the prefix RSM_CLIENT_ in order for RSM to detect it and pass the information from the Client machine to the Compute Server or cluster. In this example, we define the environment variable RSM_CLIENT_USERNAME. The name is arbitrary as long as it begins with the RSM_CLIENT_ prefix.

Passing the Environment Variable to the Compute Server

Once you've defined the environment variable on the RSM Client machine, it is passed along with other job files to the Compute Server or cluster machine. You can access this environment variable's value from your custom cluster job scripts. In our example, we will add the client job user name as a new command line argument to the PBS qsub command defined in the commands file RSM uses for PBS clusters, hpc_commands_pbs.xml (located in the [RSMInstall]\Config\xml directory). In the code sample below, you can see that the environment variable is added to the qsub command, and that it is preceded by -A, which defines the account string associated with the job for the PBS cluster.

<command name="submit">
  <application>
    <app>qsub</app>
  </application>
  <arguments>
    <arg>
      <value>-q %RSM_HPC_QUEUE%</value>
      <env name="RSM_HPC_QUEUE">ANY_VALUE</env>
    </arg>
    <arg>
      <value>-A %RSM_CLIENT_USERNAME%</value>
      <env name="RSM_CLIENT_USERNAME">ANY_VALUE</env>
    </arg>
    <arg>
      <value>-l select=%RSM_HPC_CORES%:ncpus=1:mpiprocs=1</value>
      <env name="RSM_HPC_DISTRIBUTED">TRUE</env>
    </arg>
    <arg>
      <value>-l select=1:ncpus=%RSM_HPC_CORES%:mpiprocs=%RSM_HPC_CORES%</value>
      <env name="RSM_HPC_DISTRIBUTED">FALSE</env>
    </arg>
    <arg>
      <value>%RSM_HPC_NATIVEOPTIONS% -V -o %RSM_HPC_STDOUTFILE% -e %RSM_HPC_STDERRFILE%</value>
    </arg>
    <arg>
      <value>-- %RSM_HPC_COMMAND%</value>
      <env name="RSM_HPC_USEWRAPPER">FALSE</env>
    </arg>
    <arg>
      <value>%RSM_HPC_COMMAND%</value>
      <env name="RSM_HPC_USEWRAPPER">TRUE</env>
    </arg>
  </arguments>
</command>

To view a sample of this file before the addition of custom information, see Modifying Cluster-Specific HPC Commands File (p. 85).

Verify the Custom Information on the Cluster

To verify that the custom information has been successfully passed from the RSM Client to the cluster, run a job that calls the script you've customized. The environment variable should show up in the Reading environment variables section of the RSM job log.

Reading environment variables...
RSM_CLIENT_USERNAME = myname

Since we added the environment variable to the qsub command in the PBS commands file, it also shows up in the area of the job log indicating that the qsub command has been run.

qsub -q %RSM_HPC_QUEUE% -A %RSM_CLIENT_USERNAME% -l select=1:ncpus=%RSM_HPC_CORES%:mpiprocs=%RSM_HPC_CORES% ...
qsub -q WB_pbsnat -A myname -l select=1:ncpus=1:mpiprocs=1
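For completeness, the client-side definition used in this example is just an ordinary environment variable. Setting it in the command prompt session used to launch the client application would look like this on a Windows RSM Client (use the System Properties dialog instead to make it persistent; the value is site-specific):

set RSM_CLIENT_USERNAME=myname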

Chapter 7: RSM Troubleshooting

This section contains troubleshooting tips for RSM.

Generating RSM Service Startup Scripts for Linux

The scripts for manually starting RSM services are usually generated during installation. In the event that the scripts are not generated as part of the install, or you've removed the generated scripts, you can generate the scripts manually in either of the following ways:

Generate scripts for all of the services by running rsmconfig (without command line options).

Generate the script for a specific service by running generate_service_script. Specify the service by using command line options, as shown below:

tools/linux> ./generate_service_script
Usage: generate_service_script -mgr -svr -xmlrpc
Options:
-mgr: Generate RSM Job Manager service script.
-svr: Generate RSM Compute Server service script.
-xmlrpc: Generate RSM XML-RPC Server service script.

Configuring RSM for Mapped Drives and Network Shares for Windows

If RSM is used to solve local or remote jobs on mapped network drives, you may need to modify security settings to allow code to execute from those drives, because code libraries may be copied to working directories within the project. You can modify these security settings from the command line using the CasPol utility, located under the .NET Framework installation:

For a 32-bit machine: C:\Windows\Microsoft.NET\Framework\v<version>
For a 64-bit machine: C:\Windows\Microsoft.NET\Framework64\v<version>

In the example below for a 32-bit machine, full trust is opened to files on a z:\ mapped drive to enable software to run from that share:

C:\Windows\Microsoft.NET\Framework\v<version>\CasPol.exe -q -machine -ag 1 -url "file://z:/*" FullTrust -name "RSM Work Dir"

In the example below for a 64-bit machine, full trust is opened to files on a shared network drive to enable software to run from that share:

C:\Windows\Microsoft.NET\Framework64\v<version>\CasPol.exe -q -machine -ag 1 -url "file://fileserver/sharename/*" FullTrust -name "Shared Drive Work Dir"
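If you want to confirm what the commands above actually changed, CasPol can also list the machine-level code groups. This is a sketch; as above, the version folder is whatever .NET version is installed on your machine:

C:\Windows\Microsoft.NET\Framework64\v<version>\CasPol.exe -m -lg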

For more information on configuring RSM Clients and Compute Servers using a network installation, please refer to Network Installation and Product Configuration.

Temporary Directory Permissions on Windows Clusters

Some applications executed through RSM (e.g., Fluent) require read/write access to the system temporary directory on local compute nodes. The usual location of this directory is C:\WINDOWS\Temp. All users should have read/write access to that directory on all nodes in the cluster to avoid job failure due to temporary file creation issues.

Firewall Issues

If you have a local firewall turned on for the server and/or RSM Client machines, you will need to add two ports to the exceptions list for RSM, as follows (a scripted example follows this list):

Add port 8150 for the Compute Server service (Ans.Rsm.SHHost.exe).
Add port 9150 for the Manager service (Ans.Rsm.JMHost.exe).
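As one possible way to script these exceptions on recent Windows versions, the netsh advfirewall commands below open the two ports. The rule names are arbitrary, and your IT policy may require a different mechanism:

netsh advfirewall firewall add rule name="RSM Compute Server" dir=in action=allow protocol=TCP localport=8150
netsh advfirewall firewall add rule name="RSM Manager" dir=in action=allow protocol=TCP localport=9150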

Enabling or Disabling Microsoft User Account Control (UAC)

To enable or disable UAC:

1. Open Control Panel > User Accounts > Change User Account Control settings.
2. On the User Account Control settings dialog, use the slider to specify your UAC settings:
Always Notify: UAC is fully enabled.
Never Notify: UAC is disabled.

Disabling UAC can cause security issues, so check with your IT department before changing UAC settings.

Internet Protocol version 6 (IPv6) Issues

When running a cluster, you will receive an error if connection to a remote Manager is not possible because the Manager has not been configured correctly as localhost. If you are not running a Microsoft HPC cluster, test this by opening a command prompt and running the command ping localhost. If you get an error instead of the IP address:

1. Open the C:\Windows\System32\drivers\etc\hosts file.
2. Verify that localhost is not commented out (with a # sign in front of the entry). If localhost is commented out, remove the # sign.
3. Comment out any IPv6 information that exists.
4. Save and close the file.

If you are running on a Microsoft HPC cluster with Network Address Translation (NAT) enabled, Microsoft has confirmed this to be a NAT issue and is working on a resolution.

Multiple Network Interface Cards (NIC) Issues

When multiple NIC cards are used, RSM may require additional configuration to establish the desired communications between tiers (i.e., the RSM Client, Manager, and Compute Server machines). The most likely scenario is that the issues originate with the Manager and/or Compute Server. First, try configuring the Manager and/or Compute Server machine(s):

1. In a text editor, open the Ans.Rsm.JMHost.exe.config file (Manager) and/or the Ans.Rsm.SHHost.exe.config file (Compute Server). These files are located in Program Files\ANSYS Inc\v150\RSM\bin.
2. In both files, add the machine's IP address to the TCP channel configuration. Substitute the machine's correct IP address for the value of machinename. The correct IP address is the address seen in the output of a ping from a remote machine to the Fully Qualified Domain Name (FQDN).

<channel ref="tcp" port="9150" secure="false" machinename="ip_address">

3. Save and close both files.
4. Restart the following services: ANSYS JobManager Service V15.0 and ANSYS ScriptHost Service V15.0.

For Windows: On your Administrative Tools or Administrative Services page, open the Services dialog. Restart the services by right-clicking on the service and selecting Restart.

For Linux: Log into a Linux account with administrative privileges and ensure that Ans.Rsm.* processes are not running. Open a terminal window in the [RSMInstall]/Config/tools/linux directory and run the following command:

./rsmmanager restart

If configuring the Manager and/or Compute Server does not resolve the problem, the RSM Client machine may have multiple NICs and require additional configuration. For example, a virtual NIC used for a VPN connection on an RSM Client machine can cause a conflict, even if not connected. If configuring the Manager and/or Compute Server machines doesn't work, configure the multi-NIC RSM Client machine:

1. Using a text editor, create a file named Ans.Rsm.ClientApi.dll.config in Program Files\ANSYS Inc\v150\RSM\bin. (If this file does not exist, RSM uses a default configuration.)
2. Copy and paste the text below into Ans.Rsm.ClientApi.dll.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.runtime.remoting>
    <application>
      <channels>
        <channel ref="tcp" port="0" secure="true" machinename="ip_address">
          <clientProviders>
            <formatter ref="binary" typeFilterLevel="Full"/>
          </clientProviders>
        </channel>
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>

3. Replace the contents of ip_address with a valid IP address.
4. Save and close the file.

RSH Protocol Not Supported

The RSH protocol is not officially supported as of release 14.5 and will be completely removed from future releases. Windows Server 2008, Windows Vista, and Windows 7 do not include the RSH client.

SSH File Size Limitation

The PuTTY SSH/SCP client has file size limitations that RSM circumvents by splitting and joining very large files (greater than 2 GB). The Windows Compute Server and the Linux machine may also have file system limitations that are beyond the control of RSM. You must configure the Linux machine with large file support, and the Windows file system must be NTFS in order to transfer files larger than approximately 2 GB.

If any job output file is not successfully retrieved, all job output files are left on the Linux machine. Consult the job log in the RSM Job Log view to learn the temporary directory name used to store the job files. You can then manually retrieve the files from the temporary directory (using Samba or a similar application) so the results can be loaded back into your ANSYS client application.

Appendix A. ANSYS Inc. Remote Solve Manager Setup Wizard

The ANSYS Remote Solve Manager Setup Wizard is designed to guide you through the process of setting up and testing Remote Solve Manager (RSM). Once the setup and testing are complete, you will be able to use RSM to submit jobs from Workbench to be executed on remote machines or clusters.

The following sections contain detailed information on using the ANSYS Remote Solve Manager Setup Wizard:
A.1. Overview of the RSM Setup Wizard
A.2. Prerequisites for the RSM Setup Wizard
A.3. Running the RSM Setup Wizard
A.4. Troubleshooting in the Wizard

A.1. Overview of the RSM Setup Wizard

The RSM Setup Wizard can help you to configure all the machines that will be part of your RSM Layout (the actual physical configuration of machines to be used for initiating, queuing, and solving jobs). It allows you to perform the following tasks:

Automate the Workbench setup before starting RSM services for certain cluster scenarios. As part of the optional auto-configuration process, the wizard performs the following setup tasks to ensure that Workbench is available to each node in the cluster:

If it does not already exist, create a share to the cluster head node Workbench installation directory.

Run ANSYS Workbench configuration prerequisites. In order for the wizard to install prerequisites, UAC must be disabled on any cluster node where prerequisites are missing and need to be installed. Installed prerequisites include the MS .NET Framework 4.0 Redistributable and the MS VC Redistributable x64. (If needed, packages from previous versions, such as earlier MS VC Redistributable x64 packages, could be included by editing the AutoConfig.AllPrereqInstaller entry in the Ans.Rsm.Wizard.exe.config file.) Once you have completed the RSM setup, it is recommended that you reboot the machine(s) on which the MS .NET Framework 4.0 and/or MS VC Redistributable x64 have been installed.

Set up Workbench environment variables on each node in the cluster.

Start RSM services locally or remotely for the Manager and Compute Server (i.e., both on the local machine on which you are currently running the wizard and on remote machines).

It is best to perform these tasks from the Manager/Compute Server machine.

Configure machines locally and/or remotely to serve as an RSM Client, the Manager, or a Compute Server. It is best to perform these tasks from the Manager/Compute Server machine.

Integrate RSM with the following third-party job schedulers (without requiring job script customization):
LSF (Windows and Linux)
PBS (Linux only)
Microsoft HPC
SGE (UGE)

Configure a cluster. It is best to perform cluster configuration tasks from the machine that is the head node of the cluster. When you indicate that you are configuring the cluster head node, the wizard will walk you through the steps to configure it as both Manager and Compute Server and to configure all the compute nodes in the cluster.

Create a Project Directory, Working Directory, and, where applicable, a Shared Cluster Directory for the storage of project inputs, outputs, solver files, and results. Options for allowing the wizard to select directory paths and to automatically configure the Working Directory and Shared Cluster Directory are available. Automation of these steps helps to ensure consistency for your RSM setup.

Define one or more Queues that will receive jobs from the Manager and send the jobs to one or more Compute Servers.

Create primary accounts or alternate accounts. Alternate accounts may be required to allow access to all the Compute Servers to which jobs will be sent.

Test the Compute Servers to ensure that your RSM configuration is working properly. When there are issues, the wizard will attempt to diagnose them and provide you with information on the problem. If the wizard cannot diagnose the problem, it will offer suggestions for troubleshooting outside of the wizard. It is best to perform testing from the RSM Client machine. For details, see Step 3: Test Your RSM Configuration (p. 109).

Note that there are a number of setup tasks that the RSM Setup Wizard cannot perform. The wizard cannot:

Start Compute Server or Manager services from a network installation. You must start services locally on the Compute Server or Manager machine before running the wizard.

Perform certain tasks without correct permissions. For details on necessary Windows and Linux permissions, see Prerequisites for the RSM Setup Wizard (p. 105).

Detect file permissions issues in the Compute Server or Manager until the final step of the setup.

Perform some cluster setup tasks and checks remotely from the Manager or Compute Server machine; these tasks must be performed locally on each of the machines in the cluster.

Create parallel environments (PEs), which are required for SGE (UGE) clusters.

Diagnose Test Compute Server configuration issues from a machine other than the RSM Client.

Correct some connection problems, typically issues related to hardware, firewalls, IPv6, multiple NICs, etc. For details on these issues, see RSM Troubleshooting (p. 99).

A.2. Prerequisites for the RSM Setup Wizard

1. RSM must already be installed on all the machines to be included in the RSM Layout.

For a machine that will serve as an RSM Client or a Compute Server (in any combination), the installation of ANSYS Workbench, RSM, and client applications is required.

For a machine that will serve solely as the Manager, the installation of RSM is required (so it can connect with the RSM Client and Compute Server machines). However, if it will also serve as an RSM Client or Compute Server, you must install ANSYS Workbench and client applications as well.

RSM and Workbench are both installed by default as product components of most ANSYS, Inc. products. RSM can also be installed independently as a standalone package.

For cluster configurations, when you configure the head node of the cluster as a Manager, it will also be configured as a Compute Server. The compute nodes in the cluster will be configured via the head node.

2. Before starting the wizard, exit Workbench and verify that no RSM jobs are running.

3. Different privileges are necessary for different parts of the setup process. Verify that you have the appropriate privileges for the setup tasks you will perform.

For Windows, administrative privileges means that the user either has Windows administrative privileges on the Manager machine, launches the wizard via the right-click Run as administrator menu option, or is added to the RSM Admins user group. For RSM Admins privileges, you must create the RSM Admins user group and add users to it manually. For instructions, see RSM User Accounts and Passwords (p. 45).

For Linux, administrative privileges can be root or non-root. Non-root administrative privileges means that the user is added to the rsmadmins user group. As a member of this group, you have administrative, non-root permissions, which are necessary for certain parts of the setup. When a root user starts RSM services, if the rsmadmins user group and rsmadmin account do not already exist, the rsmadmins group is automatically created on the Manager machine and an rsmadmin account is added to the group. This account can then be used to add additional users to the group.

For Linux, if the user prefers to start the non-daemon services from the RSM Setup Wizard (as opposed to installing and starting the services as daemons with a root account), then a user account from the rsmadmins user group must be used. Note that if the RSM services are not installed as daemons, the rsmadmins user group is not automatically created. Therefore, in order to start non-daemon services via the wizard, prior to running the wizard your IT department must:

Create the rsmadmins user group manually

Add the user(s) who will be running/starting non-daemon services to the rsmadmins group

Starting RSM services:

For Windows, you must have administrative privileges. To start RSM services when UAC is enabled on Windows 7, you must use the right-click Run as administrator menu option to launch the wizard. For instructions on enabling or disabling UAC, see RSM Troubleshooting (p. 99).

For Linux, you must have either root user or rsmadmins (non-root administrative) privileges. If you start the services with an rsmadmins non-root user account, the service will be run by that account in non-daemon mode. Root user privileges are required for starting RSM services as daemons. If you start RSM services as daemons, any non-daemon services will be killed.

Configuring new or existing machines, queues, and accounts:

For Windows, you must have administrative privileges.

For Linux, you must have rsmadmins (non-root administrative) privileges. (You cannot perform this step with root permissions.)

To test the final RSM configuration, you must be logged in as a user who will be sending jobs from the RSM Client:

For Windows, you can have either administrative or non-administrative privileges.

For Linux, you can have either rsmadmin (non-root administrative) or non-administrative privileges.

4. In most cluster scenarios, client users (other than the user who set up the cluster) must cache their password with the cluster prior to using the wizard for RSM configuration testing. The exceptions are as follows:

For MS HPC clusters, if you are logged in with administrative privileges, the wizard asks you to cache and verify your password in order to use the wizard's auto-configuration functionality.

For LSF Windows clusters, password caching via the wizard has been disabled for security reasons. You must cache your password with the LSF Windows cluster before logging into the head node and starting the wizard.

5. If you are running an SGE (UGE) cluster, parallel environments (PEs) must have already been defined by your cluster administrator. For more information, see Compute Server Properties Dialog: Cluster Tab (p. 63).

6. If you are running a Microsoft HPC cluster with multiple network interface cards (NIC), additional configuration is required to establish communications between the RSM Client and Compute Server machines. For more information, see RSM Troubleshooting (p. 99).

A.3. Running the RSM Setup Wizard

This section divides running the RSM Setup Wizard into the following steps:
A.3.1. Step 1: Start RSM Services and Define RSM Privileges
A.3.2. Step 2: Configure RSM
A.3.3. Step 3: Test Your RSM Configuration

A.3.1. Step 1: Start RSM Services and Define RSM Privileges

In this part of the setup, you will start RSM services and define RSM administrative privileges for yourself or other users.

Required Privileges

For Windows, you must either have Windows administrative privileges on the Manager machine, have RSM Admin privileges, or launch the wizard via the right-click Run as administrator menu option. To start RSM services when UAC is enabled on Windows 7, you must use the right-click Run as administrator menu option to launch the wizard.

For Linux, you must have either root user or rsmadmins (non-root administrative) privileges. (To start RSM services as daemons, root user privileges are required. In some cases, these tasks may need to be performed by a member of your IT department.)

1. Log into the machine that will serve as the Solve Manager. If you are configuring a cluster, this is the head node of the cluster.

2. Launch the wizard:

For Windows, select Start > All Programs > ANSYS 15.0 > Remote Solve Manager > RSM Setup Wizard. Alternatively, you can navigate to the [RSMInstall]\bin directory and double-click Ans.Rsm.Wizard.exe.

For Linux, open a terminal window in the [RSMInstall]\Config\tools\linux directory and run rsmwizard.

For a quick-start guide on using the wizard, see the Readme file. To access this file:

For Windows: Select Start > All Programs > ANSYS 15.0 > Remote Solve Manager > Readme - RSM Setup Wizard

For Linux: Navigate to the [RSMInstall]\Config\tools\linux directory and open rsm_wiz.pdf.

3. Specify whether you are configuring the head node of a cluster. If yes, specify the cluster type.

If yes, and it is a Windows (MS HPC or LSF) cluster, indicate whether you want to automate the setup to ensure that Workbench is available to each node in the cluster. UAC must be disabled on any cluster node where ANSYS Workbench prerequisites are missing and need to be installed.

If no, verify prerequisites when prompted and then specify the service role(s) for which the local machine is being configured.

4. Start RSM services on the local machine. If the necessary services haven't already been started, the wizard will start them when you click the Start Services button.

5. Provide RSM administrative privileges to users as necessary.

For Windows, to provide users with RSM administrative privileges, you must manually create an RSM Admins user group and add users to this group.

For Linux, when the RSM services are started by running the wizard with root user privileges, if the rsmadmins user group and an rsmadmin account do not already exist, the group is automatically created on the Manager machine and an rsmadmin user account is created in the new user group. This account has administrative, non-root privileges and can be used to perform RSM administrative and configuration tasks via the wizard on Linux. On Linux, to provide additional users with RSM administrative privileges, you must add them to the rsmadmins user group.

6. If you are logged in with:

Windows administrative or RSM Admin permissions, you can continue the RSM setup process via your current wizard session.

Linux root permissions, there are no further steps that you can perform with the wizard. All further wizard configurations must be performed by a user with rsmadmin permissions. You can close the wizard now via the exit button and log back in with rsmadmins permissions to continue the setup.

A.3.2. Step 2: Configure RSM

In this part of the setup, you will configure the Manager and Compute Server(s) in your RSM Layout. If you are using a cluster, you will perform the configurations that the wizard can handle. You will also define queues and accounts.

Required Privileges

For Windows, you must have administrative permissions. For Linux, you must have rsmadmins (non-root administrative) privileges.

If you are on a Windows Manager and continuing your existing wizard session, you have already performed the first three steps. Skip to step 4.

1. Log into the machine that will serve as the Manager. If you are configuring a cluster, this is the head node of the cluster.

2. Launch the wizard as described in Step 1: Start RSM Services and Define RSM Privileges (p. 107).

3. Specify whether you are configuring the head node of a cluster, as described in Step 1: Start RSM Services and Define RSM Privileges (p. 107).

4. Follow the steps provided by the wizard to perform the following setup tasks. The tasks will vary according to your RSM Layout.

Configure the Manager(s)
Configure the Compute Server(s)
Configure Queues
Create Accounts

5. When the configuration is complete, exit the wizard.

A.3.3. Step 3: Test Your RSM Configuration

In this part of the setup, you will accomplish two tasks: you will specify the Manager to be used, and you will test the final RSM configuration before submitting jobs. You should perform these tasks by logging into a machine that will serve as an RSM Client. Under certain circumstances, testing can also be performed from a Manager machine with remote access to the RSM Client. However, testing from the Manager may prevent the wizard from performing some of the setup tasks, such as those for cluster configuration.

Required Privileges

For Windows, you can have either administrative or non-administrative permissions. For Linux, you must have non-root permissions.

1. Once the setup is finished, log into a machine that will be an RSM Client. You must log in under an account that will be used to send jobs via RSM.

In most cluster scenarios, client users (other than the user who set up the cluster) must cache their password with the cluster prior to using the wizard for RSM configuration testing. The exceptions are as follows:

For MS HPC: If you are logged in with administrative privileges, the wizard asks you to cache and verify your password in order to use the wizard's auto-configuration functionality.

For LSF: For security reasons, password caching has been disabled for Windows LSF clusters. You must cache your password with the Windows LSF cluster before logging into the head node and starting the wizard. For instructions on caching the password, see Manually Running the Password Application in the Remote Solve Manager User's Guide.

2. Launch the wizard as described in Step 1: Start RSM Services and Define RSM Privileges (p. 107).

3. Follow the steps in the wizard as before, identifying your local machine as an RSM Client.

4. When you reach the Test Compute Servers step, select the Queue and Compute Servers to be tested and click Start Test.

5. If the tests pass, you can exit the wizard. If the tests fail, click the Diagnose Failure button for information on the reason for the failure. If the wizard specifies what caused the error, correct the problems identified in the error message and retry the test. If the wizard is unable to identify the exact problem, it will suggest possible troubleshooting steps. For details, see RSM Troubleshooting (p. 99).

A.4. Troubleshooting in the Wizard

This section contains information on the sorts of problems that the wizard can diagnose. The wizard can potentially diagnose the following problems:

Manager problems, such as:
RSM services have not been started
File send or compression errors
Script or job errors

Compute Server problems, such as:
Account authentication issues
Job code compilation or load failures
Missing files

Job Script problems, such as:
AWP_ROOT environment variable undefined
Remote command execution errors
Command runtime exceptions
Script class exceptions
Shared directory path creation failure

Project Directory or Working Directory creation or path issues

Cluster-specific problems, such as:
Invalid cluster type
Unavailable cluster nodes
AWP_ROOT environment variable undefined on execution node
Queue issues (RSM queue does not exist on cluster, queue list unavailable)
Execution node directory issues (failure to create Working Directory, failure to locate cluster shared directory)
Cluster control file reading errors

SSH-specific problems, such as:
Authentication failures (issues with public, private, or host keys)
KEYPATH errors (environment variable undefined, KEYPATH file missing)
Proxy machine name undefined
Host nonexistent or unavailable
Network error

Client API problems, such as:
File transfer exceptions (upload, download)
File compression exceptions (compression, decompression)
Manager Project Directory unshared
Manager project file missing

For instructions on addressing problems that the wizard cannot diagnose, see RSM Troubleshooting (p. 99) and view the following entries:
Firewall Issues (p. 100)
Multiple Network Interface Cards (NIC) Issues (p. 101)
Internet Protocol version 6 (IPv6) Issues (p. 100)


Appendix B. Integrating Windows with Linux using SSH/SCP

RSM supports using SSH/SCP (Secure Shell/Secure Copy) in custom job scripts. The built-in job scripts for RSM job submission have been tested using the PuTTY SSH client.

SSH/SCP is used for integrating a Windows Manager with a Linux Compute Server. The Manager and the Compute Server proxy (the Compute Server defined on the General tab of the Compute Server Properties dialog) are typically on the same Windows machine. The actual Compute Server is on a remote Linux machine (defined on the SSH tab of the Compute Server Properties dialog). Jobs are sent via the SSH/SCP protocol from the Windows Compute Server proxy to the actual Linux Compute Server for processing.

Communications to the Compute Server can be configured either for a single Linux machine or for a Linux cluster. Note that this section focuses primarily on setting up SSH for connection to a single remote Linux Compute Server. If you are using SSH with a Linux LSF or PBS cluster, you can use the cluster setup instructions contained in the Configure PuTTY SSH (p. 114) section of this appendix. Then, for detailed instructions on configuring the LSF or PBS cluster Compute Server, refer to the cluster configuration instructions in Appendix C (p. 121).

SSH is not a recommended communication protocol and should be used only if it is required by your IT policy. For ease of configuration and enhanced performance, native RSM is the recommended communication protocol. Before proceeding with this configuration, see Configuring RSM to Use a Remote Computing Mode for Linux (p. 12) and Configuring Native Cross-Platform Communications (p. 12) for more information.

Before You Begin

These instructions assume the following:

ANSYS Workbench and RSM have been installed on the Windows machine.

RSM has been installed on both the Windows and Linux machines.

PS, AWK, GREP, LS, and the ANSYS150 command must exist on the Linux machine.

You are able to install and run ANSYS, Inc. products, including Licensing, on both Windows and Linux systems. For information on product and licensing installations, go to the Downloads page of the ANSYS Customer Portal.

SSH Job Limitations

File Size Limitation: The PuTTY SSH/SCP client has file size limitations that RSM circumvents by splitting and joining very large files (greater than 2 GB).

The Windows Compute Server and the Linux machine may also have file system limitations that are beyond the control of RSM. You must configure the Linux machine with large file support, and the Windows file system must be NTFS in order to transfer files larger than approximately 2 GB.

If any job output file is not successfully retrieved, all job output files are left on the Linux machine. Consult the job log in the RSM Job Log view to learn the temporary directory name used to store the job files. You can then manually retrieve the files from the temporary directory (using Samba or a similar application) so the results can be loaded back into your ANSYS client application.

High Maximum Number of Jobs Value: When you use SSH as the protocol to run RSM jobs and set a high maximum number of jobs, some jobs could fail with a message such as "Server unexpectedly closed network connection". This happens because too many SSH calls are made simultaneously from different jobs. In this case, you may need to reduce the maximum number of jobs that can run concurrently. To do so, go to the General tab of the Compute Server Properties dialog and lower the value of the Maximum Number of Jobs field.

B.1. Configure PuTTY SSH

In order to send RSM jobs to a remote Linux machine using SSH, you must configure SSH to allow access from a Windows machine. SSH configuration involves creating a cryptographic key on the Windows Manager machine and placing the public portion of the key on the Linux machine. SSH configuration must be completed by your IT administrator. This section provides instructions for a PuTTY SSH implementation. Other SSH implementations are possible, and your IT administrator can determine which one is best for your site.

Download and install PuTTY.

Download and install PuTTY from the PuTTY download page. If the page cannot be found, perform a web search for "PuTTY".

Create a cryptographic key.

Create a cryptographic key using PuTTYgen (puttygen.exe) as follows:

1. On the PuTTY Key Generator dialog, click Generate.

2. Change the Key comment to include your machine name and Windows username.

3. Do not enter a key passphrase.

4. Save the private key file without a passphrase. For example, <drive>:\Program Files\Putty\id_rsa.ppk. If you use a passphrase, jobs will hang at a prompt for you to enter it. Be sure to secure the private key file by some other means. For example, if only you will be using the key, save it to a location where only you and administrators have access to the file, such as the My Documents folder. If multiple users share the same key, allow the owner full control, then create a group and give only users in that group access to this file.

5. If your Linux cluster uses OpenSSH, convert the key to OpenSSH format by selecting Conversions > Export OpenSSH key in the PuTTY Key Generator dialog.

6. Move the public portion of the key to the Linux machine. This requires you to edit the ~/.ssh/authorized_keys file on the Linux machine, as follows (a command line sketch follows this list):

Open an SSH session to one of your cluster nodes, cd into ~/.ssh, and open the authorized_keys file in your favorite editor (for example, vi or Emacs).

Copy all the text from the box under Public key for pasting and paste it into ~/.ssh/authorized_keys. All of this text should be one line. If the authorized_keys file does not exist, create one. Alternately, paste the text into a text file and move that file to the Linux machine for editing.
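As a command line sketch of step 6, assuming the public-key text has been saved to a file named pubkey.txt on the Linux machine, the following appends it to authorized_keys and sets the permissions OpenSSH expects:

mkdir -p ~/.ssh
cat pubkey.txt >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys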

Modify system environment variables. (A command line alternative for the KEYPATH step appears after these steps.)

1. Open the Windows System Properties dialog.

2. On the Advanced tab, select Environment Variables. The Environment Variables dialog appears.

3. On the Environment Variables dialog, locate the Path variable in the System variables pane.

4. Select the Path variable and then click the Edit button. The Edit System Variable dialog appears.

5. Add the PuTTY install directory to the Variable value field (for example, C:\Program Files\putty) and then click OK.

6. In the System variables pane, click the New button. The New System Variable dialog appears.

7. In the New System Variable dialog, create a new environment variable named KEYPATH with a value containing the full path to the private key file (for example, <drive>:\Program Files\Putty\id_rsa.ppk). Use a user variable if the key file is used only by you. Use a system variable if other users share the key file. For example, if a Windows XP user has a key file in My Documents, the variable value should be %USERPROFILE%\My Documents\id_rsa.ppk (this expands to <drive>:\Documents and Settings\<user>\My Documents\id_rsa.ppk).

8. Click OK.

9. Reboot the computer for the environment changes to take effect.
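Alternatively, on systems that provide the setx utility, the KEYPATH variable can be created from a command prompt rather than through the dialogs. The path below is the example path from step 7 and should be adjusted to your key's location (add /M to create a system variable instead of a user variable, which requires an administrative prompt):

setx KEYPATH "C:\Program Files\Putty\id_rsa.ppk"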

Perform an initial test of the configuration.

1. Run the following from the command prompt (the quotes around %KEYPATH% are required):

plink -i "%KEYPATH%" unixlogin@unixmachinename pwd

2. When prompted by plink:

If plink prompts you to store the key in cache, select Yes.
If plink prompts you to trust the key, select Yes.

B.2. Add a Compute Server

Underneath the Manager node in the RSM tree view, right-click the Compute Server node and select Add. The Compute Server Properties dialog displays. See Adding a Compute Server (p. 55) for more detailed information.

General Tab

The General tab is used to set the properties of the Windows Compute Server. On the General tab, set properties as described below.

For Display Name, enter a descriptive name for the Windows Compute Server.

Set Machine Name to the network machine name of the Windows Compute Server. If the Manager and Compute Server will be on the same Windows machine, enter localhost. In this example, the Manager and the Compute Server are on the same machine.

For Working Directory Location, select Automatically Determined to allow the system to determine the location of the Working Directory; in this case, you do not need to enter a path and the property is disabled. Alternatively, you can select User Specified to specify the location of the Working Directory yourself.

Enter the path to your Working Directory if you've opted to specify the location. If the location is determined by the system, this property is blank and disabled.

Select Use SSH protocol for inter- and intra-node communications (Linux only) so that RSM and solvers will use SSH for inter-node and intra-node communications on Linux machines. This setting applies to all Linux Compute Servers. When ANSYS Fluent, ANSYS CFX, ANSYS Mechanical, and ANSYS Mechanical APDL are configured to send solves to RSM, their solvers will use the same RSH/SSH settings as RSM.

See Compute Server Properties Dialog: General Tab (p. 57) for more detailed information on available properties.

Cluster Tab

If you are using SSH to connect to a single remote Linux Compute Server, you do not need to fill out the Cluster tab; you can skip it and go straight to the SSH tab. If you are using SSH to connect to a Linux cluster, however, you must fill out the Cluster tab. For instructions, see Appendix C.

SSH Tab

The SSH tab is used to configure SSH communications between the Windows Compute Server (defined on the General tab) and a remote Linux Compute Server (defined here). On the SSH tab, set properties as described below.

Select the Use SSH check box. This enables the rest of the properties on the tab.

For Machine Name, enter the hostname or IP address of the Linux Compute Server.

For the Linux Working Directory property:

Enter the path for your Linux Working Directory.

  - If the File Management property on the Cluster tab is set to Use Execution Node Local Disk, set the Linux Working Directory path to a local disk path (for example, /tmp). The full RSM-generated path (for example, /tmp/abcdef.xyz) will exist on the machine specified on that tab, as well as on the node(s) that the cluster software selects to run the job.
  - If the File Management property is set to Reuse Shared Cluster Directory, the Linux Working Directory path is populated with the path specified for Shared Cluster Directory on the Cluster tab and cannot be edited. This is where the cluster job runs.
- For the Linux Account property: If the Windows and Linux account names are the same (for example, DOMAIN\testuser on Windows and testuser on Linux), no additional configuration is required. If the account names are different, enter the name of the account used to log into the remote Linux machine. This Linux account is an alternate account that allows you to send jobs from the primary Windows account on the RSM Client and run them under the alternate account on a remote Linux Compute Server. Both accounts are defined in the RSM Accounts dialog. For more information, see RSM User Accounts and Passwords (p. 45).

See Compute Server Properties Dialog: SSH Tab (p. 67) for more detailed information on available properties.
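If you specified an alternate Linux Account, you can repeat the earlier plink test using that account to confirm that the key pair is accepted for it as well. This is a quick check, where linuxaccount is a placeholder for the account name entered above:

plink -i "%KEYPATH%" linuxaccount@unixmachinename pwd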

Test the Compute Server configuration.

On the Linux Compute Server machine, ensure that the ANSYS product environment variable AWP_ROOT150 is set to the location of your ANSYS product installation. This is done by adding the environment variable definition to your .cshrc (C shell) or .bashrc (bash shell) resource file, as shown in the sketch below.

To test the Compute Server configuration, right-click the name of the Compute Server in the tree view and select Test Server. This runs a test job using the settings provided. The Job Log view displays a log message that shows whether the test finished or failed. If the test finishes, you can successfully run jobs on the Compute Server.
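For reference, the AWP_ROOT150 definition mentioned above might look like the following in each resource file. This is a minimal sketch that assumes the default installation path /ansys_inc/v150; adjust the path to match your installation.

# In ~/.bashrc (bash shell):
export AWP_ROOT150=/ansys_inc/v150

# In ~/.cshrc (C shell):
setenv AWP_ROOT150 /ansys_inc/v150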

Appendix C. Integrating RSM with a Linux Platform LSF, PBS, or SGE (UGE) Cluster

The following sections divide the setup and configuration process for integrating RSM with a Linux-based Platform LSF (Load Sharing Facility), PBS (Portable Batch System), or SGE/UGE (Sun Grid Engine) cluster into sequential parts. The sequential parts are followed by general integration details.

Before You Begin

These instructions assume the following:

- Both the Manager and Compute Server machines are set up on the network.
- An LSF, PBS, or SGE (UGE) cluster has been established and configured.
- You are not using the SSH protocol but instead are using native RSM mode. For information on native RSM, see Configuring RSM to Use a Remote Computing Mode for Linux (p. 12). If you will be using SSH for Windows-Linux communications, see Appendix B for SSH setup instructions; then refer back to this appendix for instructions on configuring RSM to send jobs to a Linux LSF, PBS, or SGE (UGE) cluster.
- You have the machine name of the LSF, PBS, or SGE (UGE) submission host.
- RSM has been installed on the LSF, PBS, or SGE (UGE) submission host.
- If you are using an SGE (UGE) cluster, parallel environments have already been defined by your cluster administrator.
- You are able to install and run ANSYS, Inc. products, including Licensing, on both the Manager and Compute Server machines. For information on product and licensing installations, go to the Downloads page of the ANSYS Customer Portal. For further information about tutorials and documentation, go to the ANSYS Customer Portal.
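Before continuing, it can be useful to confirm that the scheduler's client commands are available on the submission host. This is a quick check that assumes the scheduler's bin directory is on the PATH; run only the line that applies to your cluster type:

which bsub    # LSF
which qsub    # PBS or SGE (UGE)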

C.1. Add a Linux Submission Host as a Compute Server

In this step, we'll add the Linux submission host as a Compute Server. Underneath the Manager node on the RSM tree view, right-click the Compute Server node and select Add. The Compute Server Properties dialog displays. See Adding a Compute Server (p. 55) for more detailed information.

General Tab

On the General tab, set properties as described below.

- If both the Manager and Compute Server services will be on the submission host of the cluster, set Machine Name to localhost. Otherwise, enter the network name of the submission host node that will run the Compute Server. In the example below, pbsclusternode1 is the name of the submission host being defined as the Compute Server.
- For Working Directory Location, select Automatically Determined to allow the system to determine the location; in this case, you do not enter a Working Directory path and the Working Directory property is blank and disabled. Alternatively, select User Specified and enter the path to your Working Directory; this directory must be shared and writeable for the entire cluster. (If the directory is not shared and instead is a local directory, it must exist on each compute node in the job scheduler queue.) A quick write test is sketched below.
- Select Use SSH protocol for inter- and intra-node communications (Linux only) so that RSM and solvers use SSH for inter-node and intra-node communications on Linux machines. This setting applies to all Linux Compute Servers. When ANSYS Fluent, ANSYS CFX, ANSYS Mechanical, and ANSYS Mechanical APDL are configured to send solves to RSM, their solvers use the same RSH/SSH settings as RSM.

See Compute Server Properties Dialog: General Tab (p. 57) for more detailed information on available properties.
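To confirm that a User Specified Working Directory is writeable cluster-wide, you can try creating and removing a file in it from an execution node. This is a minimal sketch, where /shared/rsm_work is a placeholder for your shared path:

# Run on an execution node, as the account that will run jobs
touch /shared/rsm_work/rsm_write_test && rm /shared/rsm_work/rsm_write_test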

Cluster Tab

On the Cluster tab, set properties as described below.

- Set Cluster Type to LSF, PBS, or SGE. Note that SGE (UGE) clusters are not supported for Polyflow.
- Enter the path for your Shared Cluster Directory. This is the central file-staging directory.
- For the File Management property:
  - Select Reuse Shared Cluster Directory if you want to store temporary solver files in the Shared Cluster Directory. When you select this option, the Shared Cluster Directory and the Working Directory are in the same location. As such, the Shared Cluster Directory path is populated to the Working Directory path property on the General tab, and the Working Directory Location property on the General tab is set to Automatically Determined. The Shared Cluster Directory is on the machine defined on the General tab; the RSM job creates a temporary directory here. Mount this directory on all execution hosts so that the LSF, PBS, or SGE (UGE) job has access.
  - Select Use Execution Node Local Disk if you want to store temporary solver files locally on the cluster execution node. When you select this option, the Shared Cluster Directory and the Working Directory are in different locations, so the path you entered for the Working Directory property on the General tab remains. The path specified for the Working Directory property specifies the local scratch space on the execution nodes; the path must exist on all nodes.
- If you set Cluster Type to SGE, enter names for the predefined Shared Memory Parallel and Distributed Parallel environments that will be used for parallel processing. These fields default to pe_smp and pe_mpi. To use one of the default names, your cluster administrator must create a parallel environment (PE) with the same name; a sketch follows this list. The default PE names can also be edited to match the names of your existing parallel environments.
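If the parallel environments do not exist yet, an SGE administrator can create and inspect them with qconf. This is a hedged sketch that assumes the default PE names and administrative access on the SGE master host; the allocation rules mentioned are typical choices, not requirements:

# Opens an editor to define the PE; for shared memory, allocation_rule is typically $pe_slots
qconf -ap pe_smp
# For distributed runs, allocation_rule is often fill_up or round_robin
qconf -ap pe_mpi
# List the parallel environments now defined
qconf -spl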

See Compute Server Properties Dialog: Cluster Tab (p. 63) for more detailed information on available properties.

When you are finished entering values on the Cluster tab, click the OK button. Since you are not using the SSH protocol, you can skip the SSH tab. (The Use SSH check box is deselected by default.)

Test the Compute Server configuration.

Test the configuration by right-clicking the newly added Compute Server in the tree view and selecting Test Server from the context menu. When the Compute Server is part of a cluster:

- When the server test is performed from a compute server node under a Queue parent node, the name of the parent queue is used by default.
- For cluster types other than Microsoft HPC, you must have already defined a queue in order to perform a server test from a compute server node under a Compute Servers parent node. If no queue is defined, you will receive an error.

For both of these scenarios, you can define a cluster queue and specify that it is used for subsequent server tests. To do so:

1. Right-click the compute server node and select Properties.
2. In the Compute Server Properties dialog, open the Cluster tab.
3. In the Job Submission Arguments (optional) field, enter the following argument:

-q queuename

4. Click OK.

If -q <queuename> is entered in the Job Submission Arguments (optional) field, this queue name is always used, even when you submit a job or perform a server test from a compute server node under a Queue parent node. In other words, the -q <queuename> argument takes priority in specifying the cluster queue to be used.

C.2. Complete the Configuration

Create a queue.

To complete the configuration, create a new queue and add the Compute Server to it. The RSM queue name must match the cluster queue name exactly (the cluster queue name can be found by executing the LSF bqueues command or the PBS qstat -Q command on the cluster head node, as sketched below). Jobs can now be submitted to this queue and then forwarded to the cluster queue for scheduling. See Creating a Queue (p. 53) for details.

Test the configuration.

Test the configuration by sending a job to RSM.
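For reference, the cluster queue names can be listed directly on the head node as follows (the SGE line is our addition for completeness):

bqueues      # LSF: list cluster queues
qstat -Q     # PBS: list cluster queues
qconf -sql   # SGE (UGE): list cluster queue names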

C.3. Additional Cluster Details

Adjusting the Maximum Number of Jobs

You can set the Max Running Jobs property on the General tab to the value appropriate to your cluster. Note that the RSM job could be in a Running state while LSF, PBS, or SGE (UGE) is not yet able to execute the job due to limited resources. Refer to the Job Log view to determine the job ID and state.

Integration Details

RSM essentially forwards the job to the LSF, PBS, or SGE (UGE) job scheduler. The RSM job builds and executes the job submission command of the scheduler you've selected in the Cluster Type drop-down on the Cluster tab of the Compute Server Properties dialog. The RSM job does not do any real work itself; rather, it monitors the status of the job it has submitted to the job scheduler, performing the actions listed below:

1. Reads the control file containing paths, inputs, and outputs.
2. Makes temporary directories on all nodes assigned to the job.
3. Copies inputs to the Working Directory of the execution host.
4. Runs the command (for example, the solver).
5. Copies outputs to the staging folder on the submission host.
6. Cleans up.

Appendix D. Integrating RSM with a Windows Platform LSF Cluster

The following sections divide the setup and configuration process for integrating RSM with a Windows-based Platform LSF (Load Sharing Facility) cluster into sequential parts. The sequential parts are followed by general integration details.

Before You Begin

These instructions assume the following:

- Both the Manager and Compute Server machines are set up on the network.
- An LSF cluster has been established and configured.
- You have administrative privileges on the LSF submission host of the cluster you are configuring. This is a node on the cluster for which the bsub and lsrcp (requires the RES service) commands are available.
- You have the machine name of the LSF submission host.
- RSM has been installed on the LSF submission host.
- You are able to install and run ANSYS, Inc. products, including Licensing, on both the Manager and Compute Server machines. For information on product and licensing installations, go to the Downloads page of the ANSYS Customer Portal. For further information about tutorials and documentation, go to the ANSYS Customer Portal.

Limitations

- LSF clusters for Windows are not supported for standalone Fluent, standalone CFX, or Polyflow.
- PBS clusters for Windows are not supported.
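A quick way to confirm that the LSF client commands are available on the submission host is to query the cluster from a command prompt. This is a minimal check that assumes the LSF bin directory is on the PATH:

rem Prints the LSF version, cluster name, and master host
lsid
rem Lists the cluster queues you can submit to
bqueues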

D.1. Add the LSF Submission Host as a Compute Server

Underneath the Manager node on the RSM tree view, right-click the Compute Server node and select Add. The Compute Server Properties dialog displays. See Adding a Compute Server (p. 55) for more detailed information on the properties available in the Compute Server Properties dialog.

General Tab

On the General tab, set properties as described below.

- If both the Manager and Compute Server services will be on the submission host of the cluster, set Machine Name to localhost. Otherwise, enter the network machine name of the submission host node that will run the Compute Server. In the example below, LSFClusterNode1 is the name of the submission host being defined as the Compute Server.
- For Working Directory Location, select Automatically Determined to allow the system to determine the location; in this case, you do not enter a Working Directory path and the Working Directory property is blank and disabled. Alternatively, select User Specified and enter the path to your Working Directory; this directory must be shared and writeable for the entire cluster.

See Compute Server Properties Dialog: General Tab (p. 57) for more detailed information on available properties.

Cluster Tab

On the Cluster tab, set properties as described below.

- Set Cluster Type to LSF.
- Enter the path for your Shared Cluster Directory. This is the central file-staging directory and must be accessible by all execution nodes in the cluster.
- For the File Management property:
  - Select Reuse Shared Cluster Directory if you want to store temporary solver files in the Shared Cluster Directory. When you select this option, the Shared Cluster Directory and the Working Directory are in the same location. As such, the Shared Cluster Directory path is populated to the Working Directory path property on the General tab, and the Working Directory Location property on the General tab is set to Automatically Determined. The Shared Cluster Directory is on the machine defined on the General tab. This directory should be accessible by all execution nodes and must be specified by a UNC (Universal/Uniform Naming Convention) path.
  - Select Use Execution Node Local Disk if you want to store temporary solver files locally on the cluster execution node. When you select this option, the Shared Cluster Directory and the Working Directory are in different locations, so the path you entered for the Working Directory property on the General tab remains. The path specified for the Working Directory property specifies the local scratch space on the execution nodes; the path must exist on all nodes.

See Compute Server Properties Dialog: Cluster Tab (p. 63) for more detailed information on available properties.

When you are finished entering values on the Cluster tab, click the OK button. Since you are not using the SSH protocol, you can skip the SSH tab. (The Use SSH check box is deselected by default.)

Test the Compute Server configuration.

Test the configuration by right-clicking the newly added Compute Server in the tree view and selecting Test Server from the context menu.

D.2. Complete the Configuration

Create a queue.

To complete the configuration, create a new queue and add the Compute Server to it. The RSM queue name must match the cluster queue name exactly (the cluster queue name can be found by executing the LSF bqueues command on the cluster head node). Jobs can now be submitted to this queue and then forwarded to the cluster queue for scheduling. See Creating a Queue (p. 53) for details.

Test the configuration.

Test the configuration by sending a job to RSM. The first time RSM launches an LSF Windows cluster job, you may receive the following error:

CMD.EXE was started with the above path as the current directory. UNC paths are not supported. Defaulting to Windows directory.

To resolve this issue, create a text file with the following contents and save it (for example, as commandpromptunc.reg). Setting DisableUNCCheck to 1 is what removes the UNC restriction; the other values shown are the standard defaults for these keys.

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Command Processor]
"CompletionChar"=dword:00000040
"DefaultColor"=dword:00000000
"EnableExtensions"=dword:00000001
"DisableUNCCheck"=dword:00000001

Next, run the following command on the head node and all of the compute nodes in the cluster:

regedit -s commandpromptunc.reg
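You can verify that the setting took effect on each node with reg query. Run this quick check under the same user account that RSM jobs will use, since the key lives under HKEY_CURRENT_USER:

reg query "HKCU\Software\Microsoft\Command Processor" /v DisableUNCCheck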

D.3. Additional Cluster Details

Adjusting the Maximum Number of Jobs

You can set the Max Running Jobs property on the General tab to the value appropriate to your cluster. Note that the RSM job could be in a Running state while LSF may not yet be able to execute the job due to limited resources. Refer to the Progress Pane to determine the job ID and state.

Integration Details

RSM essentially forwards the job to the LSF job scheduler. The RSM job builds and executes the job submission command of the scheduler you've selected in the Cluster Type drop-down on the Cluster tab of the Compute Server Properties dialog. The RSM job does not do any real work itself; rather, it monitors the status of the job it has submitted to LSF, performing the actions listed below:

1. Reads the control file containing paths, inputs, and outputs.
2. Makes temporary directories on all nodes assigned to the job.
3. Copies inputs to the Working Directory of the execution host.
4. Runs the command (for example, the solver).
5. Copies outputs to the staging folder on the submission host.
6. Cleans up.

Temporary Directory Permissions on Windows Clusters

Some applications executed through RSM (for example, Fluent) require read/write access to the system temporary directory on local compute nodes. The usual location of this directory is C:\WINDOWS\Temp. All users should have read/write access to that directory on all nodes in the cluster to avoid job failure due to temporary file creation issues.
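One way to grant that access is with icacls. This is a hypothetical one-liner to run on each compute node; it assumes the built-in Users group, so adjust the group and path to match your environment:

icacls C:\WINDOWS\Temp /grant Users:(OI)(CI)M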


Appendix E. Integrating RSM with a Microsoft HPC Cluster

The following sections divide the setup and configuration process for integrating RSM with a Windows-based Microsoft HPC (High-Performance Computing) cluster into sequential parts. The sequential parts are followed by additional information about working with an HPC cluster.

Before You Begin

These instructions assume the following:

- A Microsoft HPC cluster has been established and configured.
- You have administrative privileges on the head node of the HPC cluster you are configuring.
- You have the machine name of the HPC head node.
- You have already configured and verified communications between RSM and the HPC head node. See the HPC installation tutorials on the Downloads page of the ANSYS Customer Portal.
- RSM is installed on the HPC head node. This allows you to use both the Manager and Compute Server (also known as ScriptHost) services, or just the Compute Server service. If the latter is chosen, the Manager runs on the RSM Client machine or on a central, dedicated Manager machine.
- You are able to install and run ANSYS, Inc. products, including Licensing, on both Windows and Linux systems. For information on product and licensing installations, go to the Downloads page of the ANSYS Customer Portal. For further information about tutorials and documentation, go to the ANSYS Customer Portal.

E.1. Configure RSM on the HPC Head Node

1. In your RSM installation directory, navigate to C:\Program Files\ANSYS Inc\v150\RSM\bin.
2. Configure and start the RSM services on the head node by running the following command from the command prompt:

AnsConfigRSM.exe -mgr -svr

3. Set your RSM password. This is the password RSM will use to run jobs on the Compute Server.
4. Note that you need to update your RSM password whenever you update your password on the RSM Client machine. For details, see Working with Account Passwords (p. 48).
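After running AnsConfigRSM.exe, you can confirm that the RSM services were registered and started by listing the running Windows services whose display names contain ANSYS. This is a quick, hedged check; the exact service names vary by release:

rem Lists running services whose display names contain "ANSYS"
net start | findstr /i "ANSYS"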

E.2. Add the HPC Head Node as a Compute Server

Underneath the Manager node on the RSM tree view, right-click the Compute Server node and select Add. The Compute Server Properties dialog displays. See Adding a Compute Server (p. 55) for more detailed information on the properties available in the Compute Server Properties dialog.

General Tab

On the General tab, set properties as described below.

- Set Machine Name to localhost if both the Manager and Compute Server services will run on the head node of the cluster. Otherwise, enter the network name of the head node machine that will run the Compute Server. In the example below, HPCHeadNode is the network name of the head node being defined as the Compute Server.
- For Working Directory Location, select Automatically Determined to allow the system to determine the location; in this case, you do not enter a Working Directory path and the Working Directory property is blank and disabled. Alternatively, select User Specified and enter the path to your Working Directory; this directory must be shared and writeable for the entire cluster.

See Compute Server Properties Dialog: General Tab (p. 57) for more detailed information on available properties.

Cluster Tab

On the Cluster tab, set properties as described below.

- Set the Cluster Type property to Windows HPC. (This selection enables the rest of the properties on the tab and disables the SSH tab.)
- Enter the path for your Shared Cluster Directory. This is the central file-staging directory on the head node and must be accessible by all nodes in the cluster.
- For the File Management property:
  - Select Reuse Shared Cluster Directory if you want to store temporary solver files in the Shared Cluster Directory. When you select this option, the Shared Cluster Directory and the Working Directory are in the same location. As such, the Shared Cluster Directory path is populated to the Working Directory path property on the General tab, and the Working Directory Location property on the General tab is set to Automatically Determined.
  - Select Use Execution Node Local Disk if you want to store temporary solver files locally on the Compute Server machine. When you select this option, the Shared Cluster Directory and the Working Directory are in different locations, so the path you entered for the Working Directory property on the General tab remains. The path specified for the Working Directory property specifies the local scratch space on the execution nodes; the path must exist on all nodes.

Note that if you will be sending CFX jobs to a Microsoft HPC Compute Server, the Reuse Shared Cluster Directory option is always used, regardless of the File Management property setting.
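To confirm that the Shared Cluster Directory is reachable from every node, you can use the HPC Pack clusrun utility from the head node. This is a hedged sketch, where \\HPCHeadNode\Staging is a hypothetical UNC path standing in for your share:

clusrun dir \\HPCHeadNode\Staging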
