Flux 7.7, January 20, 2009


Flux Manual

Java Job Scheduler, File Transfer, Workflow, Business Process Management

Flux 7.7, January 20, 2009

Copyright Flux Corporation. All rights reserved. No part of this document may be copied without the express written permission of Flux. Flux is a registered trademark of Flux Corporation.

Questions? Our Technical Support department is here to help you. Telephone +1 (406) or email support@fluxcorp.com for help.

Table of Contents

Introduction to Flux: Features

Technical Specifications: Java Archive (JAR) Files Utilized by Flux; JDBC Database Drivers; Performance Metrics; Flux 7.7 Performance Metrics; Java Version End-of-life Information

Background: Architecture; Flux Clients; Flux with J2SE; Flux with In-Memory Databases; Flux with J2EE; The Cycles and States of the Flux Engine; Traversing Firewalls

Flow Chart Runs

Creating and Initializing Flux: Creating Flux; Creating Flux Using a Database; Creating Flux Using a Data Source; Creating Flux as an RMI Server; Looking up a Flux RMI Server Instance; Creating Flux Using a Configuration Object; Looking Up a Remote Flux Instance Using a Configuration Object; Creating Flux from a Configuration File; Looking Up a Remote Flux Instance from a Configuration File

XML Wrapper

Creating Flow Charts: Flow Charts; Listener Class Path; Triggers; Actions; Java Action; Sub Flow Chart Action; Trigger and Action JavaBean Properties; Flow Charts that Start with an Action; Flows; Conditional Flows; Conditional Flow Syntax; Else Flows; Error Flows; Looping Flows; Default Flow Chart Error Handlers; Runtime Data Map; Properties for Triggers and Actions; Substitution; Global Substitution; Date Substitution; Runtime Configuration Substitution

Running Flow Charts: Adding Flow Charts to the Engine; Adding Flow Charts at a High Rate of Speed; Flow Chart Order of Execution; Order of Execution Examples; Execution Time and Priority; Timestamps; Concurrency Throttling

Modifying Flow Charts

Core Triggers and Actions: Timer Trigger; One-Shot Timer; Recurring Timer; Business Interval; Delay Trigger; Java Action; Dynamic Java Action; RMI Action; Dynamic RMI Action; Process Action; Error Condition Syntax; Destroy On Signal; Process Actions and Agents; Running Process Action Commands as a Different User; For Each Number Action; Console Action; Manual Trigger; Null Action; Decision; Error Action

File Triggers and Actions: File Criteria; Include; Exclude; Regex Filter; Base Directory; Network Hosts; FTP Hosts; Secure FTP Host; FTP Over SSL; Supported FTP Servers in Flux; Universal Naming Convention; Zip File Criteria; File Triggers; File Exist Trigger; File Modified Trigger; File Not Exist Trigger; File Trigger Properties; Polling Delay; Stable Period; Active Window; Effect of Active Window on Polling Delay; Sort Order; File Trigger Results; File Actions; File Copy Action; Renamer; Preserve Directory Structure; File Move Action; File Create Action; File Delete Action; File Rename Action; For Each Delimited Text File Action; File Transform Action; For Each Collection Element Action; File Action Results

Custom Triggers and Actions: Adapter Factories; Custom Triggers; Custom Actions; Transient Variables; Using Custom Triggers and Actions in the Flux Standalone or Web-based Designer

J2EE Actions: EJB Session Action; Dynamic EJB Session Action; EJB Entity Action; Dynamic EJB Entity Action; JMS Action; Mail Action

Prescripts and Postscripts on Triggers and Actions: Overriding Action Results with Postscripts

Predefined Variables and Classes: Variables; Classes

Coordinating and Synchronizing Jobs Using Messaging and Checkpoints: Publishing Styles; Message Data and Properties; Message Selectors; Job Dependencies and Job Queuing; Translating JMS Messages into Flux Messages

Coordinating Actions Using Signals: Timeouts

Complex Dependencies and Parallelism with Splits and Joins: Multiple Start Actions; Complex Dependencies; Join Points; Conditional Join; Splits; Transaction Breaks

Business Process Management: Modeling a Business Process; Business Process Engine; Login to the Business Process Engine

Flux Agent: Architecture; Flux Engine - Agent Relationship; Creating a Flux Agent; States of a Flux Agent; Agents and the Flux Operations Console

Exception Handling: Controlling when the Database Exception Action Runs; Exception Condition Syntax

Security: The Definition of Groups, Roles, and Users; Securing a Local Flux Engine; Securing a Local Flux Engine for Remote Access; Using a Secured Remote Flux Engine

Runtime Configuration: Tree of Configuration Properties; Concurrency Throttles; Flow Chart Priority; Default Flow Chart Error Handler; Other Properties

Scheduling Options: Concurrency Throttles; Different Concurrency Throttles for Different Kinds of Jobs; Concurrency Throttles Based on Other Running Jobs; Pinning Jobs on Specific Flux Instances; Pausing and Resuming Jobs

Time Expressions: Cron-style Time Expressions; "Or" Constructs; For Loops; Using For-Loops to Condense Multiple Timer Triggers; Using the Cron Helper Object to Create For-Loops; Real World Examples from our Technical Support Department; Relative Time Expressions; More Examples; Another Real World Example from our Technical Support Department

Business Intervals

Forecasting

Caching: Local Caching; Networked Caching; Enabling Caching; Cache Size; Updating JBoss Cache; Caching Performance; Throughput Performance; Flow Chart Retrieval Performance; Split Performance; Sub Flow Chart Performance

Databases and Persistence: Database Deadlock; Database Properties; Initializing Database Connections; Oracle Notes; Oracle LOBs; DB2 Notes; DB2 Troubleshooting Tips; SQL Server Notes; Sybase Notes; MySQL Mappings and Settings; HSQL Mappings and Settings; PostgreSQL Mappings and Settings; Informix Mappings and Settings; Oracle Rdb Mappings and Settings; Database Properties Summary; Database Indexes; Oracle DB

Persistent Variables: Rules for Persistent Variables; Pre-defined Persistent Variables; Persistent Variable Lifecycle Event Callbacks

Database Failures and Application Startup

Transactions: J2SE Client Transactions; J2EE Client Transactions

Clustering and Failover

Application Servers: WebLogic; Deploying the Flux WebARchive (WAR) File in WebLogic; Integrating Flux Security with WebLogic; WebSphere; Running WebSphere 5.x on the J2EE 1.3 Architecture; JBoss; Deploying the Flux JMX MBean on JBoss; Configuring the Flux JMS Action to run with JBoss; Starting a Secure Engine in JBoss; Tomcat; Oracle9i (OC4J); Orion; Sybase EAServer

Embedding the Web-based Designer

Logging and Audit Trail: Six Different Loggers Defined in Flux; Using Different Logging Tools with Flux; Creating Audit Trail Listeners; Multiple Loggers

Heartbeat

Best Practices: Design Best Practices; Performance Best Practices

Frequently Asked Questions

JMX MBean

Flux Command Line Interface: Displaying the Flux Version; Obtaining Help; Creating a Flux Server; Creating a Flux Server from a Configuration File; Starting, Stopping, and Shutting Down a Server; Running the Flux Designer; Starting a Flux Agent

Flux Designer: Creating, Editing, Removing, Controlling, and Monitoring Jobs; Using Flux Designer Components in your own GUI

Flux Operations Console: Installing the Flux Operations Console; Configuring the Flux Operations Console to Run with a Custom Flux Engine; Running the Operations Console with a Custom Flux Engine; Disabling the "Remember Me" feature on the engine sign-in page; Deploying Custom Classes to the Flux Operations Console

Flux Tag Library

Troubleshooting

Examples

Upgrading from Flux: Packages; Initial Context Factory; SchedulerSource; Job; Job Listeners; Scheduling a Job

Migrating Collections in Persistent Variables from Flux 3.2, 3.3, and 3.4 to Flux 3.5 and Higher

Flux API Documentation: Trigger and Action Properties (JavaBeans Documentation)

Typical Flux Activities: Starting the Flux Designer; Creating a Flow Chart; Defining a Flow Chart's Properties; Adding Triggers and Actions; Cleaning Up the Flow Chart; Setting Action Properties; Verifying the Flow Chart; Running the Flow Chart; Watching the Flow Chart Run; Shutting Down your Flux Engine

Engine Configuration File: Connecting to a Database; Using Encrypted Passwords; Database Tables; Database Properties; Clustering and Failover

Database Migration from Flux 5 to Flux

Trigger and Action Results: Action Results; Trigger Results; Runtime Data Map Type Conversion Support

Integrating Flux Security with WebLogic: Setup Flux configuration; Setup WebLogic Login Module; Define and Map Flux-WebLogic Permissions; Sample WebLogic Startup class

Change Logs

Figures

Sample Flow Chart or Job Diagram; Flux Engine Architecture; Flux on the J2SE platform; Flux J2SE Architecture with an Embedded Database; Flux Running with J2EE Architecture; Flux Cycle; Flux States; Simple Flux Firewall Traversal; IP Tunneling; Transaction Behavior with Error Flows; Destroy on Signal; Polling Delay Restarts in an Active Window; Polling Delay is Larger than Active Window; Using Signals to Form Dependencies; A Signal affecting a Join Point; Splits are Implicit Transaction Breaks; The Abort Scenario; Flow Context Behavior with Splits and Join Points; Flux Agent Architecture; Engine - Agent Relationship; Agent State Diagram; Agents Page With One Registered Agent; The Agent Details Page; Effect of Caching on Throughput Performance; Effect of Caching on Flow Chart Retrieval Time; Effect of Caching on Split Performance; Effect of Caching on Sub Flow Chart Performance; Flux XA Transactions; Main Screen; New Flow Chart; Viewing the Flow Chart Properties; Laying Out the Flow Chart; Adding the Flow; Redesigned Flow Chart; The Timer Trigger's Action Properties; Selecting the Verify Button; Errors Discovered During Verification; Disabling Verification Alerts; Adding a New Engine

Tables

JAR File Names; Flow Chart Runs Methods; Conditional Expression / Message Selector Syntax; Date Formatter Symbols; Available File Wildcards; Conditional join expression syntax using Action names; Conditional join expression syntax using numbers; Example Database Table - Claims; Cron Style Time Expressions; Relative Time Expression Syntax; Recognized Units; Relative Time Expression Examples; Supported Database property syntax; Java to SQL Type Conversions; Variable Table Fields; Action Results; Trigger Results; Runtime Data Map Type Conversion

1 Introduction to Flux

Flux is a software component for performing enterprise job scheduling. Job scheduling is a traditional function that executes different tasks at the right time or when the right events occur. Flux can be used in any Java environment, including J2EE, XML, client-side, and server-side applications.

Jobs can be scheduled to run on specific dates, certain days of the week, at certain times, and on a recurring basis. Jobs can also be scheduled to run when certain events occur, such as when a different software system performs some action. The Flux engine persists its scheduling data in a relational database so that once scheduling begins, Flux can be stopped and restarted without rescheduling jobs.

Flux integrates with Java by calling user-defined callbacks when a job is ready for execution. Furthermore, once a job becomes ready to fire, Flux integrates with J2EE applications by invoking EJB session beans, sending JMS messages, and invoking Message-Driven Beans.

Flux models jobs using a traditional flow chart. Flux flow charts, like most, consist of triggers, actions, and flows. These triggers, actions, and flows can be combined to create flow charts that are as simple or as complex as your specific needs require.

A trigger waits for an event to occur. Traditionally, these events are time-based. For example, a trigger can fire at 9 AM and 4 PM, Monday through Friday, except on company holidays.

Other triggers may respond to activities that occur in other software systems in the enterprise, such as financial settlement engines, messaging systems, and servers. You can use the triggers built into Flux or write your own and plug them into Flux.

An action performs some function such as updating a database, calling into a J2EE application, communicating with another software system, and so on. Flux itself comes with a suite of actions, but you can write your own custom actions and plug them into Flux.

A flow connects a trigger or action to another trigger or action. For example, after a trigger has fired, a flow may guide execution into the next appropriate action. Flows can be conditional or unconditional: for example, when an email trigger fires, execution may unconditionally flow into a database action. On the other hand, it may be appropriate for execution to branch to different actions depending on the email's sender. In those cases, conditional flows ensure that the appropriate action is taken.

Throughout this document, the terms flow chart, workflow, and business process are used interchangeably. In general, actions and triggers are referred to as nodes or flow chart nodes.

Figure 1-1 illustrates how Flux defines a job using a common flow chart. This job fires from 7:45 am through 3:15 pm, every 20 minutes. Each time the job fires, a program called MyProgram.exe is executed.

Figure 1-1: Sample Flow Chart or Job Diagram

1.1 Features

Flux provides the following features.

Simple Configuration, Lightweight, Small Footprint, and Zero Administration. Flux is simple to configure. Flux runs right out of the box. There are no complex setups. Flux is lightweight, has a small footprint, and can be embedded comfortably in server-side and client-side applications. Flux requires zero administration. Multiple Flux instances can be created and configured in applications with a few lines of Java code.

Suitable as a Standalone Job Scheduler and a Job Scheduling Software Component. Flux can be used both as a standalone job scheduler and an embeddable software component.

Standalone Graphical User Interface. A standalone desktop application that provides a drag-and-drop, point-and-click graphical user interface (GUI) to view, edit, create, remove, control, and monitor flow charts. No programming required.

Time-based Scheduling. You can create flow charts that perform different actions at the right time and on the right day.

Event-based Scheduling. You can create flow charts that perform certain actions when the right events occur, such as a significant event occurring in an external software system.

File-driven Flow Charts. File-driven flow charts allow applications to come to life when files are created, deleted, or modified, either on the local file system or on remote FTP servers. Using short and powerful expressions to describe large sets of files, you can create, delete, copy, move, and rename files and directories on the local file system, on remote FTP servers, or between the two. By combining file-driven flow charts with Flux's time-based and event-driven scheduling, you can create scheduled file transfers to move your business files where you need them at the right times and when the right events occur.

42 Built-in Triggers and Actions. By using the 42 built-in triggers and actions in your workflows and flow charts, you can assemble flow charts to perform tasks as time and events dictate. Or you can write your own custom triggers and actions using the Flux API and plug them into your flow charts.

One-shot Job Scheduling. A single flow chart is scheduled to execute at a specified time. For example, a flow chart might be scheduled to run at 4 am, July 4th, 2007, or a flow chart might be scheduled to run 90 days from the current time.

Recurring Job Scheduling. A flow chart is scheduled to execute periodically. The flow chart runs indefinitely, until an ending date, or for a fixed number of repetitions. For example, a flow chart might be scheduled to run at 6 am on the second Tuesday of every month. Another flow chart might be scheduled to run at 8:15 am and 5:15 pm, Mondays through Fridays. Additionally, a flow chart might be scheduled to run every 750 milliseconds.

Flow Chart Dependencies. Flow chart actions can be set to execute after certain other actions have already finished execution. For example, in this manner, flow chart actions A, B, and C can be configured to execute one after the other. You can also configure entire flow charts to be dependent on other flow charts. More powerful than flow chart chaining, Flux's flow chart dependencies allow you to configure one set of flow charts to execute after a first set of flow charts has reached a certain milestone or has finished running. These dependencies support conditional processing, so that a dependent flow chart will not run until the right flow chart finishes with the right result.

Conditional Job Scheduling. Actions can be set to execute only if previous actions return certain results. For example, if a previous action finishes with one particular result, action B may execute. But if a different result is generated, action C executes.

Flow Chart Listeners. Flow chart listeners provide an additional mechanism that allows you to perform certain activities when the Flux engine performs certain tasks. For example, when the Flux engine is stopped, a flow chart listener can observe that event and take appropriate action.

Workflow. Flux uses a workflow model for creating and executing flow charts. Workflow allows you to create flow charts that accurately model your application requirements. Flux can create flow charts that contain arbitrary sequencing, branching, looping, decision points, splits, and joins (parallelism). With these workflow features, you can design more realistic and sophisticated flow charts that exactly model your business processes.

Split and Join. Flux's workflow flow chart model also supports the ability to split (fork) a flow of execution into multiple, parallel flows of execution. These parallel flows can be joined together at a later point in the flow chart.

Flow Chart Queuing and Producer/Consumer Processing. You can configure a set of flow charts that "produce" data and a second set of flow charts that "consume" that data.

Tree of Flow Charts. Flow charts are stored in a tree. You can perform scheduling operations on entire branches of the tree or just individual flow charts.

Transactions. Flow charts run using database transactions, preserving data integrity even if computer systems or flow charts fail.

Clustering and Failover. Multiple instances of Flux can work together to schedule flow charts. Should any instance go down or become inaccessible, the remaining Flux instances recover and continue scheduling flow charts.

Time Zones. Flow charts can be assigned to run in any time zone. For example, a client in New York can schedule a flow chart on the server in London to run at 9 AM Singapore time.

Java, J2EE, XML, and Web Services Integration. Flux naturally integrates with Java, J2EE, XML, and Web Services applications through an extensive API. Flux can be used as a software component in any Java, J2EE, XML, or Web Services application to meet flow chart scheduling requirements.

Holiday and Business Calendar Support. Schedules can be created that are sensitive to local holidays and other days and times that need to be treated specially. For example, a payroll flow chart can be executed every other Friday, unless that Friday is a holiday, in which case the flow chart should be run on the previous business day. Flux's Business Calendars have millisecond resolution and can be combined with other Business Calendars. The point of saying that Flux's Business Calendars have millisecond resolution is not to say that Flux is a real-time scheduler, which it's not. The point is to say that Flux's Business Calendars are not simply day oriented. You can create a Business Calendar that is oriented to what makes sense for your business, such as blacking out a period of time from 10 PM until 2:45 AM the next morning, Monday through Friday.

Web User Interface. A web user interface can be used to monitor and control flow charts from a web browser.

JSP Tag Library. By using the Flux JSP tag library, web designers can create web pages that interact with Flux without writing Java code.

Standalone Graphical User Interface. A standalone graphical user interface (GUI) can be used to view, edit, create, remove, and control flow charts with a point-and-click GUI. No programming required.

JMX MBean Support. Flux is available as a JMX MBean for deployment in a JMX Management Console. System administrators can deploy the Flux JMX MBean in application servers for managing and administering Flux from a JMX management console.

Persistence. If desired, flow charts can be persisted in a relational database. Flow charts do not need to be re-scheduled if Flux is restarted.

Concurrency Throttles. Different concurrency throttles allow different kinds of flow charts to get more, or less, running time within the scheduler. For example, if you have "heavyweight" and "lightweight" flow charts, you can configure your scheduler so that at most two heavyweight flow charts can run at the same time, but up to 10 lightweight flow charts can run concurrently. Concurrency throttles can be set on individual Flux engines as well as across the entire cluster. Cluster-wide concurrency throttles allow you to limit the number of flow charts that can run throughout the entire system, regardless of the number of individual Flux engines that may be running. Adjusting concurrency throttles, depending on the kind of flow chart being executed, allows flow chart scheduling performance to be tuned and system resources to be conserved. For example, suppose a reporting system needs to limit the number of Transaction Report flow charts across the flow chart scheduling cluster to 32. This throttle prevents the report servers from being overwhelmed, regardless of how many Flux engines are added to the cluster. Plus, there is still report generation capacity left available for other kinds of report flow charts.

Pinning Flow Charts on Specific Flux Instances. A Flux cluster can be configured so that certain kinds of flow charts must run on certain Flux instances. For example, if only one of the machines in a Flux cluster has report generation software installed on it, you can specify that all of your reporting flow charts have to be executed on that machine only. In general, you can specify that a particular Flux instance will, or will not, run certain kinds of flow charts. By pinning certain flow charts on certain machines, you can make sure that your flow charts run on the machines that have the resources that those flow charts need.

Pause, Resume, Interrupt, Expedite, Modify Across a Cluster of Schedulers. Flow charts can be paused and later resumed. Running flow charts can be interrupted. Flow charts can be expedited for immediate execution. Flow charts can be modified while they are running. All of these actions can take place on an individual Flux scheduler instance or across an entire cluster of Flux scheduler instances.

Time Expressions. Flow charts can be scheduled using simple string expressions to indicate when a flow chart should run, such as on the second Tuesday of every other month. Full Cron-style Time Expressions are supported to allow flow chart scheduling in the same way as Unix Cron but with millisecond precision. A helper interface is also available to calculate Time Expressions programmatically.

Error Handling. If flow charts fail, corrective action can be taken. By default, Flux automatically retries failed flow charts after a delay. Flow charts that repeatedly fail are stopped and are flagged for further inspection. Moreover, Flux's error handling capabilities can be completely customized, including adding in custom notification mechanisms. In Flux, an error handler itself is a flow chart, so error handlers can use the same workflow model as regular Flux flow charts. Furthermore, you can define different error handlers for different classes of flow charts. This kind of customization allows you to perform appropriate corrective steps for different kinds of flow charts.

Logging. At various points during the Flux engine's execution, you can receive logging messages. These logging messages can be configured to include varying degrees of detail. Flux logging optionally integrates with Log4j, the JDK 1.4+ logger, or Jakarta Commons logging. Logging is especially useful for programmers and administrators who want to monitor Flux's behavior at a low level of detail.

Audit Trail. The audit trail publishes messages that describe what important activities are taking place within the Flux engine, which can be viewed by external observers to determine who performed what activities. The audit trail is useful for non-technical personnel who need to know how high level Flux engine activities were initiated.

2 Technical Specifications

Pure Java. Flux is written in pure Java.

JDK 1.4 or Greater. JDK 1.4 or greater is required to run Flux. Flux has been tested with JDK 1.4, JDK 5, and JDK 6 from Sun Microsystems and JRockit 1.4 and 5 from BEA Systems.

Optional J2EE. The J2EE features are optional. They do not need to be used.

Optional EJB 1.1 or JMS 1.1. If Flux's J2EE features are used, minimum requirements are EJB version 1.1 or JMS version 1.1.

Optional JDBC. Optionally, Flux can store flow chart information to a relational database using your JDBC driver. Or Flux can store flow charts in memory.

Optional Web User Interface. If you choose to deploy the optional Flux web user interface, the Flux Operations Console, it requires Java Servlets version 2.3 and JavaServer Pages 1.2.

Optional Web Services. If you use Flux's Web Service Action, it requires Apache Axis.

Tested Platforms. Flux has been designed to work with any database that has a stable JDBC driver and any application server that supports EJB 1.1 or JMS 1.1. Flux has been tested specifically on the following platforms:

WebLogic Server 9.2, 9.1, 8.1, 7.0, and 6.1
WebSphere 6.1, 6.0, 5.1, 5.0, and 4.0
Oracle 10, 9, and 8
JBoss 4.1, 4.0, and 3.2
Tomcat 5.5 and 5.0
IBM DB2 9.1, 9.0, 8.1, and 7.2
SQL Server 2005 with row versioning enabled, using the Microsoft JDBC driver
Sybase ASE 12.5 using the jConnect 5.5 JDBC driver
Sybase ASA 9.0 using the jConnect 5.5 JDBC driver
MySQL 4.1 using the com.mysql.jdbc.Driver JDBC driver with InnoDB table support
MySQL 5.0 using the com.mysql.jdbc.Driver JDBC driver with InnoDB and MyISAM table support
HSQL

When configuring engines with the HSQL database, do not use HSQL for clustering under any circumstances. HSQL is not robust enough for clustering use. It is an error to cluster engines using HSQL.

Clustering and Failover Platforms. Clustering and Failover for Flux does not require an application server. It requires only a database server. Clustering and Failover has been specifically tested on the following databases:

Oracle 10, 9, and 8
IBM DB2 9.5, 9.1, 9.0, 8.1, and 7.2
SQL Server 2005 with row versioning enabled, using the Microsoft JDBC driver
Sybase ASE 12.5 using the jConnect 5.5 JDBC driver

Other Platforms. Although formal testing procedures have not been undertaken, customers have reported success when using Flux on the following platforms:

Oracle9i Application Server (OC4J)
Sybase ASE 12.0 and Sybase EAServer 4.1
Sun ONE Application Server 7 (iPlanet 7.0)
Informix
PostgreSQL
Oracle Rdb (Oracle for OpenVMS platforms)
Mckoi SQL Database
Resin
Orion Application Server

Operating Systems. Flux is known to work on Windows, Solaris, HP-UX, AIX, Linux, AS/400, and OpenVMS systems. Other systems with a JVM are likely to work as well.

Flux Migration Policy. All upgrades to newer versions of Flux with the same minor version number are drop-in replacements. There are no database schema changes, removed API methods, or API methods whose behavior has changed. For example, upgrading between two maintenance releases that share the same minor version number is a drop-in replacement. However, when upgrading to a version of Flux with a higher major or minor version number (for example, from Flux 5.3 to Flux 6.0, or from Flux 6.2 to 6.3), there may be database or API changes, and a migration will be necessary.

However, you can always count on Flux to support and maintain Flux version 3, 4, 5, 6, etc. until the very last customer using that version has upgraded. Once you start using a particular major version of Flux, you can stick with it as long as you like. Of course, you will not have access to new features, but you will receive full support and maintenance as long as you are a current Flux customer with the proper licensing. Flux Corporation takes a long-term view of software. We expect you to be running Flux for years, decades, even centuries. Don't laugh. Java Virtual Machines may be around for many, many years - therefore, so can your Flux software. COBOL has been around for decades, why not Java and Flux?

2.1 Java Archive (JAR) Files Utilized by Flux

Java Archives, or JAR files, are compressed packages of Java classes that provide the user with certain functionality. For example, in order to use the Flux APIs, a flux.jar file is provided for a user to access Flux Engines, Flow Charts, and other core features. Some features require additional JAR files beyond the flux.jar file. For example, if you wish to use the scripting functionality inside of Flux, you must include the bsf.jar and bsh.jar files in your engine's class path. Table 2-1 below contains a complete list of the JAR files included with Flux. All JAR files listed are located in the lib directory under the Flux installation directory.

Table 2-1: JAR File Names

activation.jar: Required when using Flux mail actions and triggers. Required by the Desktop Designer.
batik.jar: Required by the Flux Desktop Designer and Operations Console to export flow charts as SVG files.
bsf.jar: Required for scripting support.
bsh.jar: Required for scripting support.

commons-logging.jar: Required for logging if the logger type is set to "commons_logging" or when caching, agents, or real-time monitoring is enabled.
concurrent.jar: Required when caching is enabled.
hsqldb.jar: Required only if running Flux using an in-memory database.
i4jruntime.jar: Required by the Windows installer.
installer.jar: Required by the Windows installer.
j2ee.jar: Required when using EJB and JMS actions. Also required by the Designer.
jbosscache.jar: Required when caching, agents, or real-time flow chart monitoring is used.
jettyforflux.jar: Required when using the Jetty web server.
jettystart.jar: Required when using the Jetty web server.
jgroups.jar: Required when caching, agents, or real-time flow chart monitoring is used.
log4j.jar: Required for logging if the logger type is set to "log4j". Also required when using cluster networking.
mail.jar: Required when using mail actions and triggers. Also required by the Designer.
smack.jar: Required for instant message functionality. Also required by the Designer.
xerces.jar: Required by the Flux Desktop Designer and Engine for importing and creating flow charts.

All of these JAR files are automatically included in the Java class path when you start the Flux Designer and/or Flux engine from the provided batch or shell files. If you are executing the Desktop Designer and/or engine from a custom batch or shell file or the Command Line Interface, you will need to add the appropriate JAR files above to your Java class path manually. For assistance with manually adding JAR files to your class path, see Chapter 38, Flux Command Line Interface.

Two tasks must be done to update the JAR files Flux provides. First, remove the outdated JAR file from the lib directory within the Flux installation directory. Second, add the updated JAR file to the class path.

2.1.1 JDBC Database Drivers

To add database functionality to Flux, a configuration file must be constructed and the appropriate JAR files for the database JDBC driver must be added to the class path. The syntax, relative to the Flux installation directory, is as follows:

java -classpath .;flux.jar;<the path to the database JAR files here> \
    flux.Main server -cp <the path to the Flux configuration file here> start

Detailed step-by-step instructions can be found in the Database Quick Start Guide (see Note 1).

2.2 Performance Metrics

2.3 Flux 7.7 Performance Metrics

[Table: Flux 7.7 performance metrics measured against HSQL, MySQL, Oracle, SQL Server, and DB2 - average times and per-day and per-hour throughput for job submission, job firing, job retrieval, job modification, and job listing, and for message publishing and message consumption.]

2.4 Java Version End-of-life Information

The table below contains information on Java's end-of-life policy, including the dates at which Sun will cease to provide support for each version. Because Flux runs in a Java environment, we encourage you to take proactive measures to ensure that Flux is not running in an obsolete software environment.

Release Family    End of Life Notification    End of Support Life
1.4               December, 2006              October 30, 2008
5.0               April, 2008                 October 30, 2009
6                 (approx.) 2010              (approx.)

More information on the Java end-of-life policy can be found at java.sun.com/products/archive/eol.policy.html.

Notes

1. quick_start_guides/quick_start_database/general-database-setup.pdf

3 Background

Flux is a sophisticated Java software component. Flux performs flow chart scheduling and flow chart queuing functions in J2EE and Java applications. Flow charts are application-defined tasks that execute appropriate actions. Flux ensures that these tasks are executed at the right time and in the right order.

Using sophisticated Time Expressions, you can schedule any number of flow charts to execute at various times with precision down to the millisecond. Flow charts can be scheduled to run once, repeatedly, a fixed number of times, or until an ending date. Flow charts are sensitive to user-defined company holidays and business calendars.

Flux is small, so it can be embedded in applications. Flux is thread-safe, so it can work in multi-threaded applications. Flux is tuned to integrate tightly with J2EE application servers or standalone programs. The component is designed to work within a single virtual machine or cooperate in a network of virtual machines. Flux is application server and database independent. It runs on a variety of environments available from different vendors.

3.1 Architecture

Before beginning to program using Flux, you should become familiar with Flux's architecture. Flux is a software component. It becomes a part of your enterprise or standalone Java application. It gives your application advanced workflow, scheduling, and business process management capabilities in a small package. Flux interacts with your J2EE or Java application, application server, and database. Interactions with application servers and databases are optional. If you don't need them, you don't have to use them. Enterprise applications, almost by definition, make use of application servers and databases.

The Flux engine is based on a peer-to-peer architecture in which each Flux engine in the cluster is capable of failing over and clustering flow charts. A Flux engine can be run on platforms like Windows, UNIX, Solaris, IBM mainframes, or any other operating system with support for JDK 5 and above. For a complete list of operating systems and platforms known to work with Flux, refer to Chapter 2, Technical Specifications. See Figure 3-1 below for a depiction of the behavior of the Flux engine.

Flux can be used with databases like Oracle, DB2, Sybase, MySQL, and MS-SQLServer to store flow chart structure and data. An in-memory HSQL implementation is provided for applications that do not need or have persistent storage. For a complete list of databases supported by Flux, refer to Chapter 2, Technical Specifications.

Flux Clients

Flux clients run the Flux software. These clients create instances of Flux that will work with the client's servers and networks to share information and flow charts created and run by Flux. The protocol used for client-engine communication is RMI. A Flux client may be:

A Web client (Web User Interface)
A standalone GUI (JDK 1.4+)
The Command Line Interface (CLI)
Application code using Flux APIs
The Flux Business Process Dashboard
The Flux JSP tag library

Figure 3-1: Flux Engine Architecture

3.1.1 Flux with J2SE

The Java 2 Platform, Standard Edition (J2SE) gives you a complete environment for application development on desktops and servers. It also acts as the base for the Java 2 Platform, Enterprise Edition (J2EE) and Java Web Services. Figure 3-2 below illustrates Flux running on the J2SE platform.

Figure 3-2: Flux on the J2SE platform

3.1.2 Flux with In-Memory Databases

You can use Flux with its embedded, in-memory database to create a fast and simple environment for Flux to run in. Figure 3-3 below displays the architecture of Flux using the J2SE platform with its embedded database.

Figure 3-3: Flux J2SE Architecture with an Embedded Database

3.1.3 Flux with J2EE

The Java 2 Platform, Enterprise Edition (J2EE) is the standard for developing multi-tier enterprise applications. The J2EE platform can simplify enterprise applications by basing them on standardized, modular components; by providing a set of services to those components; and by automatically handling details of application behavior, all without complex programming.

If you are running Flux on a J2EE platform, it is suggested that you do not depend on the J2EE JAR files that are packaged with Flux, as these files could be out of date. Instead, we recommend you use the EJB and JMS JAR files that are distributed with your application server. Figure 3-4 below illustrates Flux running on the J2EE platform.

Figure 3-4: Flux Running with J2EE Architecture

3.2 The Cycles and States of the Flux Engine

Like other components and objects, Flux is created, initialized, started, and later disposed. During initialization, you provide information to Flux so that it can communicate with your database and application server. Once it is initialized, you can interact with Flux. However, scheduled flow charts will not fire until the component is started. Once started, the component can be stopped and restarted indefinitely. When the component is stopped, flow charts do not fire. Finally, when your application is ready to shut down, the component is disposed. Once disposed, the component cannot be re-used. The sequence diagram in Figure 3-5 below illustrates this sequence of steps: Flux is created, initialized, started and used, then disposed.

Figure 3-5: Flux Cycle
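This lifecycle can be exercised from client code along the following lines. This is a minimal sketch only: it assumes an engine created as described in Chapter 5 and lifecycle methods named start(), stop(), and dispose(); confirm the exact signatures in the flux.Engine Javadoc.

import flux.*;

public class EngineLifecycle {
    public static void main(String[] args) throws Exception {
        Factory fluxFactory = Factory.makeInstance();
        Engine engine = fluxFactory.makeEngine(); // created and initialized (Stopped state)

        engine.start();   // scheduled flow charts may now fire (Running state)
        // ... add and run flow charts here ...
        engine.stop();    // temporarily prohibit flow charts from firing (back to Stopped)

        engine.dispose(); // Disposed state; this engine instance cannot be reused
    }
}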

As has been explained, Flux can be in one of three states: Stopped, Running, and Disposed. When Flux is first created and initialized, it enters the Stopped state. From this state, you can fully interact with Flux. However, any scheduled flow charts do not fire. From the Stopped state, you can proceed to the Running state. In the Running state, scheduled flow charts fire. Flux may go back and forth between the Running and Stopped states indefinitely. You may wish to re-enter the Stopped state from the Running state in order to temporarily prohibit flow charts from firing. Finally, when your application is ready to shut down, Flux transitions into the Disposed state. Once the Disposed state has been entered, you cannot interact with Flux, nor will flow charts fire, until a new Flux instance is created, initialized, and started. These three Flux states are shown below in Figure 3-6.

Figure 3-6: Flux States

As explained in greater detail below, you create jobs as flow charts. All flow charts have a start action, which is the first function to be performed. In many situations, the start action is a Timer Trigger. Timer triggers wait for a particular time and date. These times and dates are calculated using Time Expressions, which are simple strings that succinctly describe when, and how often, a timer should fire, with precision down to the millisecond.

Continuing this example, once a timer trigger fires, flow of control transfers to another action in the flow chart. Typically, this subsequent action notifies a listener in your application. Listeners can be Java objects, EJB session beans, EJB message-driven beans, or JMS message listeners. J2EE applications, for example, find it convenient to use EJB session beans and JMS message listeners as flow chart listeners. The kind of listener you choose depends on your application's architecture.

Because Flux models jobs as flow charts, after a listener action executes, control can flow to other actions. Possible actions include File actions for writing files, email actions for sending email, and Messaging actions for sending messages to other applications in the enterprise. Besides actions, control can flow into other triggers. Flux includes a sophisticated timer trigger that waits for certain times and dates. Other triggers are possible, including File triggers that wait for a file to be created or modified, email triggers that wait for incoming email to arrive, and messaging triggers that wait for inbound messages from other software systems in the enterprise.

You can create your own triggers and actions to plug into the Flux scheduler. Your custom triggers and actions should make it easy to create new flow charts that tightly interact with your application.

Scheduled flow charts can be in different states. Each flow chart has a super-state and a sub-state. The super-state can be either Normal or Error. A flow chart in the Normal super-state runs in the usual way. However, a flow chart in the Error super-state is being handled by Flux's default error handling mechanism, which provides a very flexible way to recover from flow chart errors. The sub-state can be one of several possible states:

Waiting: A flow chart in the Waiting sub-state is eligible for execution at a specific time in the future. For example, active timer triggers exist in the Waiting sub-state as they wait for their scheduled firing time to arrive.

Firing: A flow chart in the Firing sub-state is executing.

Triggering: A flow chart in the Triggering sub-state is waiting for an event to occur. For example, active message triggers, business process triggers, and manual triggers exist in the Triggering sub-state as they wait for an event to occur or to be expedited.

Paused: Paused flow charts do not execute or fire triggers until they are explicitly resumed.

Joining: A flow chart in the Joining sub-state is involved in a split-and-join flow chart structure. A split forks a flow chart into multiple, parallel flows of execution. These parallel flows can be joined together at a later point in the flow chart, which is where the Joining sub-state is used.

Failed: A flow chart in the Failed sub-state has failed to recover from an error and is waiting for some kind of external intervention. The Failed sub-state is legal only if the super-state is Error. A flow chart in the Failed sub-state is temporarily prevented from executing.

Finished: A flow chart in the Finished sub-state is done executing and is ready to be deleted by the Flux engine.

The Flux architecture described in Section 3.1 provides an accurate, high-level view of how applications interact with the Flux flow chart scheduling component. In the Examples section, there are complete, working code examples that show how to get started developing your application with Flux.

3.3 Traversing Firewalls

A firewall is software that helps to protect your network from unauthorized access outside of your network. Flux is engineered to traverse firewalls in a secure manner to ensure that your firewall stays intact and Flux still runs properly. Flux can traverse firewalls in two ways: simple traversal and IP tunneling.

In a simple Flux firewall traversal, you open a port on the firewall to allow access to the Flux engine. The Flux engine must then be configured to use the open port. This can be set using the registry_port configuration option in your engine configuration file, or programmatically through the Configuration.setRegistryPort() method.

A Flux client obtains a reference to the engine using the Factory.lookupRmiEngine() method. Once this reference is obtained, the client uses it to communicate with the Flux server. This client-server communication is handled through a random port by default. You can specify the port that the Flux engine will use to communicate with the client by setting the rmi_port configuration option, or through the Configuration.setRmiPort() method as demonstrated below. Java's RMI subsystem multiplexes many RMI objects onto one port, so only a single port needs to be configured.

NOTE: If the rmi_port configuration option is set to a different port than the registry_port, you will need to ensure that both are open on the firewall.

Factory factory = Factory.makeInstance();
Configuration config = factory.makeConfiguration();
// creates an RMI registry on port 1099
config.setRegistryPort(1099);
// specifies port 1099 as the client-server communication port
config.setRmiPort(1099);

Refer to Figure 3-7 below for a detailed diagram of Flux using simple firewall traversal.

Figure 3-7: Simple Flux Firewall Traversal

When using IP tunneling, you will have to set the JVM property -Djava.rmi.server.hostname to the IP address of the first host in the tunnel on your Flux server. The default value of this property is the IP address of the local host in "dotted-quad" format.

-Djava.rmi.server.hostname="Host.IP.Address"

Suppose you have three computers in your IP tunnel: A, B, and C. A can only communicate with B, B can communicate with A and C, and C can only communicate with B. A sends a Flux engine lookup request to B on the configured registry port. B forwards the lookup request to C. C finds the engine and returns a reference to that engine back to A using B as an intermediary. A can now talk to the engine on C through B. Refer to Figure 3-8 for a detailed diagram of how Flux traverses firewalls using IP tunneling.

Figure 3-8: IP Tunneling
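For illustration only, the JVM property can be passed on the command line that launches the Flux server, combining it with the server startup command shown in Chapter 2; the IP address and file paths below are placeholders.

java -Djava.rmi.server.hostname=192.0.2.10 -classpath .;flux.jar;<database JAR files> \
    flux.Main server -cp <the path to the Flux configuration file here> start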

4 Flow Chart Runs

Several methods are available for working with flow chart runs. Most of the methods are provided in the flux.NonStreamable interface, which is implemented by flux.Engine. Some of the flow chart run related methods are shown below in Table 4-1.

Table 4-1: Flow Chart Runs Methods

long getAverageRunTime(String): Returns the average length of execution time in milliseconds for all runs in the specified namespace.
long getLongestRunTime(String): Returns the runtime in milliseconds of the longest flow chart run in the specified namespace.
long getOpenRunCount(String): Returns the total number of flow chart runs which have not completed in the specified namespace which occurred within the lower and upper date boundaries.
long getRunCount(String): Returns the total number of flow chart runs in the specified namespace.
FlowChartRunIterator getRuns(String, Date, Date): Returns all runs in the specified namespace which occurred within the lower and upper date boundaries.
long getUnsuccessfulRunCount(String): Returns the total number of unsuccessful flow chart runs in the specified namespace.

Variations of the above methods which accept date ranges are also defined in flux.NonStreamable.
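For illustration, a client might query these run statistics as sketched below. The "/reports" namespace and the surrounding setup are hypothetical, and the exact method capitalization should be confirmed against the flux.Engine Javadoc.

import flux.*;
import java.util.Date;

public class RunStatistics {
    public static void main(String[] args) throws Exception {
        Factory fluxFactory = Factory.makeInstance();
        Engine engine = fluxFactory.makeEngine();

        // query run statistics for all flow charts under the "/reports" branch
        long totalRuns = engine.getRunCount("/reports");
        long failedRuns = engine.getUnsuccessfulRunCount("/reports");
        long averageMillis = engine.getAverageRunTime("/reports");
        System.out.println(totalRuns + " runs, " + failedRuns + " unsuccessful, "
                + averageMillis + " ms on average");

        // retrieve the runs that occurred during the last 24 hours
        Date upper = new Date();
        Date lower = new Date(upper.getTime() - 24L * 60 * 60 * 1000);
        FlowChartRunIterator runs = engine.getRuns("/reports", lower, upper);
        // iterate over "runs" here as needed

        engine.dispose();
    }
}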

5 Creating and Initializing Flux

In order to create Flux, you must first decide how you are going to use Flux. The Flux engine component can be used as a purely local Java object or as an RMI server. If Flux is created as an RMI server, you can access it remotely from another Java virtual machine. Create Flux as a local Java object if you do not need to contact the engine from a different virtual machine. On the other hand, if you are running an application across a cluster of computers and need to communicate with Flux from different virtual machines, you will need to start Flux as an RMI server.

When Flux is created, it is optionally initialized with information from your database. If you choose to persist flow charts in your relational database, Flux must be told how to communicate with your database. By default, Flux uses an in-memory database, HSQL. Running Flux in this configuration makes it easy to get started with Flux programming. Later, when you are more proficient in Flux, you can instruct Flux to store its flow chart information in your database.

You can perform this initialization work in your Java code. Alternately, to make it simpler, you can initialize your engine from a configuration file. The configuration includes information on how to access your database and other tunable parameters. The Flux configuration file can be either a properties file or an XML file. For String properties that have defaults, such as "username", you can explicitly set that property to an empty value by using the reserved word null in the properties configuration file or XML configuration file. In general, however, not setting a property gives it a default value. If there is no default value, it takes a null value. If you explicitly want to set a null value and there is a default value set already, use the reserved word null.

As described in the following sections, there are many ways to create Flux engine instances and to look up remote Flux instances. Do not be overwhelmed by the choices. Simply look for a solution that works for you and your application and stick with it.

5.1 Creating Flux

The base class you will be working with in the Flux API is the Factory class. From this object, almost all Flux components are manufactured, or can be produced from objects manufactured from it. The Factory object can be instantiated as follows:

import flux.*;

Factory fluxFactory = Factory.makeInstance();

Once you have created this Flux factory object, you can create all manner of different Flux instances. You can also look up remote Flux instances running in other Java virtual machines. To create a rudimentary Flux engine with no configuration options set, the following syntax is valid:

Engine engine = fluxFactory.makeEngine();

When using this technique to create a Flux engine instance, an HSQL in-memory database is used under the covers. When using the HSQL in-memory database in Flux, flow charts are not stored beyond the lifetime of a Flux engine instance. When using the HSQL database, you will be unable to configure a Flux engine for clustering or failover.

5.2 Creating Flux Using a Database

Later, once you have gained proficiency with Flux, you will want to have Flux store its flow chart information to your database. There are two basic ways to do this. First, you can supply Flux with your JDBC driver class name, along with other information, so Flux can contact your database. Here is an example using Oracle.

engine = fluxFactory.makeJ2seRmiEngine(DatabaseType.ORACLE,
    "oracle.jdbc.driver.OracleDriver",
    "jdbc:oracle:thin:@localhost:1521:orcl",
    "username",
    "password",
    5 /* Maximum number of database connections */,
    "FLUX_" /* Table prefix */,
    "Flux" /* Engine bind name */,
    1099 /* Engine registry port */,
    true /* Create registry option */);

The above code example provides the connection information to communicate with an Oracle database running on the local host. In this example, Flux will create up to five database connections to Oracle. Flux's database table names are prefixed with the string "FLUX_" so that the table names are guaranteed to be unique. These tables must be created manually using the appropriate Flux database creation script located within the doc directory below the Flux installation directory. In this case, the Flux oracle.sql script would be used. Using this pattern, you can substitute your own JDBC driver and other database connection information to communicate with your database. Flux assumes your database tables are created correctly; therefore, if you have any problems while trying to run Flux with your database tables, check your tables to ensure they are all there and complete.

Flux refers to J2SE engines when a JDBC driver is used to connect to a database. On the other hand, Flux refers to J2EE engines when an application server's data source is used to connect to a database. J2SE engines are responsible for committing and rolling back all transactions; J2EE engines, however, behave differently.

5.3 Creating Flux Using a Data Source

You can also initialize Flux to use database connections from your application server's database connection pool. Here is an example.

engine = fluxFactory.makeJ2eeEngine("initial.context.factory",
    "iiop://myserver", "MyDataSource", "username", "password", "FLUX_");

The above code provides information to communicate with an application server's database connection pool. For many application server scenarios, you do not need to specify the initial context factory or the provider URL (the first two arguments). In typical situations, this information has already been set by the application server and does not need to be repeated.

Sometimes database connections time out or are closed automatically by the database server. To respond to this situation, your application server may need to be configured to periodically refresh database connections in its connection pool.

Before Flux can use your data source, it must first look up a "user transaction" from your application server. There are two user transaction JNDI lookup names that must be configured in your Flux configuration. One is for use from inside the Flux engine:

DATA_SOURCE_USER_TRANSACTION_SERVER_JNDI_NAME

The other is for use from a client accessing the Flux engine:

DATA_SOURCE_USER_TRANSACTION_CLIENT_JNDI_NAME

Both of these configuration properties default to java:comp/UserTransaction. Change this JNDI lookup string if your application server requires a different lookup string.

On the J2EE platform, the Java Transaction API (JTA) links a transaction manager to the parties involved in a distributed transaction system. Specifically, Flux, the transaction manager, and many actions are tied together into one bundle. When a transaction break occurs in Flux, the bundle of transactions is committed. You can configure this behavior by setting the DATA_SOURCE_USER_TRANSACTION configuration option. By default, this option is set to false, meaning that Flux will assume responsibility for committing and rolling back its own transactions. If you set this option to true, Flux will use a User Transaction object to coordinate the commitment of transactions with the database.

5.4 Creating Flux as an RMI Server

The simplest way to create Flux as an RMI server is with the following code.

engine = fluxFactory.makeRmiEngine();

The above code is good for initial testing until you become more proficient with Flux programming. Later, you may want to create Flux as an RMI server with database connection information, as described above. The appropriate methods to call are shown below.

engine = fluxFactory.makeJ2seRmiEngine( ... ); // J2SE

and

engine = fluxFactory.makeJ2eeRmiEngine( ... ); // J2EE

5.5 Looking up a Flux RMI Server Instance

Once you have created Flux as an RMI server, it is simple to look it up from the same Java virtual machine or a different one, given that you have specified your RMI host server.

remoteEngine = fluxFactory.lookupRmiEngine("my_remote_host");

The above code looks up an RMI server on the machine "my_remote_host", which was created using default arguments.

5.6 Creating Flux Using a Configuration Object

If you choose, you can also collect all the configuration information into a configuration object and then create an engine instance using that configuration object.

Configuration config = fluxFactory.makeConfiguration();
config.setDriver("oracle.jdbc.driver.OracleDriver");
... // Set other configuration settings.
engine = fluxFactory.makeEngine(config);

By using a configuration object, you can modularize Flux's initialization data. There are many configuration properties that can be used. For complete details, see the flux.Configuration Javadoc documentation.

5.7 Looking Up a Remote Flux Instance Using a Configuration Object

You can also use the same configuration object to look up a remote Flux instance.

config.setHost("my_remote_host");
engine = fluxFactory.lookupRmiEngine(config);

5.8 Creating Flux from a Configuration File

Sometimes it is easier to set the configuration information in a properties or XML file. This way, for example, a database password does not have to be hard-coded in your application.

// Properties configuration file
Configuration propsConfig = fluxFactory.makeConfigurationFromProperties(myPropsFile);
engine = fluxFactory.makeEngine(propsConfig);

// XML configuration file
Configuration xmlConfig = fluxFactory.makeConfigurationFromXml(myXmlFile);
engine = fluxFactory.makeEngine(xmlConfig);

Using either of the above methods, a configuration file is read, converted to a Configuration object, and then used to create an engine instance. The configuration file is a simple file name. To find the configuration file, or the XML DTD file that an XML configuration file references, Flux first looks in the system class path. Next, the current class loader's class path is searched. Finally, the current directory (user.dir) is checked. The XML DTD file that describes Flux's XML configuration file is EngineConfiguration.dtd.
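For orientation, a properties configuration file might contain entries like the hedged sketch below. Only registry_port, rmi_port, and username are taken from this manual; confirm all key names and values in the flux.Configuration Javadoc.

# hypothetical engine configuration sketch - confirm key names in the flux.Configuration Javadoc
registry_port=1099
rmi_port=1099
# the reserved word "null" explicitly clears a String property that has a default value
username=null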

Sample configuration files and examples can be found in the examples/software_developers/config and examples/software_developers/xml_config directories in the Flux package. All the configuration properties that can be configured are listed in the flux.Configuration Javadoc documentation.

5.9 Looking Up a Remote Flux Instance from a Configuration File

Similarly, a configuration file can be used to look up a remote Flux instance.

// Properties configuration file
Configuration propsConfig = fluxFactory.makeConfigurationFromProperties(myPropsFile);
engine = fluxFactory.lookupRmiEngine(propsConfig);

// XML configuration file
Configuration xmlConfig = fluxFactory.makeConfigurationFromXml(myXmlFile);
engine = fluxFactory.lookupRmiEngine(xmlConfig);

6 XML Wrapper

You can interact with the scheduler using XML APIs. These XML APIs form a wrapper around a Flux scheduler instance. Starting with flux.xml.XmlFactory, you can make instances of XmlEngine and XmlEngineHelper objects. The behavior is analogous to the behavior of flux.Factory.

Once you have created an XmlEngine, you can interact with it much like a normal Engine object. However, some methods are streamable. For example, the XmlEngine.put() method accepts an input stream containing an XML document. That XML document contains definitions of flow charts. Each of these flow charts is read and added to the scheduler. Similarly, the XmlEngine.get() methods write flow charts from the scheduler to an output stream. You can define the output stream to be a file or any object that extends java.io.OutputStream.

When an XmlEngine reads an XML file, an XML DTD file may be referenced. To find that XML DTD file, Flux first looks in the system class path. Next, the current class loader's class path is searched. Finally, the current directory (user.dir) is checked. The XML DTD file that describes the streamable flow charts used by Flux's XML APIs is flow-charts-5-0.dtd.

XML wrapper examples can be found in the examples/xml directory in the Flux package.

Note that beginning with Flux 5.0, the XML file format changed. However, in Flux 5, there are still XML APIs to read and write the old Flux 4.2 XML flow chart file format as well as new XML APIs to read and write the new Flux XML flow chart file format. By default, the Flux 5.0 XML file formats are used. Note that as of Flux 6.0, support for the old Flux 4.2 XML flow chart file format was removed. If you have XML flow chart files in the Flux 4.2 format, use the Flux 5 APIs to convert them to the Flux 5.0 format, which is the current XML flow chart file format used in Flux 5 and Flux 6.
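To make the wrapper concrete, here is a minimal sketch that loads flow chart definitions from an XML file. The methods XmlFactory.makeInstance() and makeXmlEngine() are assumed by analogy with flux.Factory and are not confirmed here; the file name flow_charts.xml is likewise illustrative. Only XmlEngine.put(), which accepts an input stream, is taken directly from the description above.

import java.io.FileInputStream;
import java.io.InputStream;
import flux.xml.XmlEngine;
import flux.xml.XmlFactory;

public class XmlWrapperSketch {

  public static void main(String[] args) throws Exception {
    // Assumed factory methods, by analogy with flux.Factory.
    XmlFactory xmlFactory = XmlFactory.makeInstance();
    XmlEngine xmlEngine = xmlFactory.makeXmlEngine();

    // XmlEngine.put() reads an XML document containing flow chart
    // definitions and adds each flow chart to the scheduler.
    InputStream in = new FileInputStream("flow_charts.xml");
    try {
      xmlEngine.put(in);
    } finally {
      in.close();
    }
  }
}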

7 Creating Flow Charts

Flux models jobs using traditional flow charts. Flux's flow charts are as expressive and powerful as a finite state machine. By using a flow chart to model your job, your job can be arbitrarily complex. For example, your job may not contain any timing elements at all. It may simply execute actions A, B, and C in order. On the other hand, if your application requirements dictate that you execute actions A, B, and C in order, and then wait for a certain time to occur, followed by actions D, E, and F, then a flow chart is powerful enough to model that job.

7.1 Flow Charts

To create a flow chart, you must create a flow chart object. To create a flow chart object, you need a helper object called EngineHelper.

EngineHelper helper = fluxFactory.makeEngineHelper();
FlowChart flowChart = helper.makeFlowChart("My Flow Chart");

Flow charts must have names. In the example above, the name of the flow chart is "My Flow Chart". Flow chart names must not contain any of the following characters: "*", "?", "%", "_", ".", or "$".

Flow chart names can be hierarchical. For example, the names "/my flow chart", "/heavyweights/my big job", and "/lightweights/my little job" are hierarchical and form a tree. The "/" symbol is the separator character that defines the different branches in the hierarchical tree. Collectively, all flow chart names form a hierarchical flow chart namespace, that is, a tree of flow charts.

If a flow chart name is incomplete, because it refers to a directory in the namespace and not an actual flow chart name, the engine automatically generates and returns a unique, fully qualified flow chart name. For example, if the flow chart name is null, empty, or "/", then the engine generates a unique name such as "/392" and returns it. Similarly, if the flow chart name is "/heavyweights/", then the engine returns a name such as "/heavyweights/72939".

7.1.1 Listener Class Path

Flux's listener class path feature is intended to allow you to install new versions of your Java Action and Dynamic Java Action listeners without having to restart your JVM. The listener class path is set on a flow chart or in the runtime configuration tree. By default, listener class paths are disabled. To enable them, you must enable the LISTENER_CLASS_LOADER_ENABLED engine configuration property.

The listener class path is designed to reload only Java Action and Dynamic Java Action listener classes. It is not designed to reload tens or hundreds of supporting JAR files on which a listener class depends. In most cases, you should put these listener class JAR file dependencies on the system class path. Any class loaded through the system class loader cannot be reloaded.

If a flow chart's listener class path property is set to null, the class path from the runtime configuration tree is used instead. By default, this property is set to null.

The listener class path is either a list of directories and JAR files, or a single HTTP URL that points to one JAR file on the network.

In the following example, assume we have a file, database_listener.jar, which is located in the directory C:\action_listeners (or /action_listeners in Unix environments). To add this file to our listener class path, we would use the following code in a Windows environment:

FlowChart flowChart = engineHelper.makeFlowChart("Flow Chart");
flowChart.setListenerClasspath("/action_listeners/database_listener.jar");

The listener class path may also accept in its argument a path to a directory. If a directory is specified, any JAR files under that directory are automatically included in the listener class path as well, in alphabetical order. If, for example, we have multiple JAR files in the directory /reports_listeners that we want to include in our listener class path, we can modify our Windows code as follows:

flowChart.setListenerClasspath("/action_listeners/database_listener.jar:/reports_listeners");

Each item in the list is separated by a ":" or ";" character, depending on the operating system on which the code runs. Each item contains a path to a directory or JAR file, using "/" and "\" characters as path separators. On Windows, a letter followed by a ":" or ";" character is interpreted as a drive letter.

As an alternative to JAR files and directories, the listener class path may be an HTTP URL. For example, if database_listener.jar were available at an HTTP URL such as http://myhost/database_listener.jar, we would use the following code:

flowChart.setListenerClasspath("http://myhost/database_listener.jar");

Calls to FlowChart.setListenerClasspath() will be ignored unless the LISTENER_CLASS_LOADER_ENABLED configuration property on the Flux engine is set to true. By default, this configuration property is set to false, meaning the Flux default class loader is used unless explicitly instructed otherwise.

We strongly recommend that you do not use our custom class loader when using Flux inside an application server environment. The application you use in your application server, most likely packaged as a WAR file, should be structurally sound within itself, and no outside classes should be required to run that application.

NOTE: Flux will not dynamically load classes that were found on the engine's class path. If you want Flux to dynamically reload your classes from the listener class path, you must make sure they are not also available on the engine's class path.

7.2 Triggers

Once a flow chart object has been created, you need to create either a trigger or an action. Commonly, flow charts start with a Timer Trigger and then flow into different actions. In this manner, the flow chart will wait until a specified time and date arrives before executing specific actions.

Triggers wait for some event to occur, and then fire. By waiting for a specific time, for a message to arrive, or for some other event to occur, successive actions in your flow chart will execute in the correct order. To create a trigger, you make it using a flow chart object.

TimerTrigger timerTrigger = flowChart.makeTimerTrigger("My Timer Trigger");

Once you have created a trigger, you set arguments on it. Typically, with a Timer Trigger, the following argument is set.

timerTrigger.setTimeExpression("+5s");

To initialize this trigger, a Time Expression is passed in. In this example, the Time Expression "+5s" instructs this trigger to fire five seconds in the future. Complete instructions on how to build different Time Expressions are found in Chapter 26 Time Expressions.

After a trigger fires, control flows into a different action or trigger and execution of the flow chart continues. The triggers that Flux supports are detailed beginning in Chapter 10 Core Triggers and Actions.

Triggers can return variables. These variables are placed into a flow context, which is an object that represents the state of the flow chart as a thread of execution passes through the flow chart. Successive triggers and actions can access this flow context to retrieve data generated by earlier triggers and actions. The returned results of the Flux triggers are documented in Chapter 50 Trigger and Action Results.

7.3 Actions

Flow charts can contain a large number of actions that Flux provides, each with its own functionality. If needed, you can also write your own custom actions that better conform to your business's needs. Below are detailed descriptions of these actions.

7.3.1 Java Action

In typical scenarios, your flow chart starts with a Timer Trigger. When that Timer Trigger fires at the appointed time, flow of control is guided into another trigger or action. Typically, that other trigger or action is a Java Action. A Java Action executes your Java code.

In general, actions perform some function. You probably will start off by using a Java Action to run your Java code, but you can also use various J2EE actions for calling into application servers, a Process Action for running native processes, various file actions for copying and moving large sets of files to FTP servers and on the local file system, user-defined actions that communicate with financial settlement engines, and an assortment of other actions that perform useful functions. The core actions that Flux supports are detailed in Chapter 10. Additional actions are explained in successive sections.

To illustrate a specific example, the following code snippet describes how to create a Java Action that calls your Java code.

JavaAction javaAction = flowChart.makeJavaAction("My Java Action");

Once you have created any action, you can configure it. In the typical case of a Java Action, the following configuration arguments are set.

javaAction.setListener(MyJavaListener.class);
javaAction.setKey(new MyJobKey("My Job Data"));

To initialize this action, your job listener class is passed in. Your job listener class must implement the interface flux.ActionListener. Second, your job listener can be passed a job key, which is used to provide additional contextual information to your job listener. These job keys are called persistent variables. A persistent variable is simply a Java class with data stored in public fields. Complete details on persistent variables are provided later in this manual. Refer to the collections example located in the examples/software_developers directory of the Flux installation directory for more information on using keys in Java Actions.

Here is a simple job key that allows the configuration data "My Job Data" to be passed to your Java job listener. Your job listener can make use of this data as it executes.

// A simple "job key" (persistent variable) that passes job data
// to the Java Action's job listener.
public class MyJobKey {

  // Some job data. Must be declared in a public field.
  public String myJobData;

  // Initializes this job key (persistent variable).
  public MyJobKey(String configurationData) {
    this.myJobData = configurationData;
  } // constructor

} // class MyJobKey

Like triggers, actions can return variables. These variables are placed into a flow context, which is passed through the flow chart as the job executes. Successive triggers and actions can access this flow context to retrieve data generated by earlier triggers and actions.

In particular, Java Actions return variables through the actionFired(KeyFlowContext) method in the flux.ActionListener interface. The KeyFlowContext object passed into this method contains both the flow context for the flow chart and the job key specified in the key action property. The job key is obtained by using the getKey() method on the KeyFlowContext, and all of the methods contained within the FlowContext and VariableManager classes are available for use. The results returned by the Flux actions are documented in Chapter 50.

7.3.2 Sub Flow Chart Action

To change the Synchronous option, use the following code.

setSynchronous(boolean synchronous);

Refer to the Flux APIs for the methods used to manipulate sub flow chart actions.

7.4 Trigger and Action JavaBean Properties

All flow chart triggers and actions are represented as JavaBeans. Each trigger and action contains its own set of properties, which are used to configure that individual trigger or action. Because triggers and actions can be configured as JavaBeans, Flux provides a simple, robust mechanism for interacting with Flux flow charts programmatically, using JavaBean properties as the method of communication. In addition to using the Flux APIs for configuring triggers and actions within a flow chart, you can use the standard JavaBean APIs to accomplish the same result.

All trigger and action JavaBean properties are documented in Chapter 46 Trigger and Action Properties (JavaBeans Documentation). You will notice that these properties are used in the Flux GUI as well as in the Flux XML flow chart file format.

For details on how to write a custom trigger or action and use it like any other trigger or action in Flux, see Chapter 12 Custom Triggers and Actions. You can use custom triggers and actions at the Flux API level as well as in the Flux Designer and Web-based Designer.

7.5 Flow Charts that Start with an Action

Even though it is very common for flow charts to start with a Timer Trigger, you can also start your flow charts off with actions. Suppose that you have a long-running flow chart and you want to run it in the background of your application. By constructing a flow chart that starts with an action, you can schedule your flow chart to run in the background as soon as sufficient resources are available.

7.6 Flows

By default, the first trigger or action that is made from a flow chart is the first to execute when the flow chart starts executing. However, after that first trigger or action executes, control must flow into a different trigger or action. Flows allow you to specify these connections.

For example, to build a flow chart that starts with a Timer Trigger and flows into a Java Action, you need to define a flow from the Timer Trigger to the Java Action. This code creates a flow from the Timer Trigger into the Java Action. After the Timer Trigger fires, control flows into the Java Action.

TimerTrigger timerTrigger = flowChart.makeTimerTrigger("My Timer Trigger");
...
JavaAction javaAction = flowChart.makeJavaAction("My Java Action");
...
timerTrigger.addFlow(javaAction);

Let's take this example a bit further. Suppose you wish to define a flow chart that executes a Java Action every five seconds, indefinitely. The above flow chart will execute your Java Action only once. To allow your Java Action to execute again after another five seconds, you must add a flow from the Java Action back up to the Timer Trigger.

javaAction.addFlow(timerTrigger);

In this manner, your Java Action will fire every five seconds, indefinitely.

Continuing with this example, suppose you want this Java Action to fire for a total of ten firings. To add in this restriction, you need to set a count on the Timer Trigger.

timerTrigger.setCount(10);

After the Timer Trigger fires ten times, control will not flow into the Java Action. Instead, it branches down a special expiration flow. This expiration flow allows you to perform different actions. Or, in the simplest case, your flow chart can terminate.

Here is a complete example. This example may seem a bit long, but it's basic. Just paste it into your text editor, compile, and run it.

import flux.ActionListener;
import flux.Engine;
import flux.EngineHelper;
import flux.Factory;
import flux.FlowChart;
import flux.JavaAction;
import flux.KeyFlowContext;
import flux.TimerTrigger;

public class CompleteExample {

  public static void main(String[] args) throws Exception {

    Factory fluxFactory = Factory.makeInstance();
    EngineHelper helper = fluxFactory.makeEngineHelper();

    FlowChart flowChart = helper.makeFlowChart("My Flow Chart");

    TimerTrigger trigger = flowChart.makeTimerTrigger("My Trigger");
    trigger.setTimeExpression("+1s");
    trigger.setCount(10);

    JavaAction action = flowChart.makeJavaAction("Normal Action");
    action.setListener(MyJobListener.class);
    action.setKey(new MyJobKey("My Job Data"));

    trigger.addFlow(action);

    action.addFlow(trigger);

    // The above code creates a job that fires a Java Action every
    // second for a total of ten firings.

    // Now create an expiration flow to an expiration action.
    JavaAction expired = flowChart.makeJavaAction("Expired Action");
    expired.setListener(AnotherJobListener.class);
    expired.setKey(new AnotherJobKey("More Job Data"));

    // Finally, after the Timer Trigger has fired ten times,
    // flow into a different part of the flow chart. You are free
    // to add more actions and triggers after the "expired" action
    // executes.
    trigger.setExpirationFlow(expired);

    // Now we have defined our job. To see this job fire,
    // create an engine, start it, and then add this job
    // to the engine.
    Engine scheduler = fluxFactory.makeEngine();
    scheduler.start();
    scheduler.put(flowChart);

    // Give the example job a chance to run.
    Thread.sleep(15000);

    // Finally, shut down the scheduler and exit.
    scheduler.dispose();

  } // main()

  // A job listener that executes Java code when called.
  public static class MyJobListener implements ActionListener {

    public Object actionFired(KeyFlowContext flowContext) throws Exception {

      System.out.println("MyJobListener has fired.");
      System.out.println("Your code goes here.");
      return null;
    } // actionFired()

  } // class MyJobListener

  // Another job listener that executes Java code when called.
  public static class AnotherJobListener implements ActionListener {

    public Object actionFired(KeyFlowContext flowContext) throws Exception {
      System.out.println("AnotherJobListener has fired.");
      System.out.println("Your code goes here.");
      return null;
    } // actionFired()

  } // class AnotherJobListener

  // Holds job data.
  public static class MyJobKey {

    // The actual job data.
    public String myJobData;

    // The default constructor.
    public MyJobKey() {
    } // constructor

    // Initializes the job data object.
    public MyJobKey(String keyData) {
      this.myJobData = keyData;
    } // constructor

  } // class MyJobKey

  // Holds more job data.
  public static class AnotherJobKey {

    // The actual job data.
    public String moreJobData;

    // The default constructor.
    public AnotherJobKey() {
    } // constructor

    // Initializes the job data object.
    public AnotherJobKey(String keyData) {
      this.moreJobData = keyData;
    } // constructor

  } // class AnotherJobKey

} // class CompleteExample

This last example, while a little more complex, shows the expressive power of modeling jobs as flow charts. You can create flow charts that truly model the complexity of your job requirements. The above flow chart executes a Java Action 10 times by creating a loop in the flow chart. After the tenth execution, flow then branches down a new path, where new actions and triggers can be executed and fired.

7.6.1 Conditional Flows

Triggers and actions can return results. Depending on those results, control can branch to different triggers and actions. These conditional branches are called conditional flows. Until now, all flows were unconditional. That is, they were always taken regardless of the previous result.

Conditions are evaluated using an expression language that evaluates variables. A persistent variable is simply a Java object that contains public fields. Those public fields are automatically stored in and retrieved from your database. Triggers and actions can return such variables. Conditions can then test those persistent variables to determine whether the flow chart should branch in one direction or another.

By convention, the name of a variable that a trigger or action returns is named result. If the previous trigger or action returned a variable that contains a public field named isTrue, then you can build an expression that uses that information:

result.isTrue

When building your flow chart, you must be aware of the kinds of variables that triggers and actions can return. The variables returned from built-in Flux actions and triggers are documented in Chapter 50. With this knowledge, you can create a conditional expression that references particular public fields in a persistent variable that was returned by a previous action or trigger.

The following code illustrates this capability.

timerTrigger.addFlow(javaAction, "result.isTrue");

Control flows from the Timer Trigger to the Java Action only if the condition result.isTrue is satisfied; otherwise, that flow is not followed. You can add any number of such conditional flows leading from one action or trigger to another.

In general, you do not need to hard-code the string result. Instead, you can use a method from the EngineHelper helper object.

timerTrigger.addFlow(javaAction, EngineHelper.useLastResult("isTrue"));

These two conditional flows are equivalent.

These conditional expressions can test fields that are of any basic type, including strings, booleans, integers, floats, and so on. A string expression is satisfied if it contains the word true. Case is ignored. A simple boolean expression is satisfied if it contains the value true. Similarly, the numerical expressions are satisfied if they contain a non-zero value. In the above examples, it is not known whether result.isTrue is a string, a boolean, or a numerical type. It does not matter. The conditional expression is evaluated regardless of its underlying type.

7.6.2 Conditional Flow Syntax

You can create conditional expressions very similar to the WHERE clause of SQL queries. It is possible to write conditional expressions like result.salary >= 5000, evaluate multiple variables in your flow context like (result.salary >= 5000) AND (result.age != 20), evaluate data contained in a java.util.Map, and write other types of complex expressions. Conditional flows can be created using the syntax described in Table 7-1 below. You can mix and match these operators and use multiple variables to create complex statements that accurately describe your requirements.

Table 7-1: Conditional Expression / Message Selector Syntax

Operator              Description
=                     Equal To
>                     Greater Than
<                     Less Than
>=                    Greater Than or Equal To
<=                    Less Than or Equal To
<>                    Not Equal To
+                     Addition
-                     Subtraction
*                     Multiplication
/                     Division
%                     Wildcard character for use with LIKE
LIKE                  Search for a pattern
NOT                   TRUE if the conditional expression evaluates to FALSE
AND                   TRUE if all conditional expressions evaluate to TRUE
OR                    TRUE if any conditional expression evaluates to TRUE
IN                    TRUE if your value exists within a collection of values
mapname['keyname']    Evaluates data stored under the name "keyname" in a map with the name "mapname". Evaluates public fields in the value.

7.6.3 Else Flows

An Else Flow is followed if there are no unconditional flows and none of the conditional flows were satisfied. There can be at most one Else Flow branching out of a trigger or action.

javaAction.setElseFlow(anotherJavaAction);

7.6.4 Error Flows

In addition to error flows, note that errors can be handled in a more general way using default flow chart error handlers, as described below in Section 7.7.

Sometimes, errors occur in triggers and actions. For example, an action may not be able to contact its server. In this case, you can define explicit error flows that let your flow chart deal with the error condition explicitly.

JavaAction javaAction = flowChart.makeJavaAction("My Java Action");
...
JavaAction errorAction = flowChart.makeJavaAction("My Error Action");
...
javaAction.setErrorFlow(errorAction);

If the above Java Action throws an exception, thus indicating an error, control flows into "My Error Action", where the error can be handled appropriately.

As of Flux 7.7, transactions are no longer rolled back by default; instead, error flows can be defined to roll back the current transaction. The transaction can be explicitly rolled back by using the Action.setErrorFlowWithRollback() method. Once this rollback is complete, a new transaction begins and the flow of execution continues through the specified error flow. All work in that error flow, such as runtime data mapping and signal raising, is then performed. At the end of the error flow, a transaction break is encountered and the current transaction executing in that specific error flow is committed to the database.

Once the transaction in the error flow is committed, flow continues through the specified actions in the normal way. From here you may set up actions to fix the problem and continue with the flow chart, or you may wish to simply end the flow chart and exit. See Figure 7-1 below for a detailed diagram of the behavior of transactions with error flows. For more information on transactions and how they work within Flux, refer to Chapter 31 Transactions.

Figure 7-1: Transaction Behavior with Error Flows

In the case that an error occurs in a trigger or action and there is no error flow defined, the Flux engine presumes that the flow chart is finished, because there is no appropriate flow to follow. By default, the Flux engine automatically deletes flow charts when they finish. This way, the database does not grow unbounded.

When an error flow is followed, a flow context variable named ERROR_RESULT is generated. The type of this object is flux.ErrorResult. You can query this error result object to retrieve the exception that caused the error. This variable can be accessed by calling flowContext.getErrorResult(). After an error flow, if the action is successfully executed, the error result variable will be cleared. Furthermore, in the context of an error handler, if the error handler completes and control returns to the main flow chart, the error result variable is cleared as well. See the Javadoc documentation for flux.ErrorResult for further details.

7.6.5 Looping Flows

To create loops in a flow chart, simply set a flow from one trigger or action to another such that the flow of control loops back to a previous trigger or action. By using looping flows, you can execute a series of steps several times before branching out of the loop when a user-specified condition is met.
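As a concrete illustration of a looping flow combined with a conditional exit, the sketch below polls every five seconds and leaves the loop once a condition on the previous result is satisfied. It is a minimal sketch that reuses only the calls shown earlier in this chapter; the listener classes and the isTrue field on the returned variable are illustrative, not part of Flux.

// Assumes a flow chart created with EngineHelper, as shown in Section 7.1.
TimerTrigger pollTimer = flowChart.makeTimerTrigger("Poll Timer");
pollTimer.setTimeExpression("+5s");

JavaAction checkAction = flowChart.makeJavaAction("Check Work");
checkAction.setListener(CheckWorkListener.class);        // illustrative listener

JavaAction finishedAction = flowChart.makeJavaAction("Work Finished");
finishedAction.setListener(WorkFinishedListener.class);  // illustrative listener

// The loop: the timer fires, the check runs, and control either exits the
// loop on a conditional flow or follows the Else Flow back to the timer.
pollTimer.addFlow(checkAction);
checkAction.addFlow(finishedAction, "result.isTrue");  // conditional exit
checkAction.setElseFlow(pollTimer);                    // otherwise, loop and wait again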

7.7 Default Flow Chart Error Handlers

Section 7.6.4 above describes how to create an error flow to handle errors that occur when a trigger or an action executes. An error flow handles an error in a specific trigger or action in a specific way. To handle specific situations with a specific error flow, an error flow must be defined at each trigger or action for each specific situation.

On the other hand, default flow chart error handlers paint with a broad brush. A single error handler can be defined in one location. That error handler can be responsible for recovering from errors, retrying actions, sending notifications, and performing other error recovery actions. By default, an error handler is installed that retries actions up to five times, with one-minute delays in between. If the action still fails after five retries, the execution flow context is placed into the ERROR super-state and the PAUSED sub-state.

Default error handlers are simply flow charts that are installed in the runtime configuration tree. When an execution flow context executes an action and that action throws an exception, if no error flow is defined, the Flux engine searches for a default error handler. The Flux engine checks the hierarchical flow chart namespace in the runtime configuration tree, beginning at the location in the tree that corresponds to the name of the flow chart and working its way to the root of the tree. The first default error handler that is found is used.

If no error handler is found, the flow chart is considered to be finished, in which case the flow chart terminates and is deleted in the normal way whenever a flow chart finishes. However, if a default error handler is found, the following actions take place.

- The execution flow context's transaction is rolled back.
- The execution flow context's super-state is changed from NORMAL to ERROR.
- The execution flow context creates the default error handler flow chart and begins executing it.

If the error handler flow chart terminates normally, without throwing an exception, the execution flow context commits the current transaction and resumes execution in the main flow chart from the point of the last committed transaction. On the other hand, if the error handler flow chart terminates by throwing an exception, the execution flow context commits the current transaction, and the main flow chart is placed into the PAUSED state. Because paused flow charts do not execute while paused and because this flow chart is in the ERROR super-state, extraordinary actions can be performed in an attempt to remedy the flow chart and ultimately resume it.

Whenever a flow chart action is retried after an error, runs successfully to a transaction break, and successfully commits the transaction, the execution flow context's super-state is changed back from ERROR to NORMAL, and the error handler flow chart instance is destroyed.

As with error flows, when a default flow chart error handler is entered, there is a flow context variable named ERROR_RESULT. The type of this object is flux.ErrorResult. You can query this ErrorResult object to retrieve the exception that caused the error. See the Javadoc documentation for flux.ErrorResult for further details.
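For example, a listener that runs inside an error flow or a default error handler can inspect the error that triggered it. The sketch below is a minimal illustration; it assumes that getErrorResult(), shown above for flow contexts, is available from the KeyFlowContext passed to the listener, and the class name is illustrative.

import flux.ActionListener;
import flux.ErrorResult;
import flux.KeyFlowContext;

public class ReportErrorListener implements ActionListener {

  public Object actionFired(KeyFlowContext flowContext) throws Exception {
    // Retrieve the ERROR_RESULT variable that was placed into the flow
    // context when the error flow or default error handler was entered.
    ErrorResult error = flowContext.getErrorResult();
    if (error != null) {
      // Report the error; a real handler might send a notification instead.
      System.out.println("Flow chart error: " + error);
    }
    return null;
  }
}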

See the Error Handling example located in the examples/software_developers/error_handling directory under your Flux installation directory for a complete example showing how to use a default flow chart error handler.

7.8 Runtime Data Map Properties for Triggers and Actions

Triggers and actions contain static, constant properties that are defined when a flow chart is created. These constant properties configure the behavior of different triggers and actions. Documentation on these properties is available in Chapter 46 Trigger and Action Properties (JavaBeans Documentation) or in each action or trigger's entry in the Javadocs. For example, the "Time Expression" property on the Timer Trigger configures how often the Timer Trigger fires. Normally, this property is set when the flow chart is created and initialized, using the method TimerTrigger.setTimeExpression() or by setting the JavaBean property called "Time Expression" on the Timer Trigger.

However, properties on triggers and actions can also be set dynamically at runtime. This feature makes it possible to configure an action with data that is generated at runtime. For example, suppose you have a flow chart that contains a file trigger that monitors all "*.xml" files on a remote FTP server or on the local file system. Further suppose that in a subsequent action, the flow chart must move the actual XML file that caused the file trigger to fire to a new location. This setup requires a trigger and an action: a File Exist Trigger to locate the files to be moved based on a file criteria (which would be "*.xml" in this case), and a File Move Action to actually move the files. In your File Move Action, you must set a dynamic property so that the file that is actually moved is the file that caused the file trigger to fire.

To configure the trigger and action properties that are set at runtime, use a runtime data map. A runtime data map describes which trigger and action properties should be configured with data derived at runtime. The following example shows how to configure a File Move Action to use the actual file name that was found by a File Exist Trigger.

Map runtimeDataMap = new HashMap();
FlowContextSource source = myFlowChart.makeFlowContextSource("RESULT.url_matches");
ActionPropertyTarget target = myFlowChart.makeActionPropertyTarget("Simple File Source");
runtimeDataMap.put(source, target);
fileMoveAction.setRuntimeDataMap(runtimeDataMap);

The above source code first creates a runtime data map and then populates it with a single entry. This entry maps a flow context variable called RESULT.url_matches, which is placed into the flow context by the File Exist Trigger after it fires, to an action property called "Simple File Source", which is a property on the File Move Action. When this flow chart runs and the File Move Action is activated, just before it executes, data is copied from the RESULT.url_matches flow context variable into the "Simple File Source" property on the File Move Action.

This runtime data mapping allows the File Move Action to actually move the file that was found by the File Exist Trigger. At the time the flow chart is created, the exact file name that will be found by the File Exist Trigger is not known, because the File Exist Trigger is configured to watch for all "*.xml" files. But at runtime, when an actual XML file is found, say, "foo.xml", then the File Move Action is configured, on the fly, to move the file called "foo.xml".

Note that the above runtime data map example will not work correctly if "fileMoveAction" is declared to have a join point, because after a join point, the flow context variable managers on inbound flows are deleted. In the above example, the source of the runtime data map will not exist, because after the join point is satisfied, the source flow context is destroyed.

The kinds of data that can be mapped at runtime include:

Source Data

- flux.runtimedatamap.FlowContextSource. Data that is copied out of the flow context at runtime.
- flux.runtimedatamap.RuntimeConfigurationSource. Data that is copied out of the runtime configuration at runtime. The runtime configuration is described in Chapter 24 Runtime Configuration.
- flux.runtimedatamap.FlowChartSource. Data that is retrieved from the current flow chart's variable manager.

Target Data

- flux.runtimedatamap.ActionPropertyTarget. Data that is copied into a trigger or action property at runtime.
- flux.runtimedatamap.FlowContextTarget. Data that is copied into the flow context at runtime.
- flux.runtimedatamap.FlowChartTarget. Data that is copied into the current flow chart's variable manager.

These runtime data maps can be defined on any action or any flow. The data mapping for a flow chart is performed at runtime while the flow chart executes.
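As a further illustration, a runtime data map can also copy a trigger's result into the flow chart's variable manager so that later actions, or the substitution mechanism described in the next section, can refer to it by name. This is a minimal sketch: the factory method makeFlowChartTarget(), the fileExistTrigger variable, and the variable name FileToMove are assumptions made for illustration, not confirmed API details.

Map runtimeDataMap = new HashMap();

// Copy the URL of the file found by the File Exist Trigger...
FlowContextSource source =
    myFlowChart.makeFlowContextSource("RESULT.url_matches");

// ...into a flow chart variable named "FileToMove" (assumed factory method).
FlowChartTarget target =
    myFlowChart.makeFlowChartTarget("FileToMove");

runtimeDataMap.put(source, target);
fileExistTrigger.setRuntimeDataMap(runtimeDataMap);

// Later actions could then reference ${FileToMove}, as described below.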

7.9 Substitution

All triggers and actions contain properties that affect their behavior and execution. Substitution allows some of these properties to be replaced at runtime with variables from the executing action, flow context, flow chart, or runtime configuration. This behavior allows for dynamic property loading and for convenience when using variables on a global, flow chart-wide scale. For example, suppose that after detecting files with a File Exist Trigger, those files must be deleted. This behavior may be implemented by retrieving the results from the File Exist Trigger, mapping them into the flow context, and then using that variable for the Simple File Source of a File Delete Action.

There are three types of substitution: Global, Runtime, and Date substitution. These types of substitution are explained in the sections below.

NOTE: Substitution may not be used on any properties whose syntax is verified at build time including, but not limited to, Listeners and Join Expressions.

7.9.1 Global Substitution

Global substitution uses the syntax ${VariableName} to replace a property with the corresponding variable's value from either the action's, the flow context's, or the flow chart's variable manager. Global substitution works in precedence order. The variable name the property uses is first searched for in the action's variable manager, then in the flow context, and finally in the flow chart's variable manager. If no instance of the variable is found in these scopes, the property is used in its present condition. The syntax for global substitution is as follows:

// Retrieve the flow chart's variable manager.
VariableManager variableManager = flowChart.getVariableManager();

// Add a variable into the flow chart's variable manager.
variableManager.put("message", "Console Action's message.");

// Make a Console Action.
ConsoleAction ca = flowChart.makeConsoleAction("Console Action");

// Set the Console Action's message property to reference the
// message variable.
ca.setMessage("${message}");

The above code uses a variable's value from the executing flow chart to substitute the Console Action's message property at runtime. This is indicated by the call ca.setMessage("${message}"). Notice that the variable is first put into the flow chart's variable manager.

When using URL substitution, either a URL object or a string representation of a URL may be passed into the variable manager and used in the variable substitution. Refer to the example code below:

// Add the URL variable into the variable manager.
VariableManager variableManager = flowChart.getVariableManager();
URL var = new URL("file:///MyFile.file");
variableManager.put("FileToMove", var);

// Create a File Move Action.
FileMoveAction moveAction = flowChart.makeFileFactory().makeFileMoveAction("Move");

// Create a Set to be used for the Simple File Source.
Set simpleFileSource = new HashSet();

// Create a URL to place into the Simple File Source.
// This URL will use the substitution syntax.
URL url = new URL("file:/${FileToMove}");

// Add the URL to be substituted into the Simple File Source.
simpleFileSource.add(url);

// Use the Simple File Source in the File Move Action.
moveAction.setSimpleFileSource(simpleFileSource);

// Create a Simple File Target to tell the File Move Action
// where and under what name to place the moved file.
Set simpleFileTarget = new HashSet();
simpleFileTarget.add(url);
moveAction.setSimpleFileTarget(simpleFileTarget);

The above code uses a URL object to specify the variable to replace. At runtime, those URLs will be replaced with the variable stored in the flow chart variable manager under the name "FileToMove".

7.9.2 Date Substitution

Date substitution provides a way to search for, create, and delete files whose names hold date representations. If the format is syntactically correct, a representation of the current date is constructed according to the parameters given to the date formatter. Time expressions may also be applied to display a date relative to the current date. For more information on time expressions, see Chapter 26 Time Expressions.

For example, suppose the log files from a program are automatically backed up each night. A directory is created with the current date as its name, and the log files are stored in the appropriate directory. Further suppose that you wish to keep only one month of log files. Using Flux, a File Delete Action may be used with a FileCriteria object that constructs a directory name using the date formatter. The syntax would look similar to the code below:

FlowChart deleter = helper.makeFlowChart("Deleter");
FileFactory fileFactory = deleter.makeFileFactory();
FileDeleteAction deleteAction = fileFactory.makeFileDeleteAction("File Delete Action");

// Create a criteria for the FileDeleteAction to use.
FileCriteria deleteCriteria = fileFactory.makeFileCriteria();

// Specify the files you wish to delete by using the criteria's
// include method.
deleteCriteria.include("${date (-1M) d-MMM-yyyy}");
deleteCriteria.setBaseDirectory(BACKUP_DIR);

// Add the criteria to the FileDeleteAction.
deleteAction.add(deleteCriteria);

Assuming the backup directories are named in the format d-MMM-yyyy (1-Jan-2xxx), the above code constructs a File Delete Action that deletes a directory in the specified Base Directory whose name is last month's date relative to the current date. So, if today's date were 7-Jun-2006, the File Delete Action would look for a directory named 7-May-2006 to delete.

When using date substitution, the syntax must be as follows:

${date (optional_time_expression) date_formatters}

For example, to add the current date into a File Criteria's includes in the format Sunday-Jan-2xxx, use the following syntax inside the File Criteria's include() method:

${date DD-MMM-yyyy}

Table 7-2 below shows the available date substitution variables:

Table 7-2: Date Formatter Symbols

Date Formatter    Result
yyyy              2006
yy                06
MMMM              June
MMM               Jun
MM                06
M                 6
dd                01
d                 1
DD                Thursday
D                 Thurs
HH                08
H                 8
mm                05
m                 5
ss                06
s                 6

For a detailed example using date formatters with includes, see the IncludeDateFormatters example in the examples/software_developers/date_formatters directory under your Flux installation directory.

7.9.3 Runtime Configuration Substitution

Runtime configuration substitution behaves similarly to global substitution, except that it operates at a broader scope. Given the proper syntax, runtime configuration substitution retrieves a specified variable's value from the engine's runtime configuration and substitutes it into a property. Runtime configuration substitution is helpful when a variable must be used in more than one flow chart. To specify that a runtime configuration variable must be retrieved, use the following syntax:

${runtime VariableName}

For more information on Runtime Configuration, see Chapter 24 Runtime Configuration.
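For instance, assuming a variable named CompanyName has been defined in the engine's runtime configuration (see Chapter 24 for how runtime configuration variables are defined), an action property can reference it as shown in this minimal sketch; the variable name and message text are illustrative.

// Reference a runtime configuration variable from an action property.
// "CompanyName" is assumed to be defined in the engine's runtime
// configuration; it is resolved when the action executes.
ConsoleAction consoleAction = flowChart.makeConsoleAction("Announce");
consoleAction.setMessage("Nightly processing complete for ${runtime CompanyName}.");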

8 Running Flow Charts

To run a flow chart, you schedule it for execution by adding it to the Flux engine.

8.1 Adding Flow Charts to the Engine

Once a flow chart has been created as a FlowChart object, it can be added to the engine to be executed in the background. To add a flow chart to the engine, call Engine.put(). A fully qualified flow chart name will be returned, as shown below.

String jobName = engine.put(myFlowChart);

While the engine adds the flow chart, it also verifies the flow chart for correctness. For example, a Timer Trigger that does not contain either a time expression or a scheduled trigger date will fail verification. Consequently, the entire flow chart will fail verification, an exception will be thrown, and the flow chart will not be added to the engine.

Once the flow chart is successfully added, the unique flow chart name can be used to retrieve the flow chart from the database. Retrieved flow charts represent a snapshot of the flow chart in execution: while the flow chart on the engine will continue to execute, the retrieved flow chart will not. The flow chart name can be extracted from the retrieved flow chart by calling FlowChart.getName().

You can use the wildcard characters "*" and "?" to search the namespace for flow charts. The "*" character matches any character zero or more times, and the "?" character matches any character exactly once. Flux uses the JDK 1.4+ regular expression library to evaluate these wildcards.

Once started, a flow chart runs until it is finished. In general, a flow chart finishes when the engine executes a trigger or action and there is no appropriate flow to follow afterwards. There are some exceptions to this behavior: for example, a trigger or action may throw an exception, in which case the error flow is followed. If there is no error flow defined, there is no appropriate flow to follow and the flow chart finishes.

By default, the engine automatically deletes flow charts when they finish executing. In this way, Flux prevents the database from growing unbounded as new flow charts are added.

8.2 Adding Flow Charts at a High Rate of Speed

Your application may call for adding many hundreds or thousands of new flow charts continuously. If this is the case, you can increase the throughput at which flow charts are added to the engine.

A single engine instance will only permit a certain number of new flow charts to be added to the engine per second. If you need to increase that throughput rate, you can instantiate a second engine instance and add additional flow charts through that second instance. Both engine instances will add new flow charts to the same database. Consequently, these newly added flow charts will be eligible for firing by any engine instance in the cluster. In fact, you can create a number of engine instances and use them in parallel to add new flow charts to the engine. This "array of engine instances" can be used to add new flow charts at a high rate of speed. To configure the array of engine instances, follow these steps.

1. Create an appropriate number of engine instances for the array. The right size of your array depends on the sophistication of your computer system, but as a rule of thumb, create between 2 and 8 engine instances. You may need to adjust the size of your engine array to match the capabilities of your computer system. An engine array that is too large will slow down your system.

2. Configure each engine instance to point to the same set of database tables.

3. Do not start any of the engine instances. A started engine will not add new flow charts as fast as a stopped engine. By default, a newly created engine instance is stopped, and it is only started by calling Engine.start().

4. (Optional) Disable cluster networking on each engine in the array to increase throughput by a small amount.

Use the array of engine instances to add new flow charts. You may need one or more additional engine instances to actually fire flow charts. These other engine instances must be started in order to execute flow charts.
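The sketch below illustrates one way to arrange such an array under the assumptions described above: every instance is built from the same configuration (and therefore points at the same database tables), none of the instances is started, and a separate, started engine elsewhere in the cluster actually fires the flow charts. The class, the array size, and the round-robin strategy are illustrative, not prescribed by Flux.

import flux.Configuration;
import flux.Engine;
import flux.Factory;
import flux.FlowChart;

public class EngineArray {

  private final Engine[] engines;
  private int next = 0;

  public EngineArray(Configuration config, int size) throws Exception {
    Factory fluxFactory = Factory.makeInstance();
    engines = new Engine[size];
    for (int i = 0; i < size; i++) {
      // Each engine is created from the same configuration, so all of them
      // point at the same database tables. They are intentionally never
      // started; stopped engines add flow charts faster.
      engines[i] = fluxFactory.makeEngine(config);
    }
  }

  // Add a flow chart through the next engine in the array, round-robin.
  public synchronized String add(FlowChart flowChart) throws Exception {
    Engine engine = engines[next];
    next = (next + 1) % engines.length;
    return engine.put(flowChart);
  }
}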

8.3 Flow Chart Order of Execution

When multiple flow charts are waiting to be executed on an engine, the engine must determine which flow chart should be executed next. To make this decision, the engine considers three primary factors: execution time, priority, and timestamp (the age of the flow chart on the engine).

1. Execution time

Execution time is the time at which a flow chart is ready to fire. If a flow chart is not yet scheduled to fire, or it cannot be executed from its current state (for example, the flow chart is paused), then the flow chart will not be executed.

A flow chart's execution time supersedes its priority. If Flow Chart A is ready to execute and Flow Chart B is not, then Flow Chart A will always execute first, even if Flow Chart B has a higher priority. If both flow charts are ready to execute, then the execution time is equal and the priority of each flow chart must be considered.

2. Priority

If two flow charts are both ready to execute, the flow chart with the higher priority will always run first. Flow chart priorities can be set through the flow chart's runtime configuration. The lower the number, the higher the priority; a priority of 10 takes precedence over a priority of 100. For more information, see Chapter 24 Runtime Configuration.

A flow chart's priority takes precedence over its timestamp. Flow charts with higher priorities will run before flow charts with lower priorities regardless of which was submitted to the engine first.

3. Timestamp

Finally, if two flow charts are both ready to execute and have the same priority, the flow chart that was submitted to the engine first will execute. For example, a flow chart that was submitted at 3:00 AM will run before a flow chart that was submitted at 5:00 AM, as long as both have the same priority.

Other factors may also affect the order of execution. For instance, an expedited flow chart will fire as soon as possible regardless of the execution time, priority, or timestamp of any other scheduled flow charts. Other factors such as the fairness time window can affect the prioritization of flow charts and can cause lower-priority jobs to be executed before higher-priority jobs.

Concurrency throttling can also affect the order of execution for your flow charts. Concurrency throttling takes precedence over execution time, priority, and timestamps. A concurrency throttle might only allow a certain number of high-priority jobs to execute at once, which could cause some low-priority jobs to be executed instead (an example of this is demonstrated in the Concurrency Throttling example below). For more information on concurrency and the fairness time window, refer to Chapter 24 Runtime Configuration.

Order of Execution Examples

Execution Time and Priority

Scenario:

Group A contains 15 flow charts that each take 1 minute to execute. All of the flow charts in Group A are submitted at 8:00 AM. The flow charts in Group A have a priority of 300.

Group B contains 10 flow charts that each take 2 minutes to execute. All of the flow charts in Group B are submitted at 8:05 AM and have a priority of 100.

The flow charts in both groups are ready and scheduled for immediate execution. The concurrency throttle is set to 1, meaning only 1 flow chart may execute at a given time.

Result:

At 8:00 AM, Group A would have preference and begin executing. At 8:05, 5 flow charts from Group A have already been executed when Group B is submitted to the engine. Now, because both groups are scheduled for immediate execution, their execution time is considered equivalent. As a result, Group B will begin executing its flow charts, since its priority is higher than Group A's. Once all 10 of Group B's flow charts have run to completion, the remaining 10 flow charts from Group A will execute.

Timestamps

Scenario:

Group A and Group B both contain 10 flow charts that take 1 minute to execute. The flow charts in both groups have a priority of 300, and the concurrency throttle is set to 1. Group A's flow charts are submitted to the engine at 8:00 AM. Group B's flow charts are submitted at 8:05 AM.

Result:

At 8:00 AM, Group A would have preference and begin executing. At 8:05, Group B's flow charts are submitted to the engine. This time, because both groups have the same execution time and priority, precedence goes to the flow charts that were submitted first. In this case, because Group A was submitted at 8:00 AM, the remaining 5 flow charts from Group A must finish executing before any flow charts in Group B will run.

Concurrency Throttling

Scenario:

Group A contains two flow charts, each of which takes 10 minutes to complete. These flow charts have a priority of 100, with a concurrency throttle of 1 on this branch in the flow chart namespace. This means that only 1 flow chart from Group A may execute at a given time.

Group B contains three flow charts that each take 1 minute to complete. These flow charts have a priority of 300, with a concurrency throttle of 1 on this branch of the flow chart namespace.

Both groups of flow charts are submitted at 8:00 AM. The concurrency throttle on the root node of this flow chart namespace is set to 2, meaning that only 2 flow charts can execute at the same time.

Result:

At 8:00 AM, the first flow chart from Group A and the first flow chart from Group B will begin executing simultaneously. This is the effect of concurrency throttling: while 2 flow charts are allowed to run simultaneously, each group is throttled so that only 1 flow chart can execute from that group. Because concurrency throttling will not allow the second flow chart from Group A to execute while the first is still running, all three flow charts from Group B will finish before the second flow chart from Group A runs, even though Group A's flow charts would normally have preference due to their priority.

9 Modifying Flow Charts

Once flow charts have been created and added to the scheduler, they can be modified. The attributes of a flow chart, as well as those of its triggers and actions, can be modified.

To modify a flow chart, the flow chart must first be retrieved from the scheduler by calling Engine.get(), Engine.getFlowCharts(), or Engine.getByState(). Once retrieved, the flow chart's attributes can be updated. For example, if a flow chart contains a JavaAction, that JavaAction's key can be updated by calling JavaAction.setKey().

Note that you cannot add or remove any triggers, actions, or flows from an existing flow chart. Your modifications are restricted to the properties and attributes of a flow chart and its triggers, actions, and flows. The reason for this restriction is that it is too difficult to determine how to resume execution of a running flow chart when the structure of that flow chart has changed so much that it is no longer recognizable.

Once the flow chart has been fully modified, the changes must be submitted to the engine before they take effect. If you do not submit the changes back to the engine, the flow chart will not be modified. The FlowChart object that represents the flow chart is strictly an in-memory copy. That is, changes to the in-memory object do not affect the underlying database until you submit the flow chart back to the engine by calling Engine.put().

For example, suppose a simple flow chart is built and added to the scheduler like so.

// Create a simple flow chart.
FlowChart flowChart = EngineHelper.makeFlowChart("my flow chart");

// Create a timer trigger that fires every 5 seconds.
TimerTrigger trigger = flowChart.makeTimerTrigger("my timer trigger");
trigger.setTimeExpression("+5s"); // Fire every 5 seconds.

// Create a Java action to do some work.
JavaAction action = flowChart.makeJavaAction("my java action");

action.setListener(MyListener.class);

// Set up the flows in this flow chart.
trigger.addFlow(action);
action.addFlow(trigger);

// Finally, add the flow chart to the scheduler.
String flowChartName = engine.put(flowChart);
...
// Now, modify the time expression on the timer trigger
// to 30 seconds.
FlowChart modifiedFlowChart = engine.get(flowChartName);

// Update the timer trigger to 30 seconds.
TimerTrigger modifiedTrigger = (TimerTrigger) modifiedFlowChart.getAction("my timer trigger");
modifiedTrigger.setTimeExpression("+30s");

// Submit changes back to the scheduler.
engine.put(modifiedFlowChart);

Until you submit the flow chart's changes back to the scheduler, they are not saved. If your flow chart was running at the instant you called Engine.put(), the running flow chart will be interrupted and restarted from the end of the last transaction. This behavior ensures that your modifications are not lost.

Note that if you attempt to modify your flow chart from within the flow chart itself (say, from a JavaAction), then your flow chart will roll back and be re-executed automatically. This behavior leads to a never-ending loop and must be avoided. Consequently, any attempt to modify a flow chart from itself will fail, due to the flow chart modification behavior described above. However, you can successfully modify a flow chart under the following conditions:

1. A client thread modifies a flow chart through the Engine.put() method. A client thread is any thread that is not executing a flow chart.

2. A flow chart thread modifies any flow chart other than itself through the Engine.put() method. A flow chart thread is a Flux thread that is executing a flow chart.

3. (Advanced Technique: Be careful! You must know and understand what you are doing!) A flow chart thread forks a new thread that modifies that same flow chart. After forking the thread, the flow chart waits for the forked thread to return using Thread.join(). After the forked thread's modification, the flow chart's transaction will still roll back. However, the modifications made by the forked thread are permanent, unlike in technique 2 above. Be careful when using this technique. You must modify your flow chart in such a way that you do not create a permanent loop. You can create permanent loops by repeatedly forking the thread that modifies the flow chart and, subsequently, forcing re-execution of that same code when your flow chart automatically re-executes (after the flow chart's transaction is rolled back).

4. (Advanced Technique: Be careful! You must know and understand what you are doing!) A flow chart thread modifies its own flow chart object in memory without calling any Engine methods. The flow chart modifications will last only until the next transaction break, at which time they are lost.

10 Core Triggers and Actions

Flux can execute arbitrary triggers and actions. You can write your own custom triggers and actions, or you can use the triggers and actions that are bundled with Flux. The following is a detailed description of the core triggers and actions that Flux provides.

Note that all triggers implement the flux.Trigger interface, all actions implement the flux.Action interface, and flux.Trigger is a sub-interface of flux.Action. Due to this inheritance structure, when the Flux documentation refers to actions, the context sometimes implies that triggers are included as well. However, this is not always the case; it depends on the context of the documentation. It is helpful to view the Javadoc page for each action and trigger as it is presented in this portion of the documentation.

Timer Trigger

A Timer Trigger waits for a certain time or date, then fires. There are different ways that a Timer Trigger can fire. It can be scheduled to fire at an absolute time, such as 1 January 2007, or at a relative time, such as four hours from the current time. These flow charts can be run once or on a recurring basis. They can repeat forever, a fixed number of times, or until a specific ending date. To enable repeated firings, you must have a loop in your flow chart that flows back around to the Timer Trigger again.

One-Shot Timer

Also known as a one-off timer, a one-shot timer fires exactly once. It does not repeat. To schedule a one-shot timer, simply make the following call.

timerTrigger.setScheduledTriggerDate(dateToFireTimer);

After your one-shot timer fires, your flow chart can proceed into an appropriate action and execute it at the right time.

Recurring Timer

Unlike a one-shot timer, a recurring timer fires more than once. Recurring timers require two basic elements: a Time Expression and a loop in the flow chart to return control to the recurring timer.

timerTrigger.setTimeExpression("+5s");

This timer trigger uses a Time Expression of "+5s" to fire every five seconds. Once again, you must define a flow around and back to the Timer Trigger in order to allow it to fire again after an additional five seconds. A minimal sketch of such a loop appears at the end of this Timer Trigger section.

Business Interval

If provided, a Timer Trigger will use a Business Interval in conjunction with its Time Expression to calculate the next scheduled firing time. The Business Interval is used to avoid scheduling flow charts on company holidays, outside business hours, and at other special times. Business Intervals, including how to define your own, are explained in greater detail in Chapter 27 Business Intervals. For an example demonstrating business intervals, refer to the directory examples/software_developers/business_interval under the Flux installation directory.
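To make the recurring pattern concrete, here is a minimal sketch assembled only from calls already shown in this manual (EngineHelper.makeFlowChart, makeTimerTrigger, setTimeExpression, makeConsoleAction, addFlow, and Engine.put). The engine variable is assumed to be an existing Engine instance.

// Build a flow chart whose timer fires every five seconds.
FlowChart flowChart = EngineHelper.makeFlowChart("recurring timer example");

TimerTrigger timerTrigger = flowChart.makeTimerTrigger("my timer trigger");
timerTrigger.setTimeExpression("+5s");

ConsoleAction consoleAction = flowChart.makeConsoleAction("my console action");
consoleAction.println("The timer fired.");

// The loop back to the Timer Trigger allows it to fire repeatedly.
timerTrigger.addFlow(consoleAction);
consoleAction.addFlow(timerTrigger);

// Submit the flow chart to an existing engine instance.
engine.put(flowChart);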

10.2 Delay Trigger

The Delay Trigger is used when you want to wait for a certain amount of time, unlike a Timer Trigger, which fires on a set schedule. The Delay Trigger is particularly useful in default error handlers, where you need to pause for a time before retrying a particular action.

The following attributes are available for configuration on a Delay Trigger:

The count describes how many times the delay trigger will fire before it expires.
The delay time expression specifies the amount of time the delay trigger will wait.
The expiration flow indicates the flow the delay trigger will take when it expires (when the count reaches 0).
The expiration time expression specifies the time at which the delay trigger will expire.

Java Action

A Java Action executes code from one of your Java classes. It is simple to use. First, set the class name of your flow chart listener.

javaAction.setListener(MyFlowChartListener.class);

This flow chart listener class must implement the interface flux.ActionListener. When this Java Action fires, the class MyFlowChartListener will be created and its actionFired() method will be called. You can then perform actions that are appropriate for your flow chart. Finally, you can set an optional flow chart key.

javaAction.setKey(new MyVariable("My Data"));

A flow chart key is simply a persistent variable. By providing a unique flow chart key, your Java Action can tailor its behavior according to the data in the key. The flow chart key can be accessed through the KeyFlowContext object passed into the actionFired() method of the Java Action's listener class, using the getKey() method. The code below demonstrates this:

// Make the action.
JavaAction jAction = flowChart.makeJavaAction();

// Set the listener.
jAction.setListener(MyListener.class);

// Set the key.
jAction.setKey(new MyKey("data"));

// Define the MyKey class used for the key.
public static class MyKey {

    private String data;

    public MyKey(String data) {

        this.data = data;
    }
}

public static class MyListener implements ActionListener {
    public Object actionFired(KeyFlowContext keyFlow) {
        System.out.println(keyFlow.getKey());
        return null;
    }
}

The KeyFlowContext also contains the flow context variable manager for your flow chart. Therefore, you can use previous actions' and triggers' results, as well as modify what will be placed into the flow context after the Java Action fires. The actionFired() method also returns an Object, which can be referenced as the action's RESULT in the flow context. Note that the return value must be an Object, so primitive data types must be wrapped in their wrapper classes.

Dynamic Java Action

The code below shows correct usage of the Dynamic Java Action:

DynamicJavaAction dja = flowChart.makeDynamicJavaAction();
dja.setListener(MyClass.class);
dja.setListenerString("myMethod");

public static class MyClass {
    public static void myMethod() {
        System.out.println("Do something");
    }
}

10.5 RMI Action

RMI Action is like Java Action except that it contacts a remote object instead of operating on a local one. The remote object must be an RMI object that is bound to an RMI registry. The RMI object must implement the interface flux.RemoteActionListener.

To specify the location of the remote object, you need to set a host, bind name, and port. The scheduler calls this remote object when the RMI Action executes. The host is the name of the computer where the remote object is located. The bind name is the name bound to the RMI registry on the host. Finally, the port is the TCP/IP port number that the RMI registry is bound to. Normally, the port need not be set, since the default value is appropriate in most cases. For example:

rmiAction.setHost("somehost.mycompany.com");
rmiAction.setBindName("myremoteobject");
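Before the RMI Action can call your remote object, that object must be bound in an RMI registry under the bind name configured above. The following sketch shows the standard java.rmi registry setup; myRemoteObject is assumed to be an instance of a class of yours that implements flux.RemoteActionListener (and satisfies the usual java.rmi.Remote requirements), which is not shown here.

// Start (or locate) an RMI registry on the default port, 1099.
java.rmi.registry.Registry registry =
    java.rmi.registry.LocateRegistry.createRegistry(
        java.rmi.registry.Registry.REGISTRY_PORT);

// myRemoteObject is assumed to implement flux.RemoteActionListener.
registry.rebind("myremoteobject", myRemoteObject);

// The RMI Action's bind name must match the name used above.
rmiAction.setBindName("myremoteobject");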

10.6 Dynamic RMI Action

Dynamic RMI Action is like Dynamic Java Action and RMI Action. Like Dynamic Java Action, your remote object does not need to implement any specific Flux interfaces. Instead, you specify the signature of the method that should receive a callback. Like RMI Action, you specify the location of the remote object. The scheduler calls this remote object when the Dynamic RMI Action executes.

Process Action

Also known as Command Line Action, Process Action executes command line programs, batch files, scripts, and, in general, any native program. You can run command line programs as processes in the foreground or asynchronously in the background. You can specify which files should be redirected to the process's standard-in (stdin), standard-out (stdout), and standard-error (stderr) streams. You can set the process's working directory and environment. Finally, you can also indicate whether the process should be terminated if the flow chart is interrupted.

When working with Process Actions, Flux does not search for executables in the specified working directory; it searches for executables in the path and then sets the working directory for the invoked process. This way the invoked process can refer to files or other resources relative to the working directory. When you set the invoked process's command arguments, a space between arguments indicates that two different arguments are being passed in.

As an example of executing a process, suppose you set a Process Action's command property to myprocess.sh and set the home directory to /home/myprocess, assuming myprocess.sh resides in /home/myprocess. Flux will not be able to locate the executable myprocess.sh in the directory /home/myprocess. To overcome this problem, you can either specify the absolute path to the executable in the Process Action's command property (for example, /home/myprocess/myprocess.sh) or execute your process by invoking the command shell (for example, sh myprocess.sh). In the latter case, the command shell will be able to locate myprocess.sh in the working directory. The same applies to the command prompt on Windows, cmd /c.

The Process Action also supports arguments on its command line. For example, if a Process Action needs to individually process files returned from a Flux file trigger, a command line argument in a Process Action can be used to specify that file. Furthermore, you can use the Process Action's "Command Properties" option with runtime data to specify a filename that the Process Action can use. All you need to do is pass in the filename, specified by your File Trigger, through the flow context and into your Process Action's "Command Properties" field. Once the information is passed in, you must use the "cmd /c ${FILENAME}" syntax in your Process Action's "Command" field, where "FILENAME" is the variable you set in your flow context. See the End-Users examples in your flux\examples\end_users directory for a detailed description of how to implement the behavior described above. A minimal code sketch follows below.

For an example using a Process Action in code, see the Process example located within the examples/software_developers/process/ directory under the Flux installation directory. For a broader and more complex example using a Process Action, see the IndividualProcessing example located within the examples/software_developers/individual_processing/ directory under your Flux installation directory.
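As a minimal sketch of the shell-invocation approach described above: the ProcessAction factory call follows the pattern used elsewhere in this chapter, but the setter name setCommand() is an assumption based on the "command" property discussed here, so verify it against the ProcessAction Javadoc before use.

ProcessAction processAction = flowChart.makeProcessAction();

// Assumed setter for the "command" property. Invoking the shell
// ("sh myprocess.sh") lets the shell resolve the script relative to
// the process's working directory, as explained above.
processAction.setCommand("sh myprocess.sh");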

Error Condition Syntax

Optionally, you can specify when a process's exit code should indicate that an error has occurred. Traditionally, a non-zero exit code from a process represents an error. However, that cannot always be assumed. Using a syntax similar to an SQL "where" clause, you can specify when a process's exit code indicates that an error has occurred. If a process fails, an exception is thrown, thereby forcing the execution of the flow chart to enter Flux's standard error handling mechanisms.

By default, the error condition is undefined, which means that a process is never deemed to have failed and the Process Action does not throw an exception after a process executes. However, if an error condition has been set, the error condition is evaluated against the process's result after the process returns. If the error condition is satisfied (found to be true), an exception is thrown. Otherwise, no exception is thrown.

The remainder of this section describes the syntax for error conditions. The special keyword "RESULT.result" represents the exit code from the process after it finished executing. Compare "RESULT.result" to integers in order to determine whether the process has finished with an error. You can use non-negative numbers. You can use some of the usual SQL relational and boolean operators:

<, <=, =, <>, >=, >, AND, OR, NOT

Some examples include:

RESULT.result <> 0
RESULT.result > 5
RESULT.result > 10 AND RESULT.result < 100
RESULT.result = 1 OR RESULT.result = 2
NOT RESULT.result = 1 OR RESULT.result = 2
NOT RESULT.result = 0

You may not use parentheses. These operators are evaluated from right to left. For example, in the error condition "NOT RESULT.result = 1 OR RESULT.result = 2", "RESULT.result = 1" is evaluated first, then "RESULT.result = 2", then the "OR" operator, and finally the "NOT" operator.

Destroy On Signal

Process Actions can be aborted upon receiving a Flux signal. The Process Action's "Destroy On Signal" action property controls this functionality. If the "Destroy On Signal" property is set to true, a native process being run by a Process Action will be aborted upon the reception of any Flux signal. The mechanism of this feature is very similar to the Destroy On Timeout feature, also available on Process Actions.

For example, suppose Process Action A in a flow chart runs longer than Process Action B in a parallel flow within the same flow chart. A signal can be raised in the outgoing flow of Process Action B that terminates the native process being run by Process Action A. After Process Action A's native process is terminated, Process Action A will conditionally branch into a predefined signal flow where any further action may be taken. Figure 10-1 below shows the flow chart layout for this functionality.

[Figure 10-1: Destroy on Signal]

For example:

ProcessAction processActionA = flowChart.makeProcessAction();
processActionA.setDestroyOnSignal(true);

ProcessAction processActionB = flowChart.makeProcessAction();
processActionB.setDestroyOnSignal(true);

ConsoleAction nextAction = flowChart.makeConsoleAction();

Flow flow = processActionB.addFlow(nextAction);

Set signals = new HashSet();
signals.add("a signal");
flow.setSignalsToRaise(signals);

Process Actions and Agents

When agents are enabled, a Process Action passes off its work to any one of the running free agents. To specify which agent the job should run on, the Agent Pool action property must be specified. If no value is found for this property, the process runs on the engine's host. If a value is defined for that property, the process will run on the first free agent running in the specified pool. To run the process on any free agent in any pool, use the value * for the Agent Pool action property.

Running Process Action Commands as a Different User

By default, Process Actions in Flux execute commands as the user who started the Engine or Agent that runs the Process Action. Often, however, a Process Action will need to execute a command as a different user. In those cases, the Process Action's commands must be modified to run using the appropriate access rights of the desired user.

On Windows systems, the "runas" command allows individual commands to be executed using different permissions. For example, to execute the program notepad.exe as the account user in the domain domain, the Process Action's command property would appear as follows:

runas /savecred /user:domain\user notepad.exe

67 The "/user" option is used to determine the user account the action will be executed as. Usernames are entered in the form or DOMAIN\USER. As a security measure, Windows requires that the user's password be entered each time the "runas" command is executed. Using the "\savecred" command allows Windows to save the user's credentials, ensuring that the user will only have to enter the password the first time the Process Action is executed. On Unix systems, the "sudo" command also allows individual commands to be executed using different user accounts. Before "sudo" can be used in a Process Action, however, it must be configured with the necessary restrictions. Edit the user privelege section of the file /etc/sudoers to provide appropriate permissions for the user who will execute the Process Action. For example: flux ALL=(batch) NOPASSWD:ALL Adding this line to the sudoers file would allow the user flux to run any commands as the user batch without being prompted for a password. Once the sudoers file has been edited, any Process Action that is run under the flux user will be able to execute commands as the batch user with the "sudo" command. Including the following in a Process Action's command property would execute the /opt/batch/report script as the user batch: sudo -u batch /opt/batch/report It is also possible to edit the sudoers file to only provide permission for certain commands under a certain user account. For example, including the following line in the sudoers file would give the user flux permission to only execute the /opt/batch/report script as the user batch: flux ALL=(batch) NOPASSWD:/opt/batch/report Using the techniques described above, Process Actions can be set up to run commands as different users while still mainting the integrity of existing security restrictions For Each Number Action The For Each Number Action steps through an inclusive range of numbers. The number to start from is indicated by the start of range action property, and the end number by end of range. For example, given the range of numbers [0..10], the first time For Each Number Action executes, the number 0 is passed to the job s flow context for use by successive triggers and actions. If control loops back around to the For Each Number Action again, the second time this action executes, it passed the number 1 to the job s flow context. This stepping continues with 2, 3, 4, etc until the number 10 is included. The stepping itself can be changed. Stepping is indicated by the stepping property of a ForEachNumberAction object. The stepping defaults to 1, but if it is changed, for example, to 0.5, the generated number sequence would be 0, 0.5, 1, 1.5, etc. The range of numbers can progress down as well. For example, given the range [10..0], the first number in the stepping is 10, followed by 9, etc. In this case, the stepping must be set to -1. Otherwise, only the number 10 would be included in the sequence before the sequence was exhausted. Flux Manual 66

10.9 Console Action

A Console Action is a simple action that prints data to either standard out (stdout) or standard error (stderr). For example, to print the value of some data to stdout, build the following Console Action and add it to your flow chart as you are constructing your job's flow chart.

ConsoleAction consoleAction = flowChart.makeConsoleAction("My Console Action");

int myIntegerData = 42;
double myDoubleData = 77.9;
boolean myBooleanData = true;
String myStringData = "Hello World";

consoleAction.println(myIntegerData);
consoleAction.println(myDoubleData);
consoleAction.println(myBooleanData);
consoleAction.println(myStringData);

When this Console Action executes as part of your flow chart, you will see the following output printed to stdout.

42
77.9
true
Hello World

If you want to print your data to stderr, instruct the Console Action like so.

consoleAction.setStderr();

Manual Trigger

A Manual Trigger simply waits to be expedited by a call to Engine.expedite(). Once expedited, it fires. Manual triggers are useful for flow charts that must run only when specifically instructed by an unusual event or a person. The following code demonstrates the use of a Manual Trigger:

ManualTrigger manual = flowChart.makeManualTrigger();
ConsoleAction console = flowChart.makeConsoleAction();
manual.addFlow(console);

engine.put(flowChart);
engine.expedite();

You can also expedite flow charts using the Flux Operations Console or the Flux command line interface.

Null Action

The Null Action does nothing. It is a no-op. The Null Action is useful for providing transaction breaks. By default, Null Actions are not transactional, while all the other core actions and triggers are transactional. By inserting a Null Action in your flow chart, you can create a "transaction break". At transaction breaks, Flux commits the current

database transaction. Complete details on how Flux handles transactions, including transaction breaks, are available later in this manual.

Because it is an implementation of the Action class, the Null Action also inherits all the latent properties that an action contains. These include the use of prescripts and postscripts and join points. It is considered good practice to use the prescript or postscript on a Null Action to do light processing that does not warrant a full Java Action. The following code demonstrates using a Null Action's prescript.

NullAction nullAction = flowChart.makeNullAction();

String prescript =
    "boolean lastResult = (boolean) flowContext.get(\"RESULT.result\");\n" +
    "lastResult = lastResult == true ? false : true;\n" +
    "flowContext.put(new Boolean(lastResult), \"RESULT.result\")";

nullAction.setPrescript(prescript);

Decision

The Decision node is a convenience action. It is an explicit decision point in a flow chart. The decision point provides an obvious place for performing conditional branches. Of course, it is not necessary to create such an explicit decision point. Any action in a flow chart can contain conditional branches. However, by using a Decision node, you can highlight for software developers and human readers alike the points in a flow chart that contain conditional branches.

Error Action

The Error Action simply throws an exception, which forces error handling to occur. It is used to explicitly mark a situation where an error has occurred but an exception would not be thrown automatically. For example, suppose you are using Flux to process all the files in a directory, and at least five files are expected to be present each time the flow chart runs. The flow chart could use a combination of a conditional flow and an Error Action to force error handling whenever fewer than five files are found in the directory.

11 File Triggers and Actions

Flux provides a suite of triggers and actions for file distribution and file transfer. You can initiate jobs when files change on the local file system or on remote FTP servers. You can schedule the copying and moving of files between the local file system and remote FTP servers. Large groups of files used in these file triggers and actions can be specified succinctly. A variety of transformations can be applied to files as they are copied and moved.

File Criteria

Most file triggers and actions accept a FileCriteria object. File Criteria objects specify what files will participate in file triggers and actions. They also specify the hosts where these files reside. These hosts can be the local host or any remote host.

File Criteria objects are created from the FileFactory object. File Factory objects are made from the Factory object.

Include

To add a file to the group of files that will participate in a file trigger or action, call FileCriteria.include(). This method accepts a file pattern of files to include. For example:

fileCriteria.include("myfile.txt");

The above method call adds the file myfile.txt to the group of files that will participate in a file trigger or action. You can call FileCriteria.include() as many times as you like.

To include a set of files based on a wildcard pattern, use the "*" character. For example:

fileCriteria.include("*.txt");

The above method call adds all files with the file extension ".txt" to the group of files collected in the File Criteria. Similarly, all files in a directory can be included by using the file pattern "*". For example:

fileCriteria.include("*");

Wildcards can be used anywhere in the file pattern. For example, to include all files that begin with the letter "a" and end with the letter "z", use the file pattern "a*z":

fileCriteria.include("a*z");

Another wildcard pattern is the "?" symbol. It matches exactly one character. For example:

fileCriteria.include("?.txt");

The above method call adds all text files that have exactly one character in front of the ".txt" suffix to the group of files that will participate in a file trigger or action.

Sometimes, you want to include files that match a file pattern not only in the current directory but also in all sub-directories. For example, to find all text files in the current directory and in all sub-directories, use the special file pattern "**/*.txt". For example:

fileCriteria.include("**/*.txt");

The special symbol "**" indicates that all files in the current directory and all sub-directories are searched for potential matching files. In this case, we are interested in text files only. Other files, such as "*.java" files, are ignored. Forward slashes ("/") and back slashes ("\") can be used interchangeably on both Windows and Unix systems. The following table, Table 11-1, summarizes the meaning of these file patterns when used with File Criteria.

In some scenarios, you may copy or move files into destinations that do not yet exist. The scheduler must therefore create the destination. However, it is not always obvious whether the destination should be a file or a directory. To eliminate this ambiguity, a trailing slash can be appended to a file pattern, which indicates that the file pattern must be treated as a directory only if the file pattern does not already exist.

For example, suppose you are copying a file "a.txt" to a destination "b". If "b" already exists, "a.txt" is copied over the pre-existing "b" file. However, if "b" does not already exist, it is assumed to be a file, not a directory, because it does not have a trailing slash.

On the other hand, suppose you are copying a file "a.txt" to a destination "b/". Note the trailing slash. If "b" already exists and is a file, "a.txt" is copied over the pre-existing "b" file. If "b" already exists and is a directory, "a.txt" is copied into the pre-existing "b" directory. Finally, if "b" does not already exist, it is assumed to be a directory, not a file, because it has a trailing slash. In this case, the directory "b" is created and the file "a.txt" is copied into it.

Table 11-1: Available File Wildcards

*
    Include all file names in the current directory. Directory names are not included. File and directory names in any sub-directories are not included.

**
    Include all file and directory names off the current directory and in all sub-directories indirectly reachable from the current directory.

*.txt
    Includes all file names ending with the literal ".txt" in the current directory. No files in any sub-directories are included.

?.txt
    Includes all file names ending with the literal ".txt" in the current directory that have exactly one character in front of the ".txt" suffix. No files in any sub-directories are included.

*/*.txt
    Includes all file names ending with the literal ".txt" in each sub-directory directly beneath the current directory. No deeper sub-directories are included. Does not include any ".txt" files in the current directory.

**/*.txt
    Includes all file names ending with the literal ".txt" in the current directory and in all sub-directories.

myfile
    Includes the literal name myfile. If the literal name myfile already exists as a file or directory, myfile refers to that file or directory. If myfile does not exist, myfile refers to a file, not a directory.

myfile/
    Note the trailing slash. Includes the literal name myfile. If the literal name myfile already exists as a file or directory, myfile refers to that file or directory. If myfile does not exist, myfile refers to a directory, not a file.

Whenever a trailing slash is used in the context of searching for files, "**" is silently appended to the symbol. For example, if "mydirectory/" is specified, it is silently translated to "mydirectory/**", which means that all files indirectly reachable from "mydirectory" are included.

Exclude

To exclude files from the group of files that will participate in a file trigger or action, call FileCriteria.exclude(). This method accepts a file pattern of files to exclude. This file pattern follows the same rules as defined in the section above. All the included file patterns are processed first, followed by the excluded file patterns.

Suppose that you have three text files in your current directory: "a.txt", "b.txt", and "c.txt". Now suppose you want to include all text files except "b.txt". The following example demonstrates how this scenario can be accomplished.

fileCriteria.include("*.txt");
fileCriteria.exclude("b.txt");

The above method calls include all text files and exclude "b.txt". The resulting File Criteria includes the files "a.txt" and "c.txt" but not "b.txt".

Regex Filter

You can also exclude files by specifying a regular expression filter. Regular expressions are powerful, compact expressions that are used to match strings against patterns. A Regex Filter is simply a regular expression that excludes matching files from the group of files that will participate in a file trigger or action. Further information about regular expressions can be found online. For example:

fileCriteria.include("*.txt");
fileCriteria.addRegexFilter(".*a\\.txt");

The above method calls first include all text files in the current directory. Then any file ending with "a.txt" is filtered out, using the regular expression ".*a\.txt". Note that the regular expression filter is matched against the full path name of each file. Consequently, a leading ".*" is needed in front of "a\.txt" to match against, for example, c:\temp\a.txt.

Base Directory

All relative file and directory names described in a File Criteria object are relative to a base directory. Relative files require a base directory component to completely specify their location in the file system. Absolute files, on the other hand, do not. For example, c:\temp is an absolute directory name, while mydir/myfile.txt is a relative file name. The relative file mydir/myfile.txt needs a base directory in order to completely specify its location in the file system.

By default, the base directory is the current directory. The current directory is the directory where the JVM was started. You can retrieve the name of this directory by accessing the system property user.dir. You can change the base directory to anything you like. For example:

fileCriteria.setBaseDirectory("c:\\temp");

In the above example, all relative files and directories included in the File Criteria object are relative to the c:\temp directory. However, the base directory is not used by any absolute files and directories in the File Criteria object.

Network Hosts

Ftp Hosts

The files and directories specified in a File Criteria object normally refer to files and directories on the local file system. However, you can specify that these files are located on a remote FTP server. This functionality allows you to specify files and directories on a remote FTP server that should be used by the different file triggers and actions. For example, by setting an FTP host, you can copy files to and from an FTP server. You can also have a File Trigger fire when a file on an FTP server changes its state.

To create and initialize an FTP host, first create an FtpHost object from a FileFactory object. The FtpHost object contains methods for setting the name of the FTP host, the port number, and the username and password information for logging in. (A short creation sketch appears at the end of this FTP discussion.) Finally, the FTP host is set by calling:

fileCriteria.setHost(ftpHost);

After calling this method, the File Criteria's included files and directories are relative to the default base directory on this FTP host. Furthermore, calling fileCriteria.setHost() resets the base directory to the default base directory for FTP servers, "/". To change the base directory to a different directory on the FTP server, call fileCriteria.setBaseDirectory() after calling fileCriteria.setHost(). Note that in this case, the base directory is relative to the FTP server, not the local file system. This way, files and directories on the FTP server can be included and excluded in the usual way using File Criteria objects.

If your FTP server does not display directory listings in the standard format, you can substitute your own parser to adapt non-standard FTP directory listings to the scheduler. You need to create a class with a default constructor that implements the flux.file.FtpFileListParser interface and provide that class to your FtpHost object using the setFileListParser() method.

Non-standard FTP Servers

Some FTP servers, such as special-purpose FTP servers used for EDI purposes, return non-standard file listings when responding to FTP commands. Flux supports these non-standard FTP servers by allowing the job to supply its own Java class that is responsible for parsing the non-standard file listings. Thus, Flux can monitor and manipulate files that reside on non-standard FTP servers. Use the FtpHost.setFileListParser() method to register your custom file listing parser. This class must implement the flux.file.FtpFileListParser interface.

Secure FTP Host

A Secure FTP Host is exactly like an FTP Host, except that it operates on secure FTP servers instead of standard FTP servers. Secure FTP servers encrypt the communications traffic for security purposes.

When using Secure FTP (SFTP) with the Flux Operations Console, flow charts and error messages may have problems being displayed inside of the web application due to classpath issues. To prevent this problem from occurring, either place the flux.jar file inside your invoking JVM's lib/ext directory or include flux.jar in the system classpath.

FTP Over SSL

FTP over SSL (FTPS) encompasses both FTP and SFTP functionality. FTPS provides a way to perform secure file transfers using an SSL/TLS layer. When connecting to FTPS servers using Flux, the connection may take longer than usual to initialize, depending upon which type of connection strategy the FTPS server is utilizing. The possibility of an extended connection wait is due to a series of connection strategies being tried until the proper strategy is found. Connection via the TLS layer is attempted first, the second attempt is a connection over SSL, and lastly an attempt is made to connect via implicit SSL. Each attempt is given five seconds to connect. If the connection cannot be made within that time frame, the next connection strategy is attempted.

Supported FTP Servers in Flux

Flux explicitly supports two File Transfer Protocol (FTP) servers, although other FTP servers are known to work with Flux. These servers are listed below.

Microsoft's Internet Information Services (IIS) FTP server version 5.0+
Very Secure FTP server (VSFTP) 2.0+

Although the above servers have been thoroughly tested against Flux and are fully supported, other FTP servers have been known to work correctly. If you are using any FTP server other than those listed above, note that it has not been tested and is not officially supported by Flux.

IIS 5.0 NOTE: When using IIS 5.0, a problem occurs when changing into directories with spaces in their names. A workaround is to either enable the "Use CD Commands" option in the Flux Designer under the FTP Host section within the "File Criteria" editor, or set the "issueCdCommandsMode" flag to true on a flux.file.FtpHost object in Java code.
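Pulling the FTP pieces together, the following is a rough sketch of creating an FTP host and attaching it to a File Criteria. Only fileCriteria.setHost() is taken directly from this manual; the factory and setter names (makeFileFactory, makeFtpHost, setHost, setPort, setUsername, setPassword) are assumptions patterned on the other factory methods shown here, so verify them against the FileFactory and FtpHost Javadoc before use.

// Assumed factory and setter names; check the Flux Javadoc for exact signatures.
FileFactory fileFactory = flowChart.makeFileFactory();

FtpHost ftpHost = fileFactory.makeFtpHost();
ftpHost.setHost("ftp.mycompany.com");   // name of the FTP server
ftpHost.setPort(21);                    // standard FTP port
ftpHost.setUsername("myusername");
ftpHost.setPassword("mypassword");

// Documented call: point the File Criteria at the FTP server.
fileCriteria.setHost(ftpHost);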

Universal Naming Convention

Flux also supports connecting to network devices via the Universal Naming Convention (UNC). When using a UNC host in Flux, the user may specify an optional domain, username, password, and port. The name of the server and the file being used are required fields.

Zip File Criteria

A ZipFileCriteria object is the same as a FileCriteria object except that the included and excluded files are relative to the files contained in a given Zip file. The base directory is also relative to the root of the Zip file itself. After creating a ZipFileCriteria object, you must specify the actual name of the Zip file by calling ZipFileCriteria.setZipFilename(). For example, suppose a Zip file a.zip contains the files /myfile.txt and /mydir/myotherfile.txt. You can specify these two text files in a Zip File Criteria as follows.

zipFileCriteria.setZipFilename("a.zip");
zipFileCriteria.include("**/*.txt");

These two method calls include the text files /myfile.txt and /mydir/myotherfile.txt from inside the Zip file for use in a file trigger or action. Zip File Criteria objects can be used for creating Zip files, copying files into existing Zip files, and extracting files from Zip files.

File Triggers

File triggers watch for changes on file systems. Using file triggers, you can create jobs that fire when files are created, modified, or deleted, either on the local file system or on remote FTP servers. File triggers use File Criteria, explained in detail above, to specify which files to watch and the hosts to watch them on. Chapter 50 Trigger and Action Results gives a detailed description of the returned RESULT of triggers.

File Exist Trigger

The File Exist Trigger fires when a group of files specified by a File Criteria exists on the local file system or an FTP server. The File Exist Trigger propagates a few results into the flow context, as noted in Chapter 50 Trigger and Action Results on page 206. For an example of the File Exist Trigger, see the FileExist example located in the examples/software_developers/file_exist directory under your Flux installation directory.

File Modified Trigger

The File Modified Trigger fires when a group of files specified by a File Criteria is modified, according to their timestamps.

File Not Exist Trigger

The File Not Exist Trigger fires when a group of files specified by a File Criteria cannot be found.
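As a rough end-to-end sketch, the following outlines a File Exist Trigger that watches a directory for text files. Only include(), setBaseDirectory(), and setPollingDelay() appear in this manual; the factory and setter names makeFileFactory, makeFileCriteria, makeFileExistTrigger, and setFileCriteria are assumptions patterned on the other factory methods shown here and must be verified against the Flux Javadoc.

// Assumed factory calls; verify the exact names against the Flux Javadoc.
FileFactory fileFactory = flowChart.makeFileFactory();

FileCriteria criteria = fileFactory.makeFileCriteria();
criteria.setBaseDirectory("/data/incoming");
criteria.include("*.txt");

FileExistTrigger fileExistTrigger =
    fileFactory.makeFileExistTrigger("watch for text files"); // assumed factory method
fileExistTrigger.setFileCriteria(criteria);  // assumed setter
fileExistTrigger.setPollingDelay("+30s");    // polling delay, described in the next section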

11.4 File Trigger Properties

Polling Delay

The polling delay specifies a time delay that occurs between successive polls of the local file system or an FTP server. For example, after a File Trigger finishes polling a file system, a polling delay of one minute ensures that at least one minute will pass before the next attempt to poll that file system for changes. Example:

fileExistTrigger.setPollingDelay("+3s");

A polling delay of 3 seconds means that the File Trigger will search through the specified directory every 3 seconds.

Stable Period

The stable period defines a time period during which a file must remain unmodified before a File Trigger will fire. This property is important for cases where an incoming file arrives at a slow pace and you do not want to fire the File Trigger prematurely. Note that this property does not apply to the File Not Exist Trigger, which fires when a file does not exist. Example:

fileExistTrigger.setStablePeriod("+1s");

A stable period of 1 second means that a file must remain unmodified for one second before the File Trigger will fire.

Active Window

An Active Window defines time periods in which you want your File Triggers to monitor files. For example, you can watch for incoming files between 3 am and 7 am only. This means that the File Trigger will remain inactive from 7 am until 3 am on the following morning. The Active Window property is very useful when you want to define blackout periods on your File Triggers.

For example, suppose you want your File Exist Trigger to detect files only between 3:00 am and 7:00 am, Monday through Friday. The following code snippet will do this.

// The start date for your Active Window is 3 am.
Date startDate = helper.applyTimeExpression("^d@3h", null, null);

BusinessInterval bi = helper.makeBusinessInterval("4 hour interval");

// The Active Window is defined for a duration of 4 hours and
// repeats at 3 am between Monday and Friday.
bi.include("search Time", startDate, "+4H", " * * mon-fri");

// Set the Active Window on your File Trigger.
fileTrigger.setActiveWindow(bi);

Effect of Active Window on Polling Delay

Active Windows can influence the polling frequency of your File Triggers.

For example, as shown in Figure 11-1 below, say you have an Active Window set on a File Trigger that makes it poll for files between 10 am and 3 pm. If the polling delay on your File Trigger is set to poll for files every 6 hours, then your File Trigger will stop polling after 6 am, as 12 pm falls outside the Active Window time period. After the File Trigger becomes active again at 3 pm, your polling frequency of 6 hours will make your File Trigger resume polling for files at 9 pm.

[Figure 11-1: Polling Delay Restarts in an Active Window]

Also, if your Active Window is not large enough to accommodate your File Trigger's polling delay, it may happen that your File Trigger never polls for files, because all the polling times fall outside the Active Window. As shown in Figure 11-2, the polling delay is 1 hour 30 minutes and happens to always fall outside the Active Window. The File Trigger in this case will never poll for files.

[Figure 11-2: Polling Delay is Larger than Active Window]

For more information on using the Active Window feature, refer to the active_window example located within the examples/software_developers/active_window directory under your Flux installation directory.

Sort Order

Data retrieved from file triggers can optionally be sorted by a SortOrder object of the user's choosing. By default, the results are sorted in alphabetical order, but this may not be optimal in all cases.

File Trigger Results

File trigger results are the returned result of a file trigger. File trigger results display which files matched the given file criteria and provide different ways of viewing those results. File trigger result options include:

url_string_matches: URL string matches returns a list of java.lang.String objects that represent each matching file's URL.

url_object_matches: URL object matches returns a list of java.net.URL objects representing each file matching the given includes.

filename_matches: Filename matches returns a list of java.lang.String objects that represent the name of each file found, without path information.

fileinfo_matches: Fileinfo matches returns a list of flux.file.FileInfo objects. These objects give a fuller description of the file, such as the file's name, parent, size, date last modified, whether the file is readable/writable, and the URL as a java.net.URL object.

All results mentioned above may be accessed from the flux.file.FileTrigger.FileTriggerResult class after a file trigger has finished execution (see the sketch below).

File Actions

File actions can be used to perform standard operations on files such as copy, move, delete, rename, and create. File actions use File Criteria, explained in detail above, to specify which files to operate on and on which hosts.

File Copy Action

The File Copy Action can be used to copy a group of files specified by a File Criteria from a source to a target. Files can be copied in a network-transparent way between local file systems and FTP servers. This action can use a Renamer to rename files as they are copied from sources to destinations as defined in the File Criteria.

Renamer

Renamers include GlobRenamer and RegexRenamer. GlobRenamer is used to rename files en masse. For example, to rename all "*.java" files to "*.txt", use the following.

fileFactory.makeGlobRenamer("*.java", "*.txt");

NOTE: Glob renamers use case-sensitive matching. If case-insensitive matching is required, you will need to create your own custom renamer or use RegexRenamer.

RegexRenamer is used to rename files en masse using the power of regular expressions. For example, to change all files ending with abc.txt to end with xyz.txt, use the following.

fileFactory.makeRegexRenamer("(.*)abc\\.txt", "\\1xyz.txt");

Preserve Directory Structure

The PreserveDirectoryStructure flag indicates whether the directory structure that leads to a source file or directory is maintained when it is copied or moved to its targets. For example, if you copy c:\temp\foo.txt to d:\ and the Preserve Directory Structure flag is set, the file will be copied to d:\temp\foo.txt; if this flag is false, the file will be copied to d:\foo.txt.

For more information on the File Copy action, see the FileCopy example located in the examples/software_developers/file_copy directory under the Flux installation directory.
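As referenced above, a file trigger's results can be read back out of the flow context by a later action. The following minimal sketch mirrors the FileActionResult access pattern shown in the File Action Results section below; the direct public-field access to filename_matches is an assumption based on that pattern, so confirm it against the FileTrigger.FileTriggerResult Javadoc.

// Inside a later action, after the file trigger has fired and no other
// result-producing step has run in between.
FileTrigger.FileTriggerResult triggerResult =
    (FileTrigger.FileTriggerResult) flowContext.getLastResult();

// Assumed public-field access, mirroring FileActionResult below.
java.util.List matchedNames = triggerResult.filename_matches;
System.out.println("Matched files: " + matchedNames);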

File Move Action

The File Move Action can be used to move a group of files specified by a File Criteria from a source to a target. This action is used in the same manner as the File Copy Action described above and can also use a renamer.

File Create Action

The File Create Action creates a group of files specified by a File Criteria. By default, existing files that match the given criteria will not be overwritten. If you wish to replace existing files with an empty file, simply enable the overwrite functionality. The following code enables the overwrite feature:

FileCreateAction creator = laboratory.makeFileCreateAction("create");
creator.setOverwrite(true);

File Delete Action

The File Delete Action deletes a group of files specified by a File Criteria. When a directory is deleted, all child files and directories are deleted too.

File Rename Action

The File Rename Action is identical in all respects to the File Move Action. This action is provided for cases where it is not obvious that moving and renaming a file are the same operation.

For Each Delimited Text File Action

The For Each Delimited Text File Action operates on delimited text files. These files contain rows and columns of textual data. The first time this action is executed, it retrieves the first row, or line, of data and passes that data through to the job's flow context. These data are then available for use by successive triggers and actions in the flow chart. If the flow chart loops back around to the same action, the second time it is executed, this action retrieves the second line of data and passes it through to the flow context for use by other triggers and actions. If this action has exhausted all the rows in the current file, it processes the next file in the File Criteria. This looping behavior, where the action steps through a delimited text file one row at a time, continues until the group of files is exhausted.

For an example using the For Each Delimited Text File action, refer to the ForEachDelimited example located in the examples/software_developers/for_each_delimited directory under the Flux installation directory.

File Transform Action

The File Transform Action changes the format of a group of files specified by a File Criteria and a specific Transform. Converting a DOS file to UNIX format and vice versa are examples of file transformations.

Supported transforms include UnixToDosTransform and DosToUnixTransform. The UnixToDosTransform converts files in the UNIX format into DOS files. The DosToUnixTransform converts files in the DOS format into UNIX files. The following code shows how to create and set a File Transform action:

TransformFactory transformFactory = flowChart.makeTransformFactory();

FileTransformAction transformer =
    transformFactory.makeFileTransformAction("transformer");
transformer.setTransform(transformFactory.makeDosToUnixTransform());

By setting the transform property on the File Transform action to the DOS to UNIX transform, the action will transform any file that matches its criteria to UNIX formatting.

For Each Collection Element Action

Although not directly related to file actions, the For Each Collection Element Action can be used to operate on a group of files specified by a File Exist Trigger. Activating a File Exist Trigger prior to the For Each Collection Element Action's execution provides the ability to pass the matched files from the File Exist Trigger into the For Each Collection Element Action's collection via the flow context. The first time this action is executed, it retrieves the first file name matched in the File Exist Trigger and passes that file name through to the flow chart's flow context. Successive triggers and actions in the flow chart can then access the file name using the For Each Collection Element Action's loop index. Observe the example below for the setup and creation of a For Each Collection Element Action.

// Create a new For Each Collection Element Action.
ForEachCollectionElementAction fec = flowChart.makeForEachCollectionElementAction(
    "For Each Collection Element");

// Set the loop index of the For Each Collection Element Action
// to "index". The loop index can then be used throughout the flow
// chart to access its contents.
fec.setLoopIndex("index");

// The collection can be set manually, as in the following
// code, or through the runtime data map. Note that the code
// here does not define a collection - you will need to define
// one in your own code.
fec.setCollection(collection);

If the flow chart loops back around to the same action, the second time it is executed, this action retrieves the second file name and passes it through to the flow context for use by other triggers and actions. This looping behavior continues until the group of files specified by the File Exist Trigger is exhausted. A short sketch showing how to complete such a loop appears below.

For an example using the For Each Collection action, refer to the ForEachCollection example located in the examples/software_developers/for_each_collection directory under the Flux installation directory.
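To make the looping behavior concrete, the following minimal sketch wires the action created above into a loop, using a plain Console Action as the per-element processing step. How each element is actually consumed (for example, through the runtime data map using the loop index "index") depends on your flow chart and is not shown here.

// An action that runs once per element in the collection.
ConsoleAction process = flowChart.makeConsoleAction("process element");
process.println("Processing the next collection element.");

// Flow into the processing action, then loop back so the
// For Each Collection Element Action advances to the next element.
fec.addFlow(process);
process.addFlow(fec);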

File Action Results

After a File Action completes execution, it will return a result containing listings, in various formats, of the filenames and URL objects that the File Action acted upon successfully. To retrieve a File Action's results, use the following syntax:

FileAction.FileActionResult result =
    (FileAction.FileActionResult) flowContext.getLastResult();

From this result, listings of successful URL strings, URL objects, or filenames representing the files acted upon may be retrieved. The syntax for retrieving these listings from the result is as follows:

Map of String to String (source to destination) representations of URLs of files that were successfully acted upon:

Map urlStrings = result.url_strings;

Map of java.net.URL to java.net.URL (source to destination) URLs of files that were successfully acted upon:

Map urlObjects = result.url_objects;

Map of String to String (source to destination) containing the absolute paths of the files that were successfully acted upon:

Map filenames = result.filenames;

If you are unsure whether or not an action may return a result, refer to Chapter 50 Trigger and Action Results on page 206. The table in that section provides a brief description, the Java type, and the access syntax for all of Flux's trigger and action results.

12 Custom Triggers and Actions

Flux provides a backbone for creating suites of custom triggers and actions that can be easily integrated into your flow charts. This section describes how to create custom actions and triggers, how to use them in your flow charts, and how to view them in the standalone and Web-based Designer.

Adapter Factories

When you create a custom suite of triggers and actions, you make them available for inclusion into jobs using adapter factories. An adapter factory is a factory that makes new instances of your custom triggers and actions. To create an adapter factory, create

a class that implements flux.dev.AdapterFactory. Your custom factory must implement the init(FlowChartImpl) method of AdapterFactory, as in the following code example:

import flux.dev.AdapterFactory;
import fluximpl.FlowChartImpl;

public class MyCustomFactory implements AdapterFactory {

    private FlowChartImpl flowChart;

    public void init(FlowChartImpl flowChart) {
        this.flowChart = flowChart;
    }
}

The flow chart variable must be saved in a field, as it is used to attach your custom triggers and actions to a specific flow chart when they are created.

Create additional methods that make new instances of your custom triggers and actions. These methods must take a String argument specifying the name of the trigger or action. When you instantiate your custom trigger or action, you must attach it to the flow chart that initialized the adapter factory, along with the name argument that was passed in; for example:

public MyCustomAction makeCustomAction(String name) {
    return new MyCustomActionImpl(flowChart, name);
}

Once you have created your adapter factory, you must make it available to the scheduler. Create a file called factories.properties. In this file, list your adapter factory.

MyFactoryShortName=examples.custom_action.FileFactory

Note that the name of your adapter factory ("MyFactoryShortName" in the previous example) must not conflict with a built-in factory name such as "file", "gui", "j2ee", "jmx", "messaging", "transform", "webservices", or "xml". The names of the built-in factories can be derived from the result of the flux.EngineHelper.getActionCatalog() method.

For a complete working example of a custom adapter factory, refer to the Custom Action example located in the examples/software_developers/custom_action directory under the Flux installation directory.

12.2 Custom Triggers

To create a custom trigger, you need to perform a few steps.

1. Create an interface that is used to initialize your trigger. Your interface must extend flux.Trigger and should contain getter and setter methods for your custom trigger's properties.

2. Create variable classes that implement java.io.Serializable for your persistent variables.

3. Create an implementation class that extends fluximpl.TriggerImpl and implements your interface.

4. In your implementation class, implement the getter and setter methods from your interface. To manipulate your persistent variables, use the get() and put() methods of the flow chart's Variable Manager, available from the flux.FlowChart.getVariableManager() method.

In addition to your own getter and setter methods, you will need to implement the following:

A constructor that calls the super(FlowChartImpl, String) method.

getHiddenVariables() - Return a Set object containing the names of your persistent variables.

verify() - Verify the integrity of the object and determine whether it is ready to be added to the engine; if not, throw an EngineException.

execute() - Check whether an event has occurred. If the event has not occurred, throw a flux.dev.NotTriggeredException from this method. You can return a variable or throw your own exception from the execute() method.

getNextPollingDate() - Return a date that specifies when you want the engine to poll your trigger. Your trigger's execute() method is called each time the trigger is polled. This polling mechanism is the simplest way to write triggers. You can shorten the polling frequency to a few seconds if you want, although in general you should make the polling frequency as long as you can.

For a complete working example of a custom trigger, refer to the Custom Synchronous Trigger example located in the examples/software_developers/custom_synchronous_trigger directory under the Flux installation directory.

12.3 Custom Actions

To create a custom action, you need to perform a few steps.

1. Create an interface that is used to initialize your action. Your interface must extend flux.Action and should contain getter and setter methods for your custom action's properties.

2. Create variable classes that implement java.io.Serializable for your persistent variables.

3. Create an implementation class that extends fluximpl.ActionImpl and implements your interface.

4. In your implementation class, implement the getter and setter methods from your interface. To manipulate your persistent variables, use the get() and put() methods of the flow chart's Variable Manager, available from the flux.FlowChart.getVariableManager() method.

In addition to your own getter and setter methods, you will need to implement the following:

A constructor that calls the super(FlowChartImpl, String) method.

getHiddenVariables() - Return a Set object containing the names of your persistent variables.

verify() - Verify the integrity of the object and determine whether it is ready to be added to the engine; if not, throw an EngineException.

execute() - Perform your custom action. The code in the execute() method is executed when the flow of your flow chart reaches the custom action. You can return a variable or throw your own exception from the execute() method.

A simple Hello World implementation is provided as an example:

public interface CustomInterface extends flux.Action {
}

import flux.EngineException;
import flux.FlowContext;
import flux.file.FileActionException;
import java.util.Set;
import fluximpl.FlowChartImpl;

public class CustomTest extends fluximpl.ActionImpl implements CustomInterface {

    public CustomTest(FlowChartImpl flowChart, String s) {
        super(flowChart, s);
    }

    public Set getHiddenVariableNames() {
        return null;
    }

    public void verify() throws EngineException, FileActionException {
    }

    public Object execute(FlowContext flowContext) throws Exception {
        System.out.println("Hello World!");
        // RESULT will be true after execution.
        return new Boolean(true);
    }
}

For a complete working example of a custom action, refer to the Custom Action example located in the examples/software_developers/custom_action directory under the Flux installation directory.
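To make this CustomTest action available to flow charts, an adapter factory would expose a make method for it, following the Adapter Factories pattern shown earlier in this chapter. The factory class name (CustomTestFactory) and method name (makeCustomTest) below are illustrative choices for this sketch, not names taken from the Flux distribution.

import flux.dev.AdapterFactory;
import fluximpl.FlowChartImpl;

public class CustomTestFactory implements AdapterFactory {

    private FlowChartImpl flowChart;

    public void init(FlowChartImpl flowChart) {
        this.flowChart = flowChart;
    }

    // Makes a new CustomTest action attached to this factory's flow chart.
    public CustomInterface makeCustomTest(String name) {
        return new CustomTest(flowChart, name);
    }
}

The factory would then be listed in factories.properties under a short name of your choosing, as described in the Adapter Factories section above.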

12.4 Transient Variables

When your custom triggers and actions return variables from the execute() method, those variables follow the persistence rules described in Chapter 30 Databases and Persistence. However, if you need to return a non-persistent variable, simply call flux.FlowContext.returnTransient() immediately before returning your transient variable from your execute() method. Your transient variable will not be stored in the database. It will be cleared from the flow context at the next transaction break.

12.5 Using Custom Triggers and Actions

To use your custom triggers and actions in Java code, you must include all of your custom classes and the factories.properties file in your system classpath, the Java classpath, or the current directory (user.dir). To access your adapter factory in code, call the makeFactory(String shortName) method on the Flow Chart object. This method returns an instance of your adapter factory. You can then make triggers and actions in your flow chart using the adapter factory.

In the Flux standalone or Web-based Designer

You can view and edit the properties of your custom actions and triggers in either the standalone or the Web-based Designer. A JavaBean BeanInfo class is not necessary for your custom trigger or action, but if you provide one it will supply more detailed information for display in the Designer.

You can also create a custom property editor for any property in your custom trigger or action, which overrides the default property editors built into the Designer. Your custom property editor is displayed in the Flux Designer when you edit your custom trigger or action's properties. The class name of your custom property editor is patterned after your custom property class name. For example, if your unique property type is called com.somecompany.MyCustomProperty, then the name of your property editor must be com.somecompany.MyCustomPropertyEditor. Your editor class must implement the java.beans.PropertyEditor interface.

In addition to creating custom property editors, you can completely customize the look and feel of a custom action's entire property editor panel by using a customizer. A customizer allows you to display a complete property editor, entirely of your own design, within the Flux Designer. The class name of your customizer is retrieved from the java.beans.BeanInfo.getBeanDescriptor().getCustomizerClass() method. Your customizer class must extend the javax.swing.JPanel class and implement the java.beans.PropertyEditor interface.

Custom property editor and customizer classes may need to implement the flux.gui.StopEditingListener interface to ensure that the classes are notified when the user is finished editing a property, which can happen when a custom property editor or customizer loses focus.

A JavaBean BeanInfo class can be created for custom actions by following the standard JavaBean conventions and extending SimpleBeanInfo. See the SimpleBeanInfo Javadoc for details.

86 In the getpropertydescriptors() method of your BeanInfo class, call super.getpropertydescriptors() to retrieve the base property descriptors. Return them, along with your custom action s property descriptors, from your getpropertydescriptors() method. The Designer uses these property descriptors to display and edit your custom action s properties. The Designer also uses your BeanInfo to display other information, such as your custom action s icons. Any property that contains an attribute named "transient" with its value set to Boolean.TRUE is not saved to XML. The name of the "transient" property is considered case-insensitive. To use your custom triggers and actions in the standalone Flux Designer, place your factories.properties file in the Flux installation directory and run the flux-designer 1 script. To use your custom triggers and actions in the Flux Web-based Designer, make sure your custom classes, their dependencies, and the factories.properties file are available to the engine s classpath. Because dependencies only need to be available to the engine, a single Operations Console instance can monitor multiple engines without dependency issues. For a concrete example that shows how to display and use a custom action in the Flux Designer, see the examples/software_developers/custom_action example under your Flux installation directory. Follow the steps outlined in the file flux-designer-custom-action.txt. Notes 1.../flux-designer.bat 13 J2EE Actions Flux provides a suite of actions and triggers for use with J2EE applications. These actions include calling an EJB session bean and publishing a JMS queue and topic message EJB Session Action An EJB Session Action creates and invokes an EJB session bean in an application server. Like the Java Action, it is simple to use. First, set the class name of your job listener. ejbsessionaction.setlistener("jndi name of your session bean", YourEjbSessionHomeInterface.class); This job listener class must implement the interface flux.remoteactionlistener. When this EJB Session Action fires, your EJB session bean will be created and the actionfired() method will be called. Be sure to set the JNDI name of your bean properly. You must also set the class of your bean s home interface. The class MyJobListener will be created and you can then perform actions that are appropriate for your job. Next, you can set an optional job key. Flux Manual 85

87 ejbsessionaction.setkey(new MyVariable("My Data")); A job key is simply a persistent variable, mentioned previously and detailed in Chapter 30 Databases and Persistence. By providing a unique job key, your EJB Session Action can tailor its behavior according to the data in the key. Due to EJB standard behavior and semantics, your MyVariable class must implement java.io.serializable. Finally, you can set optional data to allow Flux to contact your EJB session bean, wherever it may reside. If Flux is running inside your application server, you do not need to use any of these settings, unless you want to change the default username and password. However, if you want to invoke a session bean on an entirely different application server, you need to set some or all of the following settings. ejbsessionaction.setinitialcontextfactory("appserverfactory"); ejbsessionaction.setproviderurl("iiop://remote_server"); ejbsessionaction.setusername("myusername"); ejbsessionaction.setpassword("mypassword"); 13.2 Dynamic EJB Session Action A Dynamic EJB Session Action is exactly like an EJB Session Action except that your EJB remote interface does not need to implement any specific Flux interfaces. Instead, you specify the signature of the method that should receive a callback. Good Java programming style generally dictates the use of interfaces. However, there may be circumstances when you do not want to, or simply cannot, implement Flux-specific interfaces. You can pass in job-specific data as arguments to your job listener method EJB Entity Action This action creates or looks up an EJB entity bean. Then this action invokes a method on that bean. That entity bean must implement the flux.remoteactionlistener interface. If you do not want your entity beans to implement this Flux-specific interface, use Dynamic EJB Entity Action instead Dynamic EJB Entity Action This action creates or looks up an EJB entity bean. Then this action invokes a method on that bean. That entity bean does not need to implement any Flux-specific interface JMS Action The JMS Action publishes a JMS message to a JMS queue or topic using the Java Message Service (JMS). Using this JMS Action provides the ability to send just about any kind of JMS message from a Flux flow chart. The JMS Action can send various JMS messages, listed below. JMS Action Supported Message Types: Byte Map Flux Manual 86

Message Object Stream Text

When using the JMS Action, you can also set a variety of JMS properties on the JMS message. For a detailed example demonstrating how to set up a JMS Action, refer to the "JmsAction" example located in the examples/software_developers/jms_action directory underneath your Flux installation directory.

Flux can be configured to receive JMS messages as well. For more information on using incoming JMS messages to coordinate your jobs, refer to Section 16.5 Translating JMS Messages into Flux Messages.

14 Mail Action

A Mail Action sends email via an SMTP mail server. Different mail properties can be configured, such as the TO, CC, and BCC recipients, the subject, the message body, and file attachments. When embedded into a flow chart, the Mail Action provides a way of sending email notifications and other messages during the course of job executions. The Mail Action's message body can be configured with data from the running job.

Mail Actions are required to have: a mail server, the acting SMTP server in your domain; a port, the port on which to establish a connection with your server; a to address, the email address of the person you want to receive the email; and a from address, the email address that will show up as the sender of the message. The body property contains the text that the email will contain, and the subject property denotes what will be in the subject line.

There are several additional configuration options that may be of importance depending on your setup:

Attachments - File attachments to be sent along with the email. These are specified as a list of file pointers.

BCC Addresses - A list of email addresses to mail to as a BCC address.

Body Footer - Also called a signature, this is a String that is displayed at the end of a message.

Body Header - A body header is text specified in a String that will be displayed at the top of the email.

Body Properties - A variable manager that contains data that can be substituted in the body of an email. Input is specified through a Properties object.

CC Addresses - A list of email addresses to mail as a CC address.

Content Type - The content type specifies whether an email is in HTML or plain text format through the use of a MailContentType object.

Extra Headers - Additional email headers you may want to include, as listed in a Properties object.

Password - Used if the server requires some form of authentication.

Username - Used if the server requires some form of authentication.

In the message body of an email, any occurrence of the string "${my_property}" is replaced at runtime by the value of the property whose key is my_property. These substitutions are called body properties. Each body property has a key and a value. The key is found in the message body of the email. The value is substituted in place of the key in the message body at runtime. These body property substitutions allow you to create customized email notifications that contain data from your running jobs.

15 Prescripts and Postscripts on Triggers and Actions

If using BeanShell as the language for your scripts, Flux will import its entire API automatically. To set a script on an action or trigger, use the following sample BeanShell code, which sets a script on a Null Action.

NullAction nullAction = flowChart.makeNullAction("null Action");
nullAction.setPrescript("flowcontext.put(\"" + "HELLO" + "\", \"" + "Hello World" + "\");");

This script sets the variable "HELLO" to the value "Hello World". Upon execution and completion of this Null Action, the variable is mapped to the flow context where other actions or triggers may access it.

To set the script language, create a Flux configuration and use the setScriptingLanguages() method. If no languages are specified within the Flux configuration, BeanShell will be used by default.

15.1 Overriding Action Results with Postscripts

A postscript can be used to override the result of an action or trigger. For example, a postscript can replace the standard result returned from a Java Action with a new value. Similarly, a postscript can replace trigger results with new results.

To override the result of an action or trigger, simply create a postscript on a particular action that contains the following line of script code:

flowcontext.put("result", resultOverride);

The argument "resultOverride" is simply the value that you provide that replaces and overrides the result from the action or trigger itself. The result of every action or trigger is placed into the flow context under the reserved name "result". By placing your own value into that "result", you can override and replace the result, if any, returned by the action or trigger.

For example, suppose a Java Action returns the string "my result value". To override that result with the number 23, create a postscript on that Java Action that contains the following line of code:

flowcontext.put("result", 23);

Similarly, to override the old string result "my result value" with a new string result "a new value", create a postscript that contains this line of code:

flowcontext.put("result", "a new value");

Alternatively, you can manipulate and replace the action results by first retrieving the result from the flow context, and then reinserting it. The code below demonstrates multiplying the current number in a For Each Number Action by four and returning it into the flow context:

String postscript = "double value = flowcontext(\"i\");\n" +
    "value *= 4;\n" +
    "flowcontext.put(\"i\", value);";
forEachNumber.setPostscript(postscript);

15.2 Predefined Variables and Classes

When you use a pre- or post-script, there are several variables and classes that are available to you by default.

Variables

The predefined variables are:

flowchart - the flow chart that is being executed.
action - the trigger or action that the script is executing from.
vm - the flow chart's variable manager.
flowcontext - the current flow context.

Classes

All of the flux.* classes (classes from the flux package) are available by default. If you need to use any other classes (for example, a flux.file.* class), you will need to import them in the script, like so:

import flux.file.*;

16 Coordinating and Synchronizing Jobs Using Messaging and Checkpoints

Using Flux's flow chart model, you can create jobs with a great amount of flexibility, conditional branching, and looping. However, sometimes you need to coordinate and synchronize jobs above the flow chart level, at the macro level.

For example, suppose that flow chart A performs some work on a daily basis. Further suppose that flow chart B needs to run if and only if flow chart A's daily firing was successful. You cannot simply schedule flow chart B to run a few minutes, or a few hours, after flow chart A runs. For a variety of reasons, flow chart A may be delayed or, in certain situations, may skip firings. For these reasons and other legitimate reasons, it is best for flow chart B to fire if and only if flow chart A fires successfully.

91 To perform this coordination and synchronization between flow charts A and B, flow charts use messaging and checkpoints. You can configure a checkpoint on a flow when you are defining your flow chart. A checkpoint consists of a message definition or template and the name of a message publisher. At runtime, when control passes over a flow that contains a checkpoint, the message publisher named by the checkpoint publishes an asynchronous message using your message definition as a template. In effect, your flow has published the fact that it has reached a certain goal or milestone. Each checkpoint is configured with the name of a publisher. Different publishers send different kinds of messages. For example, some publishers can be configured to send queue (point-to-point) messages while other publishers can send topic (broadcast) messages. In summary, when checkpoints are crossed, asynchronous messages are published. The above discussion covers the publishing side of messages, which are called checkpoints. The discussion below covers the receiving side of messages, which involve message triggers. A message trigger is a flow chart trigger that waits until a message is available from a particular publisher. Message trigger objects are defined by the flux.messaging.messagetrigger interface. For example, when flow chart A reaches a particular milestone, it passes a checkpoint, which publishes that fact. At that time, flow chart B, which has been monitoring that checkpoint with a message trigger, notices that the checkpoint has been passed and fires. At this stage, flow charts A and B have coordinated and synchronized among themselves. It is guaranteed that flow chart B will not run until flow chart A has reached its milestone. Note that although the word message is employed, it does not necessarily imply that networking is involved. There are a variety of ways to deliver a message. All message publishing and all message deliveries are performed within the scope of a scheduler database transaction. These transactional messages ensure that a message is not published unless the current transaction commits. Similarly, a message is not received unless the receiver s transaction commits. Finally, in order to reduce the probability of database deadlocks occurring when using message triggers, set a transaction break on all actions that immediately follow a message trigger Publishing Styles Flux supports three publishing styles. Publishers are configured to send messages in one of three ways: Topic: Each published message is broadcast to all active message triggers. Queue Immediate Reception and Queue Transaction Break Reception: Each published message is consumed in a first-come-first-served manner by at most one active message trigger. As soon as the message trigger receives the message, the message is considered delivered and other messages from this publisher may subsequently be delivered. The consequence is that if it takes a Flux Manual 90

long time to process a message, more than one flow chart may be processing messages from the same queue publisher at the same time.

16.2 Message Data and Properties

When a checkpoint is passed and a message is published, it can attach data to the message body or attach properties to the header of the message. Both are optional. Relevant methods are documented in the flux.messaging.RuntimeMessageDefinition Javadoc documentation.

Message data and message properties are delivered to message triggers. This message information can be either constant or dynamic. Constant information represents objects that are known at the time the flow chart is constructed. Dynamic information is generated at runtime. For instance, constant data can be a string or a java.lang.Integer object. On the other hand, dynamic data can be the result of a flow chart trigger or action that is not known until runtime. However, the name that refers to this dynamic data is still known when the flow chart is constructed. For example, the name "result" refers to a persistent variable in the flow context of a running job. In general, the name of dynamic data or a dynamic property is the name of a variable in the flow context variable manager.

The following standard properties are available.

The property flux.messaging.Message.MESSAGE_ORIGIN identifies the name of the flow chart whose publisher published this message. If the message was published by the Publisher Administrator, the value of this property is flux.messaging.Message.MESSAGE_ORIGIN_PUBLISHER_ADMINISTRATOR.

The property flux.messaging.Message.PRIORITY represents the priority of a queue message. Higher priority queue messages are delivered before lower priority queue messages. Messages of equal priority are delivered in first-in-first-out order. The highest message priority is 1. Priorities cannot be used with message topics.

The property flux.messaging.Message.PUBLISH_DATE represents when the message was published by a running job or the Publisher Administrator.

The property flux.messaging.Message.RECEIVE_DATE represents when the message was received by a Message Trigger.

16.3 Message Selectors

Message Triggers can be selective in what messages they process. By default, a Message Trigger retrieves the next available message depending on the Publishing Style, as described previously. However, Message Triggers can be more selective. They can choose to accept only those messages that contain certain properties.

When using the MessageAdministrator.review(), MessageAdministrator.size(), and MessageAdministrator.remove() methods, a message selector may be used. A message selector will filter through messages published, looking for specific properties in each message.

For example, suppose you wanted to retrieve the number of messages that were published by "Publisher1", delivered and assigned to a job, and carrying the property "sender" with the value "Smith". Further suppose you needed to review these messages after you have retrieved the number of messages present. The code for implementing this behavior would look similar to the example code shown below.

MessagingHelper messagingHelper = engineHelper.makeMessagingHelper();
MessageAdministrator messageAdministrator = engine.getMessageAdministrator();

// Create a message with properties and publish it.
MessageDefinition messageDef = messagingHelper.makeMessageDefinition();
Map messageDefProps = messageDef.getMessageProperties();
messageDefProps.put("sender", "Smith");
messageDef.setMessageProperties(messageDefProps);
messageAdministrator.publish(messageDef, "Publisher1");

// Retrieve the number of messages that meet the
// specified criteria.
long numberOfMessages = messageAdministrator.size("Publisher1",
    MessageState.DELIVERED, "sender = 'Smith'");
System.out.println(numberOfMessages);

// Retrieve a MessageIterator, which contains the actual
// messages that match the specified criteria.
MessageIterator msgIter = messageAdministrator.review("Publisher1",
    MessageState.DELIVERED, "sender = 'Smith'");

// Iterate over the messages.
while (msgIter.hasNext()) {
    Message message = msgIter.next();
    // Process message
}

// Close the MessageIterator when we are done using it.
msgIter.close();

If the MessageState or the message selector parameters are set to null, all messages will be included and returned. For more information on messages, refer to the Flux APIs. For additional information on Messaging, message selectors, and message selector syntax, see the Messaging section.

16.4 Job Dependencies and Job Queuing

By using topic messages in the manner described in this section, you can create complex dependencies among jobs. By configuring topic messages at checkpoints, you can define jobs that publish messages using a particular publisher at pre-defined points in the job, such as when a job reaches a certain milestone or when it finishes. Any dependent jobs simply listen for messages sent by a particular publisher. When a message arrives, the dependent job performs its work. These independent and dependent jobs do not need to know about each other in order to set up dependencies. Independent and dependent jobs coordinate and collaborate with each other using common message publishers.
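As an additional illustration of this pattern (not taken from the manual), a message carrying a milestone property can be published programmatically through the Message Administrator, reusing the same API calls as the example above. The publisher name "MilestonePublisher" and the property values are hypothetical; in a production job this message would more typically be published by a checkpoint on a flow, as described earlier in this chapter.

// Illustrative sketch: publish a milestone message that dependent jobs can
// wait for. The publisher name and property values are hypothetical.
MessagingHelper helper = engineHelper.makeMessagingHelper();
MessageAdministrator admin = engine.getMessageAdministrator();

MessageDefinition milestone = helper.makeMessageDefinition();
Map props = milestone.getMessageProperties();
props.put("milestone", "daily-work-finished");
milestone.setMessageProperties(props);

admin.publish(milestone, "MilestonePublisher");

A dependent job whose Message Trigger is configured against the same publisher, optionally with a message selector such as milestone = 'daily-work-finished', would then fire when this message is delivered.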

94 Job dependencies can be temporarily suspended by pausing message publishers. Finally, by using queue messages, you can create queuing mechanisms to process data in a first-in-first-out order. Simply configure your producer jobs to publish messages using a queue publisher. Then configure your consumer jobs to wait for messages from the same queue publisher Translating JMS Messages into Flux Messages In addition to its built-in messaging capabilities, Flux can be set up to receive JMS messages and adapt them into a form that is visible to your Message Triggers. To use JMS messages with your Message Triggers, you must first create a JMS inbound message configuration, which tells Flux where to retrieve the JMS messages and how to adapt them. If you are configuring your Flux engine using the Java API, you can create a JMS Inbound Message Configuration in code by calling Configuration.makeJmsInboundMessageConfiguration(). If you are configuring the engine from a configuration file, you set the JMS Inbound Message configuration as follows: jms_inbound_message_configuration.0.name=value Where "0" is any non-negative number, "name" is the name of the property to be set, and "value" is the intended value. The available configuration options for the JMS Inbound Message Configuration options are: PUBLISHER: The name of a publisher in the Flux engine. QUEUE_CONNECTION_FACTORY: The JNDI name used to retrieve a JMS queue connection factory, which is used to retrieve JMS queue messages. QUEUE_OR_TOPIC_NAME: The name of the JMS queue or topic from which JMS messages are retrieved. TOPIC_CONNECTION_FACTORY: The JNDI name used to retrieve a JMS topic connection factory, which is used to retrieve JMS topic messages. For more information on configuring inbound JMS messages, refer to the Javadoc for JmsInboundMessageConfiguration 1. Once the JMS inbound message configuration is set up, you can use the incoming JMS messages to coordinate your jobs. Since the JMS messages are translated into Flux messages, you can set up your Message Triggers to monitor for those messages exactly as you would for a standard Flux message. Refer to the sections above for more information on configuring Message Triggers to monitor for incoming messages. Notes Coordinating Actions Using Signals Signals give you the ability to make actions dependent on each others outcomes. This behavior is useful if you have processes within your business that depend on the Flux Manual 93

95 successful completion of a task, or require the knowledge of whether an action flowed out one way or another, so that the action can react accordingly. Signals are very similar to Flux Messages except that signals are used as a notification tool between actions in a single flow chart, and do not transmit data. Actions that are notified using signals skip execution and branch conditionally into their corresponding signal flow. If a java action is doing some work when the signal is sent then it will flow into the signal flow after finishing its work. Process actions and triggers stop running as soon as they receive the signal and flow into the signal flow. For example, suppose that action A and action B in a job are running in parallel. Now, suppose if action A runs into a problem, it notifies action B not to continue normal processing but take corrective action. This result can be achieved by making action A send a signal to action B. If the signal was sent before action B started its work then action B s execution is skipped and the signal flow is followed, and if action B is in the middle of doing its work when the signal is sent then the signal flow is followed after action B completes its work. The following lines of code show how signals can be sent and listened for. First, start by defining the signals that need to be sent. Set signal = new HashSet(); signal.add("take corrective action"); A signal is sent when a flow is followed. Multiple signals can be sent at the same time. flow.setsignalstoraise(signal); Figure 17-1: Using Signals to Form Dependencies Configure the action that you want to listen for signals. action.setsignalstomonitor(signal); Then set up a signal flow that the action will flow to if and when the signal is received. action.addsignalflow(thesignaledprocess, "take corrective action"); NOTE: If there is only one action in a flow chart that is listening for a signal then that signal is cleared automatically right after the action follows the signal flow. However, if Flux Manual 94

96 multiple actions are listening for the same signal, the signal will have to be removed manually as shown below by one of the signal flows. flow.setsignalstoclear(signals); If an action is listening for multiple signals, and multiple signals are sent at the same time, all the signal flows that match the signals sent are followed in a parallel fashion. NOTE: Signals take precedence over join points. For example, suppose you have actions A, B, and C. These actions all flow into a join point, action J, with a join condition of "B and C". This means that action J will be executed only if actions B and C complete and flow into action J. Now, let s say action A raises a signal and action J is monitoring for that signal. If action A sends the signal before action B and C execute, action J will receive the signal and execute regardless of its join conditions being met or not. Figure 17-2: A Signal affecting a Join Point 18 Timeouts Timeouts allow you to take special measures if an action or trigger takes too long to execute. If your workflow relies on one action that needs to finish before the rest of the tasks can be followed out, you can use a timeout on that action to ensure that if it never completes, your waiting tasks will still be executed. Timeouts act as a safe guard, within your flow charts, to make sure that all of your jobs are not stopped because of one bad action. Another use for timeouts could be with messages. Suppose you have an action listening for a message or signal. You want to listen for that message only during a certain time period. Using timeouts, you can set the timeout to fire at the end of the specified search time and set the flow to restart the action listening for the message at the next search time. For example, suppose Process Action A in a flow chart gets stuck in an infinite loop and never completes. You can set a timeout expression in Process Action A that terminates Flux Manual 95

the native process being run if it does not complete within the timeout period. The flow chart will conditionally branch into a predefined timeout flow.

Timeouts are meant to be used mainly with triggers and the Process Action. For example, you can have a File Exist Trigger time out if no files are detected within an hour. You can also set a timeout period on other Flux actions; however, the timeout flow will be followed only after the action finishes its work. Process Actions are handled in a special manner, such that the native process being executed is terminated upon timeout.

When an action other than a Process Action times out, FlowContext.isTimedOut() returns true. FlowContext.isTimedOut() returns null if the action is not timed out.

The following code shows how to set up a timeout period on an action. First, set a timeout time expression, which expresses how much time the trigger or action has to finish its work before it times out.

processAction.setTimeoutExpression("+3s");

A timeout time expression can also be expressed in terms of business units. For example, you can set an action or trigger to time out if more than 5 business minutes have elapsed. The following code snippet describes how to specify a timeout of 30 business minutes.

processAction.setTimeoutExpression(" b");
processAction.setTimeoutBusinessInterval();

Define a timeout flow and an action to execute if the timeout flow is followed.

JavaAction timeoutAction = job.makeJavaAction("timeout Action");
timeoutAction.setListener(TimeoutAction.class);

Set the timeout flow from the Process Action to the timeout action. This flow is the one that will be followed when processAction times out.

processAction.setTimeoutFlow(timeoutAction);

NOTE: The ProcessAction.setDestroyOnTimeout() method can be used only with Process Actions, and will destroy the native process being run upon timeout.

processAction.setDestroyOnTimeout(true);

For an example using timeouts, refer to the Timeout example located within the examples/software_developers/timeout/ directory under the Flux installation directory.

19 Complex Dependencies and Parallelism with Splits and Joins

Sometimes jobs need to be defined that have complex dependencies. These dependencies may require a job to wait for many different events to occur before the job can continue making progress. For example, a job A may need to wait for a job B to

finish as well as for a file to appear on a remote FTP server before job A is permitted to continue. Furthermore, a job may need to perform different actions in parallel. After the simultaneous actions perform their work, the parallel flows can merge back together and continue on as a single flow.

19.1 Multiple Start Actions

To configure an action to be a start action, use the following code:

yourAction.setStartAction(true);

When the setStartAction() property is set to "true", your action will be the first to run when your flow chart is started from a Flux engine.

19.2 Complex Dependencies

Once these parallel flows have been defined, each flow can enter a trigger to wait for some system state to become satisfied. For example, a flow chart with two start actions can have one parallel flow wait for a different job to finish while the other parallel flow waits for a file to appear on a remote FTP server. These two triggers wait in parallel for the appropriate events to occur.

Once these two triggers fire, they typically need to synchronize and merge back together into a single flow. To perform this synchronization and merging, a flow from each trigger is created. Both flows merge into a single action. This action must have a join point enabled on it.

19.3 Join Points

IMPORTANT: As each execution flow enters a join point, an implicit transaction break occurs (described further below). Furthermore, the flow context associated with each inbound execution flow is deleted at the transaction break. After the join point is satisfied, a fresh, empty flow context is created and assigned to the newly created execution flow. At that point, the newly created execution flow continues running with its own private flow context.

19.4 Conditional Join

As an example, suppose you have three actions working in parallel: action A, action B, and action C. These actions all flow into a single action, called a join point. A and B depend on each other's information, so they must be together before the join point can fire. However, if C completes and flows into the join point before A and B, then they are not needed to proceed. You can set up this scenario using a conditional join. By setting the join point's JoinExpression to "A and B or C", the join point will wait for either A and B to flow into it before firing, or it will fire as soon as C has flowed in. Example code for setting a join expression is written below. NOTE: The conditional functions are NOT case sensitive.

yourAction.setJoinExpression("a and B or C");

This join expression sets a join point to look for both action A and action B to flow into the join point or just action C to flow into the join point. As soon as one of these conditions is met, the join point fires and any actions that flow into the join point late are

99 terminated. NOTE: When using join points inside of loops within a flow chart, this late terminating behavior could cause difficulty. For example, suppose you are waiting for two out of three actions before firing a join point. Further suppose that all three actions fire simultaneously, therefore, one of the action's is terminated since it would be considered late. A conditional join point pauses the join point s action until the specified condition is met. If there is no join expression, and the action's join point property is set to false, job execution will not pause at this action to merge multiple inbound flows. However, if the join expression is set or if the join point property is set to true then execution pauses to merge all inbound flows to this action that match the given join expression. Setting a valid join expression automatically sets an action as a join point. If your setjoinpoint() is set to true, and your setjoinexpression() is either set to null or invalid, than the regular join point behavior will take effect. In other words, your action will wait for all inbound flows to arrive before executing. To manually set a join point: youraction.setjoinpoint(true); There are two different types of variables you can use in join point expressions, names and numbers. When using a name, you must use the name you gave the action when you created it. The join point will then wait for those actions to flow into the join point before firing. The following line of code is an example of a join expression using names. Refer to Table 19-1 for details. youraction.setjoinexpression("a and B or C"); The other type of variable used is numbers. Numbers behave differently than names in a join point expression. If a number is used in the join expression, it is always evaluated as greater than or equal to that number. For example, suppose you have a join expression of "3". This expression would set the join point to fire as soon as three or more actions flowed into the join point. The code below shows the join point expression being set using a numerical value. You can also use "or", "and", and "not" with numbers in the expressions. Refer to Table 19-2 for details. youraction.setjoinexpression("3"); If a join expression exists, it will automatically set the specified action's join point to "true" when Engine.put() is called. If a join point exists, there is an implicit transaction break before the action. The transaction break means that if an error occurs after the join point, the default error handler will not retry the join point, only the action that failed. Figure 19-1 demonstrates this behavior. Flux Manual 98

100 Figure 19-1: Splits are Implicit Transaction Breaks You can use conditional joins along with signals to add the ability to free up processing time and stop unnecessary actions still running. Conditional join points, by default, stop any actions flowing into the join point after the join condition has been satisfied. However, if you use signals with the join point you can kill the unnecessary actions before they reach the join point. The scenario below shows a detailed example of this behavior. Suppose you have a start action called "Start Action". This "Start Action" splits off into two parallel flows, "Java Action 1" (JA1), and a "Process Action" (PA). JA1 flows into a large amount of actions, all doing various kinds of work. When the "Last Java Action" (LJA) completes, it flows into a conditional join point (CJP). On the other flow, PA is a stand alone process which upon completion, flows into the join point CJP. CJP has a condition expression set to "LJA or CJP", meaning that if LJA flows in first, the join point will fire regardless of PA s completion. The same applies for PA, if it gets to CJP first, CJP will fire regardless of LJA s completion. When CJP fires it will send a signal to all of the actions working above it for which they will all be listening for. This signal will instruct the Java Actions to stop their flow and terminate or instruct the Process Action to stop its execution and terminate depending on which action reached the join point first. This scenario, also called the "abort" scenario, shows the usefulness of conditional join points as well as signals. Refer to Figure 19-2 below. Flux Manual 99

The implementation of the behavior shown above is useful to recreate within your own flow charts. It will keep Flux from lagging by deleting unnecessary actions that could potentially take up processing time. The abort example, located within the examples/software_developers/abort directory, shows an implementation of this scenario.

Figure 19-2: The Abort Scenario

Table 19-1: Conditional join expression syntax using Action names

Functions for Conditional Join Points (<name>) | Example | Description
"or" | ("A or B") | Fires if either action A or action B flows into the join point.
"and" | ("A and B") | Fires if and only if action A and action B flow into the join point.
"not" | ("not A") | Fires as soon as any other action besides A flows into the join point.

Table 19-2: Conditional join expression syntax using numbers

Functions for Conditional Join Points (<number>) | Example | Description
"or" | ("1 or 2") | Fires as soon as one action flows into the join point.
"and" | ("2 and 3") | Fires only if three actions flow into the join point.
"not" | ("not 5") | Fires as soon as one action flows into the join point.

19.5 Splits

When a split occurs, an implicit transaction break occurs. Furthermore, the current flow context is cloned into multiple, identical flow contexts, one assigned to each newly created execution flow. At that point, each independent execution flow continues running with its own independent flow context.

For example, suppose you have a flow chart that has the structure shown in Figure 19-3 below. Let's say that the flow context of the flow between Start Action 1 and Java Action 1 has a title of "X" and a value of "5". Java Action 1 splits off into two separate flows. Java Action 1's flow context branching left into Java Action 2 is titled "Y" and has a value of "2". The flow context branching right from Java Action 1 into Java Action 3 has a title of "Z" and a value of "4". When Java Actions 2 and 3 execute, the flow contexts from those two actions, which flow into the join point, have separate titles and values. When the Join Point fires, its flow context receives a completely new title and value, and does not save any of the previous values or titles. Refer to Figure 19-3.

Flux provides the Split Action as a convenience to highlight that a split is occurring at a specific point in the flow chart. As demonstrated above, any action or trigger can become a split. This is accomplished simply by adding multiple unconditional flows to an action or trigger.

Figure 19-3: Flow Context Behavior with Splits and Join Points

19.6 Transaction Breaks

Although transaction breaks are not discussed in detail until Chapter 31 Transactions, they need to be mentioned here. In short, transaction breaks define when a database transaction is committed.

Splits and joins contain two implications for transaction breaks. When a join occurs, an implicit transaction break occurs on each joined execution flow. Because each inbound flow in a join contains its own database transaction, each transaction needs to be completed before the flows can be merged. Similarly, when a split occurs, the current database transaction needs to be completed before the flows can split. Consequently, an implicit transaction break occurs on the split execution flow, at which point a new database transaction begins for each new concurrent outbound flow.

104 As noted in the section on Splits, when a transactional break occurs, the current flow context is flushed and replaced with a fresh one. Any Flux node can become a transactional break. The settransactionalbreak(boolean) method specifies whether an action or trigger is a transactional break or not. 20 Business Process Management Business process management concerns "people workflows", as opposed to running software tasks in your flow charts. Without business process management, Flux runs non-interactive, batch software tasks that involve software, not people. With business process management, you can create workflows that coordinate the work performed by people. For example, in a business process for applying for a bank loan, the business process begins with the loan applicant filling out a loan application, then progresses to the next step when the bank receives the application. Next, the application is forwarded to the Credit Check department and the Employment Verification department simultaneously. Both of these departments perform their work at the same time. Finally, the business process flows back to a Loan Approver for final approval. Flux can model and execute these kinds of business processes Modeling a Business Process When you create a business process, you are focused on following items: 1. What tasks need to be performed at different steps 2. Who, or what type of person, should perform these tasks 3. How the performing of these tasks will be coordinated. In Flux, item 1 above is represented by a business process trigger. A trigger waits for an event to occur. In this case, a business process trigger waits for a person to perform the work represented by that business process trigger. In the above example, the act of filling out a loan application can be modeled by a business process trigger. Typically, you would provide a name to the business process trigger that describes the activity and work being performed. In the above example, a suitable name would be "fill out loan application". Item 2 above is represented by a participant. A participant is a person, or type of person, that performs the work represented by a business process trigger. In the above example, the person could be "Christopher Smith", to name a specific person, or "customer" in order to name a type of person. Participants are specified in the participant property of business process triggers. Item 3 above is represented by the normal flows in a Flux flow chart. As described in Section 1, flows connect triggers and actions to other triggers actions. In order to model the bank loan business process, create a separate business process trigger for each unit of work (fill out loan application, credit check, employment verification, final approval). The "fill out loan application" business process trigger is the starting point of this business process. Next, create flows to connect the business process triggers. Create a flow from the "fill out loan application" trigger that splits to both the "credit check" and "employment verification". Then create flows that flow from these concurrent business process triggers and join into the "final approval" business process trigger. Flux Manual 103

105 Finally, if you want this entire business process to run multiple times, create a flow from the "final approval" trigger back up to the "fill out loan application" trigger. However, if you want to run this business process only once, omit this final flow Business Process Engine A business process engine is the same as a typical Flux engine, only it is mandatory that security be enabled on it. This is so different users are able to be created; otherwise everything would be run under a single user and be more of a workflow than business process Login to the Business Process Engine Because business process management requires that a business process engine be secured, that is, have security enabled on it, you must log into a business process engine before you can interact with it. Next, right-click on your business process engine, and select the Login menu item. Flux will prompt you for your username and password. Flux uses JAAS for authenticating users; therefore it is possible to use LDAP, Active Directory, or any other directory system to authenticate against. For a complete example that shows how to use all of Flux s business process management features, see the Business Process Management example in the "examples/end_users/business_process_management" directory underneath the Flux installation directory. All of the business process management functionality is accessible using the Flux APIs. For complete details, see the flux.businessprocessmanagement package in the Flux Javadoc documentation. The business process management functionality that the Flux Designer, and Web-based Designer, supports is built using Flux s business process management APIs. The Flux user interfaces do not contain any BPM functionality of their own. Instead, they delegate to the Flux BPM APIs. 21 Flux Agent Architecture The Agent architecture allows light-weight Flux agents to run native batch files or processes specified in Process Actions in a flow chart on remote machines, consequently dividing the workload of a single machine to achieve better scalability. Using the Flux Agent architecture, you can distribute your resource-intensive work across a network of machines to achieve scalability. Flux Manual 104

106 Figure 21-1: Flux Agent Architecture For example, say you have a machine that runs batch processes and these processes use a lot of CPU cycles and memory. With increasing number of processes, you will rapidly run into heavy resource usage on your machine and reach scalability limits. This problem can be easily resolved by running Flux Agents in your network. Flux Agents running on different machines in your network will register with a central Flux engine or a Flux cluster. Once the Flux Agents are started and registered, you can export flow charts to the Flux Engine cluster. This cluster of Flux engines will distribute the processes in the flow charts over the network of Agents, assigning each "Free Agent" a process to run Flux Engine - Agent Relationship A single Flux engine or Flux engines in a Flux cluster act as coordinators in the agent architecture, exporting processes to registered Flux agents. Flux engines should be run on machines in a network with connectivity to agent machines. Figure 21-2 below shows the basic layout for engine agent relationships. Flux Manual 105

107 Figure 21-2: Engine - Agent Relationship 21.2 Creating a Flux Agent An agent can be created using the Flux agent API, or via the Command Line Interface (CLI). Example code for creating, and starting, an agent using the Flux API is shown below. AgentConfiguration agentconfiguration = AgentFactory.makeInstance().makeAgentConfiguration(); agentconfiguration.setenginehost("localhost"); agentconfiguration.setengineregistrybindname("flux"); agentconfiguration.setengineregistryport(1099); Agent agent = AgentFactory.makeInstance().makeAgent(agentConfiguration); agent.start(); The above example code creates an AgentConfiguration object. This configuration object can be used to configure the agent and to point it to a Flux engine. Learn more about these methods in the flux.agent Javadocs. Cluster networking must be enabled on the engine in order to use agents. Using the CLI to create and start an agent consists of a variety of options you may set on the agent for configuration. The various options are displayed below. -engineregistryport The registry port of the engine the agent will connect to. Flux Manual 106

-registryport - The registry port that the agent will connect to.

-configurationfile - The configuration file used to specify an agent's properties.

-engineregistrybindname - The RMI registry bind name of the engine to which the agent will connect.

-registrybindname - The RMI registry bind name to which the agent will connect.

-host - The host on which the agent resides.

-enginehost - The host of the engine to which the agent will connect.

-engineusername - The username for the engine to which the agent will connect.

-enginepassword - The password for the engine to which the agent will connect.

-encryptedenginepassword - The encrypted password for the engine to which the agent will connect.

-pool - The pool in which the agent runs.

Using the CLI options specified above to configure the agent, the following line is used to start it.

java -cp .;flux.jar flux.Main agent-server <agent options> start

21.3 States of a Flux Agent

Figure 21-3 below outlines the states of a Flux agent.

109 Figure 21-3: Agent State Diagram 21.4 Agents and the Flux Operations Console Registered Flux agents are displayed in the "Agents" page. Figure 21-4 below displays the Flux Agents page within the Operations Console with one registered agent. Flux Manual 108

110 Figure 21-4: Agents Page With One Registered Agent To select an agent, click anywhere on the grey line that contains that agent. You can perform operations on that agent, such as start, interrupt, unregister, and shut down, using the buttons located at the bottom of the agent listing. To view more detailed information about an agent, click the agent's underlined name (this would be green:1099:fluxagent in Figure 21-4). The "Agent Details" page, demonstrated in Figure 21-5 below, displays information about the agent including the last command executed and the command currently being executed, if any. Figure 21-5: The Agent Details Page For more information on using Flux agents in code, refer to the agents example located in the examples/software_developers/agents directory under the Flux installation directory. Flux Manual 109

22 Exception Handling

Flux can monitor a database for business exceptions. As an example of a business exception, suppose claims at an insurance company exceed a certain threshold. This situation is a business exception. An exception handling flow chart can detect this situation and send alert emails to company employees and take other actions in response to the business exception.

The foundation of an exception handling flow chart is the Database Exception Action. This action issues a user-defined query against a database. If that query returns results that are understood to be a business exception, the Database Exception Action returns a positive result. Otherwise, it returns a negative result.

When you create a Database Exception Action, assign an SQL database query to the action's query property. This database query is run against the database when the action executes. After the query finishes, it is evaluated to detect whether a business exception occurred. This evaluation criterion is configured in the action's exception condition property.

For example, suppose the following database table exists in your database.

Table 22-1: Example Database Table - Claims

CLAIMS
PRIMARY_KEY | CLAIM_AMOUNT | CLAIMER_FOREIGN_KEY

This database table, called CLAIMS, contains three columns. The most important column is the CLAIM_AMOUNT column, which contains the size of the claim in some currency, such as US Dollars. For the purposes of our example, a business exception occurs if the sum of the claims exceeds 4000 US Dollars.

To create an exception handling flow chart, create a Database Exception Action whose query property is the following SQL query.

SELECT SUM(CLAIM_AMOUNT) FROM CLAIMS

This database query is executed whenever the Database Exception Action executes. In general, you can specify any SQL query that is legal for your database. Next, you must define the exception condition property. Set it to the following condition.

COLUMN(1) > 4000

This exception condition states that if the first column of the result exceeds 4000, then the business condition has occurred. The SQL-like syntax for exception conditions is detailed below.

All of these exception handling features are accessible using the Flux APIs through the flux.exceptionhandling package in the Flux Javadoc documentation. For a complete example that shows how to use Flux's exception handling features, see the exception handling example in the examples/end_users/exception_handling directory underneath the Flux installation directory.
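Putting these pieces together in code might look like the sketch below. It is an illustration only: the factory method makeDatabaseExceptionAction() and the setters setQuery() and setExceptionCondition() are assumed names derived from the query and exception condition properties described above, and should be verified against the Flux Javadoc. The flowChart variable is a flow chart created as described earlier in this manual.

// Sketch only - the factory and setter names below are assumptions based on
// the properties described above, not confirmed API signatures.
DatabaseExceptionAction checkClaims = flowChart.makeDatabaseExceptionAction("Check Claims");
checkClaims.setQuery("SELECT SUM(CLAIM_AMOUNT) FROM CLAIMS");
checkClaims.setExceptionCondition("COLUMN(1) > 4000");

From there, the action's positive or negative result can drive a conditional flow to, for example, a Mail Action that sends the alert email.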

22.1 Controlling when the Database Exception Action Runs

When your flow chart reaches a Database Exception Action, the action executes immediately. If you want your Database Exception Action to run on a schedule, simply configure your flow chart so that a Timer Trigger is in front of your Database Exception Action. The Timer Trigger controls how frequently the Database Exception Action runs.

In general, you can use different flow chart triggers to control exactly when your Database Exception Action runs. For example, by placing a Message Trigger in front of your Database Exception Action, you can have the Database Exception Action run when certain events occur in the system.

22.2 Exception Condition Syntax

This section describes the syntax for exception conditions.

An SQL query returns a result as a set of zero or more rows. If the query returns zero rows, the Database Exception Action's result will be negative, indicating that a business exception did not occur. If the query returns one or more rows, only the first row is evaluated. Consequently, you must formulate your database query so that the relevant information is returned in the first row.

In that first row, there are several columns. The first column is number 1, the second column is number 2, and so on. To access column 1, use the following syntax.

COLUMN(1)

In general, refer to column N as COLUMN(N), where the N argument is a number greater than or equal to one. Only positive column numbers can be used.

You can use some of the usual SQL relational and boolean operators: <, <=, =, <>, >=, >, AND, OR, NOT

Some examples include:

COLUMN(1) > 500
COLUMN(1) < 1000 AND COLUMN(2) > 5000
COLUMN(1) = 123
COLUMN(1) = 50 OR COLUMN(1) = 100
NOT COLUMN(1) = 50 OR COLUMN(1) = 100
NOT COLUMN(2) < 250

You may not use parentheses. These operators are evaluated from right to left. For example, in the exception condition NOT COLUMN(1) = 50 OR COLUMN(1) = 100, COLUMN(1) = 100 is evaluated, then COLUMN(1) = 50 is evaluated, then the OR operator is evaluated, and finally the NOT operator is evaluated.
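As a further illustration (not taken from the manual), a query can return several columns in its first row, and the exception condition can reference more than one of them. The setter names below carry the same assumption noted earlier, and the query and thresholds are hypothetical.

// Sketch only - setQuery() and setExceptionCondition() are assumed names for
// the query and exception condition properties; the values are hypothetical.
checkClaims.setQuery("SELECT COUNT(*), SUM(CLAIM_AMOUNT) FROM CLAIMS");
checkClaims.setExceptionCondition("COLUMN(1) > 10 AND COLUMN(2) > 4000");

Because the operators are evaluated from right to left and parentheses are not allowed, it is best to keep compound conditions simple and to check their meaning against the evaluation order described above.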

23 Security

Since a Flux engine can be created with security enabled on it, the client command may have to present username and password information in order to log in to the Flux engine and run commands on it. If the client accesses a Flux engine with security enabled, the client command will prompt you for a username and password. You can also specify the username and password directly from the command line.

One way to enable security is via a configuration file when creating the Flux engine. At a minimum, the configuration must set the configuration property SECURITY_ENABLED to true. This is done by including the following line in your fluxconfig.properties file:

SECURITY_ENABLED=true

Then to create an engine with security enabled, run the following command:

java -classpath .;flux.jar flux.Main server -cp fluxconfig.properties

When security is enabled, a security policy file is used. This policy file is used to assign permissions to the Flux software as well as individual Flux users and roles. By default, the name of the security policy file is fluxjaas.policy. You can specify a different file name by changing the SECURITY_POLICY_FILE configuration property.

Additionally, when security is enabled, a security configuration file is used. This configuration file is used to configure how usernames and passwords are obtained. By default, the name of the security configuration file is fluxjaas.config. You can specify a different file name by changing the SECURITY_CONFIGURATION_FILE configuration property. Also by default, the name of the entry that is consulted is called security-configuration. You can specify a different name by changing the SECURITY_CONFIGURATION_FILE_ENTRY configuration property.

Flux uses the standard Java Authentication and Authorization Service (JAAS) to implement security. The security policy file and security configuration file mentioned above conform to a syntax that JAAS requires. To be sure, the JAAS syntax is not easy at first, but if you work with it, over time, it will become easier. The sample fluxjaas.config file that is shipped with Flux refers to a JAAS login module called flux.security.FluxLoginModule. Other JAAS login modules may be substituted here.

A working security policy file, a working security configuration file, and a working user file are included with the Flux distribution. Look for the following files in the Flux installation directory.

fluxjaas.policy: By default, this policy file grants all permissions. It is required so that the Flux engine can change the JVM policy and substitute it with its own policy. Once Flux is up and running, the administrator can then configure permissions. For instance, an administrator can assign users permissions to read and write to jobs located in different branches of the tree of jobs. Also, an administrator can assign users the right to use the Message Administrator or Publisher Administrator. This fluxjaas.policy file conforms to JAAS syntax.

fluxjaas.config: This configuration file specifies a JAAS login module. Flux ships with a default login module called the FluxLoginModule. This login module lets

114 you assign users and roles using either the Flux API s or Flux Operations Console and saves them to the Flux database. If you want to use your own JAAS login module to retrieve usernames and passwords from a different location, enter the name in the fluxjaas.config file. For example, you might have a JAAS login module that retrieves usernames and passwords from the operating system, a database, or an LDAP server. This fluxjaas.config file conforms to JAAS syntax. For a working example of creating and accessing a secure job schedule engine, see the security examples in the examples/software_developers/security and examples/end_users/security directories underneath your Flux installation directory. In Flux, security is optional. If you enable security, you can prevent certain users and roles from performing certain operations in Flux. You can also prevent certain users and roles from accessing different branches in the tree of jobs. For example, you can permit the user "mary" to access only the "/mary/" branch in the tree of jobs. For a more in-depth look at Flux security, see the flux.security package in the Flux Javadoc documentation. As with the business process management functionality, the security functionality that the Flux Designer supports is built using Flux s security APIs. The Flux user interfaces do not contain any security functionality of their own. Instead, they delegate to the Flux security APIs. Consequently, you can employ any of the security functions that the Flux user interfaces employ, simply by using the Flux security APIs The Definition of Groups, Roles, and Users On a Flux engine where security is enabled, groups, users, and roles exist to provide organization and increased control over Flux users. A specified Super Administrator governs all groups, users, and roles specifying what they can and cannot do, such as creating or deleting flow charts. All flow charts are associated with groups, all groups must be assigned a unique name, and all groups can contain both roles and users. Group names must not contain any of the following characters: "*", "?", "%", "_", ".", or "$". For example, groups can be viewed, from a business hierarchy, as the "Sales" or "Software Development" department of a business. By default, Flux provides one default group and one Super Administrator that all newly created roles and users are assigned to unless explicitly instructed otherwise. Group administrators can be created for each specific group to oversee and control the group's users. Roles classify users into even more specific categories. These roles can be assigned to groups and contain users, although, users can be assigned to groups without a role. Role names must be unique within their assigned groups, however, role names can be duplicated outside of a group. An example of roles, from a business hierarchy, could be "Sales/Sales Representative" or "Software Development/Programmer" positions. It may be easier to think of groups, roles, and users analogous to a domain system such as Active Directory. At the top level, there is the domain, similar to groups in Flux, which may cover a wide area such as Company A or Sales. Below that are OUs, (organizational units) which cover a specific area within that company such as Software Developer ; these are like roles in Flux. Associated with these OUs are the individual entries into the domain system such as a single user; this parallels the user in Flux. Flux Manual 113

Permissions given to the user are based on both the permissions allotted to the roles/OUs the user is in and the permissions given to the individual user.

Users can be assigned to roles and groups. Users can also be created for each person that uses the Flux software. User names must be unique across the entire Flux engine, meaning user names can never be duplicated. To assure this quality, Flux uses the user's e-mail address as the user's name. An example of users, from a business hierarchy, could be "Sales/Sales Representative/johndoe@mybusiness.com" or "Software Development/Programmer/jamessmith@mybusiness.com".

By using the Flux APIs, attributes specific to groups, users, and roles can be viewed and manipulated by the Super Administrator, or any users with the proper permissions.

23.2 Securing a Local Flux Engine

If a Flux engine has been created with security enabled, you can log in to it using the flux.LocalSecurity interface, which can be created from the flux.Factory class. You log in to a local engine by providing a reference to that engine and either a security subject or a username/password pair.

Security is viewed on a per-thread basis. If you log in as user "mary" on one thread, you cannot log in as user "john" until after you log out user "mary".

23.3 Securing a Local Flux Engine for Remote Access

A Flux engine that has been created as an RMI server cannot have security enabled on it. Instead, create a local Flux engine and use the flux.RemoteSecurity interface to create a secure RMI interface to the local engine. The flux.RemoteSecurity object is created from the flux.Factory class. The RemoteSecurity object provides an RMI front-end as well as a security front-end to a local Flux engine.

23.4 Using a Secured Remote Flux Engine

Once you have created a secure engine ready for remote access, you can access that engine by using the flux.Factory.lookupRemoteSecurity() method. Once you have successfully looked up a RemoteSecurity object, you can log in, perform your work, then log out.

24 Runtime Configuration

Two components are used to configure a job scheduling engine:

Static: The flux.Configuration object accepts configuration properties that are set when a job scheduling engine is created. These configuration options cannot be changed after the engine is created. For a comprehensive list, see the Javadoc listing for flux.Configuration.

Runtime: The flux.runtimeconfiguration.RuntimeConfigurationNode object contains configuration options that can be changed at runtime, after the engine has been created. This object can be accessed through the Configuration and Engine interfaces via the getRuntimeConfiguration() method. These properties include options such as the listener classpath for Java Actions,

116 concurrency throttles, and default error handlers. For more information see the Javadocs listing for RunTimeConfigurationNode. Each job scheduling engine instance is controlled by its static (non-runtime) configuration and its runtime configuration Tree of Configuration Properties The runtime configuration consists of a tree of nodes, set up in the usual tree structure. It is assumed that the entire tree can fit into memory. Each node contains a set of configuration properties as well as links to its parent and its children. Each configuration property consists of a key and a value. The key must be a string. The value must be a persistent variable. See Databases and Persistence for detailed information on persistent variables. Although the runtime configuration tree can contain any kind of property, it includes pre-defined properties: concurrency throttle, concurrency throttle cluster-wide, default flow chart error handler, and priority. Flow chart names are hierarchical. They form a tree of jobs. For example, the two flow charts, "/lightweights/job 1" and "/heavyweights/job 2", form a tree with two branches off the root, "/lightweights" and "/heavyweights". You can annotate this tree of jobs using properties in the runtime configuration. For example, to set a concurrency throttle for heavyweight jobs, set an appropriate concurrency throttle in the "/heavyweights" branch of the runtime configuration tree. To set a default flow chart error handler for lightweight jobs, set an appropriate error handler in the "/lightweights" branch of the runtime configuration tree. In summary, the tree of configuration properties mirrors the tree of jobs. A job s behavior can be further configured based on the branches in the runtime configuration tree that correspond to that job s fully qualified name Concurrency Throttles In summary, concurrency throttles provide a way to limit the number of jobs that can execute at the same time on all, or some, Flux instances. Because your computer or your cluster has a finite amount of processing power and I/O bandwidth, you do not want to run an unbounded number of jobs in your system at the same time. Furthermore, some computers in your cluster may have special resources installed, such as report generation software. In this case, your jobs that need these special resources must execute only on those computers with the special resources installed. A concurrency throttle provides a very flexible way to limit the kinds and numbers of simultaneous jobs running on a particular Flux instance and across the entire cluster. Concurrency throttles also allow jobs to be pinned to specific computers in your cluster. A job s concurrency throttle is defined by the branches in the runtime configuration tree that correspond to that job s fully qualified name Flow Chart Priority Every flow chart is assigned a priority. Higher priority flow charts generally run before lower priority flow charts. Flow chart priorities make it possible to specify that important Flux Manual 115

117 or time-critical flow charts need to run before other, less important, flow charts. Flow chart priorities allow you to specify that if two flow charts could otherwise both execute at the same time but concurrency throttles will allow only one of those two flow charts to run, the flow chart with the higher priority runs first. If a flow chart has a priority explicitly set, it will override any priority set in the runtime configuration. If a null flow chart priority is used, it will allow the runtime configuration priority to take precedence. To set a flow chart's priority use: myflowchart.setpriority(1); The example shown above sets the priority of "myflowchart" to "1". Concurrency throttles take precedence over flow chart priorities. A concurrency throttle tends to prevent flow charts from running, whether they are high priority flow charts or low priority flow charts. However, once a concurrency throttle allows a flow chart to execute, priorities come into play as described above. The highest priority is 1. Lower priorities have values greater than 1, such as 10, 25, 500, etc. If two different flow charts have different priorities, the flow chart with the priority closer to 1 runs first. If two different flow charts have the same priority, then the flow chart with the oldest timestamp runs first. Each running flow chart contains a timestamp that indicates the next time the flow chart requires attention from the Flux engine. It is possible that higher priority flow charts will run so frequently as to prevent lower priority flow charts from running at an acceptable rate. This behavior is called starvation, that is, lower priority flow charts can be starved or prevented from running as frequently as you would like. By default, Flux permits starvation, because it often does not cause any serious problems, and it is sometimes desirable. If starvation impacts you adversely, you can enable the Flux configuration property FAIRNESS_TIME_WINDOW. This property contains a time expression that indicates how frequently starved flow charts should have their priorities increased by 1 (the effective priority). (To increase a flow chart's effective priority, its numerical value is decremented by 1.) Eventually, such flow charts will reach an effective priority of 1 and be eligible for execution before all other flow charts. After firing, such flow charts revert to their original priorities and are again eligible to have their priorities increased if they starve. This anti-starvation behavior is called fairness. Note that whenever a trigger fires internally, even if it does not fire out of the trigger and force execution of the flow chart to continue to the next action, that flow chart s effective priority is reset to its original priority. Also note that because starved flow charts have their priority increased by one (their priority value decremented by one), appropriate priorities should be given at start. If you have your highest priority flow chart set to one, and all others set to 500, the benefits of fairness may not be significant or even noticeable. Flow chart priorities are stored in an engine s runtime configuration. Each branch in the runtime configuration tree can contain a PRIORITY property, which specifies the priorities of all flow charts in that branch of the tree. If a PRIORITY property is not explicitly set, the engine searches higher branches in the runtime configuration until it finds a PRIORITY property. 
If no explicit PRIORITY property is found, even at the root of the tree, then the flow chart defaults to a priority of 10.
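For illustration, branch-level priorities can be written out in the same property-listing style used for the job pinning example later in this manual. The branch names and priority values below are hypothetical:

/heavyweights/PRIORITY=1
/lightweights/PRIORITY=100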

Changes made to PRIORITY properties in the runtime configuration tree are propagated to running flow charts in the engine.

For more information on flow chart priorities and the way they affect the order of execution for your flow charts, refer to Chapter 8 Running Flow Charts.

24.4 Default Flow Chart Error Handler

To summarize default flow chart error handlers, they provide a mechanism where any failure in a flow chart trigger or action can cause a pre-defined error recovery job to run. The default error handler provides a single location where jobs can recognize errors, report those error situations to other software systems or administrators, and attempt to recover from those errors.

A flow chart's default error handler is defined by the branches in the runtime configuration tree that correspond to that job's fully qualified name.

24.5 Other Properties

Besides the built-in properties, you can set any other property in the runtime configuration. As a practical example, you can define the provider URL configuration property for your application server in the runtime configuration. Then your EJB and JMS flow chart actions can be configured to use that provider URL. In this manner, your jobs can target one application server for a while. Later, when that application server is moved to a different machine, all your jobs can target the new application server simply by changing the provider URL configuration property in the runtime configuration. By changing a configuration property in a single location, you can alter the behavior of all your jobs in one step.

In general, any job can find the runtime configuration properties that it needs by starting at the lowest branch in the runtime configuration tree that corresponds to that job's fully qualified name and working its way back to the root of the runtime configuration tree.

Adding a property is achieved through the setProperty(String propertyName, Object value) method on the RuntimeConfigurationNode object. Refer to the code below for proper implementation.

// Create a fresh runtime configuration node.
RuntimeConfigurationNode runtimeConfig = factory.makeRuntimeConfigurationNode();
// Or, if a runtime configuration node has already been established:
// RuntimeConfigurationNode runtimeConfig = engine.getRuntimeConfigurationNode();

// Add an entry for "server1" to link to the host "myserver2" on your network.
runtimeConfig.setProperty("server1", "myserver2");

25 Scheduling Options

Once you have created your flow charts, scheduling them is simple. Call engine.put(flowChart); the flow chart will be added to the Flux engine and scheduled for immediate execution. To remove your job from the engine, save the job ID returned from engine.put() and call engine.remove(jobId).
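A minimal sketch of the calls just described; it assumes an engine and a flow chart already exist and that the job ID is a String, the same type the pause and resume calls in Section 25.2 accept.

// Schedule the flow chart for immediate execution and save its ID.
String jobId = engine.put(flowChart);

// Later, remove the job from the engine using the saved ID.
engine.remove(jobId);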

To delete all jobs, messages, and publishers from the engine in one step, call engine.clear().

25.1 Concurrency Throttles

By default, flow chart firings are single threaded, which means that at most one flow chart can be firing at a time. To allow more than one flow chart to execute simultaneously, you will need to increase the engine's concurrency throttles.

You can define a concurrency throttle that governs a single Flux instance, or you can define a concurrency throttle that governs all flow charts in all Flux instances in your cluster. These coarse-grained concurrency throttles control the parallel execution of large numbers of jobs in one step. You can also define fine-grained concurrency throttles. For each branch in your flow chart tree, you can define a unique concurrency throttle that governs only the flow charts in that branch.

For example, assume that two long-running flow charts are scheduled to fire one after the other. With a concurrency level of one, the second flow chart cannot run until the first flow chart completes. With a higher concurrency level, on the other hand, both flow charts will execute simultaneously.

Choose a suitable concurrency throttle for your application. Lower concurrency throttles conserve system resources, while higher concurrency throttles can finish tasks faster. To prevent your flow charts from consuming all of your resources, Flux limits the concurrency throttle to 150. If you attempt to set a concurrency throttle higher than this, Flux will automatically substitute a concurrency level of 150.

As a general rule of thumb, if your flow charts are mostly I/O bound, set a concurrency throttle of 2-3 times the number of CPUs in the machine that will execute your flow charts. If, on the other hand, your jobs are mostly CPU bound, choose a concurrency throttle of just 1-2 times the number of CPUs in your machine; generally, a value 2 to 3 higher than the number of CPUs works well. For example, a quad-CPU machine should have a concurrency throttle of 6 or 7 for CPU bound flow charts, but could have a concurrency throttle of 8 to 12 for I/O bound flow charts. Of course, concurrency throttles are tunable parameters, and you will probably need to experiment until you find the right value for your machine.

The default concurrency throttle is 1. To increase the concurrency throttle to 10, you would adjust the RuntimeConfigurationNode in your engine configuration like so:

// Look up the engine.
Engine madeEngine = factory.lookupRmiEngine("enginehost");

// Retrieve the runtime configuration node.
RuntimeConfigurationNode rootNode = madeEngine.getRuntimeConfiguration();

// Update the root node with the new concurrency throttle.
rootNode.setConcurrencyThrottle("10");

Or, if your engine did not have a previously created configuration, you would use the following code:

Factory factory = Factory.makeInstance();

// Create a Runtime Configuration Factory.
RuntimeConfigurationFactory runtimeFactory = factory.makeRuntimeConfigurationFactory();

// Create a Runtime Configuration root node.
RuntimeConfigurationNode rootNode = runtimeFactory.makeRuntimeConfiguration();

// Set the concurrency throttle to 10.
rootNode.setConcurrencyThrottle("10");

// Set your Runtime Configuration on your Configuration object.
config.setRuntimeConfiguration(rootNode);

To set a cluster-wide standard concurrency throttle, replace rootNode.setConcurrencyThrottle("10"); in the code above with the following:

rootNode.setConcurrencyThrottleClusterWide("10");

This call specifies that no more than 10 flow charts across the entire cluster are allowed to fire concurrently.

25.1.1 Different Concurrency Throttles for Different Kinds of Jobs

As mentioned before, flow chart names are hierarchical. They form a tree of jobs. Suppose your tree of jobs has three branches off the root:

/heavyweight  For long running, CPU-intensive jobs.
/lightweight  For short jobs or I/O bound jobs.
/misc  For a variety of other jobs.

With a standard concurrency throttle as described above, these three kinds of jobs are treated the same. For example, suppose your standard concurrency throttle is 5. Then it is possible to be running 5 heavyweight jobs at once, which could slow your computer to a crawl.

Instead, you need a different concurrency throttle for each kind of job. For heavyweight jobs, a concurrency throttle of 1 is appropriate. For lightweight jobs, a concurrency throttle of 10 is appropriate. Finally, for all other jobs, a concurrency throttle of 5 is appropriate. With different concurrency throttles for different kinds of jobs, you will not have 5 heavyweight jobs running at once. At most, you will have one such job running at a time, and it will not prevent lightweight jobs from having an opportunity to run.

To configure the concurrency throttles for your jobs, you must define a runtime configuration. This runtime configuration annotates the tree of jobs with configuration properties. For example, the /heavyweight branch of the tree of jobs is annotated with a concurrency throttle of 1. Similarly, the /lightweight branch of the tree of jobs is annotated with a concurrency throttle of 10, and the /misc branch with a concurrency throttle of 5.
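For illustration, such a runtime configuration can be written out in the same property-listing style used for the job pinning example later in this chapter:

/heavyweight/concurrency_throttle=1
/lightweight/concurrency_throttle=10
/misc/concurrency_throttle=5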

121 For example code, see the Concurrency Level example, which provides a complete code example of how to create and initialize flux.runtimeconfigurationnode objects, which are used to configure these different concurrency levels. Also, see the Javadoc for flux.configuration.runtime_configuration. Note that if you configure your concurrency throttles on a per engine basis, the concurrency limitations are enforced on a per engine basis. However, if you configure your concurrency throttles on a cluster-wide basis using the CONCURRENCY_THROTTLE_CLUSTER_WIDE configuration property in your runtime configuration, then the concurrency limitations are enforced cluster-wide. This flexible enforcement allows you to limit the number of concurrent jobs that are executed across the entire cluster or within a single engine Concurrency Throttles Based on Other Running Jobs Finally, you can define concurrency throttles that are based on other running jobs. For example, suppose you have a "data generation" job and a "data analysis" job. Further suppose that in your system, it is not possible to run the data generation job at the same time as the data analysis job. You can define a concurrency throttle to enforce this constraint. Suppose the data generation job is named "/data generation/producer job" and the data analysis job is named "/data analysis/consumer job". To prevent the producer and consumer jobs from running at the same time, set two concurrency throttles like so: Producer job concurrency throttle: /data analysis/ <= 0 Consumer job concurrency throttle: /data generation/ <= 0 The producer job stipulates that it will run only if there are no data analysis jobs currently running. Similarly, the consumer job indicates that it will run only if there are no data generation jobs currently running. The syntax of this kind of concurrency throttle includes the name of the branch in the job tree, followed by the relational operator "<" or "<=", and followed by a non-negative number. Note that the relational operator "=" is not supported in order to avoid creating concurrency throttles that are impossible to satisfy Pinning Jobs on Specific Flux Instances You can specify that certain kinds of jobs must run on certain Flux instances in your Flux cluster. For example, if only one of the computers in your Flux cluster has report generation software installed on it, you can specify that all of your reporting jobs have to be executed on that computer only. In general, you can specify that a particular Flux instance will, or will not, run certain kinds of jobs. By pinning certain jobs on certain computers, you can make sure that your jobs run on the computers that have the resources that those jobs need. For example, suppose there are Unix and Windows computers in your Flux cluster. Further suppose that you have three kinds of jobs: Windows batch files that can run on Windows only. Flux Manual 120

122 Unix shell scripts that can run on Unix only. Java and J2EE jobs that can run on any computer. You can create concurrency throttles to enforce these constraints. First, organize your jobs into different branches in the tree of jobs. For example: /windows/batch file 1 /windows/batch file 2 /unix/shell script A /unix/shell script B /java/java job 1 /java/java job 2 On the Windows computers, use a Flux runtime configuration like so: /windows/concurrency_throttle=5 /unix/concurrency_throttle=0 /java/concurrency_throttle=5 The above runtime configuration allows up to 5 Windows jobs to run at the same time, zero Unix jobs to run, and up to 5 Java jobs to run concurrently on the Windows computers. Similarly, on the Unix computers, use a Flux runtime configuration like so: /windows/concurrency_throttle=0 /unix/concurrency_throttle=5 /java/concurrency_throttle=5 The above Unix runtime configuration is the opposite of the Windows runtime configuration. It allows up zero Windows jobs to run, up to 5 Unix jobs to run at the same time, and up to 5 Java jobs to run concurrently on the Unix computers. By using a different runtime configuration on each kind of computer in your cluster, you can control the kinds of jobs that the different kinds of computers will execute. See the "job pinning" example in the examples/end_users/job_pinning directory for details on how to pin certain jobs on certain computers in your cluster. Also, see the concurrency_throttle and concurrency_throttle_cluster_wide examples located in the examples/software_developers/concurrency_throttle and examples/software_developers/concurrency_throttle_cluster_wide directories under the Flux installation directory. As an alternative to job pinning, you can use Flux Agents. Briefly, an agent is a lightweight Flux component located on a remote computer that executes Process Actions on behalf of a Flux engine. Using agents, it s easy to specify that Unix processes should execute on agents residing on Unix computers and Windows processes should run on Windows computers, by way of a Flux agent on each of those computers. For more information on Flux Agents, see Flux Agent Architecture. Flux Manual 121

25.2 Pausing and Resuming Jobs

If you need to temporarily prevent a flow chart from firing, you can pause it. Some time later, you can resume it, and it will begin firing again. Paused flow charts do not fire.

To pause a flow chart, call Engine.pause(String flowChartId). Likewise, to resume a paused job, call Engine.resume(String flowChartId). You can also pause all jobs that are of a certain type by calling Engine.pauseByType(String type). Similarly, call Engine.resumeByType(String type) to resume all paused jobs that are of a given type.

26 Time Expressions

Time Expressions are simple, compact strings that indicate when a job should be scheduled. They make it easy to specify jobs that run at recurring times or at times that are relative to other times. This section presents the syntax for time expressions and provides examples for using time expressions to schedule jobs.

Time Expressions can also be used for general-purpose date and time calculations. For example, if there is no interest in job scheduling but there is still a need to know the next time that February 29th falls on a Tuesday, a Time Expression can be used to calculate that date. (The answer is the year 2028.)

Broadly speaking, there are two kinds of time expressions. Cron-style Time Expressions are modeled after Unix Cron, but Flux's Cron-style Time Expressions are much more powerful than Unix Cron. Relative Time Expressions do things that Cron cannot, such as running a job on election day in the United States: the first Tuesday after the first Monday in November. Generally speaking, Flux's Cron-style Time Expressions can perform most of the timing tasks that are needed for real world applications. Relative Time Expressions, however, are convenient to use in certain situations because they are especially compact.

The Flux Designer contains a Time Expression editor. For additional help on creating different time expressions, use the Time Expression editor. Use the editor to describe your schedule and save the Time Expression. You can paste the Time Expression string displayed in the editor into your application.

26.1 Cron-style Time Expressions

In the style of Unix Cron, Cron-style time expressions specify dates and times according to a set of constraints. For example, 0 0 15 8,17 * * mon-fri is a Cron-style time expression that reads, "Run a job at 8:15 AM and 5:15 PM, Monday through Friday."

Fundamentally, a Cron-style time expression specifies a set of constraints. These constraints specify when a job may run. Each constraint is represented by a column, and the columns are separated by spaces. No other spaces are allowed in the time expression. A totally unconstrained Cron-style time expression is shown as follows:

* * * * * * * * * * *

This unconstrained Cron-style time expression will fire a job every millisecond, continuously. In a more practical sense, it is desirable to apply some constraints.

For example, it may be necessary to fire jobs at 12 noon on weekdays only. In this case, the Cron-style Time Expression must be constrained. Milliseconds, seconds, and minutes are constrained to 0. Because the job must fire at noon, hours are constrained to 12. Days-of-the-month and months are totally unconstrained, that is, set to "*". There is no constraint here, because the job must be able to fire regardless of the month or the day of the month. Finally, days-of-the-week must be constrained to weekdays, that is, Monday through Friday. A simple constraint of "mon-fri" accomplishes that goal. After applying these constraints, a Cron-style Time Expression is created that fires jobs at 12 noon on weekdays only, shown as follows:

0 0 0 12 * * mon-fri

Creating these expressions is easier than it may seem. There is a helper object, Cron, that can be used to create Cron-style time expressions, if desired. A Cron object is obtained by calling EngineHelper.makeCron(). By using the Cron helper object, the exact syntax of Cron-style Time Expressions does not need to be learned or memorized. However, it is useful to read and understand the Cron syntax.

The format for a Cron-style time expression is a string containing at least one required column, the milliseconds column, followed by optional columns:

milliseconds seconds minutes hours days-of-month months days-of-week day-of-year week-of-month week-of-year year

Every column from seconds to year is optional. Optional columns, if left unspecified, take on a default value of "*". However, if one of the optional columns is specified, the other optional columns to its left become mandatory and therefore must also be specified. For example, if the optional months column is specified, then the other optional columns to its left (days-of-month, hours, minutes, and seconds) become mandatory and must be specified.

For example, to fire a job at 5 pm every day, you can use the following Cron-style time expression, which takes advantage of optional columns:

0 0 0 17

Because the hours column was specified, all columns to its left become mandatory and must be specified. However, all columns to the right of hours remain optional and take on the default value of "*". The above example Cron-style time expression is equivalent to the following Cron-style time expression, which contains no unspecified optional columns:

0 0 0 17 * * * * * * *

The valid ranges for Cron-style Time Expressions are as follows:

Table 26-1: Cron Style Time Expressions

Milliseconds    0-999
Seconds         0-59
Minutes         0-59
Hours           0-23
Days-of-month   1-31
Months          0-11 or jan-dec
Days-of-week    1-7 or sun-sat

Day-of-year     1-366
Week-of-month   minimum up to 6, where minimum is either 0 or 1, depending on your locale. In the United States locale, minimum is 1.
Week-of-year    minimum up to 53, where minimum is either 0 or 1, depending on your locale. In the United States locale, minimum is 1.
Year

Note that if the computer's locale is switched while Flux is running, you must shut down Flux and restart it to recognize the new locale. This restart is required for Cron-style time expressions to work correctly in the new locale.

A range of "*" includes the entire valid range. Multiple items may be included if they are separated by commas. For example, "5,10,15" includes the numbers 5, 10, and 15. A range of "5-15", for example, includes all the numbers 5 through 15, inclusive. A range of "5-7,10-12" includes the numbers 5, 6, 7, 10, 11, and 12. A range of "tue-sat", for example, includes all the days Tuesday through Saturday, inclusive.

Finally, any range ending in /n includes every nth item in that range. For example, "/4" would include every fourth item in that range. This notation is called a step value and may be applied to ranges of numbers, months, and days-of-week, whether they are in numeric or string form. Moreover, step values may be applied to the special symbol "*". For example, if "*" is followed by "/3", this notation indicates that every third item from the range is to be included. The step value expression "*/3", as applied to the day-of-month in January, means that the values 1, 4, 7, ..., 28, and 31 are included in the range.

Weekdays of the month can be specified in the days-of-month and day-of-year columns. For example, to run on the 3rd Monday of the month, place "3MO" in the days-of-month column. The "^" and "$" symbols may be applied as well. The symbol "^" is synonymous with "1", meaning the first day of the month or the year. The symbol "$" means the last day of the month or the year. For example, to run a job on the last Friday of the month, place "$FR" in the days-of-month column. Similarly, to run a job on the first Monday of the month, place "^MO" or "1MO" in the days-of-month column. Notice how the weekday abbreviations are slightly different than the abbreviations used in the days-of-week column. In the days-of-month and day-of-year columns, the weekday abbreviations are SU, MO, TU, WE, TH, FR, and SA.

The "^" (first) and "$" (last) symbols can be applied to ranges of numbers as well. For example, to run a job on the 5th day of the month through the last day of the month, use the range "5-$" in the days-of-month column in your Cron-style Time Expression.

Two more symbols can be used in Cron-style Time Expressions. The symbol "b" indicates a business unit of time, which is defined by Business Intervals as described in Chapter 27 Business Intervals. For example, "*b" in the Days-of-month column means that jobs should fire on business days only. The difference from the "*" symbol is that when using "*", jobs fire every day, not just on business days. Note that "*b" must be used, not just "b". Similarly, "5b-10b" in the Days-of-month column indicates that jobs should run on the 5th, 6th, 7th, 8th, 9th, and 10th business days of the month. To fire on the 5th, 7th, and 9th business days of the month, you can use "5b,7b,9b" or "5b-9b/2b". To fire on every other calendar day between the 5th and 9th business days of the month, use "5b-9b/2". Note

126 the subtle distinction between "5b-9b/2b" and "5b-9b/2". The former fires every other business day; the latter fires every other calendar day. The symbol "$b" is a handy way of running jobs on the last business moment. For example, "$b" in the Days-of-month column means to run on the last business day of the month. To run on the last calendar day of month, simply use "$". If a Business Interval has not been defined, the "b" and "h" symbols are simply ignored. Cron-style Time Expressions also support "shift" symbols. Shifting can shift backwards or forwards. This functionality allows jobs to run at moments relative to other moments. For example, to run jobs on the second-to-last day of the month, use the expression "$< 1" in the Days-of-month column, which means one day before the last day of the month. Shifting can be employed for other useful purposes. For instance, to run a job on the first business day on or after the 10 th calendar day of the month, use "10>1b" in the Days-of-month column. If the 10 th calendar day of the month falls on a holiday, then the " >1b" portion will "push" the expression ahead to the next business day. On the other hand, if the 10 th calendar day of the month is already a business day, then the ">1b" portion does nothing, because the 10 th calendar day of the month is already a business day. To run a job on the second business day on or after the 10 th calendar day of the month, simply use "10>2b". To run jobs on the second-to-last business day of the month, use a slightly different expression, "$<2b" or "$b<2b", both of which achieve the same result. The expression "$b<1b" is equivalent to "$b". To run jobs on the second-to-last calendar day of the month, use "$<1". This expression means "the last calendar day of the month, less one day". In summary, "<1b" and ">1b" are no-ops if the current day is already a business day, but "<2b" and ">2b" are guaranteed to shift by at least one day. This behavior makes it possible run jobs on or after certain days of the month. You can also use the shift operators to run that are a few calendar days after a certain day. For example, to run a job 5 calendar days after the 6 th business day of the month, use "6b>5" in the Days-of-month Cron column. As another example, to run a job on election day in the United States, which is the first Tuesday after the first Monday in November, use "1MO>1TU" in the day-of-month Cron column. This expression, "1MO> 1TU", says to go to the first Monday of the month, then advance to the following Tuesday. In general, you can shift left or right by a number of days of the week. For example, you can shift right to the third Friday after a certain moment by using the expression ">3FR". In general, all of these techniques and symbols can be used in any of the Cron-style Time Expression columns, not just Days-of-month and Day-of-year columns. The rollover operator (">>") provides similar functionality to the right shift operator (">"). The difference between rollover and right shift is that when the rollover operator attempts to satisfy the constraints of the Cron-style time expression, it can "rollover" into the next Cron column on the right in search of a match, unlike the right shift operator. For example, suppose you need to fire a job on the first Monday, at midnight, on or after the 28 th calendar day of the month. 
Using a right shift operator, the Cron-style time expression is shown as follows:

0 0 0 0 28>1MO

Using the above time expression, this job fires at midnight on every month that has a Monday on or after the 28th calendar day of the month, such as the 28th, 29th, 30th, or 31st

calendar day of the month. This time expression can be satisfied in some months, such as September 2003, when the 28th calendar day of the month falls on a Sunday. However, suppose that the first Monday on or after the 28th does not arrive until the 3rd calendar day of the following month, as is the case in October 2003. In October 2003, the 28th calendar day of the month is a Tuesday. The first Monday after the 28th does not occur until Monday, 3 November 2003. In October 2003, this job will not fire using the above Cron-style time expression and the right shift operator. The reason is simple. When the Cron-style time expression searches for a matching date and time, the day-of-month column constraint requires that the month column constraint stay fixed. Consequently, only month values that are explicitly specified are considered.

However, if you want this job to always fire on the first Monday on or after the 28th calendar day of the month, even if the time expression needs to "roll over" into the next month, the rollover operator (">>") performs this function. For example, the following Cron-style time expression always fires on the first Monday on or after the 28th calendar day of the month.

0 0 0 0 28>>1MO

In October 2003, the 28th calendar day of the month is a Tuesday. The first Monday after the 28th does not occur until Monday, 3 November 2003. Therefore this job fires on Monday, 3 November 2003. The job next fires on Monday, 1 December 2003. The job next fires on Monday, 29 December 2003, because in December 2003 the 28th calendar day of the month falls on a Sunday, so the first Monday is simply the next day.

The right side of the rollover operator accepts numbers, followed by an optional "b" symbol, "h" symbol, or a two letter day-of-week abbreviation. Using the rollover operator, you can specify firing times that contain a "fixed" component followed by a "variable" component, which searches as far as necessary into other Cron columns in order to satisfy the constraints of the Cron-style time expression.

The "h" symbol is the opposite of "b". It means holidays, or non-business days. It can be used anywhere "b" can be used and has the opposite meaning of "b".

Each column normally accepts a number, a range of numbers, or a comma-separated list of numbers and ranges of numbers. However, relative increments such as "+5" can be specified also. For example, "+15" in the Minutes column means that the job should fire every 15 minutes. Note that this requirement is subtly different than "*/15", which means to fire when the minute hand is on the 0, 15, 30, and 45 numbers on the clock. As a further example of using relative increments, you can specify that a job should run every 10th Wednesday. To specify this requirement, place "WED" in the days-of-week column and "+10" in the week-of-month or week-of-year column. In this case, the job will fire on a Wednesday, every 10 weeks.

Note that these Cron-style time expressions are slightly different than traditional Unix-style Cron specifications. In Unix, the time range goes down to the minute. Here, the time range goes down to the millisecond. Furthermore, in Unix, the months range from 1-12. Here, the months range from 0-11. Similarly, in Unix, the days-of-week range from 0-7, starting with Sunday. Sunday is associated with the numbers 0 and 7. Here, the days of the week range from 1-7, starting with Sunday. Sunday is associated with the number 1 and Saturday with the number 7.

The reason for these differences is that the scheduler is consistent with Java. In Java, the months range from 0-11 and the days-of-the-week range from 1-7.

26.1.1 "Or" Constructs

Cron-style Time Expressions also permit running jobs at two or more times that are unrelated to each other. For example, to run a job at 10:15 am and 3:35 pm, you can use the "Or" construct in Cron-style Time Expressions. For example:

0 0 (15 10 | 35 15) * * *

Notice how the third and fourth Cron columns, Minutes and Hours, are grouped. This group contains two elements, "15 10" and "35 15", which means that the job can fire at 10:15 am or 3:35 pm.

These "Or" constructs can accept two or more columns in each group. In fact, you can take an indefinite number of fully-specified Cron-style time expressions and group them together using multiple "|" symbols, also known as "Or" constructs. For example, the following three Cron-style time expressions, which are completely unrelated to each other, can be grouped together to form a new Cron-style time expression that fires when any of the three individual Cron expressions would fire.

0 0 30 * * * mon-fri // Fires on the half hour
0 0 0 * * * sat,sun // Fires at the top of the hour
0 0 15 * * * * // Fires on the first quarter hour

The result of "Or"ing the above three Cron expressions is shown below. This Cron expression fires on the half hour, at the top of the hour, and on the first quarter hour.

(0 0 30 * * * mon-fri | 0 0 0 * * * sat,sun | 0 0 15 * * * *)

26.1.2 For Loops

Furthermore, Cron-style Time Expressions permit running jobs from a beginning point in time, followed by job firings at regular intervals, up to an ending point in time. Analogous to the "for" loop language construct in the Java programming language, Cron-style Time Expressions have a "For Loop" construct.

For example, suppose you need to fire jobs from 9:15 am through 4:30 pm, every 15 minutes, Monday through Friday. The following Cron-style Time Expression uses a "For Loop" to describe this desired firing pattern:

0 0 (15 9; 30 16; */15 *) * * *

The "For Loop" contains "(15 9; 30 16; */15 *)". Like a Java "For Loop", the loop starts at 9:15 am, continues until 4:30 pm, and fires when the minute hand is on 0, 15, 30, or 45 and when the hour hand is on any number of the clock. In general, the first component of the "For Loop" is called the "Start Constraint". The second component is called the "End Constraint", and the third component is called the "Increment Constraint". Notice how these names and this behavior are very similar to Java "For Loops".

There are some variations on the "For Loop" that you can use. Instead of firing when the minute hand is on 0, 15, 30, and 45, you can fire every 15 minutes, which is subtly different:

0 0 (15 9; 30 16; +15 *) * * *

Finally, you can omit the End Constraint and insert a fourth constraint at the end, called the count:

0 0 (15 9; ; */15 *; 5) * * *

The above Cron-style Time Expression fires at 9:15 am, 9:30 am, 9:45 am, 10:00 am, and 10:15 am. After the 5th firing, the "For Loop" breaks out and the Cron-style Time Expression seeks out the next time in the future that can be satisfied.

26.1.3 Using For-Loops to Condense Multiple Timer Triggers

This section describes how you can condense several timer triggers into one using the for-loop construct in Cron-style time expressions.

First of all, here is a typical Cron expression for running a job every day from 10:00 through 11:45, on the quarter hours:

0 0 */15 10,11 * * *

Reading left to right, the milliseconds and seconds columns are on 0, and the minutes are 0, 15, 30, and 45. The hours are 10 and 11. For the days-of-month, months, and days-of-week, a wildcard is shown, meaning the job can fire on any day-of-the-month, month, or day-of-the-week. This Cron expression means that your TimerTrigger will fire at 10:00, 10:15, 10:30, ..., 11:30, and 11:45.

The for-loop constructs add to this functionality. First consider this typical use case: "Fire from 10:15 through 12:30." By using the technique above, you would observe extra firings at 10:00 and 12:45. Without "for loops", you would have to break the Cron expression into three different jobs:

0 0 15,30,45 10 * * *
0 0 */15 11 * * *
0 0 0,15,30 12 * * *

This technique of using three different jobs will work, of course, but it is cumbersome, because you need three different Cron expressions, three different TimerTriggers, and three different jobs.

The for-loop construct solves this problem. It is patterned after the Java "for loop", which is well understood. Consider this "for loop" syntax:

0 0 (15 10; 30 12; */15 *) * * *

The first two columns are the same as the first two columns in each of the three Cron expressions listed above: 0 0. Next, notice the parenthesized part. This part is very similar to a Java "for loop". A Cron for-loop specifies a repeating pattern of timing points. The first part, "15 10", says to start the for-loop firing at 10:15. The "15 10" part represents the minutes and hours columns in a Cron expression. The second part, "30 12", represents the ending point; this for-loop firing will stop at 12:30. Finally, the third part, "*/15 *", is the "increment" -- just like in a Java for-loop. In Java, the increment is usually something like "i++". Here, it is "*/15 *". This increment says to fire between the start and end dates whenever the minute hand is on 0, 15, 30, or 45 and the hour hand is on any legal hour. The effect of the Cron for-loop is that you observe the following firing pattern:

10:15, 10:30, 10:45, 11:00, 11:15, 11:30, 11:45, 12:00, 12:15, and 12:30
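As a brief usage sketch, the condensed for-loop expression can be handed to a single Timer Trigger through the setTimeExpression call that appears in the examples later in this chapter; the timerTrigger variable is assumed to exist already.

// One Timer Trigger replaces the three separate jobs shown above.
timerTrigger.setTimeExpression("0 0 (15 10; 30 12; */15 *) * * *");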

The last three columns of the Cron expression, "* * *", have the usual meaning.

The great part about the Cron for-loop is that you can condense multiple jobs into just a single Cron expression. There is no need for three Cron expressions, three TimerTriggers, and three jobs. Just one of each will do.

If you don't want to use the string Cron syntax directly, you can use the Cron helper object to generate your Cron expression. The Cron object API can generate any legal Cron expression. It doesn't matter whether you prefer to write out your Cron expressions using strings or whether you use the Cron API. At runtime, it's all the same.

26.1.4 Using the Cron Helper Object to Create For-Loops

The above section describes how to create a for-loop. This section describes how to create that same for-loop using a Cron helper object. The end result of using the Cron helper object is exactly the same as if you had generated the Cron time expression yourself. The only difference is how the Cron time expression string is generated.

The code that uses the Cron helper object to generate a Cron time expression as described above can be found below. For reference, the code below will generate a Cron time expression that is equivalent, but not necessarily identical, to the following Cron time expression.

0 0 (15 10; 30 12; */15 *) * * *

The code below first creates a Cron helper object; EngineHelper is a factory for flux.Cron objects. Next, we create the "start", "end", and "increment" constraints of the for-loop. These three constraints represent the three components inside the parentheses in the above for-loop Cron expression. Finally, we put the three constraints together to form the for-loop and finish off the Cron object. If you call toString() on the final Cron object, the returned string will look different than the above Cron expression. However, the semantics are identical. To use the Cron expression created by this Cron object, call Cron.toString().

// Generate a Cron time expression equivalent to:
// 0 0 (15 10; 30 12; */15 *) * * *
Factory factory = Factory.makeInstance();
EngineHelper helper = factory.makeEngineHelper();

// Make the start constraint.
CronSlice startConstraint = helper.makeCronSlice(CronColumn.MINUTE, CronColumn.HOUR);
startConstraint.add(CronColumn.MINUTE, 15);
startConstraint.add(CronColumn.HOUR, 10);

// Make the end constraint.
CronSlice endConstraint = helper.makeCronSlice(CronColumn.MINUTE, CronColumn.HOUR);
endConstraint.add(CronColumn.MINUTE, 30);
endConstraint.add(CronColumn.HOUR, 12);

// Make the increment constraint.
CronSlice incrementConstraint = helper.makeCronSlice(CronColumn.MINUTE, CronColumn.HOUR);
incrementConstraint.add(CronColumn.MINUTE, "*/15");
incrementConstraint.add(CronColumn.HOUR, "*");

// Create the for-loop.
CronSlice forLoop = helper.makeForLoop(startConstraint, endConstraint, incrementConstraint);

// Finish off the Cron object.
Cron finalCron = helper.makeCron(forLoop);
finalCron.add(CronColumn.MILLISECOND, 0);
finalCron.add(CronColumn.SECOND, 0);

// Finally, generate the Cron time expression to be used elsewhere.
// 'finalCron' is equivalent to: 0 0 (15 10; 30 12; */15 *) * * *
finalCron.toString();

26.2 Real World Examples from our Technical Support Department

Question: How can I schedule a job to fire every 5 minutes between 9:00 and 11:55, Monday through Friday?

Answer: This is an interesting schedule. The "For Loop" construct of Cron-style Time Expressions can model this need.

0 0 (0 9; 55 11; */5 *) * * mon-fri

The above Cron-style Time Expression will fire Monday through Friday, beginning at 9:00 am, firing every 5 minutes, until 11:55 am.

Question: How do I schedule a job to fire every 3 weeks on Monday, Tuesday, and Friday, beginning at 10:00, ending at 16:30, and repeating every 5 hours (or 5 minutes)?

Answer: A Cron-style time expression that uses a "for loop" can model this requirement.

0 0 (0 10; 30 16; 0 +5) * * mon,tue,fri * * +3 *

The above Cron-style Time Expression will fire Monday, Tuesday, and Friday, from 10:00 until 16:30 every 5 hours, then skip 3 weeks, then repeat. Note that the "+3" in the week-of-year column allows this job to fire every 3 weeks. To modify this time expression to fire every 5 minutes instead of every 5 hours, change the time expression as follows.

0 0 (0 10; 30 16; +5 *) * * mon,tue,fri * * +3 *

Notice the only change was in the "increment constraint" of the "for loop". The constraint "0 +5" (every 5 hours) was changed to "+5 *" (every 5 minutes).

Followup Question: How do I schedule a job to fire every 2 days, beginning at 10:00, ending at 16:30, and repeating every 5 minutes?

Answer: The time expression to use is very similar to the time expressions shown above. Instead of firing on certain days of the week and then skipping 3 weeks, simply fire every 2 days, like so:

0 0 (0 10; 30 16; +5 *) * * * +2 * * *

Note that the "+2" in the day-of-year column allows this job to fire every 2 days.

Question: I want to define an activity that occurs each month on the last weekday at a particular time. For example, the last Thursday, 18:30, starting 1 January 2002, and ending 31 December 2002. [Note: This example can be solved using a Relative Time Expression too.]

Answer: Ok, so you want to run a job on the last weekday of the month at 1830. And the period you want to run this in is from 1 January 2002 through 31 December 2002. First, create a Date object that represents the beginning of the new year in 2003, like so:

EngineHelper helper = factory.makeEngineHelper();

// New Year's Day 2003, at midnight
Date newYearsDay2003 = helper.makeDateInDefaultTimeZone("2003 Jan 1 00:00:00.000");

This date defines when your job will stop firing. Next, we need a time expression that will fire a job on the last weekday of the month at 1830. The following Cron-style Time Expression expresses this requirement.

String timeExpression = "0 0 30 18 $b * *";

The above time expression says to fire at 1830 on the last business day of the month. For our purposes, business days are Monday through Friday. You can retrieve a Business Interval that includes Monday through Friday and excludes Saturday and Sunday like so:

BusinessIntervalFactory bif = helper.makeBusinessIntervalFactory();
BusinessInterval bi = bif.makeSaturdaySundayWeekend();

Now you've got everything you need to create the Timer Trigger for this job:

timerTrigger.setBusinessInterval(bi);
timerTrigger.setTimeExpression(timeExpression);
timerTrigger.setEndDate(newYearsDay2003);

This Timer Trigger will fire at 1830 on the last weekday of every month until New Year's Day 2003.

26.3 Relative Time Expressions

Relative time expressions specify an offset, relative to a particular point in time. As with Cron-style Time Expressions, there is a helper object, Relative, that can be used to create Relative Time Expressions, if desired. A Relative object is obtained by calling EngineHelper.makeRelative(). By using the Relative helper object, the exact syntax of Relative Time Expressions does not need to be learned or memorized. However, it is useful to read and understand the syntax. Examples include:

+90s (90 seconds in the future)
-45s (45 seconds in the past)
+2H+30m+15s (2 hours, 30 minutes, 15 seconds in the future)
>thu (forward to next Thursday)
>2sat (forward 2 Saturdays)
>oct (advance to October)
^M (go to the beginning of the current month)
^M>fri (go to the first Friday of the current month)
^M>2fri (go to the second Friday of the current month)
^M>3fri (go to the third Friday of the current month)
+M^M>2tue (go to the second Tuesday of next month)
$M<fri (go to the last Friday of the current month)
^M+21d (go to the 22nd day of the current month)
^d+3H (go to 3 AM on the current day)
>b (advance to the next business day)
+M^M>b>9H (advance to 9 AM on the first business day of next month)
<b (back up to the previous business day)
?sat{-d}?sun{+d} (if today is Saturday, back up to Friday, but if today is Sunday, advance to Monday)

Relative time expressions consist of the following syntax. They may be combined to form more complex time expressions.

Table 26-2: Relative Time Expression Syntax

+ <number> <unit>   Move forward in time by number years, months, weeks, days, hours, minutes, seconds, or milliseconds. If number is omitted, it defaults to 1.

- <number> <unit>   Move backward in time by number years, months, weeks, days, hours, minutes, seconds, or milliseconds. If number is omitted, it defaults to 1.

> <number> <unit>   Move forward in time by number absolute days-of-the-week, absolute months, holidays, non-holidays, weekdays, weekend days, or business days. If number is omitted, it defaults to 1. If the date is already on the given day-of-week or month, then number is first decremented by 1.

< <number> <unit>   Move backward in time by number absolute days-of-the-week, absolute months, holidays, non-holidays, weekdays, weekend days, or business days. If number is omitted, it defaults to 1. If the date is already on the given day-of-week or month, then number is first decremented by 1.

^ <unit>   Go to the beginning of the current year, month, week, day, hour, minute, second, or millisecond. Going to the beginning of one time unit (such as hour) also causes any sub-units (such as minute, second, and millisecond) to go to the beginning.

$ <unit>   Go to the end of the current year, month, week, day, hour, minute, second, or millisecond. Going to the end of one time unit (such as hour) also causes any sub-units (such as minute, second, and millisecond) to go to the end.

<n> <unit>   Go to the nth year, month, day-of-month, hour, minute, second, or millisecond. Recognized units are y, M, d, H, m, s, and S.

<n> <day>   Go to the nth day-of-week in the current month. n is 1-based. If n is omitted, go to the first day-of-week in the current month. Recognized day-of-week values are sun, mon, tue, wed, thu, fri, and sat.
            Go to the last day-of-week in the current month.

? <unit> { <then-time-expression> } { <else-time-expression> }   If the current time occurs on the given unit (day-of-week, month-of-year, business day, etc.), apply the then-time-expression. Otherwise, apply the optional else-time-expression. These conditional Time Expressions may be nested.

The following units are recognized.

Table 26-3: Recognized Units

Unit   Description
y      Year
M      Month
w      Week
d      Day
H      Hour
m      Minute
s      Second

26.4 More Examples

Table 26-4: Relative Time Expression Examples

+30d-8H    Move forward 30 days, then back up 8 hours.

-12H+30s    Move backward 12 hours, then forward 30 seconds.

+23y-23M+23S-8m    Move forward 23 years, then back up 23 months, then move forward 23 milliseconds, then back up 8 minutes.

>sat    Move forward to Saturday.

>3sat>nov    Move forward 3 Saturdays, then skip ahead to November.

<5may>fri    Move backward 5 Mays, then advance to Friday.

Go to the year 2005, then move forward 2 years, then back up to

Go to March 22nd, 8:30 AM of the current year.

^M>4mon    Go to the beginning of the month, then advance 4 Mondays.

+M^M>4mon    Go to the beginning of next month, then advance 4 Mondays.

^d+7H    Go to the beginning of today, then advance 7 hours.

>3D    Move forward three weekdays.

<4e    Move backward 4 weekend days.

>n    Move forward to the next non-holiday.

?d{+d}?b{}{+2d}    If today is a weekday, advance a day. If today is not a business day, advance two days.

26.5 Another Real World Example from our Technical Support Department

Question: I want to define an activity that occurs each month on the last weekday at a particular time. For example, the last Thursday, 18:30, starting 1 January 2002, and ending 31 December 2002. [Note: This example can be solved using a Cron-style Time Expression too. See Section 26.4 for details.]

Answer: Ok, so you want to run a job on the last weekday of the month at 1830, and the period you want to run this in is from 1 January 2002 through 31 December 2002. First, you orient yourself to midnight on New Year's Day, 2002, like so:

EngineHelper helper = factory.makeEngineHelper();
// New Year's Day 2002, at midnight
Date newYearsDay2002 = helper.makeDateInDefaultTimeZone("2002 Jan 1 00:00:00.000");
// New Year's Day 2003, at midnight

Date newYearsDay2003 = helper.makeDateInDefaultTimeZone("2003 Jan 1 00:00:00.000");

Now find the last weekday at 1830 in January 2002.

Date lastWeekdayInJanuary = EngineHelper.applyTimeExpression(newYearsDay2002, "$M<D^d+18H+30m", null);

Let me explain that relative time expression a bit. $M takes you to the end of the current month. Then <D backs you up to the first weekday that it finds. ^d resets the time-of-day to midnight. Then +18H+30m advances you to 1830 on the last weekday of January 2002. So now you know when the first job should fire. All you need now is a time expression to apply repeatedly so that your job fires every month at the proper time.

String timeExpression = "+M$M<D^d+18H+30m";

This time expression will be applied to a date that represents the last weekday of the current month. +M takes you into the next month. $M takes you to the end of that month. <D backs you up to the first weekday that it finds. ^d resets the time-of-day to midnight. And +18H+30m advances you to 1830 on the last weekday of the month. Now you've got everything you need to create the Timer Trigger for this job:

timerTrigger.setScheduledTriggerDate(lastWeekdayInJanuary);
timerTrigger.setTimeExpression(timeExpression);
timerTrigger.setEndDate(newYearsDay2003);

This Timer Trigger will fire at 1830 on the last weekday of every month throughout the year 2002.

27 Business Intervals

Business Intervals are periods of time that define when jobs may, or may not, run. While ordinary calendars are oriented to entire days, Business Intervals have millisecond resolution. Picture a timeline running from left to right. Business Intervals define ranges of time in that timeline that are either included or excluded. Included ranges of time define when jobs may run. Excluded ranges of time define the periods when jobs may not run. Once defined, Business Intervals can be supplied to TimerTrigger objects or used when calculating Time Expressions through the EngineHelper interface.

Business Intervals define when jobs are allowed to run and when they are not allowed to run. Business Intervals are not restricted to 24-hour periods. You can define them to include any range of time on the timeline. For example, to allow jobs to run only between 8:45 am and 4:30 pm, Monday through Friday, define a Business Interval as shown below. The resulting Business Interval can be used to "mask in" allowable times for running jobs.

BusinessInterval bi = engineHelper.makeBusinessInterval("my bi");

bi.include("daytimes", null, "+7H+45m", " * * mon-fri");

On instantiation, the Business Interval "bi" excludes all time across its entire timeline. In other words, there are no legal times in which a job can run. The call to bi.include() states that certain time ranges on the timeline are to be included and otherwise made legal for running jobs. These time ranges are dictated by the Cron-style Time Expression " * * mon-fri", which specifies starting times of 8:45 am Monday through Friday. At each of those starting times, a time offset of 7 hours and 45 minutes ("+7H+45m") is applied. Consequently, the resulting Business Interval masks in the range of time from 8:45 am through 4:30 pm, Monday through Friday, because 8:45 am plus 7 hours and 45 minutes is 4:30 pm.

By defining Business Intervals programmatically, you can define your organization's holidays or otherwise define legal times for running jobs. The EngineHelper object returns some pre-defined Business Intervals. Using code like the above example, you can programmatically create your own Business Intervals. Finally, you can specify that Business Intervals be created by delegating to a Java class of your creation. By using delegation, you can access your company's holiday information if it is stored in a database or an external information system.

Once a Business Interval is defined using any of the techniques described above, it can be combined with other Business Intervals. The BusinessIntervalFactory interface has methods for creating unions, intersections, and differences of Business Intervals. Since a Business Interval defines a timeline, set theory operations can be applied to it to form new timelines. For example, if you have a European Business Interval that defines company holidays in Europe and an Asian Business Interval that defines company holidays in Asia, a combined Europe-Asia Business Interval can be created simply by creating a union of these two Business Intervals with the BusinessIntervalFactory interface.

Specific excludes are accomplished by using the various exclude methods described in the BusinessInterval Javadoc. For example, say you wanted to prevent all tasks from firing on July 4th. The excludeDay method makes this adjustment when given simply a name and a date, as shown in the code below.

bi.excludeDay("Fourth of July", engineHelper.makeDateInDefaultTimeZone("2007 Jul 4 00:00:00.000"));

Note that when using Time Expressions within your Business Interval to define included and excluded time periods, you cannot use the five special symbols b, D, e, h, or n.

Finally, when using Business Intervals with Relative Time Expressions, the Relative Time Expression treats a day as a holiday only when the Business Interval excludes that entire calendar day, from midnight to midnight. If a day is only partially excluded, it is not considered a holiday.
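For instance, the following sketch layers a holiday exclusion on top of the built-in Saturday/Sunday weekend interval and then attaches the result to a Timer Trigger. It reuses only calls shown elsewhere in this chapter; substitute your own holiday names and dates.

EngineHelper helper = factory.makeEngineHelper();
BusinessIntervalFactory bif = helper.makeBusinessIntervalFactory();

// Start from the built-in interval that excludes Saturdays and Sundays.
BusinessInterval holidays = bif.makeSaturdaySundayWeekend();

// Exclude a company holiday on top of the weekend exclusions.
holidays.excludeDay("Fourth of July",
        helper.makeDateInDefaultTimeZone("2007 Jul 4 00:00:00.000"));

// Attach the interval to a Timer Trigger so its time expression can
// treat weekends and the excluded holiday as non-business days.
timerTrigger.setBusinessInterval(holidays);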

28 Forecasting

Flux can generate forecasts of when your jobs are expected to fire. You specify a time range and a branch in the hierarchical flow chart namespace, possibly including the wildcard characters "*" and "?". The "*" character matches any character zero or more times, and the "?" character matches any character exactly once. The result is an ordered collection of expected firing times for all jobs in that namespace that contain a timer trigger.

Note that the actual firing of jobs may not correspond exactly with the forecast due to a variety of factors, including the behavior of a job's flow chart during execution, engine and cluster-wide concurrency constraints, job load, and whether any engines were enabled and firing jobs. Example usage is as follows.

ForecastIterator it = engine.forecast("/");
ForecastElement fe;
while (it.hasNext()) {
    fe = it.next();
    System.out.println(fe.getFlowChartName() + ": " + fe.getPredictedFiringDate());
}

The method Engine.forecast() is responsible for generating forecasts. Detailed information on this method can be found in the Javadoc documentation.

29 Caching

Caching is designed to increase performance when loading and running flow charts. This performance increase is achieved by loading each flow chart into memory before it is executed. When these flow charts execute, the database does not need to be queried, minimizing runtime overhead.

29.1 Local Caching

Use the LOCAL option for the CACHE_TYPE configuration option when you are not clustering your engines or when you are not using the cluster networking configuration option.

29.2 Networked Caching

When using the NETWORKED option for the CACHE_TYPE configuration option, the Flux engines in the cluster create a shared network cache. In this case, a network address equal to the value of the CLUSTER_ADDRESS configuration property is used. Also, a multicast port that is one higher than the CLUSTER_PORT configuration property is used.

29.3 Enabling Caching

You are required to enable caching manually. To enable caching, set the CACHE_TYPE configuration option to either LOCAL or NETWORKED. This configuration option defaults to NONE. Flux provides two methods of caching, Networked Caching and Local Caching. Networked caching is used only if the CLUSTER_NETWORKING configuration option is enabled. When using local caching with multiple engines running in a cluster on a network, performance increases will not be as noticeable.
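For example, in an engine configuration properties file, caching might be enabled like this (a sketch; choose the value that matches your deployment):

# Enable the in-memory flow chart cache on this engine.
CACHE_TYPE=LOCAL

# Or, when the engines in a cluster should share a cache
# (requires the CLUSTER_NETWORKING option to be enabled):
# CACHE_TYPE=NETWORKED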

29.4 Cache Size

While caching is enabled, you may wish to increase or decrease the number of flow charts allowed to be loaded into memory. The CACHE_SIZE configuration option controls this behavior. For example:

CACHE_SIZE=100

Or

configuration.setCacheSize(100);

The integer on the right of the equals sign represents the maximum number of flow charts allowed in memory. The second form shows the same setting made through a configuration object. The CACHE_SIZE property is set to 200 by default.

Inside the cache, flow charts are given a weight. Once the cache is full, this weight is examined to decide which flow charts will be removed from the memory cache. Once enough flow charts have been removed from the cache, incoming flow charts are then allowed to be added. Each flow chart's weight is calculated using the following formula:

Weight = (Size) * (Date Last Accessed)

Larger flow charts, used frequently, receive a higher weight. Smaller, less frequently used flow charts receive a lower weight. Once the cache is full, flow charts with higher weights are kept while flow charts with lower weights are ejected from the cache.

Depending on the size of the flow charts stored in the cache, the memory required by the JVM may vary greatly. In certain instances, it may be necessary to increase the amount of memory available to the Flux JVM instance. By default, the JVM is allocated 64 MB of memory. Memory for the JVM can be set using the "-X" options on startup of the JVM. The "-Xms" option sets the initial memory provided to the JVM. The "-Xmx" option sets the maximum memory the JVM is allowed. For example:

java -Xmx256m -classpath .\;flux.jar;<jar files> flux.Main server start

The "-Xmx256m" option above specifies that the JVM is allocated a maximum of 256 MB of memory.

29.5 Updating JBoss Cache

Flux uses JBoss Cache version 1.2.3, which uses JGroups 2.2.9, to implement the caching functionality. To use a different or updated version of JBoss Cache, either replace the appropriate JBoss Cache JAR files in the Flux lib directory or point to the new JAR files from within the engine's class path. See Chapter 2 Technical Specifications for a list of JAR files required for caching.

29.6 Caching Performance

Caching was implemented so that larger, more complex flow charts execute more quickly. When using caching for smaller flow charts, the performance gain may not be as large as it is with larger and more complex flow charts. The following data displays various test results representing performance increases. All tests were conducted against a MySQL version 4.1 database.

Throughput Performance

Throughput measures the maximum number of flow charts that can be executed per second, depending on the number of variables mapped into the flow chart's variable manager. Figure 29-1 below shows the effect caching has on performance. The flow chart being executed is a very simple Timer Trigger to Java Action pair. While caching is disabled, the number of flow charts executed per second decreases as the number of variables in the flow chart increases. While caching is enabled, the number of flow charts executed per second remains constant as the number of variables in the flow chart increases.

Figure 29-1: Effect of Caching on Throughput Performance

Flow Chart Retrieval Performance

Flow chart retrieval performance measures the amount of time taken to retrieve a flow chart and put it into the engine. Figure 29-2 below displays the amount of time, in seconds, it takes for Flux to retrieve a flow chart with a certain number of variables.

Figure 29-2: Effect of Caching on Flow Chart Retrieval Time

Split Performance

Split performance measures the amount of time taken to execute a complex flow chart with 7 splits, 15 actions, and a varying number of variables in its variable manager. The results of the split performance testing appear in Figure 29-3 below.

Figure 29-3: Effect of Caching on Split Performance

Sub Flow Chart Performance

Sub flow chart performance tests measure the amount of time taken to execute a single flow chart with 10 sub flow chart actions, each with 5 actions, and 1,000 variables in the flow chart. Figure 29-4 below shows the performance advantage that caching provides over running with caching disabled.

Figure 29-4: Effect of Caching on Sub Flow Chart Performance

30 Databases and Persistence

Optionally, Flux can store jobs, workflows, and business processes in your database. This way, you do not need to re-schedule your flow charts whenever your application restarts.

30.1 Database Deadlock

From time to time while running your flow charts, the engine may encounter an SQL exception indicating database deadlock. If database deadlock occurs, the database transaction associated with that flow chart is rolled back and the flow chart is re-executed. The job is not lost. An engine instance re-executes the job automatically.

If you encounter database deadlock while calling a client API of the engine, your client code has to re-execute that call. However, you can wrap your engine with a new engine that retries failed client calls automatically. See the wrapEngine() method in the flux.Factory class. Once your job is successfully added to the engine, database deadlocks do not require any action on your part.

In general, row-level locking is preferred in databases, because it minimizes the opportunity for deadlock and connection timeouts. If possible, enable row-level locking at the database level. Also see Section 30.3 for further information on minimizing the chances of database deadlock by using database indexes and by flagging database tables with optimization hints.

If you see more than an average of one deadlock per hour, or if you can reproduce a deadlock regularly by following a well-defined sequence of steps, then contact our Technical Support department at support@fluxcorp.com with an explanation of the deadlock situation. We will work with you to attempt to reduce the number of deadlocks to a tolerance of less than an average of one deadlock per hour.
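As a sketch of that retry wrapper, assuming "factory" is your flux.Factory instance and "engine" is the engine it created (the exact signature of wrapEngine() may differ slightly; consult the flux.Factory Javadoc):

// Wrap the engine so that client calls that fail with a database
// deadlock are retried automatically instead of surfacing an exception.
Engine retryingEngine = factory.wrapEngine(engine);

// Make client calls through the wrapped engine from this point on.
retryingEngine.put(flowChart);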

144 30.2 Database Properties When you create Flux tables using one of the provided SQL scripts, you may need to increase the size of a database column to an appropriate size for your application's needs Initializing Database Connections When the engine acquires its database connections directly from a database server, instead of acquiring them through an application server, you have the option of executing custom SQL initialization statements on the JDBC connections as they are created. Some database servers and their JDBC drivers, such as Informix, require special initialization in order to work with Flux. To initialize a JDBC connection with custom SQL, set the following property in your database properties. CONNECTION_INIT=initializing SQL statements go here where the property key is CONNECTION_INIT and the property value is "initializing SQL statements go here". If more than one initialization SQL statement is needed, you can create additional properties, as shown below. CONNECTION_INIT.2=this SQL initialization statement runs second CONNECTION_INIT.3=this SQL initialization statement runs third As many additional properties may be set as required Oracle Notes If you are storing a byte[] Java type in a persistent variable, this type maps to the VARBINARY database type. However, if this VARBINARY database type is mapped to an Oracle BLOB database column, Oracle truncates the binary data in the byte[] field to a size of 64 KB when storing data with the usual JDBC APIs. Flux must use these standard JDBC APIs so that Flux s database functionality is portable across databases. For this reason, if you are storing byte[] fields in your persistent variables and you are using Oracle, make sure that each byte[] fields holds less than 64 KB of data. One simple technique to store more than 64 KB of data in Oracle is to spread your byte array data out over several byte[] fields. If multiple database users on Oracle are running separate Flux instances, you might see exceptions like "ORA-00942: table or view does not exist" if your user permissions are not set up properly. If this exception occurs, it is likely that your Flux instance is able to see the Flux tables created by another database user when Flux queries your database s meta data. However, when your Flux instance issues SQL statements at runtime, you may encounter SQL exceptions, because the tables do not exist in your user space. If you do not have permission to use another user s tables, you must make sure that you cannot view that user s table information when querying the database meta data. Alternatively, to resolve this issue, change your database table prefix in your Flux configuration or drop all the Flux tables in all user spaces. The JDBC driver class name for Oracle 8 is oracle.jdbc.driver.oracledriver, and the JDBC driver class name for Oracle 9 is oracle.jdbc.oracledriver. Be careful not to confuse the two. Using the Oracle 8 driver to connect to an Oracle 9 Flux Manual 143

145 database might result in a "ORA-01000: maximum open cursors exceeded" error. To resolve this issue, you must use the Oracle 9 driver to connect to your Oracle 9 database Oracle LOBs By default, standard JDBC APIs are used to persist large binary and character data to Oracle databases. Due to an Oracle limitation, standard JDBC cannot persist more than 4k bytes of data when communicating with an Oracle database. However, using an Oracle-specific API, it is possible to workaround this Oracle limitation. The ORACLE_LARGE_OBJECT_ADAPTER configuration setting accepts the name of an adapter class that uses database-specific and driver-specific APIs to overcome this Oracle limitation. This adapter class must implement the flux.oraclelargeobjectadapter interface. Two such adapter classes are provided by default. They can be used when the Oracle JDBC driver is used directly and when a Weblogic driver wraps an Oracle driver. These default adapters work only when an Oracle driver, supplied by Oracle Corporation, is used directly or indirectly. The class names for these drivers can be found as constants in the flux.oraclelargeobjectadapter interface. You can also provide your own adapter. It must implement the flux.oraclelargeobjectadapter interface. Your custom adapter can be used to support non-oracle-corporation drivers or other application servers that provide JDBC drivers that wrap Oracle JDBC drivers. If you need to overcome this Oracle limitation in a situation where a built-in adapter is unavailable or you cannot create one, the workaround is to create a second data source in your applications server that directly uses an Oracle JDBC driver supplied by Oracle Corporation. In this case, you can use the built-in adapter flux.oraclelargeobjectadapter.oracle DB2 Notes You can enable an additional setting directly on your DB2 server to further increase performance and further reduce the probability of deadlocks. With DB2 7.2, you can set the DB2_RR_TO_RS flag to true. By doing so, you will likely see greater performance and fewer deadlocks. This setting refers to a DB2 feature called "next key locking". Further information on DB2 s next key locking can be found on the IBM website DB2 Troubleshooting Tips The SQLException DB2 SQL error: SQLCODE: -911, SQLSTATE: 40001, SQLERRMC: 2 may be thrown from the DB2 JDBC driver if the maximum number of database locks is reached. To correct this issue, increase the value of the DB2 instance s LOCKLIST configuration parameter within the database. The Exception " flux.engineexception: com.ibm.db2.jcc.c.sqlexception: DB2 SQL error: SQLCODE: -911, SQLSTATE: 40001, SQLERRMC: 2" may be thrown as a result of the transaction log being full. To correct this issue, increase the value of the DB2 instance's "LOGFILSIZ" configuration parameter to accomodate a larger log. If storage size is an issue, adjust the "LOGPRIMARY" and "LOGSECOND" parameters to reduce the number of logs that are saved on the system. Flux Manual 144

146 SQL Server Notes In general, SQL Server does not perform as well as Oracle and DB2 and possibly MySQL. Furthermore, due to SQL Server's locking strategy, deadlocks tend to occur more frequently with SQL Server than other supported databases Sybase Notes By default, Sybase uses table-level locking. With this setting, you may experience many database deadlocks, depending on circumstances. When Flux is firing flow charts and encounters a database deadlock, that transaction is rolled back and later retried. The flow chart is not lost. It will be retried after rolling back the transaction. To avoid excessive database deadlocks with Sybase, run the following procedure to enable row-level locking. sp_configure "lock scheme", 0, datarows Sybase must be restarted before this change takes effect. If debugging has been enabled on the Sybase database connections, you may see Sybase exceptions with the error code "JZ0EM" when database connections are closed. Sybase documentation and technical support state that these exceptions are not harmful and are not meant to be propagated outside the Sybase JDBC driver. They state that a future version of the Sybase drivers will resolve this situation MySQL Mappings and Settings The MySQL BLOB database column has a storage limit of 64 KB. You might want to use the MEDIUMBLOB or LONGBLOB MySQL database types to store larger binary variables. Use the MySQL Connector/J JDBC driver, version 3 or newer. The driver class name is com.mysql.jdbc.driver. Flux also requires MySQL s transactional tables. These tables are called "InnoDB". For Flux to work correctly with MySQL, all database tables used by Flux must be of type "InnoDB". Finally, MySQL's default transaction isolation level is REPEATABLE_READ, which leads to a large number of database deadlocks. Flux requires only the READ_COMMITTED transaction isolation level, which is less strict and therefore allows higher levels of performance. To configure MySQL to use the READ_COMMITTED transaction isolation level, add the following line to your MySQL my.ini file. This configuration setting resolves most of the MySQL database deadlocks. transaction-isolation=read-committed HSQL Mappings and Settings The only transaction isolation mode that HSQL supports is READ_UNCOMMITED. This limitation restricts what you can do with Flux and HSQL. If you plan to use HSQL for any non-trivial work or if you encounter problems while using HSQL, read the flux.databasetype.hsql Javadoc documentation. Flux Manual 145

147 PostgreSQL Mappings and Settings Note: PostgreSQL is not a supported database. Flux is not tested against this database. Flux may not work correctly or at all with this database. However, if you would like to see if Flux works correctly with PostgreSQL, the following information may be helpful. The suggested SQL type mappings for PostgreSQL include the following. BIT=BOOLEAN FLOAT=DOUBLE PRECISION VARBINARY=BYTEA VARCHAR=CHARACTER VARYING(128) Informix Mappings and Settings Note: Informix is not a supported database. Flux is not tested against this database. Flux may not work correctly or at all with this database. However, if you would like to see if Flux works correctly with Informix, the following information may be helpful. Informix JDBC database connections need to be initialized in such a way to minimize deadlocks and other database performance problems. To initialize Informix JDBC database connections properly, set the following SQL initialization statements in your database properties. CONNECTION_INIT=SET LOCK MODE TO WAIT CONNECTION_INIT.2=SET TRANSACTION ISOLATION LEVEL READ COMMITTED Oracle Rdb Mappings and Settings Note: Oracle Rdb is not a supported database. Flux is not tested against this database. Flux may not work correctly or at all with this database. However, if you would like to see if Flux works correctly with Oracle Rdb, the following information may be helpful. Oracle Rdb is an Oracle database for OpenVMS platforms. The Oracle Rdb SQL types are the same as for Oracle. Consequently, the mappings for Oracle Rdb are the same as the mappings for Oracle, as described in Section Customers have reported success using Flux and Oracle Rdb with the following caveats: 1. Oracle Rdb, version Oracle Rdb Java driver, version Note that version 9.2 of the driver does not work. Also note that the WebLogic 6.1 SP3 driver cannot accommodate tables with a large number of columns and may not work either. 3. Automatic table creation may not work, because Oracle Rdb apparently is not able to create tables when there are multiple database connections open. Consequently, you may need to create tables manually. Note that Oracle Rdb s OCI dispatcher, which is an interface process that connects JDBC clients to the database, is reported to have a memory leak. Although this supposed memory leak is not in Flux, but in Oracle Rdb, it may affect Flux s ability to obtain Oracle Rdb database connections. If the OCI dispatcher goes down and is later restarted, Flux will automatically detect this situation and continue retrieving database connections once the OCI dispatcher comes back up. Flux Manual 146

148 Database Properties Summary Database properties are configured in an engine configuration properties object or directly in an engine configuration file. The database properties that can be configured in this way are: table renamings column renamings SQL type remappings database connection initializations The following property syntax is supported. Table 30-1: Supported Database property syntax Property Key Property Value Description table.name new_table_name table is the recognized name of a Flux table. new_table_name is the new name that you assign to that table. The string ".name" is a literal and must be included in the property key. Any table can be renamed. You may choose to rename tables from their standard names if the standard table names are too long for your database or if you need to adhere to a table naming standard. For example, using the key "FLOW_CHART.NAME" and the value "MY_FLOW_CHART", you rename the standard table name FLOW_CHART to MY_FLOW_CHART. When renaming tables, do not use any database table prefix, such as "FLUX_", in a key or value. Your configured table prefix is automatically applied to the old and new table names. table.column.name new_column_name table is the recognized name of a Flux table, and column is the recognized name of a Flux column. Flux Manual 147

149 new_column_name is the new name that you assign to that column in that table. The table is the original Flux table name, not the new name for a table that may have been assigned previously. The string ".name" is a literal and must be included in the property key. For example, using the key "FLOW_CHART.PK.NAME" and the value "MY_PK", you rename the standard column name PK in the FLOW_CHART table to MY_PK. sql_type.type new_sql_type sql_type is a recognized SQL type, such as BIGINT. new_sql_type is a new recognized SQL type that should be used instead of sql_type, such as DECIMAL. The string ".type" is a literal and must be included in the property key. connection_init sql sql is an SQL statement that is executed on a database connection when it is first created. This property does not apply to data sources. The entire string "connection_init" is a literal and must be included in the property key. connection_init.n sql sql is an SQL statement that is executed on a database connection when it is first created. This property does not apply to data sources. N is a number greater than one. The database connection initialization statements are run in numerical order. The string "connection_init." is a literal and must be included in the property key. Flux Manual 148
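For example, a single database properties file might combine several of these settings. The renamed identifiers below are illustrative, and the connection initialization statements are the ones shown earlier for Informix.

# Rename the FLOW_CHART table and its PK column.
FLOW_CHART.NAME=MY_FLOW_CHART
FLOW_CHART.PK.NAME=MY_PK

# Store BIGINT values using the DECIMAL SQL type instead.
BIGINT.TYPE=DECIMAL

# Run these SQL statements on each new database connection.
CONNECTION_INIT=SET LOCK MODE TO WAIT
CONNECTION_INIT.2=SET TRANSACTION ISOLATION LEVEL READ COMMITTED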

30.3 Database Indexes

For better database performance and database deadlock avoidance, you must create database indexes. Flux includes database scripts for creating database indexes. These scripts are the same scripts used to create Flux's database tables. These scripts are located in the "doc" directory underneath the Flux installation directory. Find the script appropriate for your database. Each script is named after the database for which it is used, and all the scripts carry the file extension ".sql".

Oracle

For Flux 5.3 only, it has been found that with Oracle 9i you can increase performance by deleting the statistics so that the optimizer uses the "RULE"-based optimizer. Of course, in your particular Oracle 9i database, this recommendation may not necessarily prove useful and may even be detrimental. To be safe, you should run performance experiments before implementing this recommendation in your production environment.

DB2

Appropriate database indexes can not only increase database performance but, especially in the DB2 database, also reduce the chances of deadlock. In addition to database indexes, the database script for DB2 includes statements to flag certain tables as "VOLATILE", which can improve performance and reduce the opportunity for deadlock on DB2 systems.

Persistent Variables

When flow charts are scheduled, the triggers and actions in that flow chart may contain job-specific data. For example, an action that updates a database may need to store the primary key and table name in a flow chart variable. Flux automatically stores and retrieves these persistent variables for use by triggers and actions.

Flux uses a built-in object/relational mapping system, called the Variable Manager, to store data from your Java application to a database. This mapping system also retrieves your database data and loads it into your Java application. To define such a variable in Flux, you can use the built-in Java types, including String, the eight primitive types (int, long, etc.), and the object equivalents of the eight primitive types (java.lang.Integer, java.lang.Long, etc.). You may also use the standard collections classes from the java.util package in the JDK, including List, Map, Set, SortedMap, and SortedSet.

You can retrieve a Variable Manager in code with the following invocation:

VariableManager manager = flowContext.getFlowChart().getVariableManager();

You may then add and retrieve variables in the flow chart's variable manager using the get() and put() methods, as shown below.

manager.get("variable name");
manager.put("variable name", object);

For more complex variables, you need to define a simple Java class that follows two simple rules. First, the Java class must have a public default constructor. Second, each element in the variable must be a public field.

For example, here is a variable that stores a record number and an address.

public class MyVariable {
    public String recordNumber;
    public String address;
} // class MyVariable

If you define such a class, called a user-defined variable or persistent variable, you can instantiate this class and set the recordNumber and address fields. Once created, this variable can be used by passing it into your job's flow chart. The variable information passed in is automatically stored into your database. Later, as your job executes, this variable is retrieved out of the database automatically.

If your variable class will be used in conjunction with a Flux RMI server or with an application server, your variable class should implement java.io.Serializable. Furthermore, if you create custom Business Intervals, they should implement java.io.Serializable as well, since Business Intervals are also persistent variables. But for purely local Flux work, your variable class does not need to extend or implement anything. Regardless of which environment Flux is created and run in, the newly constructed persistent variable class need not be cloneable.

You can store strings, integers, floating point numbers, booleans, binary data, java.sql.Time objects, java.sql.Date objects, java.sql.Timestamp objects, and even java.util.Date objects in your persistent variables. Flux stores this information to your database using native SQL types, which makes it easy for you to browse your data using a standard database tool. You can also store objects that are serializable inside your variable class. In that case, Flux automatically serializes and deserializes those objects to and from your database.

Furthermore, you can store collections of the above data types. For example, you can create a list of String objects. You can also create a list of user-defined variable objects. These collections and user-defined variable objects can be nested arbitrarily deep. Finally, if you have an object that does not fall into any of these categories but is still serializable, you can still store it to your database, albeit in a binary format.

Persistent variables should be used to provide a small or moderate amount of job-specific data to your jobs. However, a significant or large amount of job-specific data should be treated differently. That data should be stored separately in your database, and you should store a key to your large data in a persistent variable. This way, as your job executes, your job can use this key to retrieve your large amount of data using separate means.
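The following sketch shows how a trigger or action listener might read and update such a variable at runtime, using only the Variable Manager calls shown above. The variable name "myvariable" is just an example.

// Inside your listener code, the FlowContext provides access to the
// flow chart and its Variable Manager.
VariableManager manager = flowContext.getFlowChart().getVariableManager();

MyVariable v = (MyVariable) manager.get("myvariable");
v.recordNumber = "12345";       // update the job-specific data
manager.put("myvariable", v);   // store it back for later triggers and actions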

As a final example, consider the following user-defined variable, which contains a nested user-defined variable.

public class MyVariable {
    public String recordNumber;
    public Person person;
} // class MyVariable

public class Person {
    public String name;
    public String address;
} // class Person

The user-defined variable MyVariable contains two public fields. The first public field is a basic type, String. The second public field, however, is a nested user-defined variable, which contains two public fields of its own, both basic types. Because the second public field refers to a nested user-defined variable, that user-defined variable class, Person, must contain at least one public field of a recognized type. Otherwise, if the second public field referred to a recognized type, such as java.lang.Integer, it would be saved automatically. If the second public field is not a recognized type, does not have any public fields, and is serializable, then it is saved in binary form in the database. Finally, if the second public field refers to a class that has public fields but is not meant to be stored using Flux's persistence mechanism, then that class must implement the flux.AlwaysSerialized interface. This marker interface instructs Flux to store the object in binary form, whether or not it has any public fields.

Nested user-defined variables exist because sometimes you can create a cleaner design by using a nested user-defined variable, rather than putting all job attributes into a single user-defined variable.

Generally speaking, serializing your job data to the database should be treated as a last resort. The advantages of not serializing data to the database are discussed in Section 44. In general, serialization of data can create annoyances, and you may want to avoid it. Use the special flux.AlwaysSerialized interface with appropriate caution.

Flux evaluates each of the above rules for persisting variables to a database in the following order:

1. The variable is checked to verify whether it implements the flux.AlwaysSerialized interface.
2. If the variable does not implement the flux.AlwaysSerialized interface, then the variable's non-static, non-transient public fields are evaluated.
3. If no non-static, non-transient public fields are found, a check is performed to verify that the variable implements the Serializable interface. If the variable implements this interface, then the variable is serialized and stored in the database as a byte array.
4. If the variable does not follow any of the above rules, the variable cannot be persisted and an exception is thrown.

Rules for Persistent Variables

Here are the formal rules for creating persistent variables. These rules are listed in the order in which the scheduler applies them. If your persistent variable does not follow these rules, an exception is thrown.

1. If your persistent variable matches one of the basic types defined in the table below, then your persistent variable is stored using the corresponding SQL type listed in the table below. These SQL types are standard SQL types. Your database may use somewhat different SQL types. See Section 30.2 for more

information on mapping to database-specific types. For example, Oracle uses VARCHAR2 instead of VARCHAR.

Table 30-2: Java to SQL Type Conversions

Java Type                   SQL Type
boolean                     BIT
byte                        VARBINARY
byte[]                      VARBINARY
char                        VARCHAR
char[]                      VARCHAR
double                      DOUBLE
float                       FLOAT
int                         INTEGER
long                        BIGINT
short                       SMALLINT
java.lang.Boolean           BIT
java.lang.Character         VARCHAR
java.lang.Character[]       VARCHAR
java.lang.Float             FLOAT
java.lang.Integer           INTEGER
java.lang.Long              BIGINT
java.lang.Short             SMALLINT
java.lang.String            VARCHAR
java.math.BigDecimal        DECIMAL
java.math.BigInteger        DECIMAL
java.sql.Date               DATE
java.sql.Time               TIME
java.sql.Timestamp          TIMESTAMP
java.util.Date              TIMESTAMP
null                        not applicable
serialized data             VARBINARY

2. If your persistent variable is a collection, including List, Map, Set, SortedMap, or SortedSet from the java.util package in the JDK, then your collection is stored to the database using native SQL types and tables. Your collection is not merely serialized to the database. The elements of your collection have to follow these rules for persistent variables. In effect, these rules are applied recursively to elements inside your collection.

Consequently, your collections can consist of nested elements that are basic types, collections, or user-defined variable objects. This nesting of elements can be arbitrarily deep.

3. If your persistent variable is an array that is not mentioned above (such as byte[]), then it is treated as a List, as described above.
4. If your persistent variable is a user-defined variable, then the scheduler applies these rules to each non-final, non-static, and non-transient public field of the variable recursively.

Pre-defined Persistent Variables

Note that flux.FlowChart objects are persistent variables. They can be stored in any location that a persistent variable can.

Persistent Variable Lifecycle Event Callbacks

Persistent variables have a simple lifecycle. They are created and populated from data in the database. Later, after they have been changed and updated, they are stored back to the database. If you need to perform additional work on your persistent variables after they are retrieved from the database or before they are stored back to the database, you can register for lifecycle event callbacks. To do so, simply have your persistent variable class implement the flux.PersistentVariableListener interface. Each persistent variable object whose class implements flux.PersistentVariableListener receives callbacks during the lifecycle of the persistent variable. These lifecycle callbacks permit you to initialize and prepare your persistent variable data as you see fit.

For an example of persistent variables in code, see the binary_key example located in the examples/software_developers/binary_key directory under the Flux installation directory.

Database Failures and Application Startup

Sometimes databases fail. Sometimes they crash and are later restarted. Sometimes the network connection between Flux and the database goes down or is up only intermittently. Furthermore, when your application starts up, it is possible that Flux begins running before your database has fully started and is available. In either of these cases, Flux may be running without the database being available. In this situation, Flux waits until the database becomes available and then continues operating normally.

Flux can recover from database failures very soon after the database recovers. The time delay for Flux to recover is governed by the SYSTEM_DELAY configuration property. If Flux detects that the database is not available or is unreachable, it waits for the period of time specified in the SYSTEM_DELAY property and then attempts to use the database again. If the database has recovered, Flux runs normally. Otherwise, Flux again sleeps for the time defined by SYSTEM_DELAY.

In Flux, it is not possible to instantiate an engine before your database is available. Flux always assumes that all necessary database tables have already been created. In

155 production settings, it is typical that all database tables have been created ahead of time. 31 Transactions The Flux engine executes successive actions in the same transaction until either a trigger is reached or a non-transactional action is found. This transition is called a transaction break. The Flux engine commits a flow chart s database transaction at transaction breaks. Because triggers may wait for a long time to fire an event, a transaction break occurs automatically at the entrance to every trigger, and the Flux engine commits the transaction. Each trigger and action comes with a default transaction setting. All the core triggers and the Null Action have the transaction break property set to true by default. The transaction break property on the core triggers and the Null Action can not be overridden. Actions on the other hand default to a false transaction break property. The transaction break property on actions can be set to true. By default, the Null Action forces a transaction break to occur. It can be used to introduce transaction breaks in sequences of transactional actions. For example, if your flow chart has three Java Actions in sequence, they will all run in a single transaction. If your application crashes mid-transaction, your transaction will rollback, and the Flux engine will re-execute those three Java Actions in sequence. If the transaction break property is set to true on all three Java Actions, the flow chart transaction will only be rolled back to the last executing action on an application crash. So, when the Flux engine is restarted, the flow chart will start executing from the last transaction break. When a transactional trigger fires, a transaction is started, and the trigger executes in that transaction context at the moment of firing. Then as control flows into a subsequent action, that action runs in the same transactional context. Transaction breaks are therefore defined as occurring at the entrance to all triggers. Furthermore, a transaction break occurs when an action is executed with its transaction break property enabled. By default, when the Flux engine fires a flow chart asynchronously and that flow chart invokes EJBs on an application server (or publishes JMS messages to an application server), that flow chart's transaction is conducted within the scope of a user transaction. So, if both Flux and your EJB are using the same data source, they will be integrated in the same transaction. You can choose to separate the Flux flow chart transaction from your EJB transaction by having your EJB transaction attribute set to RequiresNew, or by using Flux with a direct database connection instead of a data source. If Flux uses a different data source from your EJB, you can integrate your Flux transaction with your EJB transaction by using XA transactions. To do so, simply configure the Flux engine to use your application server's data source in the usual way. Finally, enable XA transactions. XA transactions, also called distributed transactions, make it possible for Flux transactions to be integrated with your application server transactions across multiple data sources. If your application server does not accept the standard java:comp/usertransaction JNDI name when a javax.transaction.usertransaction object is looked up, you can change the JNDI lookup name by changing the DATA_SOURCE_USER_TRANSACTION_CLIENT_JNDI_NAME or Flux Manual 154

156 DATA_SOURCE_USER_TRANSACTION_SERVER_JNDI_NAME configuration properties. No additional configuration is needed. Note that when you make client calls into the Flux engine, those transactions are seamless by default, and XA transactions do not need to be configured, because the transaction originates in the application server, not the Flux engine. For clarity, J2SE triggers and actions execute in the same transaction as the Flux engine itself. Consequently, should a system crash occur, the J2SE actions will re-execute and eventually commit the transaction. There is no requirement to configure XA transactions when your jobs do not invoke EJBs or publish JMS messages. Even in this latter case, the decision to configure XA transactions is up to you. One final note: there is at least one additional case where it makes sense to use XA transactions even though your jobs do not call EJBs or publish JMS messages. Suppose you are using a flow chart that consists of a simple timer trigger, which flows into a Java action. As part of your Java action listener code, you may want to update a separate database from the database that the Flux engine uses. In this case, it is not sufficient to use the database connection provided to you by the FlowContext and KeyFlowContext objects. This database connection, which is used by the Flux engine, cannot access other databases. To bridge this gap, you can use XA transactions. Configure the Flux engine to use XA transactions. When your job fires and your Java action listener code is invoked, write your listener code so that it accesses a second database by looking up a second XA data source from your application server. The database connection obtained from this second XA data source must be governed by the JTA user transaction that is already in progress. Using this technique, a job can update data directly in separate databases within a single transaction. Figure 31-1 shows the use of XA transactions. Figure 31-1: Flux XA Transactions Flux Manual 155
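To make the technique just described concrete, here is a sketch of Java action listener code that looks up a second XA data source and updates a separate database inside the JTA transaction already in progress. The JNDI name, table, and column names are hypothetical; substitute the resources defined in your application server.

// Requires javax.naming.InitialContext, javax.sql.DataSource, and java.sql classes.
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/SecondXADataSource"); // hypothetical JNDI name

Connection con = ds.getConnection();
try {
    // Because this is an XA data source, the connection is enlisted in the
    // user transaction already in progress, so this update commits or rolls
    // back along with the flow chart's transaction.
    PreparedStatement ps = con.prepareStatement(
            "UPDATE audit_log SET status = ? WHERE job_id = ?"); // hypothetical table
    ps.setString(1, "PROCESSED");
    ps.setLong(2, 42L);
    ps.executeUpdate();
    ps.close();
} finally {
    con.close();
}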

31.1 J2SE Client Transactions

When you call methods on the Flux engine from a J2SE application (outside a J2EE application server), each method normally runs within its own transaction and commits at the end of the method. However, if you wish to call multiple methods on the Flux engine within a single transaction, use the J2seSession object.

J2seSession session = engine.makeJ2seSession();
session.put(job1);
session.put(job2);
session.put(job3);
session.commit();

Using the above code, either all three jobs are added to the Flux engine, or none are.

Note that if you configure the Flux engine to use an XA data source, J2SE client calls into the Flux engine must be part of an XA transaction. You must make sure that your client call is participating in an XA transaction. A simple way for a J2SE client to start an XA transaction is to employ the usual JTA and javax.transaction.UserTransaction techniques.

31.2 J2EE Client Transactions

When you call methods on the Flux engine from a transactional EJB (inside a J2EE application server), by default, all the method calls on the Flux engine occur within the scope of a single user transaction, emanating from inside the application server.

engine.put(job1);
engine.put(job2);
engine.put(job3);

Using the above code, either all three jobs are added to the Flux engine, or none are. There is no need to use the J2seSession object. However, you must have initialized the Flux engine with your application server's database connection pool, using a data source or an XA data source.

If you call the above methods on a Flux engine from a non-transactional client (for example, a Servlet or an EJB with the SUPPORTS transaction attribute), then the Flux engine will detect no existing transaction and will start a transaction for each of the above method calls. In short, the three method calls will run in their own separate transactions.

You can choose to start a global transaction in your non-transactional client by looking up a user transaction object in your application server and calling UserTransaction.begin() on it. After doing this, all your Flux method calls will be in the same transaction until you call UserTransaction.commit() or UserTransaction.rollback(). The Flux engine will relinquish control of the transaction to the client call.

The above discussion applies to using Flux with XA data sources as well. The Flux engine is capable of detecting an existing J2EE user transaction and participating in it.
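For example, a non-transactional client could group several engine calls into one global transaction using the standard JTA API. This is a sketch; the lookup uses the standard java:comp/UserTransaction name discussed earlier, and the enclosing method is assumed to declare that it throws Exception.

// Requires javax.naming.InitialContext and javax.transaction.UserTransaction.
InitialContext ctx = new InitialContext();
UserTransaction tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");

tx.begin();
try {
    engine.put(job1);
    engine.put(job2);
    engine.put(job3);
    tx.commit();   // either all three jobs are added...
} catch (Exception e) {
    tx.rollback(); // ...or none are
    throw e;
}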

158 32 Clustering and Failover NOTE: Cluster networking in Flux is dependent on log4j. To use cluster networking, you will need to make sure log4j jar (available in the lib directory under your Flux installation directory) is available on your engine's class path. Flux has built-in clustering abilities. You can start multiple Flux engine instances. Then they will cooperate to fire jobs appropriately. The clustered Flux engine instances cooperate to fire each job only once across the cluster. Each Flux engine in the cluster does not fire jobs independently, as long as the Flux engine instances are pointing at the same set of database tables. Flux supports failover. If one of the Flux engine instances should crash or become inaccessible, the remaining Flux engine instances cooperate to recover and resume the scheduling work from the downed instance. In all cases, the database must remain accessible for the engines to operate correctly. Clustering and failover do not require an application server. Each of the engine instances periodically checks to see if another engine instance has become inaccessible. If it has, the jobs that were in the middle of firing on the inaccessible engine instance are failed over and made available to any of the remaining engine instances. Another engine instance will then fire the failed over jobs in the usual manner. By default, each engine instance checks for other failed instances once every three minutes. If you need to change this setting, set the FAILOVER_TIME_WINDOW configuration property. The default value for the failover time window is +3m, three minutes. However, you can change that property to any positive Relative Time Expression. Every engine instance in the cluster should have the same failover time window setting. To make sure jobs fire as expected, you should synchronize the clocks across all the computers in your cluster. There are many NTP (Network Time Protocol) products available for a variety of operating systems for synchronizing computer clocks to atomic clocks over the Internet. To create a cluster, simply configure several Flux engine instances to point to the same set of database tables. Furthermore, clustered Flux engine instances can communicate over the network to increase responsiveness across the cluster. This network communication is not necessary for clustering, but it does allow the Flux engine instances to cooperate closely with each other. By default, clustered Flux engine instances communicate over the configurable multicast network address CLUSTER_ADDRESS on the local subnet using the configurable multicast network port CLUSTER_PORT. Note that multicast addresses and ports are different than normal TCP/IP addresses and ports. Configuration options are described in Chapter 32 Clustering and Failover. Each Flux engine instance in the cluster must have the same cluster address and port. A second cluster of Flux engine instances must use a different cluster port or a different cluster address. Although networking allows Flux engine instances to closely cooperate with each other, clustering and failover do not require networking. Networking can be turned off. To disable cluster networking while still enabling clustering and failover using the database Flux Manual 157

159 server, set the configuration option CLUSTER_NETWORKING to false. By default, CLUSTER_NETWORKING is enabled. The database server is used for correctness. Networking is simply used for speed, not for correctness. By default, a Flux engine instance is configured to operate in a cluster. A cluster can consist of only one Flux engine instance, if desired. However, to configure a Flux engine instance to run completely by itself, with clustering turned off and without any other Flux engine instances running in the cluster, set the configuration option CLUSTER_ENABLED to false. When a non-clustered Flux engine instance is configured, certain optimizations can be applied. For example, a non-clustered Flux engine instance can make optimizing assumptions about its environment, including when to failover jobs and disabling cluster networking. 33 Application Servers Although Flux does not require an application server to run, if you have one, Flux can integrate with it. This section contains information on how to configure Flux to run with application servers. NOTE: The Flux Operations Console will not load properly after being deployed if there are any spaces in the application server's classpath entries. The following techniques work across all application servers. These techniques work in both clustered and non-clustered environments. In subsequent sections, popular configuration techniques for specific application servers are listed. Servlet. You can configure Flux to start automatically in a Servlet. This technique is particularly convenient. You simply put your Flux initialization and configuration code inside the init() method of your Servlet. To force immediate initialization of Flux, you may need to set your Servlet to be loaded on startup in order. By configuring your Servlet as a load on startup Servlet, your application server will load your Servlet automatically when your application server starts up. Otherwise, the first time your Servlet is loaded through other means, Flux will be initialized and configured. For a complete example showing the mechanics of setting up Flux in a Servlet, see Chapter 42 Examples. Separate JVM. You can instantiate Flux in a JVM separate from the application server s JVM. Flux can communicate with the application server if configured correctly. EJBs from the application server can communicate with Flux as long as Flux has been started as an RMI server. Singleton. If Flux is placed into a singleton class somewhere inside your application server, then Flux can be initialized and configured the first time an EJB accesses it. You must be sure, however, that the singleton is accessed in order to initialize it. Otherwise, background jobs will not fire. It would not be advisable to initialize Flux in the static initializer block of the singleton class, since application servers use many different class loaders. Furthermore, since application servers automatically load and unload EJBs, their associated classes, and their associated objects, you should carefully consider how this approach will work in your architecture. To be sure, this approach has worked successfully with other Flux users, but you should be aware of the issues involved. Flux Manual 158
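As a rough skeleton of the Servlet technique described above, the following sketch shows the load-on-startup lifecycle shape. The engine-creation code is a placeholder; plug in whichever Flux factory and configuration calls fit your environment, and remember to mark the Servlet as load-on-startup in web.xml so the application server initializes it at startup.

public class FluxStartupServlet extends javax.servlet.http.HttpServlet {

    private Engine engine; // flux.Engine

    public void init() throws javax.servlet.ServletException {
        try {
            // Placeholder: create and configure the Flux engine here using
            // the factory techniques described earlier in this manual
            // (direct database connection, data source, configuration file, etc.).
            engine = createFluxEngine();
        } catch (Exception e) {
            throw new javax.servlet.ServletException("Unable to start Flux", e);
        }
    }

    public void destroy() {
        // Shut down the engine here using the corresponding Flux call for
        // your configuration, so Flux stops when the application server stops.
    }

    private Engine createFluxEngine() throws Exception {
        throw new UnsupportedOperationException("Replace with your Flux engine creation code.");
    }
}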

33.1 WebLogic

There are different ways to configure Flux to run inside WebLogic. You can use one of the above techniques, or you can use the following technique. Each is valid. You should determine which technique best fits into your architecture.

Startup Class. Flux can be configured as a startup class in WebLogic. This technique is particularly convenient in WebLogic, since WebLogic will initialize and configure Flux automatically when the server starts. Similarly, when WebLogic shuts down, Flux will shut down.

When configuring Flux to use a WebLogic data source for obtaining database connections, it is recommended that you use the actual WebLogic data source name, not a "resource reference" pointing to the data source. There has been a report of an inexplicable error when using a resource reference, not the actual data source, with WebLogic.

Deploying the Flux Web Archive (WAR) File in WebLogic

When deploying the Flux WAR file within WebLogic, simply placing the file in WebLogic's web applications directory will not suffice. This limitation is due to WebLogic's restrictions on deploying an unexploded WAR file. The following workaround has been found to correct this issue:

1. Open the WAR file in WebLogic Builder.
2. Let WebLogic Builder generate the "weblogic.xml" descriptor file.
3. Save it back as a WAR file.

In order to run, WebLogic requires some of its own files to supplement the Flux WAR file provided. These files are added when the Flux WAR file is run through the WebLogic Builder as described above. Now that the newly created WAR file is configured to run with WebLogic, it can be deployed to the WebLogic web applications directory.

Integrating Flux Security with WebLogic

For help with configuring Flux security to run inside of WebLogic, refer to the Integrating Flux Security with Weblogic.pdf file located in the doc directory under your Flux installation directory.

33.2 WebSphere

There are different ways to configure Flux to run inside WebSphere.

WebSphere 4.0 Custom Service. WebSphere 4.0 has a mechanism called a Custom Service. IBM recommends Custom Services for starting services when WebSphere starts. In WebSphere 3.5, this mechanism was called a Service Initializer. In WebSphere 4.0, the Custom Service was introduced. The downside to Custom Services is that when they start, none of WebSphere's J2EE functionality is available for use. For this reason, Flux cannot be initialized to use WebSphere database connections, and it is recommended that when using WebSphere 4.0, you use one of the above configuration options that work across all application servers.

161 WebSphere 5.0 Startup Bean. WebSphere 5.0 introduced a startup mechanism called a Startup Bean. Presumably, this startup mechanism replaces the WebSphere 4.0 Custom Service, which had severe limitations. You can use a WebSphere 5.0 Startup Bean to instantiate and initialize Flux 5.3 with database connections provided by a WebSphere 5.0 data source. In any case, it is always appropriate to initialize Flux with WebSphere using one of the general startup mechanisms described in Chapter 33 Application Servers. Because those general startup mechanisms work across all application servers, you may choose to use them with WebSphere for simplicity or portability. Flux 6.1 has been tested successfully with WebSphere 5.1, and Flux 5.3 has been tested successfully with WebSphere 5.0. While testing Flux with WebSphere 5.0, we have found the following caveats. 1. Configure Flux to use a "direct" data source name such as jdbc/mydatasource. Do not use an "indirect" data source name such as java:comp/env/jdbc/mydatasource. 2. When Flux retrieves a database connection from a WebSphere 5.0 data source, you will see an informational message printed on the WebSphere 5.0 console. This message is neither an error message nor a warning message. It is simply a WebSphere 5.0 informational message advising you of what assumptions WebSphere 5.0 has made in association with the retrieved database connection. This WebSphere 5.0 informational message is as follows. Resource reference jdbc/mydatasource could not be located, so default values of the following are used: [Resource-ref settings] res-auth: 1 (APPLICATION) res-isolation-level: 0 (TRANSACTION_NONE) res-sharing-scope: true (SHAREABLE) res-resolution-control: 999 (undefined) An active transaction should be present while processing method. IBM has provided further details on this WebSphere 5.0 informational message at the following IBM web site. In case the above IBM web site is inaccessible, the contents of that page are as follows. The snapshot of the following IBM web page was taken on 20 March <start-of-ibm-web-page> Connection factory JNDI name tips Distributed computing environments often employ naming and directory services to obtain shared components and resources. Naming and directory services associate names with locations, services, information, and resources. Naming services provide name-to-object mappings. Directory services provide information on objects and the search tools required to locate those objects. There are many naming and directory service implementations, and the interfaces to them vary. Java Naming and Directory Interface (JNDI) provides a common interface that is used to access the various naming and directory services. After you have set this value, saved Flux Manual 160

162 it, and restarted the server, you should be able to see this string when you run dumpnamespace. For WebSphere Application Server specifically, when you create a data source the default JNDI name is set to jdbc/data_source_name. When you create a connection factory, its default name is eis/j2c_connection_factory_name. You can, of course, override these values by specifying your own. In addition, if you click the checkbox Use this data source for container managed persistence (CMP) when you create the data source, another reference is created with the name of eis/jndi_name_of_datasource_cmp. For example, if a data source has a JNDI name of jdbc/mydatasource, the CMP JNDI name is eis/jdbc/mydatasource_cmp. This name is used internally by CMP and is provided simply for informational purposes. When creating a connection factory or data source, a JNDI name is given by which the connection factory or data source can be looked up by a component. Generally an "indirect" name with the java:comp/env prefix should be used. This makes any resource-reference data associated with the application available to the connection management runtime, to better manage resources based on the res-auth, res-isolation-level, res-sharing-scope, and res-resolution-control settings. While the use of a direct JNDI name is supported, such use results in default values of these resource-ref data. You will see an informational message logged such as this: J2CA0122I: Resource reference abc/mycf could not be located, so default values of the following are used: [Resource-ref settings] res-auth: 1 (APPLICATION) res-isolation-level: 0 (TRANSACTION_NONE) res-sharing-scope: true (SHAREABLE) res-resolution-control: 999 (undefined) <end-of-ibm-web-page> Running WebSphere 5.x on the J2EE 1.3 Architecture If you are using Flux within a J2EE 1.3 web application and V5 data sources inside WebSphere 5.x, you should use the default setting for the DATA_SOURCE_USER_TRANSACTION_CLIENT_JNDI_NAME configuration option. Also, set the DATA_SOURCE_USER_TRANSACTION_SERVER_JNDI_NAME configuration option to jta/usertransaction JBoss There have been a few reports of problems when using Flux, a JBoss data source, and the MySQL database. If you experience problems using Flux with JBoss and MySQL, it is recommended that you configure Flux to directly access MySQL, instead of going through a JBoss data source. In other words, configure Flux with the MySQL JDBC driver class name, JDBC URL, etc, instead of providing JBoss data source information to Flux. For specific details on how to properly configure Flux with MySQL, see Chapter 30 Databases and Persistence. Flux Manual 161

Deploying the Flux JMX MBean on JBoss

Every application server deploys JMX MBeans differently. To deploy the Flux JMX MBean on JBoss, follow these steps. Of course, you can also consult the JBoss documentation, but the directions below provide adequate assistance in deploying the Flux JMX MBean to JBoss.

1. Create a file called jboss-service.xml containing the following text.

<?xml version="1.0" encoding="UTF-8"?>
<service>
  <mbean code="flux.jmx.FluxEngineManagerForJmx"
         name="FluxEngineManagerForJmx:service=FluxEngineManagerForJmx">
  </mbean>
  <!-- Make sure the "codebase" setting below points to the -->
  <!-- directory that contains your flux.jar file. -->
  <classpath codebase="/flux-6-1-0" archives="flux.jar"/>
</service>

2. After making any appropriate edits to your jboss-service.xml file, copy it to your deploy directory.
3. At this point, your Flux JMX MBean should automatically deploy, and you should be able to open a web browser to your JBoss JMX console and view your Flux JMX MBean.
4. To instantiate a Flux engine from your JMX management console, you will need to specify the file path to a Flux properties or XML configuration file in your JMX management console.
5. Finally, make sure that your Flux license key file is included in your flux.jar file or elsewhere on your Java class path so that Flux can find it.

Configuring the Flux JMS Action to Run with JBoss

When using the JMS Action with the JBoss application server, there are some configuration properties that must be set in order for the JMS Action to communicate effectively with the JBoss server. Refer to the example below for a detailed look at these properties. The JBoss-specific properties are highlighted in bold lettering.

import flux.Factory;
import flux.Engine;
import flux.EngineHelper;
import flux.FlowChart;
import flux.TimerTrigger;
import flux.j2ee.JmsAction;
import flux.j2ee.JmsMessageType;
import javax.naming.Context;
import java.util.Properties;

public class FluxPublisher {

  public static void main(String[] args) throws Exception {
    Factory factory = Factory.makeInstance();
    Engine engine = factory.makeEngine();
    EngineHelper engineHelper = factory.makeEngineHelper();
    FlowChart fc = engineHelper.makeFlowChart();

    // Make the flow chart publish a JMS message every 3 seconds.
    TimerTrigger timer = fc.makeTimerTrigger("timer");
    timer.setTimeExpression("+3s");

    JmsAction jmsAction = fc.makeJ2eeFactory().makeJmsAction("jms");
    jmsAction.setConnectionFactory("ConnectionFactory");
    jmsAction.setInitialContextFactory("org.jnp.interfaces.NamingContextFactory");
    jmsAction.setProviderUrl("jnp://localhost:1099");

    Properties prop = new Properties();
    prop.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
    jmsAction.setExtraInitialContextProperties(prop);

    jmsAction.setDestinationName("queue/A");
    jmsAction.setMessageType(JmsMessageType.TEXT);
    jmsAction.setBody("Hello World");

    // Set up the recurring flows.
    timer.addFlow(jmsAction);
    jmsAction.addFlow(timer);

    String id = engine.put(fc);
    System.out.println("id = " + id);
    engine.start();
    System.out.println("Engine started");
  }
}

The JMS Action initialized above will send a JMS message with the body set to "Hello World".

Starting a Secure Engine in JBoss

There are certain difficulties in creating an engine that can be logged in to within JBoss. This section provides a method for creating a secured Flux engine with the properly configured settings. The procedure can be broken down into the following steps:

1. The system policy should have an entry that points the Java security policy file to the fluxjaas.policy file included with Flux. The following code accomplishes this:

System.setProperty("java.security.policy", "c:\\fluxjaas.policy");
Policy myPol = Policy.getPolicy();
myPol.refresh();

2. Update the JBoss login-config.xml file (usually in \jboss\server\\conf\login-config.xml) to use the FluxLoginModule as a means of authentication; refer to the sample login-config.xml file below.

<application-policy name = "fluxengineservlet">
  <authentication>
    <login-module code = "flux.security.FluxLoginModule" flag = "required" />
  </authentication>
</application-policy>

That's all there is to it. Sample code is provided to demonstrate the implementation of a servlet.

public class JBossTest extends HttpServlet {

  Engine flux;
  Factory fluxFactory = Factory.makeInstance();
  RemoteSecurity secureInterface;

  public void init() throws ServletException {
    try {
      Configuration config = fluxFactory.makeConfiguration();

      System.setProperty("java.security.policy", "c:\\fluxjaas.policy");
      Policy myPol = Policy.getPolicy();
      myPol.refresh();

      // Config options
      config.setSecurityEnabled(true);
      config.setRegistryPort(1199);
      config.setSecurityPolicyFile("c:\\fluxjaas.policy");

      config.setSecurityConfigurationFile("c:\\fluxjaas.config");

      // Making an engine
      flux = fluxFactory.makeEngine(config);
      secureInterface = fluxFactory.makeRemoteSecurity(config, flux);

      // Log in to the engine
      flux = secureInterface.login("admin", "admin");
      flux.start();
    } catch (Throwable e) {
      e.printStackTrace();
      e.getCause().printStackTrace();
      throw new ServletException(e.getMessage());
    }
  }

  public void destroy() {
    try {
      flux.dispose();
    } catch (Throwable d) {
      d.printStackTrace();
    }
  }
}

33.4 Tomcat

When using Flux with Tomcat, it is easy to get Flux up and running quickly and efficiently. Simply place the flux.war file into Tomcat's webapps directory, also known as deploying your "flux.war" file, and then browse to your Tomcat server's starting URL. To use the Flux Operations Console, use your Tomcat server's URL with "flux/" appended to the end of the URL.

When viewing flow charts that contain SFTP URLs, the Flux Operations Console may not fully display the flow chart listing due to classpath issues. To prevent this problem from occurring, either place the flux.jar file inside the servlet container JVM's lib/ext directory, or include flux.jar in the system classpath. If deploying the Operations Console

inside Tomcat, adding the flux.jar file to the system classpath will not resolve this issue, since Tomcat creates the system class loader itself.

Oracle9i (OC4J)

If Flux is configured to call EJBs or publish JMS messages, you may need to start Oracle9i (OC4J) with the "-userThreads" switch.

Orion

Since the Oracle9i application server is built on the foundation of the Orion application server, see Chapter 30 Databases and Persistence for specific configuration details when configuring Orion.

Sybase EAServer

If Flux is configured to call EJBs or publish JMS messages, you may need to set an extra initial context property. That property name is the string literal com.sybase.corba.local. The appropriate value for this property is the string literal "false". This property setting allows Flux to call EJBs or publish JMS messages to Sybase EAServer.

34 Embedding the Web-based Designer

The Flux Web-based Designer can be embedded into a web application as a JavaScript widget, allowing for easy support of rich workflow operations within your Java web application. Embedding the Web-based Designer as a JavaScript widget will allow you to visually create, edit, load, save, and monitor flow charts within your Java web application. This section will cover the minimum requirements for creating and packaging the files needed to embed the Flux Web-based Designer as a JavaScript widget.

To embed the Flux Web-based Designer, create a WAR file to package the necessary elements. In this WAR file, create a directory called WEB-INF. In the WEB-INF directory, create two additional directories, classes and lib. For an example of this structure, run the build script for the designer example, included with Flux, located in the examples/web_developers/designer/ directory of the Flux installation directory.

In the WEB-INF/classes/ directory of the WAR file, create a file called fluxwebapp.properties. This file will contain all the information that must be set for the designer servlet to find and connect to an existing Flux RMI engine. These properties include registry_port, server_name, host, security_enabled, and, if security is enabled, username and password. For example, to connect to a secure Flux engine running on port 1099, the fluxwebapp.properties file would look like this:

registry_port=1099
server_name=flux
host=localhost
security_enabled=true
username=admin
password=admin
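Putting the pieces described in this section together, the finished WAR file ends up with a layout along the following lines. The designer.war name is taken from the example later in this section; only the files named in this section are shown.

designer.war
  designer.html                  (HTML page named after the WAR file)
  WEB-INF/
    web.xml                      (servlet definition and mapping for the designer servlet)
    classes/
      fluxwebapp.properties      (connection settings for the existing Flux RMI engine)
    lib/
      flux.jar
      designer.jar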

The fluxwebapp.properties file is used by the designer to look up, access, and use an existing Flux RMI engine. It cannot be used to create a new Flux engine.

The WEB-INF/lib/ directory of your WAR file will contain the JAR files necessary to embed and use the Flux Web-based Designer widget. The first file you'll need to place in this directory is the flux.jar file, located in the main Flux installation directory. This flux.jar file needs to match the one used by the Flux engine. The next file to place in the WEB-INF/lib/ directory is the designer.jar. This JAR file is most easily obtained by executing the build script included in the directory examples/web_developers/designer/ in your Flux installation directory. This JAR file contains all the necessary classes, JavaScript code, CSS, and images for embedding the designer. The designer.jar file can also be manually created, using the build script as a guide for the necessary libraries to extract and bundle.

Next, create a web.xml file in the WEB-INF directory of the WAR file. Configure this document to use the DesignerServlet class within your servlet container. This XML document must contain a servlet definition and servlet mapping for the designer servlet. The web.xml document should look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
  "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
  <servlet>
    <servlet-name>DesignerServlet</servlet-name>
    <servlet-class>fluximpl.bpmwui.opsconsole.repository.DesignerServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>DesignerServlet</servlet-name>
    <url-pattern>/DesignerServlet</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>designer.html</welcome-file>
  </welcome-file-list>
</web-app>

The default url-pattern for this servlet is /DesignerServlet. The url-pattern can be changed using the flux.DesignerConfiguration.setServletUrl() JavaScript method when creating a designer configuration using JavaScript.

Finally, create an HTML page in the base directory of your WAR file to display the designer in a web browser. This HTML page should have the same filename as your

WAR file. For example, if your WAR file is named designer.war, your HTML file should be named designer.html. At a minimum, the HTML page must include the following code:

<script type="text/javascript" src="DesignerServlet?resource=javascript"></script>
<link rel="stylesheet" type="text/css" href="DesignerServlet?resource=css"/>
<div id="designer"></div>
<body onload="instantiateDesigner();"></body>
<script type="text/javascript">
function instantiateDesigner() { var designer = new flux.Designer(); }
</script>

When creating a new designer JavaScript object (using the statement new flux.Designer()), you can pass in an optional flux.DesignerConfiguration JavaScript object to specify additional properties. For example, to add some space above the designer to accommodate a page header, modify the JavaScript code to include the following code within the instantiateDesigner() method:

var designerConfig = new flux.DesignerConfiguration();
designerConfig.setLoadingTop(150);
var designer = new flux.Designer(designerConfig);

For a complete list of the DesignerConfiguration methods, refer to the flux.DesignerConfiguration JSDoc, which can be accessed from the file index.html, in the doc/jsdoc/ directory of your Flux installation directory.

The following basic example demonstrates how to embed the Flux Web Designer as a JavaScript widget, including creating and using the designer configuration object:

<html>
<head>
<title>Flux Designer</title>
<link rel="stylesheet" type="text/css" href="DesignerServlet?resource=css"/>
<script type="text/javascript" src="DesignerServlet?resource=javascript"></script>
</head>
<body onload="instantiateDesigner();">
</body>
<div id="designer"></div>
<script type="text/javascript">
function instantiateDesigner() {
  var designerConfig = new flux.DesignerConfiguration();
  designerConfig.setLoadingTop(150);
  var designer = new flux.Designer(designerConfig);
}
</script>
</html>

170 The designer has JavaScript API methods for opening, saving, exporting, and monitoring flow charts on Flux engines, as well as setting the design mode to either view or design. For a list of these methods and the functionality they provide, refer to the JSDoc file index.html, located in the doc/jsdoc/ directory of your Flux installation directory. Once all of the necessary elements have been obtained and packaged in the WAR file, the WAR file will be ready to deploy in your application server. For an example of the embeddable Web-based Designer in action, execute the run script in the examples/web_developers/designer example mentioned previously. Then browse to in your web browser. 35 Logging and Audit Trail Logging is used to help programmers and system administrators determine what Flux is doing at any particular moment. These log messages assist in tracing your interactions with Flux, with debugging, with tracing the internal steps that Flux takes to do its work, and with recording the actions taken by each flow chart as it runs. The audit trail is a mechanism for recording all the data that is involved in performing some action involved with Flux. The audit trail is intended to recreate the time sequence in which jobs run. The audit trail provides a full data dump of each significant step that Flux takes. In Flux, the difference between logging and the audit trail is that logging is meant to help monitor and debug your application. The audit trail is meant to recreate the time sequence of significant events that occur in Flux, including a full accounting of data associated with each of these significant events. Logging would be used by programmers and administrators to debug, trace, and monitor your application. The audit trail would be used for legal purposes to attempt to show that certain activities are, or are not, taking place. When using Flux logging, you may desire the logs to be generated in a directory different from that of which a Flux engine is running. To change the directory, use the INTERNAL_LOGGER_FILE_DIRECTORY configuration option and specify a relative or absolute path to the directory you wish the logs to be generated in. When the engine runs using this configuration option, the logs will be placed into the specified directory, not where the engine is created. By default, this configuration option is set to "." meaning the directory from which the engine was created. When the Flux logging and audit trail features are used, the log, and especially audit trail, entries will become exceptionally large, unless an expiration time expression is set, due to the amount of information logged to the database. By using the LOG_EXPIRATION configuration property, you can set a time expression indicating how long individual log entries are saved in the Flux logs before they are automatically pruned. Expired log entries are automatically deleted from the database. A null log expiration indicates that log entries are never deleted from the database. By default, the value for the LOG_EXPIRATION and AUDIT_TRAIL_EXPIRATION configuration option is set to "+w", meaning one week. However, if the database type is set to the default value of HSQL, the log expiration defaults to "+1H", or one hour Six Different Loggers Defined in Flux Flux defines six different loggers: Flux Manual 169

171 Client Logger: Logs client API calls made to the flux.engine interface. These API calls are logged whether the engine was instantiated as a plain Java object or as an RMI server. The instantiation of an engine object is logged to the System Logger, described below. Job Logger: Logs steps taken by each job as it executes, including before and after each job action or trigger executes. System Logger: Logs the instantiation of an engine object, plus the inner workings of Flux. The internal steps that a Flux engine instance performs to coordinate the use of Flux and the firing of jobs are logged to the system logger. Among other things, these internal steps include refreshing database connections and failing over jobs. Audit Trail Logger: Records recreate the time sequence of significant events that occur in Flux, including a full accounting of data associated with each of these significant events. The Audit Trail Logger logs the entry and exit points of client API calls made to the flux.engine interface. It also logs the entry and exit points of each trigger and action that is executed in a running flow chart. The complete data involved in each of these significant events logged by the Audit Trail Logger is provided to your Log4j or JDK loggers. You can use these data to perform a complete data dump to your logging destination, such as a file or a database. These data are delivered as an object to the Audit Trail Logger. This object implements the flux.audittrail.abstractaudittrailevent interface as well as sub-interfaces of that interface. All the audit trail events are defined as interfaces in the flux.audittrail package. Some of these interfaces names are prefixed with the word Abstract, meaning that they do not represent a concrete audit trail event. However, sub-interfaces of these abstract interfaces do represent concrete audit trail events. Internal Synchronous Logger: Uses an internal logger that logs information to both the Flux database and to logging files stored on the file system. Using this logger, logging operations are stored synchronously to the database using the same database connection that is used by the Flux operation being executed. This technique records information only if it is committed by the database connection used by the Flux operation. If a transaction is rolled back, the logs are also rolled back. For example, if an action encounters an error while executing, the transaction will be rolled back along with the logs for that event giving the appearance that the action was never executed. When this logger is used, the Flux logs set at the SEVERE and WARNING logging levels only, are stored in the database. However, the complete Flux logs, down to the FINEST logging level, are also stored in rotating log files on the file system asynchronously. The active log file name is flux<engine name>.log, where "< engine name>" is the name of the engine that generated the log. The name of an engine is found in the engine's configuration under the configuration property NAME. To configure the logging level for this logger, use the Configuration.setInternalLoggerLevel() method for your engine's configuration. This property will not affect the logging level for any other loggers. By default, the internal logging level is set to FINEST. Flux Manual 170

172 Six daily log file rotations are stored using file names patterned after flux-<engine name>-<date>.log, where "<date>;" is in the form "day-month-year". The three letter month is written using the localized month name. For example, a logging file that would be created 1 January, 2005 on a Flux engine with the name "FluxEngine" would have the name flux-fluxengine-1-jan-2005.log. To set the maximum size that these log files can reach before being rotated, use the Configuration.setInternalLoggerFileRotationSize() method. Once the files reach this specified attribute, they will be rotated. Also when this logger is used, all the Flux audit trail entries are stored in the database in their entirety. Internal Asynchronous Logger: Uses an internal logger that logs information to both the Flux database and to logging files stored on the file system. Using this logger, logging operations are stored asynchronously to the database using a different database connection than the database connection used by the Flux operation executed. This technique increases throughput while still recording information that may be rolled back in the database connection used by the Flux operation. For example, if an action encounters an error while executing, the transaction will be rolled back, however, the logs will record that the action tried to execute but failed to complete. When this logger is used, all the Flux audit trail entries are stored in the database in their entirety. Also, in the result of a system crash while using internal asynchronous logging, a small amount of logs not written to a file or the database could be lost. Flux uses this logger type by default. When you configure Flux, you can set the names of each of these six loggers. The default names are listed in the flux.configuration interface, but you can change them. The names are simple strings. All of these loggers are always enabled. Configure your logging tool (see Logging and Audit Trail for details) to decide which messages from which loggers you want to see Using Different Logging Tools with Flux You can plug in different logging tools into Flux. Flux supports Log4j, the logger built into JDK 1.4 or greater, Apache Jakarta Commons Logging, a simple logger that prints messages to standard error, or no logger at all. By default, Flux logs error message to standard error, but this setting can be changed in your Flux configuration. To enable different logging tools, set the Flux configuration property called LOGGER_TYPE. This configuration property is represented by an enumeration in the flux.loggertype interface. The enumeration values are listed below. LOG4J: Uses the Log4j facility, which works with JDK 1.2 or greater. JDK: Uses the logging facility built into JDK 1.4 or greater. COMMONS_LOGGING: Uses Apache Jakarta Commons Logging, which works with JDK 1.2 or greater. STANDARD_ERROR: Uses a simple, built-in logging facility. Logs stack traces and error messages to standard error. Use this option if you want to see errors but do not want to use one of the standard logging systems such as Log4j or the JDK. If you use this logger, the distinctions between the Client, Flow Chart, System, and Flux Manual 171

173 Audit Trail loggers are lost. When using this logger, stack traces and error messages only are logged. More detailed logging information is not available, unless you switch to the Log4j or JDK logger. INTERNAL_SYNCHRONOUS: Uses an internal logger to log information to the database and to a file on the file system. Logging operations are stored synchronously to the database using this logger type. INTERNAL_ASYNCHRONOUS: Uses an internal logger to log information to the database and a file on the file system. Logging operations are stored asynchronously to the database using this logger type. This logger type is the default used by Flux. NULL: Logs nothing. No logging output is generated, displayed, or sent anywhere. After a name is assigned to each of the four loggers listed in Logging and Audit Trail, logger-specific configurations can be set up. You can configure each of the four loggers individually. Log4j and the JDK logger support configuration via configuration files or API calls. Using these configurations, you can enable or disable different loggers, set the logging levels, etc. When using Log4j, the JDK logger, or Commons Logging, the name you assign to each logger is used to link that logger to the logging mechanism of your choice. Once this linkage is established, you can configure the Flux loggers just like you would configure your own application loggers. In fact, you can choose to have Flux and your application share the same logger. To avoid concurrency problems when Flux and your application want to use a logger at the same time, Flux always uses the synchronization policy of synchronizing on a logger object before calling methods on it. If you configure Flux to share loggers with your application, you should use this same synchronization policy. Note that the Log4j documentation states that Log4j logger objects are thread-safe, so if you use Log4j, you may not need to synchronize on Log4j logger objects when sharing them with Flux Creating Audit Trail Listeners The audit trail serves an additional purpose. You can use the audit trail to create various kinds of job listeners. For example, you can notify your application when different audit trail events occur, such as when the engine is started or when a job fires. To be sure, Flux s flow chart model already serves this purpose well. The Flux flow chart model is vastly superior to using a model of listeners. It is easier to coordinate a job s activities using a flow chart and its workflow model than chaining together listeners manually. That being said, listeners can still be useful. A listener could deliver a message when a Flux engine is disposed. The delivery of that message can trigger other activities within your application. A listener could also deliver a message to a monitoring user interface when a job fires or finishes. This user interface could display recent job firing activity. To install an audit trail listener, you must use the Log4j logger or the JDK logger. Log4j: Configure an ObjectRenderer or a Layout using Log4j s configuration files or APIs. org.apache.log4j.or.objectrenderer: Implement this interface and its dorender() method. In the body of this method, cast the dorender() Flux Manual 172

argument to a flux.audittrail.AbstractAuditTrailEvent object. Then process and re-deliver the audit trail event to its destination in your application.

org.apache.log4j.Layout: Sub-class this class and implement its format() method. In the body of this method, call LoggingEvent.getMessage() on the format() argument. Cast the result to a flux.audittrail.AbstractAuditTrailEvent object. Then process and re-deliver the audit trail event to its destination in your application.

JDK: Configure a Formatter using the JDK's configuration files or APIs.

java.util.logging.Formatter: Sub-class this class and implement its format() method. In the body of this method, call LogRecord.getParameters() on the format() argument. Extract the first element from the returned object array. Cast that first element to a flux.audittrail.AbstractAuditTrailEvent object. Then process and re-deliver the audit trail event to its destination in your application.

Multiple Loggers

Multiple loggers in Flux provide the ability to specify multiple logger types within Flux, such as Flux's internal logger, the Log4j logger, and a JDK logger, collectively. A java.util.Set of flux.logging.LoggerType objects is used to indicate the logging systems that the Flux engine will use. For example, if you wish to view logs and audit trail entries in the Flux Operations Console and at the same time send events to your Log4j audit trail listeners, the flux.logging.LoggerType.INTERNAL_ASYNCHRONOUS and flux.logging.LoggerType.LOG4J logger types may be specified in the Flux configuration. To set multiple loggers in Flux, use the setLoggerTypes() configuration option as shown below:

Configuration config = factory.makeConfiguration();
Set loggers = new HashSet();
loggers.add(LoggerType.INTERNAL_ASYNCHRONOUS);
loggers.add(LoggerType.LOG4J);
config.setLoggerTypes(loggers);

The above code sets the "Log4j" and Flux "Internal Asynchronous" logger types to be used. Note that it is illegal to specify both LoggerType.INTERNAL_ASYNCHRONOUS and LoggerType.INTERNAL_SYNCHRONOUS in the same configuration. It is also invalid to use LoggerType.NULL in addition to other logger types.

Heartbeat

The heartbeat event contains general diagnostic information about the health of the Flux engine instance that generated it. This event is generated according to the frequency specified in the HEARTBEAT_FREQUENCY engine configuration property. However, some heartbeat events are generated as soon as they occur. In particular, the EngineDatabaseCommandsFailing heartbeat event is generated and delivered as soon as it occurs.
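Returning briefly to the audit trail listeners described in the previous section, the following minimal sketch shows the JDK logger approach: a Formatter subclass that extracts the audit trail event from the log record and hands it to your own code. The FluxAuditTrailFormatter class name and the handleEvent() hook are illustrative only; how you process and re-deliver the event is up to your application.

import java.util.logging.Formatter;
import java.util.logging.LogRecord;

import flux.audittrail.AbstractAuditTrailEvent;

// Attach this formatter, via the JDK logging configuration file or API,
// to the handler that receives records from the Flux audit trail logger.
public class FluxAuditTrailFormatter extends Formatter {

  public String format(LogRecord record) {
    Object[] parameters = record.getParameters();
    if (parameters != null && parameters.length > 0
        && parameters[0] instanceof AbstractAuditTrailEvent) {
      // The first parameter carries the complete audit trail event.
      handleEvent((AbstractAuditTrailEvent) parameters[0]);
    }
    return record.getMessage() + System.getProperty("line.separator");
  }

  private void handleEvent(AbstractAuditTrailEvent event) {
    // Application-specific processing and re-delivery goes here.
  }
}

Heartbeat events, described next, reach your loggers and audit trail listeners through this same mechanism.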

In general there are four types of heartbeats: Ok, EngineMainBlocked, EngineMainStopped, and EngineDatabaseCommandsFailing. The Ok, EngineMainBlocked, and EngineMainStopped heartbeats all give diagnostics for the engine at the heartbeat frequency, which defaults to one hour. The EngineDatabaseCommandsFailing heartbeat, however, returns the problem immediately following the occurrence of the error.

The Ok heartbeat indicates that no error has been diagnosed in the Flux engine. This heartbeat event is sent to the audit trail as well as the logs at the info logging level. An example of the print-out for this heartbeat is shown below.

The EngineMainStopped heartbeat indicates that the Flux engine's primary flow of control has stopped due to an error condition. This event is not generated if the Flux engine is stopped during the normal course of operation.

The EngineMainBlocked heartbeat indicates that the Flux engine's primary flow of control is alive, but it is blocked or deadlocked and cannot make progress.

The EngineDatabaseCommandsFailing heartbeat indicates that at least five closely spaced database commands issued by the Flux engine have all failed with an error. Typically, this error condition indicates that the database server is down, the database server cannot be contacted, or the Flux database schema contains errors or is misconfigured. This event is delivered as soon as this error condition is detected. Furthermore, until it is identified that this error condition no longer exists, this event is delivered at the regular heartbeat reporting interval.

Below is code for generating a Flux engine. This code also incorporates a logger to record the engine's events. A logger is needed to receive the heartbeat information for the Flux engine.

Configuration configuration = factory.makeConfiguration();

// Set heartbeat frequency and logger type.
configuration.setHeartbeatFrequency("+10s");
configuration.setName("Flux");
configuration.setLoggerType(LoggerType.LOG4J);

// Create the Flux engine.
Engine engine = factory.makeEngine(configuration);

Once this code is executed, the logger will begin recording events from the engine. Notice that the heartbeat frequency is set to "+10s", meaning that the heartbeat of the engine will be printed out every ten seconds. An example print-out of an Ok heartbeat is shown below.

[cc]jun-14 15:48:25 system - #### START HEARTBEAT MESSAGE ####
Name: flux.audittrail.status.Ok
Engine Name: Flux
Free Memory: 1.00 MB
Total Memory: 4.14 MB
Maximum Connections: 5
Occupied Connections: 0
Running Flow Charts: 0

#### END HEARTBEAT MESSAGE ####
[cc]jun-14 15:48:25 audit_trail - fluximpl.core.heartbeat.OkImpl@676e3f

36 Best Practices

Refer to the knowledge base for a growing collection of design tips and best practice articles. The knowledge base can be accessed through the Flux Portal.

Design Best Practices

Tips for designing your flow charts:

Consider the use of messaging. Flux provides a very flexible point-to-point and broadcast messaging model. When designing your flow charts, take messaging into account in your design. Your application could be designed much better if it makes judicious use of messaging. Messaging can be very useful for queuing and producer/consumer scenarios. For example, you can set up 10 consumer flow charts that process message data. The exact number of consumer flow charts probably should correlate to your concurrency throttle settings. Message data can be generated within a running flow chart as well as from the Flux Designer.

If you have a large number of fine-grained flow charts, consider grouping them into smaller batches of coarse-grained flow charts. For example, have one heavyweight flow chart perform a collection of sub-tasks instead of having 10 flow charts do 10 small things.

Performance Best Practices

Tips for improving your Flux system's performance:

Apply the recommended indexes on the Flux database tables. Setting indexes sometimes leads to a huge increase in performance. Also, make sure that your Flux tables are properly optimized. Most performance problems can be resolved by proper database table optimizations.

Use the concurrency throttle settings and the MAX_CONNECTIONS configuration property to fine-tune the amount of load on your database. You can also throttle the number of flow charts running across the entire cluster in a particular branch of the flow chart tree.

Use one set of Flux engines to add/remove/monitor flow charts and another set of engines to actually run the flow charts. The set of Flux engines that is used to add/remove/monitor flow charts should be stopped and should not run flow charts. The only purpose of these add/remove/monitor engines is to update and query the database, not to run flow charts.

Use a Flux cluster to share the flow chart load.

Set the FAILOVER_TIME_WINDOW configuration property to a higher value than the default of 3 minutes. For example, if you have a FAILOVER_TIME_WINDOW configuration property set to 15 minutes, you will see Flux update the heartbeat of all the flow charts in the database every 5 minutes. If you have a significant

number of flow charts, you should tune this parameter such that Flux does not query your database too frequently. When increasing the FAILOVER_TIME_WINDOW, be aware that if a Flux engine crashes for some reason, it will take longer for another clustered Flux engine to fail over and rerun the failed flow chart. Note: you should use the same FAILOVER_TIME_WINDOW configuration property on all Flux engines in the cluster.

If your scheduled flow chart firing times are very close together, you might want to increase the SYSTEM_DELAY configuration property. For example, the default SYSTEM_DELAY is 3 minutes, meaning that Flux scans the database for flow charts to run every 3 minutes, unless notified earlier to do so. This scanning frequency might be too much if all your flow charts are scheduled to fire at exactly, or nearly exactly, the same time. You might want to increase your SYSTEM_DELAY to a value like +10m (every 10 minutes). Note: you should use the same value on all Flux engines in the cluster.

To increase the responsiveness of engines in a Flux cluster, you should allow cluster networking (set CLUSTER_NETWORKING to true). With cluster networking, all Flux engines in a cluster immediately know when a new flow chart is added through one of the Flux engines. For example, if you are adding flow charts using a stopped engine, the stopped engine can notify the other engines in the cluster via networking that a new flow chart has been added. The running engines can immediately retrieve and start running that flow chart. Without cluster networking, the running engines must wait until the SYSTEM_DELAY polling interval has elapsed. In summary, cluster networking is not needed for correctness, but it does help increase the responsiveness of the engines in a cluster.

If you are using a data source, make sure that the connection timeout is long enough. For example, if some of your flow charts take a long time to execute, you want to make sure that your connections do not time out during flow chart execution. Also, make sure that your data source connections are refreshed and tested periodically.

Flow charts that are scheduled to fire at closely spaced frequencies will result in Flux putting a load on your CPUs and database at flow chart firing time. This load is due almost entirely to the running flow chart, not Flux itself. To minimize the load that Flux can put on your system, properly configure your Flux engines to limit the number of concurrent flow charts and database connections. These limits are applied using concurrency throttles and the MAX_CONNECTIONS configuration property.

Frequently Asked Questions

1. Question: If I use Flux in my J2EE application, do I need to configure Flux as an RMI server?

Answer: No. Flux lives outside your EJB container, but you can still instantiate Flux inside your application server's JVM. Then you can simply store the Flux engine variable in a "public static" variable in one of your classes and reference Flux directly.

The typical Flux installation in WebLogic creates a Flux engine in a WebLogic Startup Class and stores the Flux engine in a public static variable in that WebLogic Startup Class. Once configured in this manner, your EJBs, MDBs, and other objects in your J2EE application can call Flux methods simply by making calls against the Flux engine stored in that public static variable. In this configuration, there is no need to create Flux as an RMI server. Many Flux users run in this configuration successfully. A minimal sketch of this "public static" holder pattern appears after this FAQ list.

2. Question: Does Flux support load balancing in a Flux cluster?

Answer: Yes, to a pretty good extent. Due to Flux's support for concurrency throttles, a single Flux engine instance can never be overloaded with jobs. Each Flux engine instance's concurrency throttles prevent too many jobs from firing at the same time. When one Flux engine instance is filled up with running jobs, any other jobs in the cluster that are eligible for execution tend to gravitate to other Flux engine instances within the Flux cluster in order to be executed. This load balancing technique is not perfect, but it meets a wide variety of needs in real-life production environments. Even so, this technique will be further improved in a future Flux release.

3. Question: How do I start Flux automatically inside a Servlet?

Answer: See Chapter 42 Examples for example code that shows you how to do this.

4. Question: How do I start Flux automatically inside WebLogic Server?

Answer: See Chapter 42 Examples for example code that shows you how to do this.

5. Question: After I modify my job to update my Timer Trigger's time expression (schedule), the next Timer Trigger firing is on my old schedule, not my new schedule. However, the firing after that is on my new schedule, as expected. Why was the next firing on the old schedule but the firing after that on the new schedule?

Answer: When a Timer Trigger is scheduled to fire, a future firing date is stored in the Timer Trigger's scheduled trigger date property. When you modify or update a Timer Trigger's time expression, you are indeed updating the schedule at which that Timer Trigger will fire. However, the scheduled trigger date is still left at its original firing date. This behavior explains why the first firing after modifying a Timer Trigger's time expression is set to the scheduled trigger date. When the Timer Trigger fires on that original firing date, then, and only then, the time expression is consulted and a new scheduled trigger date is calculated. This behavior explains why the second firing after modifying a Timer Trigger's time expression is on the new schedule. To avoid this behavior when you update a Timer Trigger's time expression, update the Timer Trigger's scheduled trigger date at the same time.

6. Question: Does Flux violate the EJB programming restrictions?

Answer: No. Flux is not an Enterprise JavaBean, but it does integrate tightly with J2EE APIs, if that is desired. J2EE integration in Flux is completely optional. Flux does not violate any EJB or J2EE programming restrictions, because Flux itself does not reside in the EJB container. However, Flux may be instantiated in the application server's JVM, just not its EJB container.
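The following minimal sketch illustrates the "public static" holder pattern mentioned in the first question above. The FluxHolder class name is illustrative only; in a WebLogic installation this role is typically played by the startup class itself.

import flux.Engine;
import flux.Factory;

// A simple holder that creates the engine once and exposes it to the
// rest of the J2EE application through a public static field.
public class FluxHolder {

  public static Engine engine;

  public static synchronized void initialize() throws Exception {
    if (engine == null) {
      Factory factory = Factory.makeInstance();
      engine = factory.makeEngine();
      engine.start();
    }
  }
}

EJBs, MDBs, and other objects then call Flux methods directly against FluxHolder.engine, with no RMI server involved.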

37 JMX MBean

Flux provides a JMX MBean for management use with any application server that supports JMX. In essence, the Flux JMX MBean is a job monitor, much like the Flux Job Monitor that is available in the Flux Operations Console. Bear in mind that the JMX MBean job monitor will never be as functional or usable as the Operations Console's job monitor, but it may still prove useful in your environment.

Flux's JMX MBean follows the rules for a standard MBean, so it can be used with any JMX management console. The Flux MBean is specified by the flux.jmx.FluxEngineManagerForJmxMBean interface. You can create the Flux JMX MBean programmatically by using the flux.jmx.JmxFactory interface. A concrete implementation of JmxFactory can be created by the usual flux.Factory class. If you need to directly instantiate the Flux JMX MBean, instantiate the flux.jmx.FluxEngineManagerForJmx class. Using one of these techniques, you can create the Flux JMX MBean and deploy it to your JMX management console in the usual way. A minimal registration sketch appears below, just before Section 38.1.

After displaying the Flux JMX MBean in your JMX management console, you first need to create or look up a Flux engine instance using a Flux configuration file. Once created or looked up, you can start, stop, and dispose this Flux engine instance. When you use the Flux JMX MBean to instantiate a Flux engine instance, that engine instance will be created in the same JVM where the Flux JMX MBean is running. The location of the Flux JMX MBean, of course, is determined by your JMX management system. Note that any Flux configuration files that you use must be located in the current class loader's class path or the system class path.

Additional documentation on the individual Flux JMX MBean methods can be found in the flux.jmx.FluxEngineManagerForJmxMBean Javadoc documentation.

38 Flux Command Line Interface

The Flux command line interface can be used from a shell or command prompt in order to create a Flux server, start the server, stop the server, check if the server is running, dispose/shut down the server, run the Flux Designer, and start Flux Agents. In conjunction with the Flux Designer, the Flux command line interface allows Flux to be used as a standalone job scheduler.

End-users and administrators create and shut down Flux using Flux's command line interface. To perform other job scheduler tasks, such as creating, editing, removing, controlling, and monitoring jobs, end-users and administrators use the Flux Designer and web user interface. Using Flux in its standalone job scheduling mode, you can use Flux to run your office, data center, and company operations. No programming required. A manual targeted directly at end-users and administrators who perform no programming is available in the file flux-end-users-manual.pdf.
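Returning briefly to the JMX MBean described in the previous chapter, the following sketch registers a directly instantiated Flux MBean with an MBean server by hand. It assumes a no-argument constructor on FluxEngineManagerForJmx and a Java 5 or later JVM for the platform MBean server; the object name is illustrative only, and in practice your application server or JMX management console may deploy the MBean for you instead.

import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

import flux.jmx.FluxEngineManagerForJmx;

public class RegisterFluxMBean {

  public static void main(String[] args) throws Exception {
    // Directly instantiate the Flux JMX MBean.
    FluxEngineManagerForJmx mbean = new FluxEngineManagerForJmx();

    // Register it with the platform MBean server under an illustrative name.
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    server.registerMBean(mbean,
        new ObjectName("FluxEngineManagerForJmx:service=FluxEngineManagerForJmx"));

    // From a JMX management console you can now create or look up a Flux
    // engine instance by supplying the path to a Flux configuration file.
    System.out.println("Flux MBean registered. Waiting for JMX connections.");
    Thread.sleep(Long.MAX_VALUE);
  }
}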

38.1 Displaying the Flux Version

To display the version of Flux from the command line interface, run the following command from a shell or command prompt.

java -jar flux.jar

Alternately, if your jobs contain classes that are not bundled with Flux, you need to start the GUI differently.

java -classpath .;flux.jar flux.Main gui

38.2 Obtaining Help

To display help with the command line interface, run the following command from a shell or command prompt.

java -jar flux.jar help

This command displays a list of available commands. For help on a specific command, run the following command,

java -jar flux.jar help <command>

where "<command>" is the name of the actual command. The supported commands include:

agent-client: Runs a command on a pre-existing Flux agent instance.
agent-server: Creates and optionally starts a Flux agent instance.
client: Runs a command on a pre-existing Flux engine instance.
gui: Launches the Flux GUI.
help: Displays help messages for each command.
server: Creates and optionally starts a Flux engine instance.

38.3 Creating a Flux Server

To create a Flux server from the command line, run the following command while inside your Flux installation directory.

java -jar flux.jar server

This command creates a standard Flux server using default parameters. It will also only run on your local machine, within the HSQL in-memory database. The alternate syntax that allows you to include your own JAR files on the command line is shown below.

java -classpath flux.jar;<your jars here> flux.Main server

In general, the flux.Main class has main methods that provide the API to the Flux command line. The default server configuration creates a Flux engine instance as an RMI server on RMI registry port 1099, binding the server to the RMI registry using the server name "Flux". The RMI registry port and server name can be changed using the command line options.

The above command line syntax for creating a Flux server does not actually start the server after it is created. It just instantiates the server in the stopped state. A stopped server does not fire jobs. The command line option start creates a Flux server in the started state, which does, in fact, allow jobs to fire.

java -classpath .;flux.jar flux.Main server start

The above command not only creates a default server, but it starts that server as well.

38.4 Creating a Flux Server from a Configuration File

The above section describes how to create and optionally start a standard Flux server. More practically, however, you will likely have to create a Flux server from a Flux configuration file, in order to customize the server. The command line syntax is straightforward.

java -jar flux.jar server -cp <properties_conf_file> [start]

The above line creates a Flux server from a properties configuration file called <properties_conf_file>.

java -jar flux.jar server -cx <xml_conf_file> [start]

Similarly, the above line creates a Flux server from an XML configuration file called <xml_conf_file>.

38.5 Starting, Stopping, and Shutting Down a Server

After the server has been created, it can be started and stopped from a second command line prompt using the client command. The client command has similar options to the server command described above. You can start a remote Flux server, stop a remote server, check if a remote server is started, and shut down a remote server all using the client command.

To control an engine through the second command prompt window, use the following commands while in the Flux installation directory. All of the following commands use the hsqldb.jar file that allows Flux to run on your local machine's in-memory database. To change this behavior, simply add the appropriate JAR files needed to run Flux on your database into the command line.

Start: java -cp .;flux.jar flux.Main client start
Stop: java -cp .;flux.jar flux.Main client stop
Dispose: java -cp .;flux.jar flux.Main client dispose

38.6 Running the Flux Designer

Finally, the Flux command line is also used to run the Flux GUI, which is explained in detail in Chapter 39 Flux Designer. To run the Flux Designer, use the following command.

java -cp .;flux.jar flux.Main gui

This command will start the Flux Designer; however, additional JAR files may need to be added in order for the Flux Designer to operate correctly.

38.7 Starting a Flux Agent

Flux Agents can be started using the Command Line Interface (CLI). To start a Flux Agent using the Command Line Interface, run the following command:

java -cp .;flux.jar flux.Main agent-server -enginehost <your engine's host> start

The above command will start a Flux Agent that registers with your Flux engine if the default configuration settings are used to start your Flux engine instance. For more information on starting an agent using the CLI, refer to Chapter 21 Flux Agent Architecture on creating an agent.

Note: Flux Agents require that the Flux engine they are registering with has the jbosscache.jar file in the Flux engine classpath.

39 Flux Designer

The Flux Designer is a standalone Java application that allows you to create and edit your flow charts using a graphical user interface. Features provided by the Designer include the ability to create and test time expressions to verify that the firing frequencies are as expected, to create various flow charts with many different actions and flows, and to export flow charts to a running Flux engine. This section outlines the methods for starting the Flux Designer, and for displaying its components using Java code.

Creating, Editing, Removing, Controlling, and Monitoring Jobs

When you use the Flux Designer to create, edit, remove, control, and monitor jobs, the Designer merely "attaches" itself to a running Flux engine. The Designer itself is not the Flux engine. In order for the Designer to monitor your jobs properly, you must have a Flux engine up and running as an RMI server with cluster networking enabled.

An easy way to start a Flux engine is to run one of the Flux example programs. In the examples/rmi directory in the Flux download package, run the Windows batch file runserver.bat to launch a Flux engine. Or if you are on Unix, run the file runserver.sh. For ease of use, you can create an engine with the bat or sh scripts, start-secure-flux-engine.bat/sh and start-unsecured-flux-engine.bat/sh, that are located in the base directory of your Flux install. Eventually you will probably want to set up your own custom, robust engine to cover your needs.

If your flow charts contain classes that are not bundled with Flux, you will need to include your custom JAR files in the classpath when starting the Designer from the command line.

Using Flux Designer Components in your own GUI

It is possible to use components of the Flux Designer within your own GUI. The interface flux.ui.uifactory can make Java Swing components, including a time expression editor, a flow chart image renderer, and a job monitor.

40 Flux Operations Console

The Flux Operations Console is a web application that allows you to monitor your jobs through a web browser. From the Operations Console you can view jobs as well as pause, resume, remove, interrupt, and expedite them. Future releases of the Operations Console will expose the remainder of the Flux API. The Flux Operations Console's minimum requirements are Java Servlets version 2.3 and JavaServer Pages version 1.2.

40.1 Installing the Flux Operations Console

The Flux Operations Console is pre-packaged as a WAR file, flux.war. The Flux WAR file is located in the "webapp" directory, underneath your Flux installation directory, in the Flux distribution bundle. To deploy the Flux Operations Console, deploy the flux.war file to your Servlet and JSP container in the usual manner. You will also have to place your Flux license key in your class path, preferably in the WEB-INF/classes directory. In addition, copy the flux.jar file into your WEB-INF/lib directory.

40.2 Configuring the Flux Operations Console to Run with a Custom Flux Engine

Running the Operations Console with a Custom Flux Engine

If your engine is not running with the default settings, you will need to change the fluxconfig.properties file, located in the Application Server\webapps\flux\WEB-INF\classes directory, to point to the engine you wish to use so that the Flux Operations Console can detect the engine. There are three fields in the .properties file, shown below, that you will need to modify.

registry_port=1099
server_name=flux
host=localhost

Once you have changed these fields to point to your engine, browse to the Operations Console in your web browser and the application should detect your engine.

Disabling the "Remember Me" Feature on the Engine Sign-In Page

The Operations Console provides a "remember me" feature on the engine sign-in page, allowing a single user to remain signed in to an engine on a particular machine. In some cases, security concerns (for example, multiple users accessing the same machine) may require that this feature be disabled. You can determine whether the "remember me" option is displayed by setting the "remember_me_enabled" property in fluxwebapp.properties, located in either /home/username (Unix) or C:\Documents and Settings\username (Windows). To disable the option, add the following line to fluxwebapp.properties:

remember_me_enabled=false

This property is set to "true" by default.
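For example, if your engine were running on a host named scheduler1, with its RMI registry listening on port 3099 and the engine bound under the name production (all three values are purely illustrative), the modified fluxconfig.properties entries would look like this:

registry_port=3099
server_name=production
host=scheduler1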

40.3 Deploying Custom Classes to the Flux Operations Console

If your flow charts contain user-defined persistent variables, custom triggers, or custom actions, you need to deploy the corresponding class files to the Flux Operations Console. To deploy them, place the corresponding class files in a JAR file and place that JAR file in the web application's WEB-INF/lib directory. Alternately, do not place those class files in a JAR file, but place them directly in the web application's WEB-INF/classes directory.

40.4 Flux Tag Library

Many of the Flux APIs are available through tag libraries. Web designers can use the Flux tag libraries to create web applications that communicate with Flux engines. By using the Flux tag library, web designers do not need to write Java code or scriptlets. The Flux tag library is documented under the doc/taglib directory within the Flux distribution bundle. The tag library's JAR and TLD files are located under the taglib directory.

41 Troubleshooting

Cannot contact a remote engine on a Linux system.

The error message says that "localhost" or "127.0.0.1" cannot be contacted, when you had, in fact, tried to contact a remote system. This problem is a known Linux and Java issue. It is not Flux-specific. In general, when a Java application tries to look up an RMI object on a Linux computer, the RMI remote reference that is returned to the Java application may contain a reference to 127.0.0.1 (localhost) instead of the remote computer's actual (routable) IP address and host name.

Very likely, the first entry in your Linux system's /etc/hosts file matches, or is similar to, the following line:

127.0.0.1 localhost

There may be additional lines below this line that specify other IP addresses and host names. However, if the very first line is similar to the above line, this Linux/Java problem can occur. To resolve this problem, move the first line farther down in your /etc/hosts file, beneath the line that lists your computer's real (routable) IP address and host name.

Additionally, you can set the java.rmi.server.hostname property on the client side to the IP address of the Linux server where your remote engine resides. You can set this property on your Flux client's Java command line like so:

... -Djava.rmi.server.hostname="<IP address of Linux server where remote engine resides>" ...

The JDK 1.4 documentation contains further information on the java.rmi.server.hostname Java RMI property. If you encounter this problem while using DHCP and the above solution does not work for you, contact support@fluxcorp.com. We will work with you to find a solution, and afterwards we will update this troubleshooting section with the solution for DHCP.
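To make the fix concrete, here is a sketch of the /etc/hosts change; the host name myserver and the address 192.168.1.10 are illustrative placeholders for your machine's real entries.

# Before: the loopback entry comes first, so RMI lookups may hand out 127.0.0.1.
127.0.0.1       localhost
192.168.1.10    myserver

# After: the routable address comes first, so remote clients receive a usable reference.
192.168.1.10    myserver
127.0.0.1       localhost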

Why did my flow chart fire twice?

If a database deadlock occurs while your flow chart is firing, the database transaction is rolled back and tried again automatically. At this point, your flow chart will fire again, but only because it did not run to completion successfully the last time it fired. This behavior is normal.

You can completely eliminate the possibility of "your flow chart firing twice" by tying your flow chart's work into the same database connection that your Flux flow chart uses. That way, if Flux's database connection rolls back, your flow chart's work rolls back too, and there is no harm done. You can also tie the Flux database connection into your work by using an XA resource or an XA database connection. Again, if the Flux database connection rolls back, your flow chart's work rolls back too; no harm done.

mmiverifytpandgetworksize: stack_height=2 should be zero; exit

You may see the above message on your console. It is a harmless message emitted directly from the IBM Java Runtime Environment (JRE) or the JRE's Just In Time (JIT) compiler. IBM's website explains that it is an IBM error and provides the solution.

The Flux Designer behaves erratically, flickers, freezes, or hangs.

There are reports of Java Swing (GUI) bugs in certain versions of the Java Runtime Environment (JRE) or the Java Development Kit (JDK). These bugs may cause your Flux Designer to behave strangely, flicker, freeze, or hang. A possible solution is to set the following Java system property on your Flux Designer command line:

-Dsun.java2d.noddraw=true

For example, the following (very long) command line runs the Flux Designer and sets the Java system property in an effort to minimize Java Swing (GUI) bugs. Note that this command line must be written on a single, albeit very long, line.

java -Dsun.java2d.noddraw=true -classpath .;flux.jar;lib\j2ee.jar;lib\xerces.jar;lib\batik.jar;lib\jaas.jar flux.Main gui

I see database deadlocks! What is wrong?

Probably nothing. Database deadlocks are a normal part of any database application and occur even in normally functioning software. If a database deadlock occurs while a flow chart is running, Flux rolls back the current database transaction and automatically retries the flow chart. No administrative action is required. If a deadlock occurs while using the Flux Designer, you must manually retry the GUI action that you attempted. Once your flow chart is successfully added to the engine, database deadlocks do not require any action on your part.

In general, row-level locking is preferred in databases, because it minimizes the opportunity for deadlocks and connection timeouts. If possible, enable row-level locking at the database level. If you see more than an average of one deadlock per hour, or if you can reproduce a deadlock regularly by following a well-defined sequence of steps, then contact our Technical Support department at support@fluxcorp.com with an explanation of the deadlock situation.

We will work with you to attempt to reduce the number of deadlocks to a tolerance of less than an average of one deadlock per hour.

When I deploy the "flux.war" file to my servlet container, then try to display the application in my web browser, an error occurs.

Compare the following exception to the one shown in your web browser.

javax.servlet.ServletException: javax.servlet.jsp.JspException:
RemoteException occurred in server thread; nested exception is:
  java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
  java.net.MalformedURLException: no protocol: 5.5/webapps/flux/WEB-INF/classes/
org.apache.jasper.runtime.PageContextImpl.doHandlePageException(PageContextImpl.java:844)
org.apache.jasper.runtime.PageContextImpl.handlePageException(PageContextImpl.java:781)
org.apache.jsp.flowcharts_jsp._jspService(org.apache.jsp.flowcharts_jsp:514)
org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:322)
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:291)
org.apache.jasper.servlet.JspServlet.service(JspServlet.java:241)
javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1056)
org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:388)
org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:231)
org.apache.struts.action.ActionServlet.process(ActionServlet.java:1164)
org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:397)
javax.servlet.http.HttpServlet.service(HttpServlet.java:689)
javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
fluximpl.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:2)

If you are seeing this exception, there is most likely a problem with your servlet container's directory name on your local file system. The directory name cannot contain a space. This problem is a limitation of RMI classloading, not Flux. The workaround for this limitation is to install Tomcat, Resin, WebLogic, or any other servlet container you are using with Flux in a directory without spaces. This solution should allow the Flux web application to load correctly within your web browser.

If you are using WebSphere and running Flux inside a web application in conjunction with the Java 2 Enterprise Edition (J2EE) 1.3 platform with V5 data sources, the Flux web application will not function properly due to WebSphere's inability to access Flux user transactions. To correct this problem, configure WebSphere to use J2EE 1.2 with V4 data sources.

EJB 2.0 restriction on Flux client calls

If you are using EJB 2.0 and are making client calls into a Flux engine from your EJB, Flux will not operate properly if the calling EJB has Container Managed Transactions (CMT) enabled. This issue occurs because the EJB 2.0 specification does not allow other applications (in this case, a Flux engine) to look up user transactions while CMT is enabled. Flux engines utilize user transactions in order to allow them to participate in EJB transactions. The workaround for this issue is to either configure your EJB 2.0 beans to use Bean Managed Transactions (BMT) or simply have your beans use EJB 1.1.

First, you should make sure the Flux engine's loggers are enabled. These loggers record useful information about the state of Flux and running flow charts. By default, loggers write their logging information to the console (standard out, or stdout), but these logs can be redirected to other destinations.

If some of your previously scheduled jobs seem to fire very late, if at all, after you restart your scheduler, the cause may be that your scheduler is not being disposed properly. If you terminate your JVM without performing a clean shutdown of the engine by calling engine.dispose(), some of your jobs may have been left in the FIRING state. In this case, when your scheduler restarts, it leaves these jobs alone, assuming they are being fired by a second, clustered scheduler instance. After a few minutes, according to the configuration parameter FAILOVER_TIME_WINDOW, your scheduler instance will fail over these jobs, and they will begin running again. To avoid this delay, be sure to shut down cleanly by calling engine.dispose(). Alternately, configure your scheduler so that it runs standalone, not as part of a cluster. When configured as a standalone scheduler, Flux fails over jobs immediately on startup. To configure a standalone scheduler, simply disable clustering in your engine configuration, like so:

flux.Configuration config = factory.makeConfiguration();
config.setClusterEnabled(false);
// Set other configuration options.
Engine engine = factory.makeEngine(config);
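To make the clean-shutdown advice above concrete, the following minimal sketch wraps the engine's work in a try/finally block so that engine.dispose() always runs, and configures the engine as a standalone scheduler as described above. It uses only the factory and configuration calls quoted in this chapter; the surrounding structure is illustrative, and the exact signatures may differ slightly in your Flux version.

import flux.*;

public class CleanShutdownExample {
    public static void main(String[] args) throws Exception {
        Factory factory = Factory.makeInstance();

        // Run standalone so interrupted jobs fail over immediately on restart.
        Configuration config = factory.makeConfiguration();
        config.setClusterEnabled(false);
        // Set other configuration options here.

        Engine engine = factory.makeEngine(config);
        try {
            // ... add flow charts and let the engine fire jobs ...
        } finally {
            // A clean shutdown prevents jobs from being stranded in the FIRING state.
            engine.dispose();
        }
    }
}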

If you are using a Sybase database, refer to Chapter 30 Databases and Persistence for further instructions on configuring the scheduler to work with Sybase databases.

Because Flux runs inside the same JVM as the rest of your application, if parts of your application exhaust database, memory, or virtual machine resources, this excessive consumption of resources may be revealed in Flux. If a lack of resources is reported by Flux, it does not necessarily imply that Flux itself leaked or consumed these resources. Other parts of your application residing in the same JVM may have consumed most or all of these resources. For example, suppose you call a Flux engine method and an SQLException is thrown, indicating that the database has run out of database cursors. This SQLException does not necessarily imply that Flux is leaking database resources. It may imply that other parts of the application are leaking database resources and that the leak was merely exposed by a call to Flux.

If a java.lang.Error is thrown inside your JVM, it indicates that a grave error has occurred. Java applications are not meant to catch java.lang.Error exceptions. These exceptions include java.lang.OutOfMemoryError, java.lang.VirtualMachineError, and other gravely serious errors. If a java.lang.Error exception is thrown, Flux's behavior is undefined. You should track down the cause of the Error and remedy it.

On some JDKs, when you add a new job to the Flux engine, you might notice the exception, "The time zone "Custom" could not be found in the JDK". This problem can occur only when you are using an unstable JDK and you have a TimerTrigger in your job. To fix this problem, upgrade your JDK to a stable JDK release.

If you are starting Flux as an RMI engine, you might experience ClassNotFoundExceptions if the Flux RMI stub classes are not in the class path of the RMI registry. To remedy these exceptions, make sure that the RMI registry where Flux binds the engine remote reference has the Flux RMI stub classes in its class path. You can simply add flux.jar to your RMI registry class path to make the Flux RMI stub classes available to the RMI registry.

If you are starting Flux on a standalone laptop or on a computer not otherwise connected to a network, you might experience a java.net.NoRouteToHostException or another networking error. In this case, your Flux engine might not be instantiated successfully. This situation may occur because, by default, Flux uses the network to send multicast messages for clustering purposes. If a network connection is not available on your machine, you might see this exception. To stop Flux from sending network messages, you can disable networking in Flux. Disabling networking in Flux does not disable clustering. It simply prevents Flux from publishing network messages. Clustering still works in all situations, unless explicitly disabled with the CLUSTER_ENABLED configuration property, regardless of whether networking is enabled or not. You can disable cluster networking in the following manner:

flux.Configuration config = factory.makeConfiguration();
config.setClusterNetworking(false);
// Set other configuration options.
Engine engine = factory.makeEngine(config);

When running the Flux Operations Console on a headless UNIX server (for example, one accessed only over Telnet), you will need to set the JVM option "-Djava.awt.headless=true" inside your application server in order to view flow chart images.
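For example, if your servlet container is Apache Tomcat (one of the containers mentioned earlier in this chapter), the option can be supplied through the CATALINA_OPTS environment variable before starting the container; other application servers have their own mechanisms for passing JVM options.

# Unix shell, from Tomcat's bin directory:
CATALINA_OPTS="-Djava.awt.headless=true"
export CATALINA_OPTS
./catalina.sh start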

42 Examples

Many examples are bundled with Flux to show how to get Flux up and running in different situations. Full source code is available for all of these examples under the examples/software_developers directory in your Flux package. Start with the README.txt file, which describes each example in greater depth. It is highly recommended that you work through the examples before beginning to build workflows or business processes with Flux. Where applicable, these examples include batch files and scripts to run them immediately with no changes needed.

The Simple example, listed in the examples/software_developers/simple directory, is the easiest way to get started with Flux programming. You can run the Simple example immediately using the run script.

43 Upgrading from Flux 3.1

For users who have been using Flux for a long time, there have been significant improvements made to Flux to accommodate fundamentally more powerful kinds of job scheduling. With these improvements come an evolved programming model and an updated API. This section describes how to transition your software from Flux 3.1 to newer versions of Flux.

IT IS NOT NECESSARY TO UPGRADE TO THE NEWER FLUX API IF YOU DO NOT NEED TO! We continue to support Flux 3.1 and will continue to do so as long as a current customer is using Flux 3.1. However, new functionality available in the newer versions of Flux will not be available to older versions of Flux.

43.1 Packages

Instead of importing Flux classes from the org.publicinterface.js package, import them from the flux package.

43.2 Initial Context Factory

Instead of using an initial context factory, Flux has been simplified to use a factory mechanism of its own. Instead of creating an InitialContext object to create a factory, use this code.

import flux.*;
Factory fluxFactory = Factory.makeInstance();

Once you have a Flux factory, you can instantiate or look up all manner of Flux schedulers. The Scheduler interface has been replaced by the Engine interface. You will see that the methods are largely the same. Perhaps the biggest change is that the schedule() method was renamed to put(), and the unschedule() method was renamed to remove().

43.3 SchedulerSource

The helper object SchedulerSource has been replaced by EngineHelper. To create an EngineHelper object, call the makeEngineHelper() method on your Flux factory object.

43.4 Job

Instead of creating a Job object in order to set up a job for execution, create a FlowChart object. Instead of calling the addExclusiveListener() method on the Job object to set up a job listener, you need to perform these steps.

1. Create a FlowChart object by calling EngineHelper.makeFlowChart().
2. Create a TimerTrigger object by calling flowChart.makeTimerTrigger(). You can then set your Time Expression and other properties on the TimerTrigger object.
3. Create a JavaAction object by calling flowChart.makeJavaAction(). The JavaAction object is similar to the Job object. You set your job listener on this object.
4. Finally, connect the TimerTrigger and JavaAction objects using an unconditional flow by calling timerTrigger.addFlow(javaAction).

43.5 Job Listeners

Job listeners have been replaced by actions. After creating a Timer Trigger, control should flow into an action. Instead of registering a standard job listener, control should flow into a JavaAction. Similarly, instead of registering an EJB listener, control should flow into an EjbSessionAction. Instead of registering a JMS queue listener, control should flow into a JmsQueueAction. Finally, instead of registering a JMS topic listener, control should flow into a JmsTopicAction.

43.6 Scheduling a Job

Once your flow chart is complete, schedule it by calling engine.add(flowChart). Scheduled jobs can be retrieved by calling other appropriate methods on the scheduler, which implements the flux.Engine interface. The flux.Engine interface replaces the org.publicinterface.js.Scheduler interface. A short sketch that combines the steps from sections 43.4 and 43.6 appears below.
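The following is a minimal sketch of these upgrade steps. The calls follow the method names quoted in this section, but check the Javadoc API documentation (Chapter 45) for the exact signatures in your Flux version; some of these methods may take a name or other arguments, and the time expression and job listener still need to be supplied where the comments indicate.

import flux.*;

public class UpgradeSketch {
    public static void main(String[] args) throws Exception {
        // Obtain a factory and an engine (section 43.2 and the engine
        // configuration pattern shown in the Troubleshooting chapter).
        Factory factory = Factory.makeInstance();
        Configuration config = factory.makeConfiguration();
        Engine engine = factory.makeEngine(config);

        EngineHelper helper = factory.makeEngineHelper();

        // Step 1: create a FlowChart object (the manual writes this call as EngineHelper.makeFlowChart()).
        FlowChart flowChart = helper.makeFlowChart();

        // Step 2: create a TimerTrigger.
        TimerTrigger timerTrigger = flowChart.makeTimerTrigger();
        // ... set the time expression and other properties on timerTrigger here ...

        // Step 3: create a JavaAction, which takes the place of the old Job object.
        JavaAction javaAction = flowChart.makeJavaAction();
        // ... set your job listener on javaAction here ...

        // Step 4: connect the trigger to the action with an unconditional flow.
        timerTrigger.addFlow(javaAction);

        // Section 43.6: add the completed flow chart to the engine.
        engine.add(flowChart);
    }
}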

44 Migrating Collections in Persistent Variables from Flux 3.2, 3.3, and 3.4 to Flux 3.5 and Higher

Note that if you need to migrate your collections data to Flux 6.x, simply migrate to 5.x first, then follow the migration steps from Flux 5.x to 6.x. Otherwise, to migrate to versions 3.5, 4.x, or 5.x, follow the directions below.

In Flux versions 3.2, 3.3, and 3.4, Flux did not explicitly support storing collections in persistent variables that were stored with jobs. These collections classes from the java.util package, including the objects that implement the List, Map, Set, SortedMap, and SortedSet interfaces, allowed you to store collections of data with your jobs.

Although Flux versions 3.2, 3.3, and 3.4 did not explicitly support these collections within persistent variables, Flux would recognize that all these objects were serializable and would serialize the collections to a VARBINARY database column. In general, when older versions of Flux encountered persistent variable data of a type that it did not recognize, if that data were serializable, it was serialized to a VARBINARY column in the database.

Beginning with Flux 3.5, Flux explicitly supported persistent variable data that consisted of collections. Instead of serializing these data to a database column, the data were stored in the clear in the database using additional database tables. By not serializing the collections, several problems are avoided:

1. Java class versioning problems are avoided.
2. The data can be viewed in the clear with a normal database tool. Database tools cannot view serialized objects.
3. The data can be queried and searched. Serialized objects cannot be queried or searched.

Because the database schema for collections used in persistent variables changed, you must migrate your collections data when upgrading to Flux 3.5, 4.0, 4.1, and 4.2. To perform this migration, perform the following steps.

1. Create a wrapper class for each collection type in your user-defined, persistent variable. For example, suppose you have a variable class called OldVariable containing a String variable called "string" and a HashMap variable called "hashmap", as shown below.

public class OldVariable {
    public String string;
    public HashMap hashmap;
} // class OldVariable

You must create another user-defined, persistent variable that wraps the HashMap type, as shown below.

public class NewVariable {
    public String string;
    public Wrapper wrapper;
} // class NewVariable

public class Wrapper implements Serializable {
    public HashMap hashmap;
} // class Wrapper

Because Wrapper is serializable, the entire Wrapper object, including the HashMap, will be serialized to the database when the job is stored. Because the Wrapper class encapsulates the HashMap and hides it from Flux, the HashMap will be serialized to a VARBINARY column, even in a Flux 3.5 or greater database. This data migration is possible using the Flux API, because the user-defined, persistent variable is serialized in both the old and new versions of Flux.

2. While the Flux 3.2/3.3/3.4 engine is stopped, retrieve all your FlowChart objects and modify them to use the NewVariable structure. This step must be performed on the old version of a Flux engine, version 3.2, 3.3, or 3.4. For example, suppose your FlowChart objects have a JavaAction containing an OldVariable object as the key. You must migrate all the data contained in OldVariable to NewVariable. The following code shows a simple way to perform this migration on a typical flow chart.

FlowChart flowChart = engine.get(myFlowChartId);
JavaAction javaAction = (JavaAction) flowChart.getAction("my Java action");
OldVariable oldVariable = (OldVariable) javaAction.getKey();

NewVariable newVariable = new NewVariable();
newVariable.string = oldVariable.string;
newVariable.wrapper = new Wrapper();
newVariable.wrapper.hashmap = oldVariable.hashmap;

javaAction.setKey(newVariable);
engine.modify(flowChart);

NOTE: You will need an additional table to store the NewVariable variables. This table will have the following fields.

Table 44-1: Variable Table Fields

PK       VARCHAR2(128) NOT NULL
STRING   VARCHAR2(128)
WRAPPER  BLOB

3. After all the FlowChart objects have been modified, you must alter the FLUX_VAR_TIMER_TRIGGER table to the new schema definition that was introduced in Flux 4.1. The DDL for the Flux 4.1 tables is available in the doc directory of your Flux distribution. You will have to create new tables such as FLUX_VAR_LIST_ITEM, FLUX_VAR_META_COLLECTION, FLUX_VAR_RELATIVE_EXP, FLUX_NESTED_META_VAR, and FLUX_VAR_TIMER_RESULT for a flow chart consisting of a Timer Trigger-Java Action combination. You might also need to create additional tables depending on the actions and triggers used in your FlowChart and the types of persistent variables you are using.

That's it. You are done with the migration. You can now use Flux 3.5 or greater to fire your jobs.

45 Flux API Documentation

The complete Javadoc API documentation for Flux is available. Navigate to the doc/javadoc directory in your Flux package and open the file index.html with your web browser. The following classes will be helpful to review before you begin working with Flux:

flux.Configuration
flux.Engine

flux.EngineHelper
flux.Factory
flux.bpm.BpmFactory

46 Trigger and Action Properties (JavaBeans Documentation)

The configuration properties of each trigger and action are defined by a set of JavaBean properties. These configuration properties tailor the behavior of the triggers and actions in your flow charts and jobs. These trigger and action properties are fully documented in the Flux JavaBeans documentation. Navigate to the doc/javabean directory in your Flux package and open the file index.html with your Web browser.

47 Typical Flux Activities

This section describes typical interactions that you will have with Flux. Once you are familiar with these typical Flux activities, you can use this knowledge to create other kinds of flow charts that meet your application and operational needs. You will be walked through the typical steps of:

1. Starting the Flux Designer. The Flux Designer is used to create, edit, and monitor flow charts.
2. Creating a Flow Chart. Flow charts are created using the Flux Designer.
3. Running the Flow Chart. The Flux engine is used to execute your flow charts.
4. Watching the flow chart run. When your flow chart is running, you can observe it using the Flux Operations Console.
5. Shutting down your Flux engine. After your flow chart is finished, you can safely shut down the Flux engine.

47.1 Starting the Flux Designer

To run the Flux Designer, execute the flux-designer script from the Flux installation directory. The splash screen will appear while the Designer loads. After the splash screen has disappeared, you will arrive at the Designer main screen shown below.

Figure 47-1: Main Screen

47.2 Creating a Flow Chart

Now that the Flux Designer is up and running, you can begin creating a flow chart. From the "File" menu, select "New". Select "New Flow Chart" from the available options, then click "OK". The "New Flow Chart" dialog box will appear. Enter myjob into the "Name" field and click "OK".

You will return to the main screen. Notice that "myjob" has appeared in the "Project View" panel, and that a new workspace has appeared labelled "myjob". To the left of the workspace, notice the "Action" and "Trigger" tabs, which display all of the actions and triggers available for your flow chart. These tabs are visible in Figure 47-2, below.

Figure 47-2: New Flow Chart

Defining a Flow Chart's Properties

All flow charts have a set of properties that provide an easy way to set specific variables on your flow chart. To view and edit these properties, hover the cursor over the "Flow Chart Properties" tab to the right of the workspace. This is demonstrated in Figure 47-3 below.

Figure 47-3: Viewing the Flow Chart Properties

These options provide an easy way to set specific variables on your flow chart. The various properties are listed below.

Listener Classpath: The Listener Classpath feature is disabled by default. To enable it, add the configuration option LISTENER_CLASS_LOADER_ENABLED = true to your engine's configuration file. If the Listener Classpath is enabled but no value is set, the default classloader will be used instead. Flux can use the path specified in this field to dynamically load Java class files at runtime. This allows you to load Java class files without restarting the Flux engine.

This can be useful when your flow chart uses Java classes that are frequently updated, or when your flow chart requires class files that were created after the Flux engine was started.

Files and directories in the Listener Classpath are specified relative to the directory where the engine was started. For example, if the engine was run from the directory C:\flux and you wanted to load Java classes from the directory C:\flux\classes, you would simply enter classes for the Listener Classpath. If the classes directory was instead located under C:\classes, you would enter ..\classes for the Listener Classpath.

As an alternative to directories and JAR files, the Listener Classpath may be an HTTP URL that points to a JAR file on the network. To load a JAR file from a URL, simply type the URL of the JAR file into the Listener Classpath property.

NOTE: We strongly recommend that you do not set a Listener Classpath if you are using Flux within an application server environment. The application used in your application server (most likely packaged as a WAR file) should be structurally sound within itself, and therefore should not require any outside classes.

Paused: This option pauses the flow chart, and by extension pauses all execution flow contexts within the flow chart. When a flow chart is paused, no actions are executed and no triggers will fire.

Priority: This option sets the priority of the flow chart, overriding any priorities set in the runtime configuration. If this option is not set, the priority from the runtime configuration is used.

Run As User: Applicable for secure Flux engines only. This option sets the user whose security permissions will be used to execute the flow chart. If this field is set, only a user with Administrator privileges can submit the flow chart to an engine. If this field is not set, the flow chart will run as the user that is specified by the runtime configuration tree. If no such user is specified, the flow chart runs as the user who submitted the flow chart to the engine.

NOTE: When running a flow chart on a secure Flux engine, the flow chart could fail if the "Run As User" property is not specified, the user specified by the property does not exist, or the login for the user fails. If any of these problems occur, the flow chart will immediately enter the "Failed" sub-state, bypassing the default error handler. If you are able to solve the problem (for example, by creating the specified user account), you can then recover the flow chart. If the problem was solved successfully, the flow chart will then resume normal execution.

Adding Triggers and Actions

Now that the Designer is open with a blank workspace, we can begin creating a flow chart. The following sections will guide you through adding triggers and actions, setting up the flows between them, and executing the flow chart.

Begin by clicking on the "Trigger" tab to the left of the Designer workspace. Select and drag the "Timer Trigger" icon from the panel to the Designer workspace to add a new Timer Trigger to the flow chart.

This Timer Trigger will be used to control when (and how often) your flow chart is executed. Next, click on the "Action" tab, then the "Core" button. Drag a new "Process Action" to the workspace. Your workspace should now look something like Figure 47-4 below.

Figure 47-4: Laying Out the Flow Chart

Next, you must create a flow from the Timer Trigger to the Process Action. To create a new flow:

1. Click anywhere on the white workspace background to deselect all triggers and actions.
2. Click the Timer Trigger, then hold down the left mouse button and drag the cursor to the Process Action.
3. Release the left mouse button to create the new flow.

After adding the flow, your workspace should now look like Figure 47-5 below.

Figure 47-5: Adding the Flow

This flow (represented by an arrow on the workspace) directs the execution of the flow chart. After the Timer Trigger fires, flow will pass into the Process Action, which will then perform its work.

Now, draw a flow from the Process Action back into the Timer Trigger. This will create a looping flow chart, as the flow of the flow chart will now pass back into the Timer Trigger after the Process Action has executed.

Cleaning Up the Flow Chart

If your flow chart drawing skills are not perfect, don't worry: the Flux Designer provides a way to automatically reposition your flow chart in a pleasing, aesthetic fashion. To redraw your flow chart, select "Hierarchic Layout" from the Designer's "Layout" menu. In the window that opens, configure the options, then click the "OK" button. The Designer will automatically redesign your flow chart according to the options you specified.

Figure 47-6: Redesigned Flow Chart

Explore some of the other layout options until you find the one that best suits your individual style. Remember, the layout of a flow chart does not affect its performance, only its appearance in the Designer.

Setting Action Properties

Now that your flow chart has been created and redesigned, you will need to configure your Timer Trigger and Process Action.

Timer Trigger. Your Timer Trigger needs to be configured to fire on a set schedule. To keep things simple, we will define a time expression that fires at the top of every minute, except when that occurs at the top of the hour (for example, the Timer Trigger would fire at 7:59 and 8:01, but not 8:00). To define this top-of-the-minute schedule, select the Timer Trigger by clicking it on the Designer workspace. The "Action Properties" panel will appear to the right of the workspace.

Figure 47-7: The Timer Trigger's Action Properties

These properties allow you to configure your Timer Trigger, including, among other things, when and how often the trigger will fire. Find the "Time Expression" property in the Timer Trigger's "Action Properties" panel. Click the button labelled "..." to open the "Time Expression Editor".

In Flux, a schedule is referred to as a time expression. Flux time expressions are extremely powerful, succinct representations of schedules. You will find that Flux time expressions can easily express almost any schedule that you might require.

In the Time Expression Editor, click the "+" button to add a new time expression. The editor will open with the "Basic" tab selected. For this example, we will want to use the guided editor, so select the "Guided" tab from the top of the window. Give your time expression a name, then click the "Next >" button to continue.

Next, you will be given a choice of creating a Cron-style time expression or a Relative time expression. Make sure that the "Cron-style" option is selected, then click "Next >". On the next pane, select "Regular Cron", then click "Next >" again. On the next pane, you can begin creating the new time expression.

If they are not already, set the "Milliseconds" and "Seconds" fields to 0. It may help to imagine a clock that has not only "hour" and "minute" hands, but "second" and "millisecond" hands as well. The minute hand will move each time the second hand completes a revolution and returns to 0. The second hand itself will move each time the millisecond hand completes a revolution and returns to 0; therefore, the millisecond and second hands must both be on 0 each time the minute hand moves.

In a Cron-style time expression, you specify the values that the hands of the clock must be on to cause your Timer Trigger to fire. If we want the Timer Trigger to fire at the top of every minute (except at the top of the hour), we require the millisecond hand to be on 0, the second hand to be on 0, and the minute hand to be on some number between 1 and 59.

After you have marked the "Milliseconds" and "Seconds" fields as 0, click the checkbox next to the "Minutes" field to enable editing of the field. Click the field when it becomes editable; a new editor will appear, with detailed instructions on editing the minutes column. Click the "Column Editor >" button. In the column editor, select the "Run within a range" option. Notice the menus labelled "From" and "To"; select "1" in the "From" menu and "59" in the "To" menu, then click the "+" button. Finally, click the "Finish" button to save the expression to the "Minutes" field.

At the bottom of the Time Expression Editor, the field labelled "Time Expression Result:" should now show a cron-style expression with 0 in the milliseconds and seconds columns, the range 1-59 in the minutes column, and wildcards (*) in the remaining columns.

To verify that your time expression is correct, click the "Test" button. The "Test Time Expression" window will appear. Set your desired options, then click the "Test" button. A forecast of firing times will appear; verify that the time expression is working as expected, then click "OK". Once you have verified your time expression and your schedule, click the "Next >" button. If the time expression is not firing as expected, carefully review the instructions and make any appropriate changes.

You will arrive at the "Cron OR Constructs" pane. Verify that your time expression is correct, then click the "Finish" button. You will return to the "Basic" editor. Click "Cancel" to go back to the Time Expression Editor. Click the "OK" button - notice that the "Time Expression" property of the Timer Trigger has now changed.

The last property that you will need to set for the Timer Trigger is the "Count". This property specifies how many times the trigger can fire. Set the field to 3, which will restrict the Timer Trigger to 3 firings.

Process Action. Process Actions can be configured to run any native program on your system. To configure the Process Action, open its "Action Properties" panel by selecting the action on the Designer workspace. Edit the "Command" argument to read notepad.exe, or the name of any program that lies on your system path and can be executed from the command line.

Verifying the Flow Chart

The flow chart is now complete! The only step left is to execute it, but first we will use Flux's built-in verification tool to ensure that the flow chart is ready to run. Verifying the flow chart will save you time by notifying you of simple errors - for example, a crucial field left blank - before you attempt to run the flow chart. To verify the flow chart, select the Verify button from the toolbar above the Designer workspace, as in Figure 47-8 below.

Figure 47-8: Selecting the Verify Button

If the flow chart is fundamentally sound, a brief animation will appear and the Designer will play a sound indicating success. If, however, Flux discovers any errors, the "Flow Chart Verification" panel will appear, displaying a description of the error. Figure 47-9, below, demonstrates an example of the type of error you may encounter.

Figure 47-9: Errors Discovered During Verification

The verification sound and animation can both be disabled from the Designer's "Options" menu, displayed in Figure 47-10 below. The "Messages" panel cannot be disabled, and will automatically appear if the verification process discovers errors in your flow chart. To close the "Messages" panel, you can deselect it from the Designer's "View" menu, or you may simply click the "X" button on the top-right of the panel.

Figure 47-10: Disabling Verification Alerts

Once you have verified your flow chart, save your work by selecting "Save All" from the Designer's "File" menu. Once the flow chart is saved, follow the steps below to start an engine and use it to execute the flow chart.

Running the Flow Chart

The first step in executing a flow chart is to create the Flux engine that it will run on. To start a new Flux engine, run the start-unsecured-flux-engine script from the Flux installation directory.

Next, you will need to inform the Designer that you have created a new engine. To do so, hover the mouse cursor over the "Flux Engines" tab, located in the upper-left corner of the Designer just under the "File" menu. When the "Flux Engines" panel appears, click the "+" button to add your new engine, as in Figure 47-11 below.

Figure 47-11: Adding a New Engine


More information

NetBeans IDE Field Guide

NetBeans IDE Field Guide NetBeans IDE Field Guide Copyright 2005 Sun Microsystems, Inc. All rights reserved. Table of Contents Introduction to J2EE Development in NetBeans IDE...1 Configuring the IDE for J2EE Development...2 Getting

More information

WebSphere Training Outline

WebSphere Training Outline WEBSPHERE TRAINING WebSphere Training Outline WebSphere Platform Overview o WebSphere Product Categories o WebSphere Development, Presentation, Integration and Deployment Tools o WebSphere Application

More information

FileNet System Manager Dashboard Help

FileNet System Manager Dashboard Help FileNet System Manager Dashboard Help Release 3.5.0 June 2005 FileNet is a registered trademark of FileNet Corporation. All other products and brand names are trademarks or registered trademarks of their

More information

DiskPulse DISK CHANGE MONITOR

DiskPulse DISK CHANGE MONITOR DiskPulse DISK CHANGE MONITOR User Manual Version 7.9 Oct 2015 www.diskpulse.com info@flexense.com 1 1 DiskPulse Overview...3 2 DiskPulse Product Versions...5 3 Using Desktop Product Version...6 3.1 Product

More information

Desktop Activity Intelligence

Desktop Activity Intelligence Desktop Activity Intelligence Table of Contents Cicero Discovery Delivers Activity Intelligence... 1 Cicero Discovery Modules... 1 System Monitor... 2 Session Monitor... 3 Activity Monitor... 3 Business

More information

Oracle WebLogic Server

Oracle WebLogic Server Oracle WebLogic Server Creating WebLogic Domains Using the Configuration Wizard 10g Release 3 (10.3) November 2008 Oracle WebLogic Server Oracle Workshop for WebLogic Oracle WebLogic Portal Oracle WebLogic

More information

Web Server Configuration Guide

Web Server Configuration Guide Web Server Configuration Guide FOR WINDOWS & UNIX & LINUX DOCUMENT ID: ADC50000-01-0680-01 LAST REVISED: February 11, 2014 Copyright 2000-2014 by Appeon Corporation. All rights reserved. This publication

More information

CA Workload Automation Agent for Databases

CA Workload Automation Agent for Databases CA Workload Automation Agent for Databases Implementation Guide r11.3.4 This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as the

More information

KillTest. http://www.killtest.cn 半 年 免 费 更 新 服 务

KillTest. http://www.killtest.cn 半 年 免 费 更 新 服 务 KillTest 质 量 更 高 服 务 更 好 学 习 资 料 http://www.killtest.cn 半 年 免 费 更 新 服 务 Exam : 1Z0-599 Title : Oracle WebLogic Server 12c Essentials Version : Demo 1 / 10 1.You deploy more than one application to the

More information

Introduction to WebSphere Administration

Introduction to WebSphere Administration PH073-Williamson.book Page 1 Thursday, June 17, 2004 3:53 PM C H A P T E R 1 Introduction to WebSphere Administration T his book continues the series on WebSphere Application Server Version 5 by focusing

More information

echomountain Enterprise Monitoring, Notification & Reporting Services Protect your business

echomountain Enterprise Monitoring, Notification & Reporting Services Protect your business Protect your business Enterprise Monitoring, Notification & Reporting Services echomountain 1483 Patriot Blvd Glenview, IL 60026 877.311.1980 sales@echomountain.com echomountain Enterprise Monitoring,

More information

Sample copy. Introduction To WebLogic Server Property of Web 10.3 Age Solutions Inc.

Sample copy. Introduction To WebLogic Server Property of Web 10.3 Age Solutions Inc. Introduction To WebLogic Server Property of Web 10.3 Age Solutions Inc. Objectives At the end of this chapter, participants should be able to: Understand basic WebLogic Server architecture Understand the

More information

Learn Oracle WebLogic Server 12c Administration For Middleware Administrators

Learn Oracle WebLogic Server 12c Administration For Middleware Administrators Wednesday, November 18,2015 1:15-2:10 pm VT425 Learn Oracle WebLogic Server 12c Administration For Middleware Administrators Raastech, Inc. 2201 Cooperative Way, Suite 600 Herndon, VA 20171 +1-703-884-2223

More information

Oracle WebLogic Server 11g: Monitor and Tune Performance

Oracle WebLogic Server 11g: Monitor and Tune Performance D61529GC10 Edition 1.0 March 2010 D66055 Oracle WebLogic Server 11g: Monitor and Tune Performance Student Guide Author Shankar Raman Technical Contributors and Reviewer s Werner Bauer Nicole Haba Bala

More information

EVALUATION ONLY. WA2088 WebSphere Application Server 8.5 Administration on Windows. Student Labs. Web Age Solutions Inc.

EVALUATION ONLY. WA2088 WebSphere Application Server 8.5 Administration on Windows. Student Labs. Web Age Solutions Inc. WA2088 WebSphere Application Server 8.5 Administration on Windows Student Labs Web Age Solutions Inc. Copyright 2013 Web Age Solutions Inc. 1 Table of Contents Directory Paths Used in Labs...3 Lab Notes...4

More information

IBM Business Monitor Version 7.5.0. IBM Business Monitor Installation Guide

IBM Business Monitor Version 7.5.0. IBM Business Monitor Installation Guide IBM Business Monitor Version 7.5.0 IBM Business Monitor Installation Guide ii Installing Contents Chapter 1. Installing IBM Business Monitor............... 1 Chapter 2. Planning to install IBM Business

More information

WebLogic Server: Installation and Configuration

WebLogic Server: Installation and Configuration WebLogic Server: Installation and Configuration Agenda Application server / Weblogic topology Download and Installation Configuration files. Demo Administration Tools: Configuration

More information

ITG Software Engineering

ITG Software Engineering IBM WebSphere Administration 8.5 Course ID: Page 1 Last Updated 12/15/2014 WebSphere Administration 8.5 Course Overview: This 5 Day course will cover the administration and configuration of WebSphere 8.5.

More information

SOSFTP Managed File Transfer

SOSFTP Managed File Transfer Open Source File Transfer SOSFTP Managed File Transfer http://sosftp.sourceforge.net Table of Contents n Introduction to Managed File Transfer n Gaps n Solutions n Architecture and Components n SOSFTP

More information

TIBCO iprocess Web Services Server Plug-in Installation. Software Release 11.3.0 October 2011

TIBCO iprocess Web Services Server Plug-in Installation. Software Release 11.3.0 October 2011 TIBCO iprocess Web Services Server Plug-in Installation Software Release 11.3.0 October 2011 Important Information SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO SOFTWARE. USE OF SUCH EMBEDDED OR BUNDLED

More information

HP Business Service Management

HP Business Service Management HP Business Service Management for the Windows and Linux operating systems Software Version: 9.10 Business Process Insight Server Administration Guide Document Release Date: August 2011 Software Release

More information

By Wick Gankanda Updated: August 8, 2012

By Wick Gankanda Updated: August 8, 2012 DATA SOURCE AND RESOURCE REFERENCE SETTINGS IN WEBSPHERE 7.0, RATIONAL APPLICATION DEVELOPER FOR WEBSPHERE VER 8 WITH JAVA 6 AND MICROSOFT SQL SERVER 2008 By Wick Gankanda Updated: August 8, 2012 Table

More information

Simba XMLA Provider for Oracle OLAP 2.0. Linux Administration Guide. Simba Technologies Inc. April 23, 2013

Simba XMLA Provider for Oracle OLAP 2.0. Linux Administration Guide. Simba Technologies Inc. April 23, 2013 Simba XMLA Provider for Oracle OLAP 2.0 April 23, 2013 Simba Technologies Inc. Copyright 2013 Simba Technologies Inc. All Rights Reserved. Information in this document is subject to change without notice.

More information

Content Server Installation Guide

Content Server Installation Guide Content Server Installation Guide Version 5.3 SP3 July 2006 Copyright 1994-2006 EMC Corporation. All rights reserved. Table of Contents Preface... 11 Chapter 1 Server Installation Quick Start... 13 Installing

More information

FileMaker Server 10 Help

FileMaker Server 10 Help FileMaker Server 10 Help 2007-2009 FileMaker, Inc. All Rights Reserved. FileMaker, Inc. 5201 Patrick Henry Drive Santa Clara, California 95054 FileMaker, the file folder logo, Bento and the Bento logo

More information

Table of Contents. 2015 Cicero, Inc. All rights protected and reserved.

Table of Contents. 2015 Cicero, Inc. All rights protected and reserved. Desktop Analytics Table of Contents Contact Center and Back Office Activity Intelligence... 3 Cicero Discovery Sensors... 3 Business Data Sensor... 5 Business Process Sensor... 5 System Sensor... 6 Session

More information

New Features... 1 Installation... 3 Upgrade Changes... 3 Fixed Limitations... 4 Known Limitations... 5 Informatica Global Customer Support...

New Features... 1 Installation... 3 Upgrade Changes... 3 Fixed Limitations... 4 Known Limitations... 5 Informatica Global Customer Support... Informatica Corporation B2B Data Exchange Version 9.5.0 Release Notes June 2012 Copyright (c) 2006-2012 Informatica Corporation. All rights reserved. Contents New Features... 1 Installation... 3 Upgrade

More information

FileMaker Server 11. FileMaker Server Help

FileMaker Server 11. FileMaker Server Help FileMaker Server 11 FileMaker Server Help 2010 FileMaker, Inc. All Rights Reserved. FileMaker, Inc. 5201 Patrick Henry Drive Santa Clara, California 95054 FileMaker is a trademark of FileMaker, Inc. registered

More information

ArcGIS 9. Installing ArcIMS 9 on Red Hat Linux

ArcGIS 9. Installing ArcIMS 9 on Red Hat Linux ArcGIS 9 Installing ArcIMS 9 on Red Hat Linux Table Of Contents Introduction...1 Introduction...1 Overview...2 What s included with ArcIMS 9.0?...2 ArcIMS components...2 Five steps to get ArcIMS up and

More information

IBM Campaign and IBM Silverpop Engage Version 1 Release 2 August 31, 2015. Integration Guide IBM

IBM Campaign and IBM Silverpop Engage Version 1 Release 2 August 31, 2015. Integration Guide IBM IBM Campaign and IBM Silverpop Engage Version 1 Release 2 August 31, 2015 Integration Guide IBM Note Before using this information and the product it supports, read the information in Notices on page 93.

More information

Introduction to Sun ONE Application Server 7

Introduction to Sun ONE Application Server 7 Introduction to Sun ONE Application Server 7 The Sun ONE Application Server 7 provides a high-performance J2EE platform suitable for broad deployment of application services and web services. It offers

More information

Novell Access Manager

Novell Access Manager J2EE Agent Guide AUTHORIZED DOCUMENTATION Novell Access Manager 3.1 SP3 February 02, 2011 www.novell.com Novell Access Manager 3.1 SP3 J2EE Agent Guide Legal Notices Novell, Inc., makes no representations

More information

ActiveVOS Clustering with JBoss

ActiveVOS Clustering with JBoss Clustering with JBoss Technical Note Version 1.2 29 December 2011 2011 Active Endpoints Inc. is a trademark of Active Endpoints, Inc. All other company and product names are the property of their respective

More information

USING JE THE BE NNIFE FITS Integrated Performance Monitoring Service Availability Fast Problem Troubleshooting Improved Customer Satisfaction

USING JE THE BE NNIFE FITS Integrated Performance Monitoring Service Availability Fast Problem Troubleshooting Improved Customer Satisfaction THE BENEFITS OF USING JENNIFER Integrated Performance Monitoring JENNIFER provides comprehensive and integrated performance monitoring through its many dashboard views, which include Realuser Monitoring

More information

Code:1Z0-599. Titre: Oracle WebLogic. Version: Demo. Server 12c Essentials. http://www.it-exams.fr/

Code:1Z0-599. Titre: Oracle WebLogic. Version: Demo. Server 12c Essentials. http://www.it-exams.fr/ Code:1Z0-599 Titre: Oracle WebLogic Server 12c Essentials Version: Demo http://www.it-exams.fr/ QUESTION NO: 1 You deploy more than one application to the same WebLogic container. The security is set on

More information

FileMaker Server 12. FileMaker Server Help

FileMaker Server 12. FileMaker Server Help FileMaker Server 12 FileMaker Server Help 2010-2012 FileMaker, Inc. All Rights Reserved. FileMaker, Inc. 5201 Patrick Henry Drive Santa Clara, California 95054 FileMaker is a trademark of FileMaker, Inc.

More information

Elixir Schedule Designer User Manual

Elixir Schedule Designer User Manual Elixir Schedule Designer User Manual Release 7.3 Elixir Technology Pte Ltd Elixir Schedule Designer User Manual: Release 7.3 Elixir Technology Pte Ltd Published 2008 Copyright 2008 Elixir Technology Pte

More information

Xpert.ivy 4.2. Server Guide

Xpert.ivy 4.2. Server Guide Xpert.ivy 4.2 Server Guide Xpert.ivy 4.2: Server Guide Copyright 2008-2011 ivyteam AG Table of Contents 1. Preface... 1 Audience... 1 2. Introduction... 2 Overview... 2 Installation Environment... 2 Server

More information

KINETIC SR (Survey and Request)

KINETIC SR (Survey and Request) KINETIC SR (Survey and Request) Installation and Configuration Guide Version 5.0 Revised October 14, 2010 Kinetic SR Installation and Configuration Guide 2007-2010, Kinetic Data, Inc. Kinetic Data, Inc,

More information

Building Web Applications, Servlets, JSP and JDBC

Building Web Applications, Servlets, JSP and JDBC Building Web Applications, Servlets, JSP and JDBC Overview Java 2 Enterprise Edition (JEE) is a powerful platform for building web applications. The JEE platform offers all the advantages of developing

More information

ODEX Enterprise. Introduction to ODEX Enterprise 3 for users of ODEX Enterprise 2

ODEX Enterprise. Introduction to ODEX Enterprise 3 for users of ODEX Enterprise 2 ODEX Enterprise Introduction to ODEX Enterprise 3 for users of ODEX Enterprise 2 Copyright Data Interchange Plc Peterborough, England, 2013. All rights reserved. No part of this document may be disclosed

More information

Oracle WebLogic Server

Oracle WebLogic Server Oracle WebLogic Server Deploying Applications to WebLogic Server 10g Release 3 (10.3) July 2008 Oracle WebLogic Server Deploying Applications to WebLogic Server, 10g Release 3 (10.3) Copyright 2007, 2008,

More information

IBM Campaign Version-independent Integration with IBM Engage Version 1 Release 3 April 8, 2016. Integration Guide IBM

IBM Campaign Version-independent Integration with IBM Engage Version 1 Release 3 April 8, 2016. Integration Guide IBM IBM Campaign Version-independent Integration with IBM Engage Version 1 Release 3 April 8, 2016 Integration Guide IBM Note Before using this information and the product it supports, read the information

More information

Server Manager Guide Release 9.1.x

Server Manager Guide Release 9.1.x [1]JD Edwards EnterpriseOne Tools Server Manager Guide Release 9.1.x E18837-15 June 2015 Describes how to administer, deploy, configure, and manage JD Edwards EnterpriseOne. JD Edwards EnterpriseOne Tools

More information

New Features Guide. Appeon 6.1 for PowerBuilder

New Features Guide. Appeon 6.1 for PowerBuilder New Features Guide Appeon 6.1 for PowerBuilder DOCUMENT ID: DC20033-01-0610-01 LAST REVISED: October 2008 Copyright 2008 by Appeon Corporation. All rights reserved. This publication pertains to Appeon

More information

Version 14.0. Overview. Business value

Version 14.0. Overview. Business value PRODUCT SHEET CA Datacom Server CA Datacom Server Version 14.0 CA Datacom Server provides web applications and other distributed applications with open access to CA Datacom /DB Version 14.0 data by providing

More information

Oracle WebLogic Server

Oracle WebLogic Server Oracle WebLogic Server Configuring and Using the WebLogic Diagnostics Framework 10g Release 3 (10.3) July 2008 Oracle WebLogic Server Configuring and Using the WebLogic Diagnostics Framework, 10g Release

More information

TIBCO MDM Installation and Configuration Guide

TIBCO MDM Installation and Configuration Guide TIBCO MDM Installation and Configuration Guide Software Release 8.3 March 2013 Document Updated: April 2013 Two-Second Advantage Important Information SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO

More information

TIBCO ActiveMatrix BusinessWorks Process Monitor Server. Installation

TIBCO ActiveMatrix BusinessWorks Process Monitor Server. Installation TIBCO ActiveMatrix BusinessWorks Process Monitor Server Installation Software Release 2.1.2 Published: May 2013 Important Information SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO SOFTWARE. USE OF

More information

Winning the J2EE Performance Game Presented to: JAVA User Group-Minnesota

Winning the J2EE Performance Game Presented to: JAVA User Group-Minnesota Winning the J2EE Performance Game Presented to: JAVA User Group-Minnesota Michelle Pregler Ball Emerging Markets Account Executive Shahrukh Niazi Sr.System Consultant Java Solutions Quest Background Agenda

More information

IBM Rational Web Developer for WebSphere Software Version 6.0

IBM Rational Web Developer for WebSphere Software Version 6.0 Rapidly build, test and deploy Web, Web services and Java applications with an IDE that is easy to learn and use IBM Rational Web Developer for WebSphere Software Version 6.0 Highlights Accelerate Web,

More information

SAP Business Objects Business Intelligence platform Document Version: 4.1 Support Package 7 2015-11-24. Data Federation Administration Tool Guide

SAP Business Objects Business Intelligence platform Document Version: 4.1 Support Package 7 2015-11-24. Data Federation Administration Tool Guide SAP Business Objects Business Intelligence platform Document Version: 4.1 Support Package 7 2015-11-24 Data Federation Administration Tool Guide Content 1 What's new in the.... 5 2 Introduction to administration

More information

JobScheduler. Architecture and Mode of Operation. Software for Open Source

JobScheduler. Architecture and Mode of Operation. Software for Open Source JobScheduler Architecture and Mode of Operation JobScheduler worldwide Software- und Organisations-Service GmbH www.sos-berlin.com Contents Components Supported Platforms & Databases Architecture Job Configuration

More information