A Review of Distributed Workflow Management Systems F. Ranno and S. K. Shrivastava Department of Computing Science, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK.

Abstract: An increasing number of distributed applications are being constructed by composing them out of existing applications. The resulting applications can be very complex in structure, containing many temporal and dataflow dependencies between their constituent applications. An additional complication is that the execution of such an application may take a long time to complete, and may contain long periods of inactivity, often due to the constituent applications requiring user interactions. In a distributed environment, it is inevitable that long running applications will require support for fault-tolerance and dynamic reconfiguration: machines may fail, services may be moved or withdrawn and application requirements may change. In such an environment it is essential that the structure of applications can be modified to reflect these changes. The C3DS project has chosen transactional workflow management system technology to provide an execution environment where sets of inter-related tasks can be carried out and supervised in a dependable manner. Transactional workflow systems can be built on top of available middleware technologies. This report presents a brief survey of the state of the art in distributed workflow management.

1. Introduction

Workflows are rule-based management software that direct, coordinate and monitor the execution of multiple tasks arranged to form complex organisational functions [1,2,3,4]. Although originally intended for automating business procedures, there is no reason why a suitably designed workflow management system cannot be used for managing and overseeing the execution of any distributed application. However, currently available workflow systems are not scalable, as their structure tends to be monolithic.
Further, they offer little support for building fault-tolerant applications, nor can they inter-operate, as they make use of proprietary platforms and protocols. The workflow system under development within the C3DS project represents a significant departure from these; our system architecture is decentralized and open: it is being designed and implemented as a set of CORBA services to run on top of a given ORB.
C3DS Deliverable A3.1

2. Workflow Standards

We introduce the basic concepts and terminology first. Tasks (activities) are application specific units of work. A workflow schema (workflow script) is used to represent explicitly the structure of an application in terms of tasks and the temporal dependencies between tasks. An application is executed by instantiating the corresponding workflow schema.

Fig. 1: Inter-task dependencies

Imagine an electronic travel booking workflow application. Fig. 1 shows its activity diagram depicting the temporal dependencies between its four constituent applications (or tasks): TravelPlan, CreditCheck, Flights and Tickets. Tasks CreditCheck and Flights execute concurrently, but can only be started after the TravelPlan task has terminated and supplied the necessary data, so these two tasks have dataflow dependencies on the TravelPlan task. Task Tickets can only be started after the Flights task has terminated and supplied the necessary data and task CreditCheck has terminated in an ok state. In this case, task Tickets has a dataflow dependency on Flights, and a restricted form of dataflow dependency (called a notification dependency) on CreditCheck. There are several organisations involved in the above application (customer organisation, travel agency, credit card agency, etc.). Each organisation may well possess its own workflow system for carrying out its activities. A specific way of executing this application could be: the travel agency holds the application description (workflow script) and is responsible for coordinating the overall execution; it itself executes tasks TravelPlan and Tickets, while its workflow system invokes the CreditCheck and Flights tasks at the other organisations. Clearly, there is a need for a standard way of representing application structure and sending and receiving work items if organisations are to cooperate. Standardisation efforts are therefore underway.

2.1.
WfMC: Workflow Management Coalition Model

The Workflow Management Coalition (WfMC), an industry-wide consortium of workflow system vendors, has proposed a reference model (see fig. 2) that defines interfaces with the aim of enabling different workflow systems to inter-operate [3].
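Returning briefly to the travel booking example of fig. 1, the dependency semantics described there can be made concrete with a minimal sketch (in Python; the scheduler and its names are ours, purely illustrative): a task becomes runnable only once every dataflow dependency has delivered data and every notification dependency has reported the required termination state.

```python
# Minimal dependency-driven scheduler for the travel booking example (fig. 1).
# Dataflow dependencies supply data; a notification dependency only reports a
# termination state (here, CreditCheck must have terminated in the "ok" state).

class Task:
    def __init__(self, name, dataflow_deps=(), notification_deps=()):
        self.name = name
        self.dataflow_deps = list(dataflow_deps)          # tasks whose output data we need
        self.notification_deps = list(notification_deps)  # (task, required_state) pairs
        self.state = None   # set to "ok" once terminated
        self.output = None

    def ready(self):
        return (all(t.output is not None for t in self.dataflow_deps) and
                all(t.state == s for t, s in self.notification_deps))

    def run(self):
        assert self.ready(), self.name + " started before its dependencies"
        inputs = [t.output for t in self.dataflow_deps]
        self.output = (self.name, inputs)   # stand-in for real application work
        self.state = "ok"

travel_plan  = Task("TravelPlan")
credit_check = Task("CreditCheck", dataflow_deps=[travel_plan])
flights      = Task("Flights",     dataflow_deps=[travel_plan])
tickets      = Task("Tickets",     dataflow_deps=[flights],
                    notification_deps=[(credit_check, "ok")])

# Naive scheduler: repeatedly start any task whose dependencies are satisfied.
tasks = [tickets, flights, credit_check, travel_plan]
done = []
while len(done) < len(tasks):
    for t in tasks:
        if t.state is None and t.ready():
            t.run()
            done.append(t.name)

print(done)  # TravelPlan first, Tickets last
```

Whatever order the scheduler visits the tasks in, the dependencies force TravelPlan to run first and Tickets to run last, which is exactly the behaviour fig. 1 prescribes.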
Fig. 2: WfMC Reference Model

Created in 1993, and consisting of more than 200 members, the Coalition has proposed a framework for the establishment of workflow standards. This framework includes five interfaces for interoperability and standardisation of communication. The aim is to have a common set of interfaces that will allow multiple workflow products to coexist and interoperate within a user's environment. The reader will find further technical details in [4]. Several levels of compatibility with the framework have been defined, giving some flexibility to workflow product manufacturers. For instance, for the read/write interface of workflow process definitions (API1, between the workflow engine and the process definition program), the interface specified is in fact a set of interfaces, and the number of interfaces supported determines the level of compatibility with the overall specification. All workflow systems contain a number of generic components which interact in a variety of ways. To achieve interoperability between workflow products a standardised set of interfaces and data interchange formats is necessary. These interfaces can be used as references when building interoperability scenarios. For instance, processes expected to be shared by several users from potentially different organisations using different workflow engines can be specified using a tool of one workflow system and afterwards exported to the other users, regardless of the workflow system that they are using.
Also, a workflow client application should be able to receive tasks generated by other workflow systems, provided that they follow the standards. The model identifies the major components and interfaces listed below:

Process Definition Tools Interface (1) - Defines a standard interface between the process definition tool and the workflow engine(s).

Workflow Client Application Interface (2) - Defines the standard for the workflow engine to maintain work items which the workflow client presents to the user.
Invoked Application Interface (3) - A standard interface to allow the workflow engine to invoke a variety of applications. This interface has still to be specified.

Workflow Interoperability Interface (4) - Definition of a variety of interoperability models and the standards applicable to each.

Administration & Monitoring Tools Interface (5) - Definition of monitoring and control functions.

We consider the major components in turn in the following sections.

Core component - Workflow Enactment Service

The workflow enactment service provides the run-time environment in which one or more workflow processes are executed. This may involve more than one actual workflow engine. The enactment service is distinct from the application and end-user tools which are used to process items of work. A wide range of industry standard or application specific tools can therefore be integrated with the workflow enactment service to provide a complete workflow management system. This integration takes two forms: the invoked application interface, which enables the workflow engine to directly activate a specific application to undertake a particular activity (this would typically be server based and require no user action, for example to invoke an Email application or pass data to a mainframe system); and the workflow client application interface, through which the workflow engine interacts with a separate workflow client application responsible for organising work on behalf of a particular user.

API1 - Process Definition Tools

A variety of tools may be used to analyse, model, and describe a business process. The workflow model is not concerned with the particular nature of such tools, and currently each is in a form specialised for the particular workflow management software for which it was designed. One of the interfaces proposed by the Coalition enables more flexibility in this area.
This interface is termed the process definition import/export interface, which would provide a common interchange format for the following types of information:

- Process start and termination conditions.
- Identification of activities within the process, including associated applications and workflow relevant data.
- Identification of data types and access paths.
- Definition of transition conditions and flow rules.
- Information for resource allocation decisions.

API2 - Workflow Client Applications

The workflow client application is the software entity which presents the end user with his or her work items, and may invoke application tools which present the task and its related data to the user, allowing the user to take actions before passing the case back to the workflow enactment service. The workflow client application may be supplied as part of a workflow management system, may be a third party product (such as an Email product), or may be written specially for a given application. There is thus the need for a flexible means of communication between a workflow enactment service and the workflow client application, providing a series of functions for connecting to the service and for obtaining and processing items of work.

API3 - Invoked Applications (not yet fully specified)

There is a requirement for workflow systems to deal with a range of invoked applications; for example, to invoke an Email service, a fax service, document management services or existing user applications. The Coalition sees value in the development of standards for the invocation of such applications by building "tool agents" which will provide the interface to invoke applications. In addition, it may be possible to develop a set of APIs which will allow other developers to build "workflow enabled" applications which can be invoked directly from the workflow engine. The specification of this API is planned to be merged with API2 soon.

API4 - Workflow Interoperability

A key objective of the Coalition is to define standards that will allow workflow systems produced by different vendors to pass work items between one another.
Workflow products are diverse in nature, ranging from those used for ad-hoc routing of tasks or data to those aimed at highly regularised production processes, each product having its own particular strengths. In its drive for interoperability standards the Coalition is determined not to force workflow product vendors to choose between providing a strong product focused on the needs of its customers and giving up those strengths just to provide interoperability. Interoperability can work at a number of levels, from simple task passing through to workflow management systems with complete interchange of process definitions, workflow relevant data and a common look and feel. The greatest level of integration is unlikely to be available generally, as it relies on a commonality of approach by a wide variety of developers deep in their products. The following interoperability approaches have been identified and are being investigated:

Level 1 - Coexistence: The ability for a number of workflow systems to reside on the same hardware and software base.

Level 2 - Unique Gateways: Developed to allow specific workflow systems to move work between themselves.
Level 2A - Common Gateway API: An enhancement of Unique Gateways.

Level 3 - Limited Common API: A subset of workflow product functionality is reduced to an open API; for example: connect, request task, and completion of task function calls.

Level 4 - Complete Workflow API: All aspects of workflow system behaviour are embodied via an open API.

Level 5 - Shared Definition Format: Each workflow product can use the same process definitions at run time.

Level 6 - Protocol Compatibility: All APIs, including transmission of definitions, work items, and recovery, are standard.

Level 7 - Common Look and Feel: Workflow product components' appearance and method of operation are very similar.

API5 - Administration & Monitoring Tools

A common interface standard which will allow one vendor's status monitoring application to work with one or more vendors' workflow enactment service engines. Firstly, it will allow a complete view of the status of work flowing through the organisation regardless of which system it is in, and secondly it will allow the customer to choose the best monitoring tool for their purposes.

The main problem of the WfMC reference model is that it is quite monolithic, and not suitable for wide-area distribution. Indeed, the workflow service is responsible for process execution, auditing, management of the organisational directory and distribution of activities to the appropriate participants. It also manages and hosts the worklists of the participants, which is neither scalable nor flexible. Currently, the Object Management Group (OMG), the consortium of IT vendors and users, is evaluating new proposals for a workflow facility standard.

2.2. CORBA

OMG's Common Object Request Broker Architecture (CORBA) specification provides an industry standard for building applications from distributed objects [5]; its main features are:

Object Request Broker (ORB), which enables objects to invoke operations on objects in a distributed, heterogeneous environment.
This component is the core of the OMG reference model.

Internet Inter-ORB Protocol (IIOP), which has been specified to enable ORBs from different vendors to communicate with each other over the Internet.

Common Object Services, a collection of middleware services that support functions for using and implementing objects. Such services are considered to be necessary for the construction of any distributed application. These include transactions (the Object Transaction Service), concurrency control, persistence, and many more.
Application frameworks that use these services to provide frameworks for specific application domains. A workflow facility would form part of such a framework.

The system being developed within the C3DS project (see the next section) serves as an example of the use of middleware services to construct a workflow system that provides a fault-tolerant application composition and execution environment for long running distributed applications.

3. Distributed Workflow Systems

This section describes a few representative (research) distributed workflow systems currently under development and compares them to the C3DS workflow system.

3.1 Exotica or FlowMark on Message Queue Manager

Exotica [6] is a distributed workflow system based on another IBM product, FlowMark [7]. FlowMark uses a layered client/server architecture. Clients are linked to a FlowMark server which is itself a client of a centralised database (ObjectStore) where both build and run-time information are kept. At build-time, the client interacts directly with the database and the FlowMark server remains passive. There is no facility for dynamic modification, as changes to a schema (specification) do not affect the instances already started. Each activity has a start condition (a Boolean expression) to indicate when the activity can start, and an exit condition determines when the activity has successfully completed. Activities are connected by control connectors and data connectors. The start condition is evaluated once all control connectors have their origin activity terminated, and can as a result evaluate to true or false. The data connectors link input and output data containers (one of each per activity). FlowMark allows forward recovery, and plans have been made to support backward recovery using Spheres of Joint Compensation in the future. Clients are not persistent and there is no provision for crash recovery at the client level.
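The control-connector semantics just described can be sketched in a few lines (a simplified, hypothetical model in Python; the activity names and the particular start condition are ours, and FlowMark's real condition language is richer): the start condition may only be evaluated once every incoming connector's origin activity has terminated, and may then come out true or false.

```python
# Simplified model of FlowMark-style activities: an activity's start condition
# (here: at least one incoming control connector carries the value true) is
# only evaluated once every incoming connector's origin activity has terminated.

class Activity:
    def __init__(self, name):
        self.name = name
        self.incoming = []        # control connectors: (origin_activity, value)
        self.terminated = False

    def add_connector(self, origin, value):
        self.incoming.append((origin, value))

    def can_evaluate(self):
        # the start condition may only be evaluated after all origins terminate
        return all(origin.terminated for origin, _ in self.incoming)

    def start_condition_true(self):
        return any(value for _, value in self.incoming)

check = Activity("CheckOrder")
reject = Activity("Reject")
ship = Activity("Ship")
ship.add_connector(check, True)    # this connector's transition condition held
ship.add_connector(reject, False)

assert not ship.can_evaluate()     # origins have not yet terminated
check.terminated = reject.terminated = True
assert ship.can_evaluate() and ship.start_condition_true()
```

Note that a start condition evaluating to false is a legitimate outcome: the activity is then simply never started, which is why FlowMark distinguishes condition evaluation from activity activation.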
IBM has also defined an API standard for message passing called the Message Queue Interface (MQI), based on the MQSeries product. Distribution in Exotica is carried out using message-oriented middleware based on MQSeries. The messages exchanged are persistent, which eliminates the need for the centralised database. This allows a set of autonomous nodes to cooperate to carry out the execution of a process. Exotica supports the mapping of flexible transactions onto FlowMark process schemas, with the restriction that it cannot make changes to the resource managers. In practice that excludes interesting models such as nested transactions or split transactions.

3.2 RainMan

This IBM project aims at supporting decentralised workflow execution, as well as interoperability and dynamic modification. RainMan is a distributed workflow system for the Internet implemented in Java [8]. It is based on the RainMaker generic framework, which defines a core of abstract interfaces for workflow components. RainMaker has four main abstractions: workflow instances (sources, service requesters), activities (services requested), performers (humans or applications that are in charge of executing the activities), and finally
tasks, which are the units of work managed by the performers and implement the activities. Tasks are sent to performers independently of their implementation, for interoperability reasons. The RainMan system itself is a collection of light-weight services implemented using Java RMI (Remote Method Invocation). The services implemented are a builder tool, a directory service, a repository service, a worklist service, a worklist client and an administrator tool. The builder tool allows users to specify a workflow as a directed, acyclic graph, and then to monitor it. Performers are assigned to the specified activities by querying the RainMan directory service. These specifications are stored in and retrieved from a repository service. The builder is also a graph interpreter that generates the sources. As a result, the specification (the graph) can be modified at run-time, allowing dynamic reconfiguration. A specification language based on directed acyclic graphs is also provided. The worklist client provides easy access to a human's worklist (implemented as a persistent FIFO queue); the client can connect to the (distributed) worklist service to view a worklist and select tasks to carry out locally while disconnected. The client then simply reconnects to return the activity results to the worklist. The directory service is both a naming service and a trading service and contains information on the different performers. Application level fault tolerance is supported with forward recovery using compensation activities. Performers are expected to provide support for compensation for the activities that they handle. No system level fault tolerance is provided: the builder represents a central point of co-ordination of the workflow, but is also a single point of failure.

3.3 ORBWork

ORBWork is a CORBA-based enactment system for the METEOR2 Workflow Management System [9].
It is fully distributed and intended to support scalability and multi-database access. It provides some fault tolerance in the form of an error detection and recovery framework using transactional concepts. METEOR2 consists of a designer and two workflow enactment systems, ORBWork (CORBA-based) and WEBWork (Web-based). The designer is a GUI for specifying the workflow, the data objects manipulated, as well as the component tasks. It assumes nothing about the runtime. The specified design is stored in the Workflow Intermediate Language (WIL) for subsequent code generation. The specification is kept in the workflow model repository. The designer has two different modes, the process modeler and the workflow builder, the latter allowing the user to refine the specification created with the former using knowledge of the design of the run-time system. There are three components: the map designer, the data designer and the task designer, which respectively express the dependencies between tasks, the data objects manipulated and their flow, and the details of the individual tasks. At run-time, a code generator associated with the enactment system is used to create the workflow application, including the task managers, their scheduling components, and some recovery mechanisms. The run-time system consists of the various task managers and associated tasks, the user interfaces, the distributed recovery mechanism and scheduler, as well as the monitoring components. The task managers comprise the controller and the scheduler, while the tasks are just the executables. Different task models have been provided for the tasks (transactional, non-transactional, etc.), each of them having an associated type of task controller
supporting different features (e.g., recovery) and specified via an IDL interface. The task managers are automatically generated by the code generator from the WIL specification. When the preconditions (specified as an AND-OR tree) associated with the task a manager controls are fulfilled and all the input data are available, the task is started. Once it has completed, a post-activation part is used to decide what to do and which (if any) successors to activate. The task managers are responsible for the consistency of the data that they use. They save the state of the data objects used by calling a save method or by using the persistent object services of CORBA. The recovery system is being designed around a hierarchical error model and includes mechanisms for persistence, monitoring and recovery. Errors have been categorised into three main types: task errors, task manager errors and workflow errors. For task errors, ORBWork allows users to define errors and specify their handlers. If no handler is provided, the error results in an erroneous condition in the task manager. If the error cannot be treated (by retrying the task, for instance, or by running an alternative task), it becomes a workflow error. Another type of workflow error is a failure to enforce the inter-task dependencies, which can be due to communication failure. The system tries to deal with the error by, for instance, moving or replicating a faulty task manager onto another node. If the error cannot be resolved it is reported to a human via a workflow monitor. Local Recovery Managers, polling the critical CORBA components on their node, are used to detect potential errors, while a Global Recovery Manager is used to check the Local Recovery Managers. The components to be monitored register when they need to be monitored and deregister when they no longer need this service.
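A minimal sketch of this polling-based detection follows (in Python; the class names, the `alive` liveness check and the factory shape are our assumptions, since ORBWork's actual interfaces are specified in IDL): registered components are polled, and a component that fails the check is replaced through its associated factory.

```python
# Sketch of a local recovery manager: registered components are polled,
# and a failed component is restarted through its associated factory.
# All names here are illustrative, not ORBWork's real interfaces.

class Factory:
    def __init__(self, make):
        self.make = make            # zero-argument constructor for the component

class LocalRecoveryManager:
    def __init__(self):
        self.monitored = {}         # component -> factory

    def register(self, component, factory):
        self.monitored[component] = factory

    def deregister(self, component):
        self.monitored.pop(component, None)

    def poll(self):
        restarted = []
        for component in list(self.monitored):
            if not component.alive():            # assumed liveness check (ping)
                factory = self.monitored.pop(component)
                fresh = factory.make()           # restart via the factory
                self.monitored[fresh] = factory  # keep monitoring the replacement
                restarted.append(fresh)
        return restarted

class TaskManager:
    def __init__(self):
        self.up = True
    def alive(self):
        return self.up

mgr = LocalRecoveryManager()
factory = Factory(TaskManager)
tm = TaskManager()
mgr.register(tm, factory)

tm.up = False                       # simulate a crash
replacements = mgr.poll()
assert len(replacements) == 1 and replacements[0].alive()
```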
On detection of a failed component, the component is restarted using the factory associated with the recovery manager.

3.4. Newcastle-Nortel Workflow System

We now describe the main features of the workflow system that will be used in the C3DS project. It has been designed to meet the requirements of interoperability, scalability, flexible task composition, dependability and dynamic reconfiguration [10]. Our system architecture is one of the candidates selected by the OMG for the development of the standard [11].

Interoperability: The system has been structured as a set of CORBA services to run on top of a CORBA-compliant ORB, thereby supporting interoperability, including the incorporation of existing applications and services.

Scalability: There is no reliance on any centralized service that could limit the scalability of workflow applications.

Flexible Task Composition: The system provides a uniform way of composing a complex task out of transactional and non-transactional tasks. This is possible because the system supports a simple yet powerful task model permitting a task to perform application specific input selection (e.g., obtain a given input from one of several sources) and terminate in one of several outcomes, producing distinct outputs.

Dependability: The system has been structured to provide dependability at the application level and the system level. Support for application level dependability has been provided through the flexible task composition mentioned above, which enables an application builder to incorporate alternative tasks, compensating tasks, replacement tasks etc., within an
application to deal with a variety of exceptional situations. The system provides support for system level dependability by recording inter-task dependencies in transactional shared objects and by using transactions to implement the delivery of task outputs, such that destination tasks receive their inputs despite a finite number of intervening machine crashes and temporary network related failures.

Dynamic Reconfiguration: The task model referred to earlier is expressive enough to represent temporal (dataflow and notification) dependencies between constituent tasks. Our application execution environment is reflective, as it maintains this structure and makes it available through transactional operations for performing changes to it (such as the addition and removal of tasks as well as the addition and removal of dependencies between tasks). Thus the system directly provides support for dynamic modification of workflows (ad hoc workflows) [12]. The use of transactions ensures that changes to schemas and instances are carried out atomically with respect to normal processing.

The workflow management system structure is shown in fig. 3. There, the big box represents the structure of the entire distributed workflow system (and not the software layers of a single node); the small box represents any node with a Java capable browser. The most important components of the system are the two transactional services, the workflow repository service and the workflow execution service. These two services make use of the CORBA Object Transaction Service (OTS). The OTS implementation used for the workflow management facility is OTSArjuna, an OTS-compliant version of the Arjuna distributed transaction system built by us [13].
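The flexible task model described above can be sketched as follows (in Python; the names and the particular selection policy are hypothetical illustrations, not the system's API): a task draws each input from the first available of several alternative sources and terminates in one of several named outcomes, each producing distinct outputs, which is what makes alternative and compensating tasks expressible.

```python
# Sketch of the flexible task model: a task takes each input from the first
# of several alternative sources that has produced a value, and terminates in
# one of several outcomes (e.g. "ok" or "failed"), each with its own outputs.

class Task:
    def __init__(self, name, input_sources, body):
        self.name = name
        self.input_sources = input_sources  # one list of alternative sources per input
        self.body = body                    # inputs -> (outcome, outputs)

    def start(self):
        inputs = []
        for alternatives in self.input_sources:
            # application specific input selection: first source with a value
            value = next((src() for src in alternatives if src() is not None), None)
            inputs.append(value)
        return self.body(inputs)

primary = lambda: None          # the primary source has not produced a value
backup  = lambda: "itinerary"   # an alternative source has

def book(inputs):
    if all(i is not None for i in inputs):
        return ("ok", {"ticket": inputs[0]})
    return ("failed", {"reason": "missing input"})

tickets = Task("Tickets", [[primary, backup]], book)
outcome, outputs = tickets.start()
assert outcome == "ok" and outputs["ticket"] == "itinerary"
```

Because each outcome carries its own outputs, a downstream notification dependency can be made conditional on a particular outcome, exactly as Tickets depends on CreditCheck terminating "ok" in fig. 1.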
In our system, the application control and management tools required for functions such as instantiating workflow applications, monitoring and dynamic reconfiguration (collectively referred to as administrative applications) can themselves be implemented as workflow applications. Thus the administrative applications can be made fault-tolerant without any extra effort. A graphical user interface to these administrative applications has been provided by making use of Java applets, which can be loaded and run by any Java capable Web browser.

Fig. 3: Workflow management system structure.

Workflow Repository Service: The repository service stores workflow schemas and provides operations for initializing, modifying and inspecting schemas. A schema is represented in terms of tasks, compound tasks and dependencies. We have designed a scripting language that provides high-level notations (textual as well as graphical) for the specification of schemas. The
scripting language has been specifically designed to express the task composition and inter-task dependencies of fault-tolerant distributed applications whose executions could span arbitrarily long durations [14].

Workflow Execution Service: The workflow execution service coordinates the execution of a workflow instance: it records the inter-task dependencies of a schema in persistent atomic objects and uses atomic transactions for propagating coordination information to ensure that tasks are scheduled to run respecting their dependencies. The dependency information is maintained and managed by task controllers. Each task within a workflow application has a single dedicated task controller. The purpose of a task controller is to receive notifications of the outputs of other task controllers and use this information to determine when its associated task can be started. The task controller is also responsible for propagating notifications of the outputs of its task to other interested task controllers. As stated earlier, in our system the application control and management tools required for functions such as instantiating workflow applications, monitoring and dynamic reconfiguration can themselves be implemented as workflow applications. This is made possible because the repository and execution services provide operations to examine and modify the structure of schemas and instances respectively. A graphical user interface (GUI) has been provided for these applications.

Fig. 4: Graphical User Interface

The GUI has been implemented as a Java applet and as a result is platform independent, and can be loaded and run by any Java capable Web browser. This component of the toolkit is important as it makes the workflow system easier to use, enabling a user to specify, execute and control workflow applications with minimal effort.

3.5.
Comparative Evaluation

The workflow approach to coordinating task executions provides a natural way of exploiting distributed object and middleware technologies. However, currently available workflow systems are not scalable, as their structure tends to be monolithic. There is therefore much research activity on the construction of decentralized workflow architectures. Systems such as ours, RainMan and ORBWork, described above, represent a new generation of (research) systems that can work in arbitrarily distributed environments. We briefly compare and contrast these two systems with ours. The sources and performers of RainMan/RainMaker represent, respectively, the task controllers and tasks of our system. The task controllers of our system also perform the function of the builder
of RainMan. However, our system can support arbitrary placement of task controllers, so it can be made immune from a central point of failure. The task managers and tasks of ORBWork correspond to the task controllers and tasks respectively of our system. However, unlike our system, ORBWork does not implement a transactional task coordination facility: its task managers and tasks are not transactional CORBA objects. Because ours are, our system does not need the special recovery facilities that ORBWork must implement to deal with failures; the design of ORBWork's recovery facilities was briefly discussed above. There are two features of our system that distinguish it from the rest: (i) the use of a transactional task coordination facility means that the system naturally provides a fault-tolerant job scheduling environment; (ii) our system is reflective: the computation structure is maintained by the system at run time and exposed in a careful manner for dynamic control. The execution service directly maintains the structure of an application within task controllers and makes it available through transactional operations. This makes the provision of system monitoring and dynamic workflows relatively easy.

4. Concluding Remarks

The C3DS framework for complex service provisioning, unlike most other efforts, will be based on unifying three technologies: software architecture based development environments, software agents and transactional workflow management systems. This brief review has presented the state of the art in workflow technology, describing in particular the workflow system that will be used within the Project. The system is intended to meet the requirements of interoperability, scalability, flexible task composition, dependability and dynamic reconfiguration. Our system architecture is decentralized and open: it has been designed and implemented as a set of CORBA services to run on top of a given ORB.
The wide-spread acceptance of CORBA and Java middleware technologies makes our system ideally suited to creating a framework for complex service provisioning. Integration of agent and ADL technologies within this framework is one of the main objectives of the Project that the Project partners are currently working on.

References

[1] D. Georgakopoulos, M. Hornick and A. Sheth, "An overview of workflow management: from process modelling to workflow automation infrastructure", Intl. Journal on Distributed and Parallel Databases, 3(2), pp. 119-153, April 1995.

[2] M. Rusinkiewicz and A. Sheth, "Specification and execution of transactional workflows", Modern Database Systems (W. Kim, ed.), pp. 592-620, ACM Press, 1995.

[3] J.P. Warne, "Flexible transaction framework for dependable workflows", ANSA Report No. 1217, 1995.

[4] P. Lawrence (ed.), "WfMC Workflow Handbook", John Wiley & Sons Ltd., 1997.
[5] R. Orfali, D. Harkey and J. Edwards, "The Essential Distributed Objects", John Wiley and Sons Ltd., 1996.

[6] C. Mohan, D. Agrawal, G. Alonso et al., "Exotica: a Project on Advanced Transaction Management and Workflow Systems", IBM Almaden, ACM SIGOIS Bulletin, vol. 16, no. 1, August 1995.

[7] The IBM Corporation, "FlowMark Workflow Management Software", http://www.software.ibm.com/ad/flowmark/

[8] S. Paul, E. Park and J. Chaar, "RainMan: a Workflow System for the Internet", Proc. of USENIX Symp. on Internet Technologies and Systems, November 1997.

[9] S. Das, K. Kochut, J. Miller, A. Sheth and D. Worah, "ORBWork: A Reliable Distributed CORBA-based Workflow Enactment System for METEOR2", Tech. Report No. UGA-CS-TR-97-001, Dept. of Computer Science, University of Georgia, Feb. 1997.

[10] S. M. Wheater, S. K. Shrivastava and F. Ranno, "A CORBA Compliant Transactional Workflow System for Internet Applications", IFIP Middleware 98, September 1998.

[11] Nortel & University of Newcastle upon Tyne, "Workflow Management Facility Specification", OMG document bom/98-01-11, updated submission for the OMG Business Object Domain Task Force (BODTF): Workflow Management Facility.

[12] S. K. Shrivastava and S. M. Wheater, "Architectural Support for Dynamic Reconfiguration of Large Scale Distributed Applications", IEEE 4th Intl. Conf. on Configurable Distributed Systems, CDS'98, Annapolis, May 1998, pp. 10-17.

[13] G.D. Parrington, S.K. Shrivastava, S.M. Wheater and M. Little, "The design and implementation of Arjuna", USENIX Computing Systems Journal, vol. 8 (3), pp. 255-308, Summer 1995.

[14] F. Ranno, S. K. Shrivastava and S. M. Wheater, "A Language for Specifying the Composition of Reliable Distributed Applications", 18th IEEE Intl. Conf. on Distributed Computing Systems, ICDCS'98, Amsterdam, May 1998, pp. 534-543.