Artifact Software Library ADTransform User Manual: Creating Simple ETVL Workflows with ADTransform


Contents

Abstract
Notice to the Reader
Trademarks
Preface for the ADTransform User Manual
Chapter 1: Introduction
    What is ADTransform?
    What Data Types are Supported?
    ADTransform Workflow
    ADTransform Workflow Structures
    Introduction to Logging and the Audit Trail
        Audit Trail
        Error Log
Chapter 2: Installation
    Setting up ADTransform
    Platforms and Requirements
    Installing ADTransform
    Starting A Project
    ADTransform data folder structure
    Integrating ADTransform into a Compound Workflow
    Adding Pre-Configured Workflows
Chapter 3: Configuration
    Configuring Workflow Phases
    Configuring a Workflow Task
    Configuring The Environment
    Configuring The Mailer
    Configuring Error Handling
    Internal Data Structure
Chapter 4: General Processing
    Configuring Pre- and Post-Processing
    Reading Properties Files
Chapter 5: File Manipulation
    Manipulating Remote Files
        Sending files to remote sites with FTP
        Getting files from remote sites with FTP
        Sending files to remote sites with sftp
        Getting files with sftp
    Managing Local Files
        Copy a File
        Copy All Files
        Move a File
        Move All Files
        Renaming A Local File
        Deleting A File
        Deleting all Files in a Folder
Chapter 6: Input and Output
    Input and Output Tasks
    Reading and Writing CSV files
    Reading files
    Writing HTML files
Chapter 7: Transformation
    Configuring Transformations
    Creating a new Empty Data Store
    Concatenating Data Stores
    Cloning a Data Store
    Creating New Columns
    Renaming Columns
    Adding The Row Number
    Changing the Case of Values
    Concatenate Columns
    Setting A Default Value
    Replacing Characters in a Text Column
    Simple Mapping Transformation
    Multi-column Mapping Transformation
    Reformatting a Date
    Changing Minutes to Hours and Minutes
Chapter 8: Filtering
    Filtering on a single column
Chapter 9: Validation
    Configuring Validations
    Validating the Type of a Field
    Validating the Length of a Field
    Validating the Relationship between Two Fields
    Validating the Relationship between Two Numeric Values
    Comparing Two Dates in a Data Store
    Validating a Field using a Lookup
    Validating Unique Values
    Validating the Values of a Field
Chapter 10: Reporting
    A Report from a Single Data Store
    HTML Reports
    Tabular HTML Reports
    Web Page with a Dynamic Organization Chart
Chapter 11: Flow Control
    Flow Control
Chapter 12: Fully Configured Workflows
    Fully Configured Implementations
Chapter 13: Writing Custom Plugins
Appendix A: Appendix
    LMS Implementation Use Case
    ADTransform for PBX Reporting


Abstract

This manual is written for the person responsible for preparing data for an automated process, or for building a workflow that spans multiple computer systems whose data formats are not identical even though the data flowing between the systems has the same meaning. It assumes that the reader is familiar with programming, simple configuration of Spring beans, batch scripting and general server technology. It does not require knowledge of Java or a deep understanding of Spring unless the user intends to extend the system by adding new functionality. For a high-level overview of the system, please refer to the ADTransform website at adtransform.artifact-software.com.


Notice to the Reader

This information was developed for products and services offered in the U.S.A. Some of the trademarks mentioned in this product appear for identification purposes only. Artifact Software is not responsible for direct, indirect, incidental or consequential damages resulting from any defect, error or failure to perform. No other warranty is expressed or implied. This notice supersedes all previous notices.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may not copy, modify, or distribute these sample programs in any form without permission from Artifact Software Inc. These examples have not been thoroughly tested under all conditions. Artifact Software, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

Trademarks

The following terms are trademarks of Artifact Software Inc. in Canada, other countries, or both: ADTransform

The following terms are trademarks of other companies: Spring is a registered trademark of VMware and/or its affiliates. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.

Preface for the ADTransform User Manual

This manual has been created using the DITA Open Tool Kit and the Maven DITA plugin.


Chapter 1: Introduction

Topics:
    What is ADTransform?
    What Data Types are Supported?
    ADTransform Workflow
    ADTransform Workflow Structures
    Introduction to Logging and the Audit Trail

This chapter lays out the basic concepts underlying the operation of ADTransform.

What is ADTransform?

ADTransform is an advanced ETVL (Extract, Transform, Validate and Load) utility which has been extended to include reporting and other useful features that facilitate the creation of user-friendly workflow applications. It is a stand-alone batch application designed to modify and transform data provided by any number of input sources, validate the resulting records, then report the data and output it in the format required for further processing.

The original use case was to simplify the preparation of data to be imported into an LMS system. However, there are many use cases where data in one format has to be manipulated, validated and output in a different format.

All of the transformation and validation is done through configuration of plugins, so ADTransform is very flexible and does not require Java programming. It can perform many types of data transformation. Many standard transformation plugins come with the package, such as:

    Set defaults
    Map values based on secondary data - job titles to job roles, department numbers or cost codes to organizational ids
    Copy columns from one data stream to another - for example, create a list of organizations from information in the personnel data
    Create new columns in a data stream
    Create new data streams and populate them

Custom transformation plugins can be written and configured into the workflow.

Validation plugins include:

    Check for required values/columns
    Uniqueness
    Table look-up - the organization code in a person record must exist in the organization data, etc.
    Date format checking
    Value comparison - one value must be greater than or equal to another
    Length constraints
    Value constraints - timezone codes, currency, state, etc., must match a list or a table of valid values

Custom validations can be added. For example, a validation plugin could be written to validate a discount based on a price and date dependent rate.

The standard plugins give very clear error messages. These tell the person exactly which records failed validation and what error was detected, in terms that a non-IT person can understand:

    222 rows were validated for uniqueness in column 'NAME' in datastore 'organizations' with 13 duplicate keys.
    Key '212' was found in rows : 171, 113
    Key '250' was found in rows : 163, ...

This is a big improvement over an SQL exception that may have little or no information about why a record could not be processed and no information about which rows were involved. This can save days of testing imports and refreshing databases that have had wrong or partial updates applied.

Even input and output are done using a plugin architecture, so a different data input routine can be added if the provided input/output plugins do not meet your needs. Web services or an SQL query can be used to extract the data from the originating system directly. Multiple input and output routines can be configured into the program so that data can be collected in a number of ways and output in several formats. There are standard plugins for CSV, Excel and HTML.

A report could be configured into the data output tasks to produce formatted reports of the data. JasperReports is supported in a standard plugin. Graphs can also be produced. The standard plugins can produce static organization charts using GraphViz as well as dynamic charts that can be interrogated in your browser to view the data associated with an on-screen object.

What Data Types are Supported?

Internally, ADTransform supports both named data elements and tables of rows of data. There are two data structures that can be used to hold data during the execution of the workflow.

Named data can be used to hold single-valued items such as the address of the person to whom a report should be sent. These are typically read from properties files, and there is a standard plugin that allows you to load many values into named data elements. The names of data values can be any string of alpha-numeric characters. Do not use names that start with "SYSTEM" since these are reserved for ADTransform use, and overwriting them may cause some very odd errors that you will find hard to resolve.

The datastore is a tabular structure that consists of rows and columns. The datastores and their columns have case-sensitive names. Text, numbers and dates are all held as text strings. However, validation can still check for the validity of dates or compare numbers.

ADTransform Workflow

Each application will have a unique structure and list of tasks that are needed to achieve the transformation. For convenience, we discuss workflows in terms of phases that contain a number of steps. Although a workflow can be specified as a single phase or divided up into as many phases as you want, it is often convenient to structure the typical workflow in 7 phases:

    Pre-process
    Input
    Transformation
    Validation
    Output
    Reporting
    Post-process

Each of these phases is made up of smaller steps that perform a simple single task, such as:

    Get a file from a remote location
    Read a spreadsheet
    Read a database table or the results of a query
    Set defaults or add columns based on a lookup in another file
    Validate that a column has no duplicate keys
    Output a CSV file or write to a database
    Send a file to another location using FTP

There could be a large number of steps in a single processing stream. Each of these steps is easy to specify using the operations provided by the plugins available in ADTransform.

ADTransform Workflow Structures

There are an infinite number of ways to structure a workflow. This section discusses a couple of different approaches that can be used as inspiration for your own workflow.

ADTransform does not impose a structure on the way you organize your tasks. We often talk about workflows in terms of 7 phases but there is no need to follow this pattern. We will first present the basic 7-phase model and then show an alternative way to structure the same workflow that may be more convenient depending on the nature of your application.

In the 7-phase case, the workflow is structured into phases that cross all of the various data stores that will be used in the workflow:

    a pre-process phase in which general set-up can be done: directories cleared, files from previous runs archived, etc.,
    an input phase in which the data is read,
    a transformation phase in which the information in the data store or data stores is converted to the desired form,
    a validation phase in which all of the relevant fields are validated according to the requirements for the output,
    a reporting phase in which reports are created to assist in the analysis of the particular data that is being transformed and converted,
    an output phase in which the internal data stores are written in the desired output format,
    a post-processing phase in which reports may be emailed, directories archived, files uploaded to external systems, or any other general housekeeping required to complete the workflow is done.

In order to understand the diagrams, you need to know a little bit about the symbols used. Within each phase, the individual tasks are identified as the small squares. In this Human Resources example, we are dealing with an organization file (On), a location file (Ln), a job types file (Jn) and a person file (Pn). In the input phase, the M1 task reads in the mapping files to be used in the transformation and validation phases. The other tasks read in each of the source data files. In the transformation phase there are three tasks that transform the organization file and four tasks that transform the person file.

In the validation phase there is no validation of the location or job types files, but there are several validation tasks for the organization and person files. In the reporting phase there is a general task G1 that outputs the error log and the audit trail, as well as reports from the organization, location, job types and person files. In the post-processing phase there is again a general task and tasks related to each file.

In the second diagram, the tasks have been rearranged so that each of the grey boxes represents the work done on each file type. The location file is read in, transformed, validated and reported, and then the job types file, organization file and persons file are similarly processed. The second-last block outputs the logs. The pre- and post-processing phases stay the same as in the original flow.

You can create a custom structure for your workflow so that it is easy to understand where each task is being specified and easy to maintain the workflow as requirements change. The key point is to ensure that each task happens in the right sequence with respect to its dependencies. If there are transformations that must be done to the locations file prior to validation of the persons file, then the relevant locations tasks must be in phases that will be processed prior to the phase that validates the person file.

Introduction to Logging and the Audit Trail

This section describes the logging and audit strategy of ADTransform. ADTransform and the plugins work together to provide a flexible framework that supports an Audit Trail and an Error Log report, as well as detailed logging for programmers of custom plugins.

The Audit Trail is a high-level summary of the activities of each step. One would expect to find messages such as "Excel file reader - Added 711 rows of data from 'input/lms.xls' to 'person_data'.", "Convert Start date to Saba format - Successfully transformed 711 date values in column 'HIRED_ON'." and "Username uniqueness validator - 711 rows were validated for uniqueness in column 'USERNAME' in datastore 'person_data'.".

The Error Log will typically hold more detailed information than the audit trail and is used to identify the information required to fix data or configuration problems. The information should be expressed in terms that the person using the workflow will understand. For example, one might find a message such as "Record 8 of 'call_details' has a value '11001' in 'extension_number' which is not in the extension_master." or "Record 3708 of the employee_master has a termination_date earlier than the hire_date."

The Error Log and Audit Trail would normally be emailed to the end-user in charge of the overall process. This should be done as the last step in the workflow, since any tasks done after these files are produced will not appear in the reports. A copy of these reports is normally retained in a log folder that is archived with the inputs and outputs.

Custom plugins can also append detailed messages to an external log file using Apache Commons Logging. This programmer's log can provide very detailed information about the internal processing of each record. This log would not normally be sent to the end-user.

Audit Trail

The audit trail provides a summary of each operation performed during the workflow. This gives the workflow planner a lot of flexibility in satisfying audit requirements. The Audit Trail holds the following information:

Step
    The step number of the plugin that made the audit entry.
Plugin
    The user-specified name of the task that made the audit entry.
Message
    The audit message that the plugin recorded.

The Audit Trail is maintained in a datastore that can be treated like other datastores, so it can be reported, emailed, saved to a remote server, etc. using the standard plugins.

The ADTransform API includes access to the audit trail manager, which allows plugins to append information to the audit trail.

Error Log

ADTransform supports detailed logging of activities in a format that is easy for end-users to understand. The standard plugins use this to write a detailed description of errors that are found during the processing of the workflow. The Error Log records the following information for each entry:

Step
    The step identifier of the plugin that made the entry.
Plugin
    The name of the plugin that made the entry.
Message
    The message that the plugin recorded.
Level
    The severity of the condition being reported. Typical values are "DEBUG", "INFO", "WARNING", "ERROR" and "FATAL". The use of "DEBUG" is discouraged; debug messages should be logged in the programmer's log.

The plugins can write as many detailed, multi-line message entries as are required to describe the problems encountered in the processing. The messages in the Error Log should be written in a way that lets the end-user understand what caused the errors and what records, files, etc. need to be examined and fixed. The standard plugins follow this guideline.

The Error Log is maintained in a datastore that can be treated like other datastores, so it can be reported, emailed, saved to a remote server, etc. using the standard plugins. A standard report supplied with ADTransform uses font sizes and colours to flag entries with "FATAL", "ERROR" or "WARNING" severity.


Chapter 2: Installation

Topics:
    Setting up ADTransform
    Platforms and Requirements
    Installing ADTransform
    Starting A Project
    ADTransform data folder structure
    Integrating ADTransform into a Compound Workflow
    Adding Pre-Configured Workflows

This chapter describes the installation of ADTransform and its inclusion in an existing workflow.

Setting up ADTransform

ADTransform is easy to install. It is delivered as an installer for use on MS-Windows or Linux. The adtransform-installer file will install the files required to run ADTransform, the Project Initiation program used to start a new project, as well as the sample templates.

Platforms and Requirements

ADTransform will run on Microsoft Windows and Unix-like platforms, including the variants of Linux. A Java run-time environment of version Java 8 or greater must be installed on the computer where ADTransform will be used. Custom plugins may be more restrictive in the environments that they support. This should be noted in their Release Notes or User Documentation.

Installing ADTransform

ADTransform is delivered as an installer that will create a working ADTransform structure on the target workstation. ADTransform is delivered as a jar file that can be run using Java. In MS-Windows this can usually be accomplished by double-clicking on the jar file. This will initiate an installer session that asks for the locations where you want the program and templates to be installed. The installer includes working sample projects, called templates, that can be used for testing the installation or as a base for your project.

Starting A Project

Once ADTransform and any extra template packages are installed, you can select and activate the template that you want to use by running the "projectinstaller". It is installed in the ADTransform main directory and will be projectinstaller.bat for Windows or projectinstaller.sh for other operating systems. The projectinstaller will build a complete project structure in the destination folder that you specify. Once the project structure is set up, you can modify the configuration files, replace the data files and run the ADTransform process. The workflow is initiated by executing the "startup" file that is created in the project directory. This will be startup.bat for Windows or startup.sh for other operating systems.

ADTransform data folder structure

A normal ADTransform project uses a number of folders to hold various types of files. These can be overridden in the configuration for a plugin if the structure is not suitable for your needs.

This is the structure used in the sample projects that are distributed with the ADTransform application and in any application that Artifact Software distributes. You can modify this structure or replace it with your own layout. A typical project uses the following files and folders:

inputdata
    The default folder where ADTransform will look for input files.
outputdata
    The default folder where the final output files will be written.
outputreports
    The default folder where reports will be written.
config
    This folder and its sub-folder hierarchy contain the configuration files that control the process.
mappings
    The default folder for mapping files that are required by your transformations or validations.
logs
    The directory where the application writes its log files. This directory should be empty after installation.
archive
    At the completion of successful validation, you may want to have ADTransform clean out the input directory after copying the original input files to an archive sub-folder labeled with the date and time in the format YYYY-MM-DD_HH_MM.
appcontext.xml
    Required configuration and setup; in most cases you do not want to manipulate this file. It refers to the masterconfig.xml file by name. If you change the name of the master configuration file to another name, you will have to update this file.
masterconfig.xml
    This lists the configuration files that determine the flow of the processing. They are processed in the order that they are listed in this file.
config files
    These are the configuration files where you specify each of the steps. For convenience, it is recommended that you break your operations into phases. You can have as many files as you want.

They will be processed in the order that they are included as references in the masterconfig.xml. The config folder also contains the CSS stylesheet files that are used when viewing any HTML reports that you generate.

reports
    This contains the report specification files for reporting plugins that require external report specifications. For example, compiled JasperReports (*.jasper) files will be saved there.
startup.bat
    The Windows batch start-up file.
startup.sh
    The Linux/Unix batch start-up file.
Repository
    This contains the specification of the repository and any temporary files created during the processing. It should not be modified.

All relative references to files have the top level of the data structure as their root. For example, an output file would be specified as "outputdata/foobar.csv". If you want to add sub-folders or change the names of the folders, you will have to change the references in the configuration files to match your new structure.

Integrating ADTransform into a Compound Workflow

ADTransform can be easily integrated into an existing workflow or run on its own. ADTransform is often run as part of the batch stream that extracts data from an application and imports it into another system. It can, of course, be run manually in either a test or development environment. Batch scripts for Windows and Linux/Unix are provided with the system, and you can either execute the scripts as part of your workflow or copy the commands into the script controlling your end-to-end workflow.

The easiest way to put ADTransform into production is to use the installer to set up the system on an MS-Windows (Vista, Windows 7 or Windows 8) workstation, configure and test the ADTransform workflow there and, once it works, transfer the entire directory structure to the server where the rest of the workflow is running. ADTransform must be installed on the server, and the projectinstaller must be run to get the correct folder structure and the correct startup file installed in the project directory.

Adding Pre-Configured Workflows

Pre-configured workflows are referred to as "templates". They are packaged in individual installation packages that are installed after the main ADTransform Core is installed. When asked for an installation directory, select the directory where the ADTransform core package is installed. In Windows this will normally be C:\Program Files\ADTransform. The installer will install the templates in the "templates" sub-directory.

Once the templates are installed, you can select and activate the one that you want to use by running the "projectinstaller" that is found in the ADTransform main directory. This will be projectinstaller.bat for Windows or projectinstaller.sh for other operating systems. It will build a complete project structure in the destination folder that you specify. Once the project structure is set up, you can modify the configuration files, replace the data files and run the ADTransform process. The workflow is initiated by executing the "startup" file that is created in the project directory. This will be startup.bat for Windows or startup.sh for other operating systems.

Chapter 3: Configuration

Topics:
    Configuring Workflow Phases
    Configuring a Workflow Task
    Configuring The Environment
    Internal Data Structure

The following sections describe technical details about customizing a workflow with the steps that are required to accomplish all of the ETVL tasks. They are intended for a reader with an understanding of IT terms. For some of the tasks, a basic familiarity with XML files is assumed.

Configuring Workflow Phases

A workflow is described in one or more phases. Each phase contains one or more tasks. The structure of the phases is completely user-configurable. The phases are defined as Java beans, each with an id, which must be unique, and a class, which must be "com.artifact_software.adt.model.phasecontainerimpl".

The masterconfig.xml file has 2 main sections. The first imports the phase definition files that you have set up. The second adds the name of each bean defining a phase to the list of phases to include in the workflow. The names of the import files do not have to match the names of the phases.

The following example of a masterconfig.xml has 8 phases defined to make up the workflow. The file for each phase is referenced in an import statement. Each phase definition is referenced in the "phaselist" by the bean name that appears in the phase file.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <import resource="phase1-preprocess.xml"/>
    <import resource="phase2-input.xml"/>
    <import resource="phase3-person-transformation.xml"/>
    <import resource="phase4-location-validation.xml"/>
    <import resource="phase4-organization-validation.xml"/>
    <import resource="phase4-person-validation.xml"/>
    <import resource="phase5-output.xml"/>
    <import resource="phase6-postprocess.xml"/>

    <bean id="phaselistconfiguration" class="com.artifact_software.adt.model.phaselistcontainerimpl">
        <property name="phaselist">
            <list>
                <ref bean="preprocessconfig"/>
                <ref bean="inputconfig"/>
                <ref bean="transformpersonconfig"/>
                <ref bean="validatelocationconfig"/>
                <ref bean="validateorganizationconfig"/>
                <ref bean="validatepersonconfig"/>
                <ref bean="outputconfig"/>
                <ref bean="postprocessconfig"/>
            </list>
        </property>
    </bean>
</beans>

The name of each phase is completely arbitrary. It is common to split some of the phases into smaller workflows to make it easier to design and test your configuration. For example, if you have 3 types of input files, you might use a single input phase to read all the files but create a separate transformation phase for each type of data being transformed and validated. Validation is often split as well, and there is no reason why you could not validate one input file before you transform another that depends on data in the first file.
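The contents of the phase definition files themselves are not shown in this manual. As a rough sketch, a phase bean in one of the imported files might look like the following; the "tasklist" property name and the task reference are assumptions for illustration only, so check the phase files shipped with the sample templates for the actual property names.

<bean id="inputconfig" class="com.artifact_software.adt.model.phasecontainerimpl">
    <!-- "tasklist" is an assumed property name, not confirmed by this manual -->
    <property name="tasklist">
        <list>
            <!-- each entry references a task bean defined elsewhere in this file -->
            <ref bean="excelpersoninput"/>
        </list>
    </property>
</bean>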

Configuring a Workflow Task

Configuring an ADTransform task is very simple. There are some common parameters that appear in each step, which are described below. In addition, each plugin will require information that is specific to the task that it performs. This is described in the documentation for each plugin.

If you are using one of the pre-packaged configurations that has the transformation and validation already configured, you simply have to fill in the mailer configuration, the input and output file patterns, etc. as described in the Environment Configuration section. If you want to create your own workflow, you can use one of the existing configurations as a model, in conjunction with this documentation, to create a workflow configuration that matches your needs.

Each workflow step is specified as a Java bean. This allows the workflow to be configured without Java programming and assembled by ADTransform when it starts up. The bean specification of every workflow step includes the following:

id
    The identification of the step. This must be a unique name and will be used for flow control when you want to jump to a particular step in your workflow.
class
    The name of the class that performs the operation that you want executed in this step.

Every bean should have 2 parameters regardless of the step's functionality:

pluginid
    This mandatory property is the name of the step that you want to see in an error log or audit trail entry. It can be any short string that describes the plugin in a way that makes sense to someone reading the configuration file or the output reports.
usagedescription
    This is an optional short description of what the plugin is doing, in terms that a person reading the configuration file understands.

<bean id="validation2" class="com.artifact_software.adt.plugin.validation.fieldlengthvalidator">
    <property name="pluginid" value="Username Required validator"/>
    <property name="usagedescription" value="Checks that all rows have an entry for the person's username"/>
</bean>

In addition, each plugin will have as many properties as required to specify the details of the function to be performed. For example, there is a plugin that reads Excel spreadsheets. Its parameters are simply a list of file specifications. Each file specification includes:

    the name of the internal data store that you will use in subsequent steps,
    the location of the file to read,
    a flag indicating the presence of a header row,
    the number of rows to skip at the beginning of the file before looking for the header,
    the number of rows to skip after the header before reading data rows,
    the number of rows to drop at the end of the file,
    the number of rows to read once the data rows are found.

<bean id="excelpersoninput" class="com.artifact_software.adt.plugin.filemanagement.excelfilereader">
    <property name="pluginid" value="Person file Excel reader"/>
    <property name="usagedescription" value="Reads the person data"/>
    <property name="fileset">
        <set>
            <bean id="personfilespec" class="com.artifact_software.adt.plugin.filemanagement.specification.excelfilespecification">
                <property name="headerrowrequired" value="true"/>
                <property name="datastorename" value="person_data"/>
                <property name="filename" value="input/lms.xls"/>
                <property name="dropfirst" value="6"/>
                <property name="droplast" value="2"/>
                <property name="skiprows" value="1"/>
            </bean>
        </set>
    </property>
</bean>

Configuring The Environment

You can specify some shared information that sets the environment in which ADTransform operates. This is usually done in one or more tasks in an early phase, typically through property files that set individual named values that can be used later. Shared information can be specified once and used in multiple tasks rather than specifying the same information each time it is needed. For example, you can set the configuration options for the mail functions once and use them as many times as required to send files or reports.

Configuring The Mailer

In order to mail logs and reports, ADTransform needs to have its mail configuration set up. ADTransform can mail its log files to one or more recipients. You need to set the following properties for the mailing sub-system. This is normally done in an early phase, usually the pre-processing phase, before the first time a file or report is to be mailed. You can set this up using either of two plugins that are included with the basic system:

1. Use the PropertySetter plugin, which takes a list of property names and values and sets those values as internal ADTransform properties, or
2. Use the PropertyFileReader plugin to read your property names and values from a properties file.

Regardless of the method used to supply the properties, you must specify the following information to set up your mailer configuration:

mailing_enabled
    Use the values true or false to tell ADTransform plugins whether or not mailing is available.
mailing_use_authorization
    Use the values true or false to tell the mailer whether or not to attempt to use authentication.
mailing_authorized_username
    A user name with authorization to send mail through the SMTP host server being used.
mailing_authorized_password
    The password for the user specified.
mailing_port
    The port number for the mail server (usually 25).
mailing_smtphost
    The SMTP host address of the mail server being used.
mailing_recipients
    A comma-separated list of addresses to which the log files will be sent, for example: [email protected],[email protected]
mailing_from
    The address you wish to appear as the "from" address, e.g. [email protected]
mailing_subject
    The subject line you wish to appear as the "subject" of the email. Note: the placeholder {0} may be used in the subject line. This placeholder will be replaced with the date at the time the ADTransform application started its current run. Example: "log files for ADTransform process run at {0}"
mailing_body_message
    The text you wish to appear in the body of the mail message. Note: the placeholder {0} may be used in the body and will be replaced with the date at the time the ADTransform application started its current run. Example: "log files for ADTransform process run at {0}"

You can override these properties in individual steps so that different subject headers and messages can be specified, as well as different lists of addressees.
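For instance, using the second method, the mailer settings could be collected in a properties file and loaded with the PropertyFileReader plugin described under Reading Properties Files. In this sketch, the file name "mailer.properties" and all of the values are placeholders, not defaults:

mailing_enabled=true
mailing_use_authorization=true
mailing_authorized_username=mailuser
mailing_authorized_password=secretpassword
mailing_port=25
mailing_smtphost=smtp.example.com
mailing_recipients=admin@example.com
mailing_from=adtransform@example.com
mailing_subject=Log files for ADTransform process run at {0}
mailing_body_message=The attached log files are for the ADTransform process run at {0}.

<bean id="readmailerproperties" class="com.artifact_software.adt.plugin.filemanagement.propertyfilereader">
    <property name="pluginid" value="Mailer configuration reader"/>
    <property name="usagedescription" value="Loads the mailer settings from mailer.properties"/>
    <property name="propertyfilename" value="mailer.properties"/>
</bean>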

Configuring Error Handling

If errors occur in steps in the process, you can have them ignored or have them cause the workflow to halt. Error handling is controlled by two properties that can be set and reset during your workflow. You can set these in either of two ways:

1. Use the PropertySetter plugin, which takes a list of names and values and sets those values as internal ADTransform properties, or
2. Use the PropertyFileReader plugin, which reads your names and values from a properties file.

runonerror
    If any step in the process fails with an error condition and the "runonerror" property is set to false, the process will jump to the step that is labeled "FatalError". This step must be included in your workflow configuration. You will normally configure this step to produce the Error Log, mail it to the administrator and then halt.
finishonerror
    A plugin configuration can specify whether the task is to run to completion or stop as soon as an error is reported for a record. This can be used to determine whether a validation stops when it finds the first error or continues to the end of its process, identifying all of the errors that it is checking for in the data stream. To make a task stop after the first error, set the "finishonerror" parameter value to "false" in the parameters of the plugin. "finishonerror" defaults to "true", which is normally the behavior that you want; you will usually not specify this parameter.

Related concepts: Validation
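As an illustration, the field length validator shown earlier could be made to stop at the first invalid record by adding the finishonerror parameter to its configuration; the other property values are carried over from that example.

<bean id="validation2" class="com.artifact_software.adt.plugin.validation.fieldlengthvalidator">
    <property name="pluginid" value="Username Required validator"/>
    <property name="usagedescription" value="Checks that all rows have an entry for the person's username"/>
    <!-- stop this validation task as soon as the first error is reported -->
    <property name="finishonerror" value="false"/>
</bean>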

Internal Data Structure

Once the data is loaded by an input plugin, it is available to the subsequent workflow processes in a standard format that is independent of the input format. For each type of data read in, a Data Store is created that holds the data in a series of rows, each with a set of columns that have a name and a cell value. Plugins typically operate on one or more columns and process all of the rows in the Data Store. Transformations can create or destroy entire objects. They can insert or remove columns or rows. Of course, they can also modify individual values. There are also named values, referred to as "system properties", that can be used to store values that you want to pass between tasks.

Chapter 4: General Processing

Topics:
    Configuring Pre- and Post-Processing
    Reading Properties Files

The general tasks are used to set up the environment at the start of a workflow, move files locally or between the local workstation/server and remote servers, or to clean up at the end.

Configuring Pre- and Post-Processing

A wide range of tasks can be performed before the data is read and after the data is output. These can include:

    setting values or properties used by the system,
    creating or deleting directories,
    moving files between directories,
    retrieving files from remote systems,
    sending files to remote systems,
    emailing files and reports.

These tasks are performed by plugins that can be included as required.

Reading Properties Files

Property files contain single values for each property rather than rows. These are stored as system properties and can be referenced in custom plugins by requesting their values as system properties. The properties are read from property files that have each property on a single line, with the name and value separated by the "=" character. The following file would set the system properties "companyname", "address", "city", "state" and "zip", which could be used in subsequent plugins:

companyname=Artifact Software
address=135 Main Street
city=Washington
state=DC
zip=12345

It is possible to read in multiple properties files. If duplicate names are found in the files, the last value specified will be the only one saved.

The ability to read property files is provided by the "com.artifact_software.adt.plugin.filemanagement.propertyfilereader" class. The following parameter is required to read a properties file:

propertyfilename
    The name of the file to be read and processed.

Example showing the file company.properties being read in:

<bean id="readcompanyproperties" class="com.artifact_software.adt.plugin.filemanagement.propertyfilereader">
    <property name="pluginid" value="Company properties reader"/>
    <property name="propertyfilename" value="company.properties"/>
</bean>
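To show the last-value-wins behavior described above, the following sketch reads two property files in sequence, assuming one reader task per file; the file names are illustrative only. If both files define "city", the value from the file read second is the one retained.

<bean id="readdefaultproperties" class="com.artifact_software.adt.plugin.filemanagement.propertyfilereader">
    <property name="pluginid" value="Default properties reader"/>
    <property name="propertyfilename" value="defaults.properties"/>
</bean>

<bean id="readsiteproperties" class="com.artifact_software.adt.plugin.filemanagement.propertyfilereader">
    <property name="pluginid" value="Site properties reader"/>
    <!-- read after defaults.properties, so its values override any duplicate names -->
    <property name="propertyfilename" value="site.properties"/>
</bean>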

Chapter 5: File Manipulation

Topics:
    Manipulating Remote Files
    Managing Local Files

ADTransform can manipulate both local and remote files. These activities normally appear in a pre-processing or post-processing phase. In the pre-processing phase, the tasks are usually related to cleaning folders to remove files that were output in previous runs, downloading files or moving files. In the post-processing phase, these functions are used to send files to remote systems for further processing, clean up the working disk folder or archive files.

Manipulating Remote Files

Files can be retrieved from and sent to remote servers.

Sending files to remote sites with FTP

Files can be uploaded to remote FTP servers. FTP uploads are done using the "com.artifact_software.adt.plugin.filemanagement.ftp" class.

operation
    The literal text "PUT", indicating that you want to move files from the local machine to the remote server.
ftpusername
    The username to be used on the FTP server.
ftppassword
    The password of the user on the remote server.
ftpremoteserver
    The hostname of the remote server.
ftpremotefolder
    The folder on the remote server where the files will reside.
ftplocalfolder
    The folder on the local machine where the files reside.
protocol
    The literal text "FTP".
filenames
    The list of files to be uploaded.

This will upload all the files in the list provided.

<bean id="ftptransfer" class="com.artifact_software.adt.plugin.filemanagement.ftp">
    <property name="pluginid" value="ftpfiletransfer"/>
    <property name="usagedescription" value="Copy files to FTP server."/>
    <property name="operation" value="PUT"/>
    <property name="ftpusername" value="ftpuser"/>
    <property name="ftppassword" value="secretpassword"/>
    <property name="ftpremoteserver" value="ftp.example.com"/>
    <property name="ftpremotefolder" value="archive"/>
    <property name="ftplocalfolder" value="outputdata"/>
    <property name="protocol" value="FTP"/>
    <property name="filenames">
        <list>
            <value>fileone.csv</value>
            <value>filetwo.csv</value>
        </list>
    </property>
</bean>

This configuration will upload "fileone.csv" and "filetwo.csv", which are in the "outputdata" folder on the local machine, to the "archive" folder on the "ftp.example.com" FTP server. It will log in as "ftpuser" with the password "secretpassword".

Getting files from a remote site with FTP

Files can be downloaded from FTP servers. FTP downloads are done using the "com.artifact_software.adt.plugin.filemanagement.ftp" class.

operation
    The literal text "GET", indicating that you want to move files from the remote server to the local machine.
ftpusername
    The username to be used on the FTP server.
ftppassword
    The password of the user on the remote server.
ftpremoteserver
    The hostname of the remote server.
ftpremotefolder
    The folder on the remote server where the files reside.
ftplocalfolder
    The folder on the local machine where the files will be put.
protocol
    The literal text "FTP".
filenames
    The list of files to be downloaded.

This will download all the files in the list provided.

<bean id="ftptransfer" class="com.artifact_software.adt.plugin.filemanagement.ftp">
    <property name="pluginid" value="ftpfiletransfer"/>
    <property name="usagedescription" value="Copy files from FTP server."/>
    <property name="operation" value="GET"/>
    <property name="ftpusername" value="ftpuser"/>
    <property name="ftppassword" value="secretpassword"/>
    <property name="ftpremoteserver" value="ftp.example.com"/>
    <property name="ftpremotefolder" value="archive"/>
    <property name="ftplocalfolder" value="restoredarchive"/>
    <property name="protocol" value="FTP"/>
    <property name="filenames">
        <list>
            <value>fileone.csv</value>
            <value>filetwo.csv</value>
        </list>
    </property>
</bean>

This configuration will download "fileone.csv" and "filetwo.csv" from the "archive" folder on the remote "ftp.example.com" FTP server to the "restoredarchive" folder on the local machine. It will log in as "ftpuser" with the password "secretpassword".

Sending files to remote sites with sftp

Files can be sent to remote servers using the secure FTP protocol. sftp uploads are done using the "com.artifact_software.adt.plugin.filemanagement.sftp" class.

operation
    The literal text "PUT", indicating that you want to move files from the local machine to the remote server.
ftpusername
    The username to be used on the sftp server.
ftppassword
    The password of the user on the remote server.
ftpremoteserver
    The hostname of the remote server.
ftpremotefolder
    The folder on the remote server where the files will reside.
ftplocalfolder
    The folder on the local machine where the files reside.
protocol
    The literal text "sftp".
filenames
    The list of files to be uploaded.

This will upload all the files in the list provided.

<bean id="sftptransfer" class="com.artifact_software.adt.plugin.filemanagement.sftp">
    <property name="pluginid" value="sftpfiletransfer"/>
    <property name="usagedescription" value="Copy files to sftp server."/>
    <property name="operation" value="PUT"/>
    <property name="ftpusername" value="ftpuser"/>
    <property name="ftppassword" value="secretpassword"/>
    <property name="ftpremoteserver" value="sftp.example.com"/>
    <property name="ftpremotefolder" value="archive"/>
    <property name="ftplocalfolder" value="filestosave"/>
    <property name="protocol" value="sftp"/>
    <property name="filenames">
        <list>
            <value>fileone.csv</value>
            <value>filetwo.csv</value>
        </list>
    </property>
</bean>

This configuration will upload "fileone.csv" and "filetwo.csv", which are in the "filestosave" folder on the local machine, to the "archive" folder on the "sftp.example.com" sftp server. It will log in as "ftpuser" with the password "secretpassword".

Getting files with sftp

Files can be retrieved from a remote server using the secure FTP protocol. sftp downloads are done using the "com.artifact_software.adt.plugin.filemanagement.sftp" class.

operation
    The literal text "GET", indicating that you want to move files from the remote server to the local machine.
ftpusername
    The username to be used on the sftp server.
ftppassword
    The password of the user on the remote server.
ftpremoteserver
    The hostname of the remote server.
ftpremotefolder
    The folder on the remote server where the files reside.
ftplocalfolder
    The folder on the local machine where the files will be put.
protocol
    The literal text "sftp".
filenames
    The list of files to be downloaded.

This will download all the files in the list provided.

<bean id="sftptransfer" class="com.artifact_software.adt.plugin.filemanagement.sftp">
    <property name="pluginid" value="sftpfiletransfer"/>
    <property name="usagedescription" value="Copy files from sftp server."/>
    <property name="operation" value="GET"/>
    <property name="ftpusername" value="ftpuser"/>
    <property name="ftppassword" value="secretpassword"/>
    <property name="ftpremoteserver" value="sftp.example.com"/>
    <property name="ftpremotefolder" value="archive"/>
    <property name="ftplocalfolder" value="restoredarchive"/>
    <property name="protocol" value="sftp"/>
    <property name="filenames">
        <list>
            <value>fileone.csv</value>
            <value>filetwo.csv</value>
        </list>
    </property>
</bean>

This configuration will download "fileone.csv" and "filetwo.csv" from the "archive" folder on the remote "sftp.example.com" sftp server to the "restoredarchive" folder on the local machine. It will log in as "ftpuser" with the password "secretpassword".

Managing Local Files

Files on the local ADTransform workstation or server can be manipulated. These tasks share the following properties:

fileordirectorypath
    The path to the file or directory that will be processed. Relative paths are based on the directory from which ADTransform is run.
outputfileordirectorypath
    The path to an output file or directory where the files or directories specified in the fileordirectorypath will be placed, in the case of the copy, copycontent, move or movecontent commands.
operation
    The operation to perform on the contents specified by the fileordirectorypath. Command values are case sensitive.
createifrequired
    An optional true/false parameter specifying whether directories or files in output locations should be created if they do not already exist in the file system. If not specified, it defaults to true.
filetypefilter
    Optionally restricts the type of file on which the command will operate. For example, a value of "log" or ".log" would perform the operation specified in the command property only on files ending with the character string "log" or of the .log file type, leaving any other files in the directory untouched. Optional parameter.

The following operations are supported:

create
    Creates a new file or directory with the path provided by the fileordirectorypath.
copy
    Copies the specific file at the fileordirectorypath to the outputfileordirectorypath.
copycontents
    Copies the contents of a directory specified by the fileordirectorypath to the outputfileordirectorypath.
delete
    Deletes a file or directory specified by the fileordirectorypath.
deletecontents
    Deletes the contents of a directory specified by the fileordirectorypath.
move
    Similar to the copy command except that a move command removes the original files or directories after copying them to the target location.

movecontent
    Similar to copycontent, except that a movecontent command removes the original files or sub-directories.

Related tasks:
    Copy a File - copy a file.
    Copy All Files - copy all files in a folder to another location.
    Move a File - move a single file from one folder to another.
    Move All Files - move all files in a folder to another location.
    Renaming A Local File - rename a file using the move operation for a single file.
    Deleting A File - delete a file.
    Deleting all Files in a Folder - delete all of the files in the specified folder.

Copy a File

Copying a file is done using the "com.artifact_software.adt.plugin.filemanagement.filesystemhandler" class. This will copy the specified file to a new location.

<bean id="copysourcefile" class="com.artifact_software.adt.plugin.filemanagement.filesystemhandler">
    <property name="pluginid" value="copyinputfile"/>
    <property name="usagedescription" value="Copies input.csv to the archive folder."/>
    <property name="operation" value="copy"/>
    <property name="fileordirectorypath" value="inputdata/input.csv"/>
    <property name="outputfileordirectorypath" value="archive"/>
</bean>

This configuration will copy "input.csv" in the "inputdata" folder to the "archive" folder.

Related concepts: Managing Local Files

Copy All Files

Copy all files in a folder to another location. Copying files is done using the "com.artifact_software.adt.plugin.filemanagement.filesystemhandler" class. This will copy all the csv files in the specified folder to another folder.

<bean id="copyallsourcefiles" class="com.artifact_software.adt.plugin.filemanagement.filesystemhandler">
    <property name="pluginid" value="copysourcefiles"/>
    <property name="usagedescription" value="Copies original files to the archive folder."/>
    <property name="operation" value="copyall"/>
    <property name="fileordirectorypath" value="inputdata"/>
    <property name="outputfileordirectorypath" value="archive"/>
    <property name="filetypefilter" value="csv"/>
</bean>

This configuration will copy all the files in "inputdata" of "csv" type to the "archive" folder.

Related concepts: Managing Local Files

Move a File

A single file can be moved from one folder to another. Moving files is done using the "com.artifact_software.adt.plugin.filemanagement.filesystemhandler" class. This will move the specified file to a new location.

<bean id="movesourcefile" class="com.artifact_software.adt.plugin.filemanagement.filesystemhandler">
    <property name="pluginid" value="movesourcefiles"/>
    <property name="usagedescription" value="Moves the original file to the output folder."/>
    <property name="operation" value="move"/>
    <property name="fileordirectorypath" value="input.csv"/>
    <property name="outputfileordirectorypath" value="archive_folder"/>
</bean>

This configuration will move "input.csv" to the folder "archive_folder".

Related concepts: Managing Local Files

Move All Files

Move all files in a folder to another location. Moving files is done using the "com.artifact_software.adt.plugin.filemanagement.filesystemhandler" class.

<bean id="moveallsourcefiles" class="com.artifact_software.adt.plugin.filemanagement.filesystemhandler">
    <property name="pluginid" value="movesourcefiles"/>
    <property name="usagedescription" value="Move original csv files to the archive folder."/>
    <property name="operation" value="moveall"/>
    <property name="fileordirectorypath" value="inputdata"/>
    <property name="outputfileordirectorypath" value="archive"/>
    <property name="filetypefilter" value="csv"/>
</bean>

This configuration will move all the files in "inputdata" of "csv" type to the "archive" folder.

Related concepts: Managing Local Files

Renaming A Local File

ADTransform can rename files using the move operation for a single file. To rename a file in place, just specify a move with the same directory path and a new filename, as shown in the sketch below.

Related concepts: Managing Local Files
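A minimal sketch of such a rename, reusing the filesystemhandler properties documented above; the file names are examples, and it is assumed that the output path may include the new file name.

<bean id="renamesourcefile" class="com.artifact_software.adt.plugin.filemanagement.filesystemhandler">
    <property name="pluginid" value="renameinputfile"/>
    <property name="usagedescription" value="Renames input.csv to input_old.csv in place."/>
    <property name="operation" value="move"/>
    <property name="fileordirectorypath" value="inputdata/input.csv"/>
    <!-- same directory, new file name -->
    <property name="outputfileordirectorypath" value="inputdata/input_old.csv"/>
</bean>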

Deleting A File

A file can be easily deleted. Deleting files is done using the "com.artifact_software.adt.plugin.filemanagement.filesystemhandler" class. This will delete the file specified.

<bean id="deletesourcefiles" class="com.artifact_software.adt.plugin.filemanagement.filesystemhandler">
    <property name="pluginid" value="deletesourcefiles"/>
    <property name="usagedescription" value="Delete original data."/>
    <property name="operation" value="delete"/>
    <property name="fileordirectorypath" value="inputdata/input.csv"/>
</bean>

This will delete the file "input.csv" from the "inputdata" folder.

Related concepts: Managing Local Files

Deleting all Files in a Folder

Deletes all of the files in the specified folder. Deleting files is done using the "com.artifact_software.adt.plugin.filemanagement.filesystemhandler" class. The filetypefilter can be used to reduce the scope of the deletion to a single type of file.

<bean id="deleteallsourcefiles" class="com.artifact_software.adt.plugin.filemanagement.filesystemhandler">
    <property name="pluginid" value="deletesourcefiles"/>
    <property name="usagedescription" value="Deletes original files after archiving."/>
    <property name="operation" value="deleteall"/>
    <property name="fileordirectorypath" value="inputdata"/>
    <property name="filetypefilter" value="csv"/>
</bean>

This configuration will delete all the files of "csv" type from the "inputdata" folder.

Related concepts: Managing Local Files

Chapter 6: Input and Output

Topics:
    Input and Output Tasks

Reading and writing data is an important part of ADTransform. A number of different file formats can be read and written. Regardless of the format, once the data is read in, it is held in datastores, which are treated as rows of data with named columns. This provides a consistent way to manipulate data in the subsequent transformation, validation, reporting and output operations. The plugin that reads the data is responsible for capturing the column names as well as the actual rows of information.

Input and output plugins can be written to read and write a wide variety of files, and the input type does not have any relationship to the output. A CSV file could be read, transformed, validated and written out as a formatted report by a plugin that invokes a standard report writer. A database could be queried to create the internal objects, with the final results written back into a set of database tables. Data output also includes the ability to email results. Standard input and output plugins included in ADTransform handle the most common tasks. This capability can be extended through custom plugins to communicate with external applications that expose Web services or a remote Application Programming Interface (API).

Input and Output Tasks

Input and output are basic functions for any workflow. The plugins that read and write data can require different configurations depending on the sources and destinations involved. The plugins that read and write files all depend on file specifications that describe the files to be read or written. These are the basic properties that are shared by most of the plugins:

datastorename
    The data store to be written out or filled with data.
filename
    The name of the file to be read or written.
datelocation
    Indicates whether a date is to be added to a filename on output. "N" - no date added; "P" - prefix the date to the filename; "S" - put the date at the end of the filename.
dateformat
    A string pattern describing the date format to be used if a date is to be added to the filename. Follows the Java standard.

Each plugin may require additional information.

Reading and Writing CSV files

CSV (Comma Separated Values) files can be read into data stores for processing, and data stores can be output as CSV files for processing in other systems. CSV files can be read by the "com.artifact_software.adt.plugin.filemanagement.csvfilereader" class. The CSV reader can handle very complex data structures, including extra lines at the beginning and end of the file as well as gaps between the header and the first line of data. This can be very useful if you have data files that have an explanation or comments at the beginning of the file and summary information at the end that you do not want to read. It can also read a fixed number of lines from the data section of the file in order to build test data or to extract parts of the file without reading the whole file.

Writing of CSV files is done by the "com.artifact_software.adt.plugin.filemanagement.csvfilewriter" class.

The Wikipedia entry on CSV describes the characteristics of the CSV file. Reading and writing use the CSV File Specification to describe the name and characteristics of the file to be read or written:

datastorename
    The name of the datastore where the data read in will be saved, or the name of the data store containing the data to be written.
filename
    The name of the CSV file to be read or created.
columnnames
    The list of column names that are to be written into the output file. This is a list of values that match the names of columns in the data store that is being written.
columnseparator
    Although the name implies that fields in the file are separated by commas, it is possible to use other characters to separate fields.
quotecharacter
    The text fields in a CSV file may be surrounded by a quotation character, which is usually single or double quotes.

escapeCharacter
If you need to use the quote character inside a field, you must prefix it with an escape character that tells the CSV reader to treat the following character as part of the string rather than as an end-of-string marker.

ignoreLeadingWhitespace
Setting this to true instructs the reader to ignore any spaces or tabs at the start of a field.

encoding
The default encoding is UTF-8. If you need to read or write a file using another encoding, such as ISO-8859-1, you can specify it using the encoding property.

dropFirst
Normally processing starts at the first row in the file, but lines can be ignored by setting this property to the number of rows to ignore at the start of the file. This is used to skip over any introductory text or irrelevant data before the header row, or before the first row of data if there is no header. If omitted, it defaults to 0.

headerRowRequired
CSV files may or may not include a header row. If this property is set to true, the header row will be read or written. Otherwise the first row will be considered to contain data on input, and the output file will not include a header row with the column names.

skipRows
The number of records to skip between the header and the first row of data. This is only available in the Advanced reader. It is used to ignore extra rows, such as blank lines after the header, or to jump partway into the data before reading begins. It defaults to 0 if not specified.

readRows
The number of data records to read. This is only available in the Advanced reader. It is used to read a fixed number of rows. If it is not specified, or is set to a negative value, all rows after the rows skipped at the top of the file will be read up to the end of the file, less any rows dropped by dropLast. By setting skipRows and readRows you can read a fixed number of consecutive rows from the middle of a CSV file.

dropLast
The number of records to drop at the end of the file. This is only available in the Advanced reader. It is used to ignore extra rows, such as totals or dates, at the end of the file. If 0 or not specified, no lines are dropped.

When data is written, the column names in the data store will be written into the header row as field names. On input, the data read in will appear as rows in a data store and the columns will be given the names from the header row. If there is no header in the input file, the columns will be named "column1", "column2", etc.
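The following sketch shows how the shared date properties described earlier might be combined with a CSV file specification to add a date stamp to an output file name. The bean id and file names are hypothetical, and this assumes the CSV file specification accepts the shared date properties; verify the exact behaviour against your installation.

<!-- Hypothetical sketch: write the "people" data store to a file whose name
     has the current date appended ("S"), producing something like
     output/people_2015-06-30.csv (the exact placement of the date relative
     to the extension depends on the plugin). The pattern follows the Java
     date format standard. -->
<bean id="datedPeopleFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.CSVFileSpecification">
  <property name="dataStoreName" value="people"/>
  <property name="fileName" value="output/people.csv"/>
  <property name="dateLocation" value="S"/>
  <property name="dateFormat" value="yyyy-MM-dd"/>
  <property name="headerRowRequired" value="true"/>
</bean>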

The reader and writer will process a number of files in a single task. Each file is described in its own specification, so the files can have different settings for header rows, quote characters, and so on.

<bean id="locationsFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.CSVFileSpecification">
  <property name="headerRowRequired" value="true"/>
  <property name="dataStoreName" value="locations"/>
  <property name="quoteCharacter" value=""/>
  <property name="columnSeparator" value="|"/>
  <property name="fileName" value="input/locations.csv"/>
</bean>

The file specifications are configured as a fileSet property, which is a Set of CSVFileSpecifications. The following configuration instructs ADTransform to read three CSV files into the data stores "locations", "job_types" and "organizations". The first two use the "|" symbol as a field separator and do not have quotes around the text field values. All three include a header row with the column names. The third file has quotes around each text field and uses "\" to signal quotes that are part of the data to be read.

<bean id="readAllCsvFiles" class="com.artifact_software.adt.plugin.filemanagement.CSVFileReader">
  <property name="pluginId" value="Input CSV files"/>
  <property name="fileSet">
    <set>
      <bean id="locationsFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.CSVFileSpecification">
        <property name="headerRowRequired" value="true"/>
        <property name="dataStoreName" value="locations"/>
        <property name="quoteCharacter" value=""/>
        <property name="columnSeparator" value="|"/>
        <property name="fileName" value="input/locations.csv"/>
      </bean>
      <bean id="jobTypeFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.CSVFileSpecification">
        <property name="headerRowRequired" value="true"/>
        <property name="dataStoreName" value="job_types"/>
        <property name="quoteCharacter" value=""/>
        <property name="columnSeparator" value="|"/>
        <property name="fileName" value="input/jobtypes.csv"/>
      </bean>
      <bean id="internalOrgsFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.CSVFileSpecification">
        <property name="headerRowRequired" value="true"/>
        <property name="dataStoreName" value="organizations"/>
        <property name="quoteCharacter" value="&quot;"/>
        <property name="escapeCharacter" value="\"/>
        <property name="columnSeparator" value="|"/>
        <property name="fileName" value="input/int_organizations.csv"/>
      </bean>
    </set>
  </property>
</bean>

The next configuration is similar to the example above and instructs ADTransform to read three CSV files into the data stores "locations", "job_types" and "organizations". All three use the "|" symbol as a field separator, do not have quotes around the text field values, and include a header row with the column names. In addition, while reading the locations.csv file, the first 3 rows will be skipped before the header is read and the last 2 rows in the file will be dropped. In the second file, the header will be read from the first line, then 1 data row will be skipped and up to 110 rows will be processed.

In the third file there are 4 lines of text at the start, a header row, a blank row after the header, and 2 rows of summary data at the end to be ignored.

<bean id="readCsvFiles" class="com.artifact_software.adt.plugin.filemanagement.CSVFileReader">
  <property name="pluginId" value="Input CSV files"/>
  <property name="fileSet">
    <set>
      <bean id="locationsFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.CSVFileSpecification">
        <property name="headerRowRequired" value="true"/>
        <property name="dataStoreName" value="locations"/>
        <property name="quoteCharacter" value=""/>
        <property name="columnSeparator" value="|"/>
        <property name="fileName" value="input/locations.csv"/>
        <property name="dropFirst" value="3"/>
        <property name="dropLast" value="2"/>
      </bean>
      <bean id="jobTypeFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.CSVFileSpecification">
        <property name="headerRowRequired" value="true"/>
        <property name="dataStoreName" value="job_types"/>
        <property name="quoteCharacter" value=""/>
        <property name="columnSeparator" value="|"/>
        <property name="fileName" value="input/jobtypes.csv"/>
        <property name="skipRows" value="1"/>
        <property name="readRows" value="110"/>
      </bean>
      <bean id="internalOrgsFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.CSVFileSpecification">
        <property name="headerRowRequired" value="true"/>
        <property name="dataStoreName" value="organizations"/>
        <property name="quoteCharacter" value=""/>
        <property name="columnSeparator" value="|"/>
        <property name="fileName" value="input/int_organizations.csv"/>
        <property name="dropFirst" value="4"/>
        <property name="skipRows" value="1"/>
        <property name="dropLast" value="2"/>
      </bean>
    </set>
  </property>
</bean>

The following configuration writes a CSV file from the data store called "people". A header row will be written. It uses "|" to separate fields, and text fields are not enclosed in a quote character.

<bean id="writePeopleCsvFile" class="com.artifact_software.adt.plugin.filemanagement.CSVFileWriter">
  <property name="pluginId" value="Write person data"/>
  <property name="fileSpecifications">
    <set>
      <bean id="peopleFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.CSVFileSpecification">
        <property name="headerRowRequired" value="true"/>
        <property name="dataStoreName" value="people"/>
        <property name="quoteCharacter" value=""/>
        <property name="columnSeparator" value="|"/>
        <property name="fileName" value="output/int_person.csv"/>
        <property name="columnNames">
          <list>
            <value>ID</value>
            <value>USERNAME</value>
            <value>LNAME</value>
            <value>FNAME</value>
            <value>MNAME</value>
            <value>GENDER</value>
            <value>LOCALE</value>
            <value>TIMEZONE</value>
            <value>COMPANY</value>

            <value>STATUS</value>
            <value>MANAGER</value>
            <value>ADDRESS</value>
            <value>CITY</value>
            <value>STATE</value>
            <value>JOBTYPE</value>
            <value>JOB_TITLE</value>
            <value>EMAIL</value>
          </list>
        </property>
      </bean>
    </set>
  </property>
</bean>

Reading Excel Files

Excel spreadsheets can be read into data stores for processing. Excel files are read by the com.artifact_software.adt.plugin.filemanagement.ExcelFileReader class. The Excel reader can read one or more Excel spreadsheets into data stores. The plugin uses the Excel File Specification to describe the name and characteristics of the file to be read:

dataStoreName
The name of the data store where the data read in will be saved.

fileName
The name of the Excel file to be read.

ignoreLeadingWhitespace
Setting this to true instructs the reader to ignore any spaces or tabs at the start of a field.

headerRowRequired
Spreadsheets may or may not include a header row. If this property is set to true, the header row will be read. Otherwise the first row will be considered to contain data.

The data read in will appear as rows in a data store, and the columns will be given the names from the header row. If there is no header in the input file, the columns will be named "column1", "column2", etc.

The reader can process a number of files in a single task. Each file is described in its own specification, so the files can have different settings for header rows and so on.

<bean id="locationsFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.ExcelFileSpecification">
  <property name="headerRowRequired" value="true"/>
  <property name="dataStoreName" value="locations"/>
  <property name="fileName" value="input/locations.xls"/>
</bean>

The file specifications are configured as a fileSet property, which is a Set of ExcelFileSpecifications.

This configuration instructs ADTransform to read a spreadsheet into the data store "locations".

<bean id="readLocationSpreadsheet" class="com.artifact_software.adt.plugin.filemanagement.ExcelFileReader">
  <property name="pluginId" value="Input Locations"/>
  <property name="fileSet">
    <set>
      <bean id="locationsFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.ExcelFileSpecification">
        <property name="headerRowRequired" value="true"/>
        <property name="dataStoreName" value="locations"/>
        <property name="fileName" value="input/locations.xls"/>
      </bean>
    </set>
  </property>
</bean>

Writing HTML files

HTML pages can be created from data stores. The output is thoroughly annotated with CSS-friendly markup to assist in formatting the page. HTML files are written by the com.artifact_software.adt.plugin.filemanagement.HTMLFileWriter class. The normal file specification that describes the name and characteristics of the file to be written is extended by adding the name of a CSS style sheet:

styleSheetName
The name of a CSS style sheet that will be referenced in the HTML file that is produced. This allows the table to be nicely formatted for display. There is an empty div above the table where you can add a logo using CSS, as shown in the example below.

The output from a data store is a simple table. The table has a div with the id "datatable". The header row has a div with an id of "tableheader" and a class of "headerrow". Each column header cell has a div with the column name as the div id. Each table row has a div whose id is the word "row" followed by the row number; for example, the first row has a div id of "row1". Each row also has a class of either "oddrow" or "evenrow". Each cell in the row has a class that is the column name.

The following configuration writes an HTML page "int_person.html" from the data store called "people".

<bean id="writePeopleHtmlPage" class="com.artifact_software.adt.plugin.filemanagement.HTMLFileWriter">
  <property name="pluginId" value="peoplePageWriter"/>
  <property name="description" value="Write the people web page"/>
  <property name="fileSpecifications">
    <set>
      <bean id="peopleFileSpec" class="com.artifact_software.adt.plugin.filemanagement.specification.HTMLFileSpecification">
        <property name="dataStoreName" value="people"/>
        <property name="fileName" value="outputdata/int_person.html"/>
        <property name="styleSheetName" value="config/int_person.css"/>
        <property name="columnNames">
          <list>
            <value>USERNAME</value>
            <value>LNAME</value>
            <value>FNAME</value>
            <value>MNAME</value>
            <value>GENDER</value>
            <value>JOBTYPE</value>
            <value>JOB_TITLE</value>
            <value>LOCATION</value>

            <value>EMAIL</value>
          </list>
        </property>
      </bean>
    </set>
  </property>
</bean>

The CSS sheet identifies required columns with bold text and columns that contain keys in red:

@charset "ISO-8859-1";
@import url("common.css");

.USERNAME {
  color: red;
  font-weight: bold;
}

.LNAME, .FNAME, .GENDER {
  font-weight: bold;
}

The "common.css" file imported above is used to set the background colors for odd and even rows and to set other formats that are common to all your reports. You can specify a logo which will appear above the table.

[id=logo] {
  width: 250px;
  height: 75px;
  background-image: url('logo.gif');
}

.headerrow {
  background-color: PowderBlue;
}

.oddrow {
  background-color: Wheat;
}

.evenrow {
  background-color: PapayaWhip;
}

Chapter 7: Transformation

Topics:
Configuring Transformations
Creating a new Empty Data Store
Concatenating Data Stores
Cloning a Data Store
Creating New Columns
Renaming Columns
Adding The Row Number
Changing the Case of Values
Concatenate Columns
Setting A Default Value
Replacing Characters in a Text Column
Simple Mapping Transformation
Multi-column Mapping Transformation
Reformatting a Date
Changing Minutes to Hours and Minutes

Transforming data is a key reason for employing ADTransform. ADTransform workflows can transform data in many ways; the range of transformations is almost limitless. Individual fields can be altered, new columns created and entirely new data files produced. The transformation plugins included in the base package should handle most situations. Custom transformations can be added to handle specific needs.

Configuring Transformations

A workflow configuration can have as many transformations as required to turn the input data into a specific set of objects that can be validated, output and reported in subsequent tasks. Although a workflow could have no transformations if it was being used simply for validation, most of the time the configuration will have one or more transformations.

Normally, the transformations to be done are specified in a single configuration file. If the workflow is complex and deals with many types of data, each type of transformation might have its own configuration file. Each transformation is an element in a list in the file, and the list of transformations is processed in the order in which they are specified. Individual transformations are implemented through distinct plugins that perform a single modification of the data. Each transformation has its own properties that specify the details of the transformation. A sketch of how several transformation beans might be collected into an ordered list appears at the end of the Cloning a Data Store section below.

Creating a new Empty Data Store

A new data store can be created with a specified name and column definitions and no data. Creating the data store is done by the com.artifact_software.adt.plugin.transformation.CreateEmptyDataStoreTransformationImpl class. The new data store will have the specified columns defined but will contain no data rows. The column labels will be the same as the column names.

dataStoreName
The name of the new data store.

columnNames
The list of names of the new columns.

This creates a new data store called "templocations" with 5 columns called LOCATION_NAME, HOME_DOMAIN, SECURITY_DOMAIN, LOCALE and TIMEZONE.

<bean id="createTempLocations" class="com.artifact_software.adt.plugin.transformation.CreateEmptyDataStoreTransformationImpl">
  <property name="pluginId" value="Add templocations"/>
  <property name="usageDescription" value="Add an empty data store called templocations."/>
  <property name="dataStoreName" value="templocations"/>
  <property name="columnNames">
    <list>
      <value>LOCATION_NAME</value>
      <value>HOME_DOMAIN</value>
      <value>SECURITY_DOMAIN</value>
      <value>LOCALE</value>
      <value>TIMEZONE</value>
    </list>
  </property>
</bean>

Concatenating Data Stores

Creates a new data store with the columns and data of one or more existing data stores. Concatenation of data stores is done using the com.artifact_software.adt.plugin.transformation.ConcatenateDataStoresTransformationImpl class.

The new data store includes a copy of each of the existing data stores, in the order in which they are listed in dataStoreNames. The columns of the target data store include all of the columns found in the source data stores; empty values are inserted in rows from data stores that do not include a given column. No data is lost during the concatenation.

targetDataStoreName
The name of the data store to be created.

dataStoreNames
The list containing the source data stores to be included in the target.

This creates a new data store called "Locations" that combines the current "AmericasLocations", "AsiaLocations" and "EULocations" data stores.

<bean id="buildLocationsTransformation" class="com.artifact_software.adt.plugin.transformation.ConcatenateDataStoresTransformationImpl">
  <property name="pluginId" value="Build Locations"/>
  <property name="usageDescription" value="Make a consolidated Locations data store."/>
  <property name="targetDataStoreName" value="Locations"/>
  <property name="dataStoreNames">
    <list>
      <value>AmericasLocations</value>
      <value>AsiaLocations</value>
      <value>EULocations</value>
    </list>
  </property>
</bean>

Cloning a Data Store

Creates a new data store with the columns and data of an existing data store. Cloning is done by using the concatenation transformation, com.artifact_software.adt.plugin.transformation.ConcatenateDataStoresTransformationImpl, with a single source data store. The new data store is a copy of the existing one.

targetDataStoreName
The name of the data store to be created.

dataStoreNames
The single-item list containing the existing data store.

This creates a new data store called "templocations" that is identical to the current "Locations" data store.

<bean id="cloneTransformation1" class="com.artifact_software.adt.plugin.transformation.ConcatenateDataStoresTransformationImpl">
  <property name="pluginId" value="Clone Locations"/>
  <property name="usageDescription" value="Copy Locations to templocations."/>
  <property name="targetDataStoreName" value="templocations"/>
  <property name="dataStoreNames">
    <list>
      <value>Locations</value>
    </list>
  </property>
</bean>
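As noted under Configuring Transformations, the individual transformation beans are processed in the order in which they appear in a list in the configuration file. The following sketch suggests how the transformations defined above might be collected into such a list. The property name "tasks" and the surrounding wiring are illustrative assumptions only; see Chapter 3 for how tasks are actually attached to workflow phases.

<!-- Hypothetical sketch: the order of the <ref> elements determines the
     order in which the transformations run. The property name "tasks" is
     an assumption, not a documented name. -->
<property name="tasks">
  <list>
    <ref bean="createTempLocations"/>
    <ref bean="buildLocationsTransformation"/>
    <ref bean="cloneTransformation1"/>
  </list>
</property>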

Creating New Columns

Inserts new columns into a data store. The ability to create a new column is provided by the com.artifact_software.adt.plugin.transformation.CreateNewColumnTransformationImpl class. New columns are added to an existing data store. The names are provided as a list of column names. The values in the rows are empty (zero-length strings) for the new columns.

The following parameters are required to add new columns to an existing data store:

dataStoreName
The name of the data store to which the new columns are to be added.

columnNames
A list of the names of the columns to add.

This example shows 4 columns being added to the person_data data store.

<bean id="addColumns" class="com.artifact_software.adt.plugin.transformation.CreateNewColumnTransformationImpl">
  <property name="pluginId" value="Add new columns"/>
  <property name="usageDescription" value="Add 4 new columns to the person_data internal data store."/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnNames">
    <list>
      <value>HOME_DOMAIN</value>
      <value>SECURITY_DOMAIN</value>
      <value>LOCALE</value>
      <value>TIMEZONE</value>
    </list>
  </property>
</bean>

Renaming Columns

Renaming columns in a data store is a frequent requirement. Renaming one or more columns is done by the com.artifact_software.adt.plugin.transformation.RenameColumnHeadersTransformationImpl class. It sets the name of the column indicated by each key to the name specified by the corresponding value. It does not affect any values in the columns. The common use case is when data exported from one system needs to be transformed into something that can be imported into another system; changing the names of the columns is almost always required.

dataStoreName
The name of the data store containing the columns to be renamed.

columnNameMap
The map containing the correspondence between old and new names.

This renames 6 columns in the person_data data store.

<bean id="changeAbraDataColumnNames" class="com.artifact_software.adt.plugin.transformation.RenameColumnHeadersTransformationImpl">

  <property name="pluginId" value="Rename Person Data columns"/>
  <property name="usageDescription" value="Rename exported column names to Saba import names."/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnNameMap">
    <map>
      <entry key="Last Name" value="LNAME"/>
      <entry key="First Name" value="FNAME"/>
      <entry key="MI" value="MNAME"/>
      <entry key="Username" value="USERNAME"/>
      <entry key="EE #" value="ID"/>
      <entry key="LDOH" value="HIRED_ON"/>
    </map>
  </property>
</bean>

Adding The Row Number

Adds the row number as a column in the data store. Row numbers are added by the com.artifact_software.adt.plugin.transformation.AddRowNumberTransformationImpl class. The row number is mostly used for reporting.

dataStoreName
The name of the data store.

<bean id="addRowTransformation2" class="com.artifact_software.adt.plugin.transformation.AddRowNumberTransformationImpl">
  <property name="pluginId" value="Add Row Numbers to Locations"/>
  <property name="usageDescription" value="Add the row number for reporting."/>
  <property name="dataStoreName" value="Locations"/>
</bean>

Changing the Case of Values

The case of the text in a column can be changed to uppercase or lowercase. Case changing is done by the com.artifact_software.adt.plugin.transformation.ChangeCaseTransformationImpl class. Frequently, text in the source for a data store is formatted differently than what is required in the output. This plugin changes the case of a text field to uppercase or lowercase.

dataStoreName
The name of the data store containing the column to be changed.

columnName
The name of the column to be changed.

desiredCase
This must be "uppercase" or "lowercase".

This changes the text in the USERNAME column to lowercase for each row of person_data.

<bean id="usernameCaseChange" class="com.artifact_software.adt.plugin.transformation.ChangeCaseTransformationImpl">
  <property name="pluginId" value="Fix username"/>
  <property name="usageDescription" value="Changes the username to be all lowercase."/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnName" value="USERNAME"/>
  <property name="desiredCase" value="lowercase"/>
</bean>

Concatenate Columns

ADTransform can copy data from one or more columns to another column in a data store. When there is only one source column, it performs a simple copy. Copying the contents of one or more columns to another is done by the com.artifact_software.adt.plugin.transformation.ConcatenateDataTransformationImpl class. It sets the contents of the column indicated by targetColumnName to the data copied from one or more source columns. It does not affect any values in the source columns. The common use case is when a single data element can be constructed from other columns.

dataStoreName
The name of the data store containing the columns.

targetColumnName
The name of the column to be filled with data. The column must already exist.

sourceColumnNames
The list of columns whose values are concatenated, in order, into the target column.

separator
The characters to be inserted between the data from each source column. Can be an empty string or one or more characters.

This creates a full name from the first, middle and last names in the person_data data store. A space is added between the middle initial and the first and last names.

<bean id="createFullName" class="com.artifact_software.adt.plugin.transformation.ConcatenateDataTransformationImpl">
  <property name="pluginId" value="Create Full Name"/>
  <property name="usageDescription" value="Create a full name from the first name, middle initial and last name."/>
  <property name="dataStoreName" value="person_data"/>
  <property name="targetColumnName" value="FULLNAME"/>
  <property name="sourceColumnNames">
    <list>
      <value>FNAME</value>
      <value>MNAME</value>
      <value>LNAME</value>
    </list>
  </property>
  <property name="separator" value=" "/>
</bean>

Setting A Default Value

A column in every row of a data store can be set to a default value. The functionality for setting defaults is provided by the com.artifact_software.adt.plugin.transformation.DefaultValueSetterTransformationImpl class. It sets the value of the column indicated by columnName in each row to the value specified by defaultValue. Note that, unless onlyIfEmpty is set, this will overwrite any existing data in the target field.

dataStoreName
The name of the data store containing the column to be set.

columnName
The name of the column to set.

defaultValue
The value to set in each row of "columnName".

onlyIfEmpty
A boolean (true/false) value that specifies that the default value should only be set when the row value for the requested column is empty. If set to "true", only empty cells will be assigned the default value. If missing, or set to "false", all cells in the column will have the value set.

This sets the HOME_DOMAIN column to "World" in every row of the person_data data store that does not have a value already set.

<bean id="setDefaultDomain" class="com.artifact_software.adt.plugin.transformation.DefaultValueSetterTransformationImpl">
  <property name="pluginId" value="Set default home domain value"/>
  <property name="usageDescription" value="Set default home domain value if it is missing."/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnName" value="HOME_DOMAIN"/>
  <property name="defaultValue" value="World"/>
  <property name="onlyIfEmpty" value="true"/>
</bean>

Replacing Characters in a Text Column

Characters in every cell in a column can be replaced with other characters. The functionality is provided by the com.artifact_software.adt.plugin.transformation.InvalidCharacterReplacementTransformationImpl class. It replaces individual characters with the individual characters or strings specified in a map.

dataStoreName
The name of the data store containing the columns to scan.

columnNames
The list of names of the columns to scan.

characterReplacementMap
The map listing each character to replace and its replacement string.

This scans the DESCRIPTION and ABSTRACT columns to replace characters that would be invalid in the desired context.

<bean id="removeInvalidCharacters" class="com.artifact_software.adt.plugin.transformation.InvalidCharacterReplacementTransformationImpl">
  <property name="pluginId" value="Replace invalid characters in courses"/>
  <property name="usageDescription" value="Replace invalid characters."/>
  <property name="dataStoreName" value="courses"/>
  <property name="columnNames">
    <list>
      <value>ABSTRACT</value>
      <value>DESCRIPTION</value>
    </list>
  </property>
  <property name="characterReplacementMap">
    <map>
      <entry key="\" value=" "/>
      <entry key=";" value=" "/>
      <entry key="&amp;" value=" "/>
    </map>
  </property>
</bean>

Simple Mapping Transformation

A simple mapping transformation takes the value from one column (the key column) and uses it as a key against a list of key/value pairs in a mapping data store, inserting the corresponding value from the map into the target column. The simple mapping functionality is provided by the com.artifact_software.adt.plugin.transformation.SimpleMappingTransformationImpl class.

The following parameters can be provided to the simple mapping transformer:

dataStoreName
Name of the data store to be modified.

keyColumnName
The column whose value will be used as the key to match.

targetColumnName
The column whose value will be changed. The target column can be the same as the key column, in which case the original data in the column is overwritten with the mapped values.

mapDataStoreName
The name of the data store containing the map.

mapKeyColumn
The name of the column in the map data store to match against the key.

mapKeyValue
The name of the column in the map data store whose value is copied into targetColumnName when the value in mapKeyColumn matches the value in keyColumnName.

Example of a transformation of job titles in the employees data store to job types. It uses a data store "position_to_job_map" with 2 columns ("position" and "job_type") as the mapping table. The value of the "job_position" column in the "employees" data store is used to match the "position" column in the "position_to_job_map" data store, and the value in the "job_type" column of the map is put in the "job_type" column of the "employees" data store.

<bean id="jobTitleToJobTypeTransformation" class="com.artifact_software.adt.plugin.transformation.SimpleMappingTransformationImpl">
  <property name="pluginId" value="Map Job position"/>
  <property name="usageDescription" value="Job position to JobType Transformation"/>
  <property name="dataStoreName" value="employees"/>
  <property name="keyColumnName" value="job_position"/>
  <property name="targetColumnName" value="job_type"/>
  <property name="mapDataStoreName" value="position_to_job_map"/>
  <property name="mapKeyColumn" value="position"/>
  <property name="mapKeyValue" value="job_type"/>
</bean>

Multi-column Mapping Transformation

A multi-column mapping transformation takes the values from multiple columns (the key columns) and uses them together as a key against a parallel list of key columns in a mapping data store, inserting the corresponding value from the map into the target column. This is used where a number of fields in the data are required to determine the value of the target column. For example, a payroll file may use "division", "department" and "cost code" to describe someone's place in the organizational structure, but for the purpose of managing training a simpler "organization" structure is desired. The three columns in the original input can be used in conjunction with a spreadsheet that has 4 columns: "Division", "Department" and "Cost Code" as well as the "Organization Name".

Each person's codes will be looked up in this spreadsheet, and the "Organization Name" from the row where the codes match will be used to supply the "learning organization" in the data store.

The multi-column mapping functionality is provided by the com.artifact_software.adt.plugin.transformation.MultiColumnMappingTransformationImpl class. The following parameters can be provided:

dataStoreName
Name of the data store to be modified.

keyColumnList
The list of the columns whose values will be used as the key to match.

targetColumnName
The column whose value will be changed. The column must exist. If the mapping succeeds, any existing data will be overwritten. The target column can be the same as one of the key columns, in which case the original data in the column is overwritten with the mapped values.

mapDataStoreName
The name of the data store containing the map.

mapKeyColumnList
The list of names of the columns in the map to match. These must be specified in the same order as the corresponding keys in keyColumnList.

mapKeyValue
The name of the column in the map data store whose value is copied into targetColumnName when all the key columns match.

caseSensitive
If missing or true, the key comparison will be case sensitive.

onlyIfEmpty
If missing or false, the mapped values will overwrite any existing data in the target column. If true, any cell that already has data in it will not be overwritten with the mapped value.

reportUnmappedCells
If this is missing or true and there is no mapping that matches the criteria, the fact will be reported in the Error Log as an error. If this is false and there is no mapping that matches the criteria, the cell will be filled with the defaultValue or left unchanged, and nothing will be noted in the Error Log.

defaultValue
If this is specified and there is no mapping that matches the criteria, the target column will be set to this value. If it is not specified, the existing data will not be overwritten, even if empty.

Example of a transformation of job titles and organizations in the employees data store to job types. It uses a data store "title_and_organization_to_job_map" with 3 columns as the mapping table. The values of the "job_title" and "organization" columns in the employees data store are used to match the "map_job_title" and "map_organization" columns in the "title_and_organization_to_job_map" data store, and the value in the "map_job_type" column is put in the "job_type" column of the "employees" data store. If there is data in the target column it will be overwritten. If there are rows that cannot be mapped to values in the mapping table, this is not reported and the job_type is set to "unspecified".

<bean id="jobTitleToJobTypeTransformation" class="com.artifact_software.adt.plugin.transformation.MultiColumnMappingTransformationImpl">
  <property name="pluginId" value="Job Type mapping"/>
  <property name="usageDescription" value="Map Job title and organization to JobType Transformation"/>
  <property name="dataStoreName" value="employees"/>
  <property name="keyColumnList">
    <list>
      <value>job_title</value>
      <value>organization</value>
    </list>
  </property>
  <property name="targetColumnName" value="job_type"/>
  <property name="mapDataStoreName" value="title_and_organization_to_job_map"/>
  <property name="mapKeyColumnList">
    <list>
      <value>map_job_title</value>
      <value>map_organization</value>
    </list>
  </property>
  <property name="mapKeyValue" value="map_job_type"/>
  <property name="caseSensitive" value="false"/>
  <property name="onlyIfEmpty" value="false"/>
  <property name="reportUnmappedCells" value="false"/>
  <property name="defaultValue" value="unspecified"/>
</bean>

Reformatting a Date

Dates can be reformatted in each row in a column of the data store. Date format changes are done by the com.artifact_software.adt.plugin.transformation.DateReformatingTransformationImpl class. Frequently, dates in the source for a data store are formatted differently than what is required in the output. This plugin changes the format of a date field from one date format to another.

dataStoreName
The name of the data store containing the column to be changed.

columnName
The name of the column to be changed.

inputFormat
The current format of the date.

outputFormat
The format of the date after transformation.

blankIsOk
Optional. If true, it suppresses the warning when the date is blank. Default is false.

This changes the date in the HIREDATE column from day/month/year to year/month/day for each row of person_data.

<bean id="hireDateReformat" class="com.artifact_software.adt.plugin.transformation.DateReformatingTransformationImpl">
  <property name="pluginId" value="Fix hire date"/>
  <property name="usageDescription" value="Changes the hire date to be year, month, day."/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnName" value="HIREDATE"/>
  <property name="inputFormat" value="dd/MM/yyyy"/>
  <property name="outputFormat" value="yyyy/MM/dd"/>
</bean>

Changing Minutes to Hours and Minutes

Values in minutes can be changed to hours and minutes separated by a ":". This conversion is performed by the com.artifact_software.adt.plugin.transformation.MinutesToHoursTransformationImpl class. Fields where a time duration is specified in minutes can be changed to the HH:MM format, so that a duration of 90 minutes, for example, is expressed as one hour and thirty minutes.

dataStoreName
The name of the data store containing the column to be changed.

columnName
The name of the column to be changed.

This changes the text in the DURATION column from minutes to hours and minutes.

<bean id="durationReformat" class="com.artifact_software.adt.plugin.transformation.MinutesToHoursTransformationImpl">
  <property name="pluginId" value="Fix duration"/>
  <property name="usageDescription" value="Change the duration to be HH:MM."/>
  <property name="dataStoreName" value="courses"/>
  <property name="columnName" value="DURATION"/>
</bean>


Chapter 8: Filtering

Topics:
Filtering on single column

ADTransform data stores can be filtered to select or eliminate rows based on various criteria. Filters separate rows from a data store into other data stores depending on user-specified criteria.

Filtering on single column

This filter separates a data store into two data stores based on the evaluation of a single column. This basic form of filtering is provided by the com.artifact_software.adt.plugin.filtering.SingleColumnFilter class. It supports a number of very simple filters that work on a single column by comparing it with a specified value, another column or a system property, or by checking to see whether it is empty. Each filter is described by an operation name, a column name and an optional value.

operation
This filter supports seven operations:

matchvalue
Returns true if the column value matches the value specified.

matchcolumn
Returns true if the values of the two columns match.

matchproperty
Returns true if the value in the column matches the value in the named System Property.

isblank
Returns true if the column is blank.

contains
Returns true if the column contains the string specified in the value.

startswith
Returns true if the column starts with the string specified in the value.

endswith
Returns true if the column ends with the string specified in the value.

columnName
The name of the column to be evaluated.

value
The string to be used in value comparisons, or the name of the System Property to be used when matching the value contained in a property.

The following example filters the "organizations" data store to produce a data store of US organizations and a data store of non-US organizations, based on the value in the "country" column. A second sketch showing a filter that uses the isblank operation follows the example.

<bean id="filterOnCountry" class="com.artifact_software.adt.plugin.filtering.SingleColumnFilter">
  <property name="pluginId" value="Separate US organizations"/>
  <property name="usageDescription" value="Separate US organizations from non-US."/>

  <property name="dataStoreName" value="organizations"/>
  <property name="passDataStoreName" value="us_organizations"/>
  <property name="failDataStoreName" value="other_organizations"/>
  <property name="filter">
    <bean id="countryFilter" class="com.artifact_software.adt.plugin.filtering.SingleColumnFilterEvaluatorImpl">
      <property name="operation" value="matchvalue"/>
      <property name="columnName" value="country"/>
      <property name="value" value="USA"/>
    </bean>
  </property>
</bean>
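The next sketch uses the isblank operation to separate rows that are missing a manager from rows that have one. The bean ids, data store names and column name are hypothetical, and the omission of the value property for this operation is an assumption based on the property descriptions above.

<!-- Hypothetical sketch: rows where MANAGER is empty land in the "pass"
     data store; all other rows land in the "fail" data store. -->
<bean id="filterOnMissingManager" class="com.artifact_software.adt.plugin.filtering.SingleColumnFilter">
  <property name="pluginId" value="Separate people without managers"/>
  <property name="usageDescription" value="Find rows where MANAGER is blank."/>
  <property name="dataStoreName" value="person_data"/>
  <property name="passDataStoreName" value="people_without_managers"/>
  <property name="failDataStoreName" value="people_with_managers"/>
  <property name="filter">
    <bean id="managerBlankFilter" class="com.artifact_software.adt.plugin.filtering.SingleColumnFilterEvaluatorImpl">
      <property name="operation" value="isblank"/>
      <property name="columnName" value="MANAGER"/>
    </bean>
  </property>
</bean>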


Chapter 9: Validation

Topics:
Configuring Validations
Validating the Type of a Field
Validating the Length of a Field
Validating the Relationship between Two Fields
Validating the Relationship between Two Numeric Values
Comparing Two Dates in a Data Store
Validating a Field using a Lookup
Validating Unique Values
Validating the Values of a Field

Data validation is an important feature of the ETVL process. Validating data ensures that the data was input properly and that transformations have worked properly. Each of the standard validation plugins performs a relatively simple type of validation.

Validation can also detect errors in the original files. These often occur when data is exported from a system in which the validity of certain data is not important for its proper functioning. For example, a payroll system may still operate correctly when the "employee manager" field refers to a person who no longer works for the organization; however, in a system that uses a team dashboard, it is important that every person can be assigned to a valid manager.

You can set plugins to continue trying to finish the validation, or to stop as soon as an invalid record is detected.

Related tasks
Configuring Error Handling on page 31
If errors occur in steps in the process, you can have them ignored or have them cause the workflow to halt.

Configuring Validations

Each column of each row can be validated by as many plugins as required. The validation strategy is to break the validation of a single data store into as many simple checks as are required to achieve the overall validation of the data store. Typically each plugin performs a single type of validation, such as:

Table look-up
The values in a column of one data store have to exist in another.

Unique value
Each row in a column must have a unique value. Good for information used as a key, such as employee ids.

Constrained values
The values in a column must be one of the values found in a list. Good for things like states, timezones and currency codes.

Valid dates
The values must be dates.

Date comparison
One date must be less than or equal to another; for example, the hiring date must be before the termination date.

Field length
The field must not be longer than the length specified.

Each validation plugin is designed to perform a simple validation. This makes the plugins easy to write, with error messages that are very specific and detailed. Even when the fields in a column are not all required or otherwise validated, you can confirm that your column names are correct by adding a simple validation, such as a field length test, on each column. This will give an error if the column is missing or if the column name is spelled incorrectly in the header of the original input file; a sketch of this technique appears at the end of this chapter. The validation of a complete set of data files could contain dozens of validations.

Validating the Type of a Field

This is used when you need to validate that a string value can be converted into an acceptable date, boolean or number data type. When the dataType is "date", the date is verified according to the pattern supplied. This ensures that it can be read as expected in other systems. When the dataType is set to "boolean", the plugin compares the input string to either "true" or "false" using a case-insensitive test. When the dataType is set to "number", the string will be checked to see if it is a valid number.

dataStoreName
The name of the data store to be checked.

columnName
The name of the column to check.

dataType
The type of data that is supposed to be in the column. Must be "date", "boolean" or "number".

format
The format used to parse the date. This is only used where dataType is "date".

blankIsOk
Optional. If true, the validation is skipped when the field is blank. Default is false.

This example checks that the HIRED_ON column values are valid date entries.

<bean id="internalPerson-validation17" class="com.artifact_software.adt.plugin.validation.FieldDataTypeValidator">
  <property name="pluginId" value="HIRED_ON Date validator"/>
  <property name="usageDescription" value="Checks HIRED_ON has valid dates"/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnName" value="HIRED_ON"/>
  <property name="dataType" value="date"/>
  <property name="format" value="yyyy-MM-dd"/>
</bean>

Validating the Length of a Field

This validates that a string does not exceed a maximum length and is not shorter than a minimum length. The plugin compares the length of the value found in the column of the data store against the minLength and maxLength properties. If the length of the value is greater than or equal to the minimum length and less than or equal to the maximum length, the validation is considered successful. By configuring the minimum length to 1, this validator can also be used to validate that a required field has a value.

dataStoreName
The name of the data store.

columnName
The name of the column to check.

minLength
The minimum permitted length of a valid field.

maxLength
The maximum permitted length of a valid field.

This checks that the JOB_TITLE column has data in each row that is at least one character long and no more than 50.

<bean id="internalPerson-validation15" class="com.artifact_software.adt.plugin.validation.FieldLengthValidator">
  <property name="pluginId" value="JobTitle Length Validator"/>
  <property name="usageDescription" value="Checks that JOB_TITLE in person_data is between 1 and 50 characters long"/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnName" value="JOB_TITLE"/>
  <property name="minLength" value="1"/>
  <property name="maxLength" value="50"/>
</bean>

Validating the Relationship between Two Fields

This is used when you need to validate that two strings have an alphabetic relationship. It compares two columns in the same data store.

dataStoreName
The name of the data store.

columnName
The name of the column to check.

compareToColumnName
The name of the field that is to be used for comparison.

valueMustBeLower
If this is set to "true", the column to be checked must be lower than the "compare to" column. The default is true.

nullValuesAllowed
If true, the column can contain a null value.

isEqualOk
If true, the column can be equal to the "compare to" column.

This runs a sanity check to make sure that the department name is higher than the department prefix.

<bean id="internalPerson-validation14" class="com.artifact_software.adt.plugin.validation.FieldValueComparisonValidation">
  <property name="pluginId" value="Department prefix test"/>
  <property name="usageDescription" value="The department name must be higher than the department prefix."/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnName" value="DEPARTMENT"/>
  <property name="compareToColumnName" value="DEPARTMENT_PREFIX"/>
  <property name="valueMustBeLower" value="false"/>
  <property name="nullValuesAllowed" value="false"/>
  <property name="isEqualOk" value="false"/>
</bean>

Validating the Relationship between Two Numeric Values

This is used when you need to validate the relationship between two columns that contain numbers. It compares two numeric columns in the same data store.

dataStoreName
The name of the data store.

columnName
The name of the column to check.

compareToColumnName
The name of the field that is to be used for comparison.

valueMustBeLower
If this is set to "true", the column to be checked must be lower than the "compare to" column. The default is true.

nullValuesAllowed
If true, the column can contain a null value.

isEqualOk
If true, the column can be equal to the "compare to" column.

This runs a sanity check to make sure that the overtime rate is greater than the regular rate, which means the hourly rate being checked must be lower than the overtime rate it is compared with.

<bean id="internalPerson-validation14" class="com.artifact_software.adt.plugin.validation.FieldValueNumericComparisonValidation">
  <property name="pluginId" value="Hourly Rate check"/>
  <property name="usageDescription" value="Checks that the overtime rate is greater than the regular rate."/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnName" value="HOURLY_RATE"/>
  <property name="compareToColumnName" value="OVERTIME_RATE"/>
  <property name="valueMustBeLower" value="true"/>
  <property name="nullValuesAllowed" value="false"/>
  <property name="isEqualOk" value="false"/>
</bean>

Comparing Two Dates in a Data Store

This can validate that a date is earlier or later than another date field in the same data store. It can be used for checks such as whether the hire date is before the termination date.

dataStoreName
The name of the data store.

columnName
The name of the column to check.

columnDateFormat
The format of the date in the column to check.

compareToColumnName
The name of the field that is to be used for comparison.

compareToColumnDateFormat
The format of the date in the field that is to be used for comparison.

valueMustBeLower
If this is set to "true", the column to be checked must be lower than the "compare to" column. The default is true.

nullValuesAllowed
If true, the column can contain a null value.

isEqualOk
If true, the column can be equal to the "compare to" column.

This checks that the TERMINATION_DATE is after HIRED_ON and that the values are valid date entries. The termination date will be null if the person has not been terminated.

<bean id="internalPerson-validation15" class="com.artifact_software.adt.plugin.validation.FieldValueDateComparison">
  <property name="pluginId" value="Termination Date Validator"/>
  <property name="usageDescription" value="Checks if the TERMINATION_DATE is after HIRED_ON."/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnName" value="TERMINATION_DATE"/>
  <property name="columnDateFormat" value="dd/MM/yy"/>
  <property name="compareToColumnName" value="HIRED_ON"/>
  <property name="compareToColumnDateFormat" value="yyyy/MM/dd"/>
  <property name="valueMustBeLower" value="false"/>
  <property name="nullValuesAllowed" value="true"/>
  <property name="isEqualOk" value="true"/>
</bean>

Validating a Field using a Lookup

This validates that the values in a column in a data store can be found in another data store. It checks each row in the data store and verifies that the value in a particular column can be found in a specific column of another table.

dataStoreName
The name of the data store.

columnName
The name of the column to check.

dependencyDataStoreName
The name of the data store to use as the list of valid values. This can validate that codes are correct by verifying a field against a list of master codes. The lookup can even be run against the same data store, which allows the plugin to be used to check parent-child relationships

by verifying that the parent of each child record actually exists.

dependencyColumnName
The name of the field in the dependency data store that contains the valid values to be used for the check.

allowableExtraMappingValues
Values that are always matched, even if they are not in the lookup table. This is useful for matching defaults that are not in the lookup table, or for matching the top record in tree-structured data where the root record has no parent in the tree.

blankAllowed
If true, a blank (zero-length) field is allowed as a valid value. If missing or "false", blank fields are errors.

This example verifies that each organization has a parent organization in the same table. Top-level organizations will have a parent of "root", which does not exist in the table.

<bean id="organizations-validation6" class="com.artifact_software.adt.plugin.validation.InterObjectDependencyValidator">
  <property name="pluginId" value="Parent org exists"/>
  <property name="usageDescription" value="Validates that the organization has a valid parent organization"/>
  <property name="dataStoreName" value="organizations"/>
  <property name="columnName" value="PARENT_ORG"/>
  <property name="dependencyDataStoreName" value="organizations"/>
  <property name="dependencyColumnName" value="NAME"/>
  <property name="allowableExtraMappingValues">
    <list>
      <value>root</value>
    </list>
  </property>
  <property name="blankAllowed" value="true"/>
</bean>

Validating Unique Values

This validates that the value in a column is different in every row. The plugin inspects the values in a column and fails if it finds duplicates.

dataStoreName
The name of the data store.

columnName
The name of the column to check.

This example verifies that the organization name is unique in the "organizations" data store.

<bean id="organizations-validation" class="com.artifact_software.adt.plugin.validation.UniqueFieldValidator">
  <property name="pluginId" value="Organization name uniqueness validator"/>
  <property name="usageDescription" value="Validates that the organization name is unique."/>
  <property name="dataStoreName" value="organizations"/>
  <property name="columnName" value="NAME"/>
</bean>

Validating the Values of a Field

This is used when you need to validate that a field matches one of a list of values. The plugin compares the value of each field with the list of values provided. If the value matches any of the ones provided in the configuration, the row passes.

dataStoreName
The name of the data store.

columnName
The name of the column to check.

validValues
The list of valid values to match.

nullValueAllowed
If true, an empty field is also allowed.

caseSensitive
If missing or false, comparisons are not case sensitive. If true, they are.

This checks that the EMPLOYMENT_TYPE column contains either "Full Time", "Part Time" or "Contract".

<bean id="internalPerson-validation15" class="com.artifact_software.adt.plugin.validation.FieldLengthValidator">
  <property name="pluginId" value="Employment Type Validator"/>
  <property name="usageDescription" value="Checks that the EMPLOYMENT_TYPE column contains either Full Time, Part Time or Contract"/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnName" value="EMPLOYMENT_TYPE"/>
  <property name="nullValueAllowed" value="false"/>
  <property name="caseSensitive" value="false"/>
  <property name="validValues">
    <list>
      <value>Full Time</value>
      <value>Part Time</value>
      <value>Contract</value>
    </list>
  </property>
</bean>
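As mentioned under Configuring Validations, a simple length test can double as a check that a column exists and is correctly named in the input file. The following sketch assumes the FieldLengthValidator behaves as described above; the bean id and the deliberately generous maximum length are illustrative assumptions.

<!-- Hypothetical column-presence check: any value length from 0 up to a
     large maximum passes, so in practice this validation can only fail if
     the USERNAME column itself is missing or misnamed in the input header. -->
<bean id="usernameColumnPresenceCheck" class="com.artifact_software.adt.plugin.validation.FieldLengthValidator">
  <property name="pluginId" value="USERNAME column presence check"/>
  <property name="usageDescription" value="Reports an error if the USERNAME column is missing or misnamed."/>
  <property name="dataStoreName" value="person_data"/>
  <property name="columnName" value="USERNAME"/>
  <property name="minLength" value="0"/>
  <property name="maxLength" value="10000"/>
</bean>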


Chapter 10: Reporting

Topics:
A Report from a Single Data Store
HTML Reports

Reporting is an important part of most workflows. Both reports and charts can be created using the standard plugins. These include simple tabular reports, static hierarchical reports and dynamic hierarchical reports that can be interrogated in the browser. Reports can be e-mailed to one or more subscribers using the mailer plugin, or transferred to a web site, a remote site or a local folder in the same way as any file.

A Report from a Single Data Store

This allows a report to be made from a single data store. The reporting functions are provided using JasperReports, which offers a free report design program with a graphical user interface for designing reports. It also provides a set of libraries that are used at runtime to process rows from a data store into report details.

The following parameters are required:

dataStoreName
Name of the data store to be reported on.

reportTitle
Title of the report. It will be displayed if the report description calls for a title to be displayed.

reportFile
Filename of the compiled report template (.jasper file), which is produced using the Jaspersoft iReport Designer or Jaspersoft Studio. The report file needs to be installed in a place where it can be read by the simple report writer plugin. A "reports" sub-folder in the ADTransform installation folder is the recommended location.

outputFile
The file specification of the report file to be created.

Example of a simple report creating a nicely formatted version of the Audit Trail as a PDF file that can be mailed in a subsequent step.

<bean id="auditTrailPrint" class="com.artifact_software.adt.plugin.reporting.GenericJasperReport">
  <property name="pluginId" value="auditTrailPrint"/>
  <property name="usageDescription" value="Prints the audit trail"/>
  <property name="dataStoreName" value="audittrail"/>
  <property name="reportTitle" value="Audit Trail Report"/>
  <property name="reportFile" value="reports/audittrail.jasper"/>
  <property name="outputFile" value="logs/audittrail.pdf"/>
</bean>

HTML Reports

Tabular and hierarchical reports can be produced in HTML. These reports are web pages that can be displayed by modern web browsers. They can be viewed where they are produced, or uploaded to web sites either in subsequent ADTransform steps or in separate processes. One can create simple tabular reports as well as static or dynamic hierarchical reports.

The format of the reports is controlled by CSS stylesheets. This allows a lot of flexibility, including specifying and positioning a logo from an external file. The HTML is fully annotated using div elements and id and class attributes that can be referenced in CSS stylesheets. The samples include CSS files that demonstrate how the reports could be styled. One suggested strategy is to have a common.css file that specifies the display of the logo and report titles, as well as the formatting of common table elements.

[id=logo] {
  height: 39px;
  width: 336px;
  background-image: url('logo.png');
}

.headerrow {
  background-color: LightSteelBlue;
}

.oddrow {
  background-color: Azure;
}

.evenrow {
  background-color: LightGoldenrodYellow;
}

h1 {
  color: MidnightBlue;
  font-family: Arial;
  font-weight: bold;
}

td {
  padding: 10px;
}

Tabular HTML Reports

Tabular reports that have a row for each record in the data store are easy to create. One can create a simple tabular report by specifying the data store and a few properties. The format of the report is controlled by CSS stylesheets. Each report probably needs its own style sheet that imports the common CSS stylesheet for the styling that gives a common corporate branding. The report-specific sheet also provides a way to add highlights to individual columns:

@import "common.css";

.ID {
  color: red;
  font-weight: bold;
}

.NAME {
  color: red;
  font-weight: bold;
}

.SPLIT {
  font-weight: bold;
}

.PARENT_ORG {
  color: red;
  font-weight: bold;
}

.DEFAULT_CURRENCY {
  font-weight: bold;
}

.DESCRIPTION {
  color: blue;
}

The following parameters are required:

dataStoreName
Name of the data store being reported.

fileName
Filename of the report file to be created.

styleSheetName
The URL of the CSS stylesheet that the browser will use to format the report. This can be a file on the local workstation or a file on a web server.

reportTitle
Title of the report.

columnnames: The names of the columns that will appear on the report. If this is not specified, all the columns will be displayed.

<bean id="organizationsreport" class="com.artifact_software.adt.plugin.filemanagement.htmlfilewriter">
    <property name="pluginid" value="internalorganizationsreporter"/>
    <property name="filespecifications">
        <set>
            <bean id="organizationhtmlspec" class="com.artifact_software.adt.plugin.filemanagement.specification.htmlfilespecification">
                <property name="datastorename" value="organizations"/>
                <property name="filename" value="reports/organizations.html"/>
                <property name="stylesheetname" value="../config/organizations.css"/>
                <property name="reporttitle" value="Organization List"/>
                <property name="columnnames">
                    <list>
                        <value>ID</value>
                        <value>NAME</value>
                        <value>SPLIT</value>
                        <value>PARENT_ORG</value>
                        <value>DEFAULT_CURRENCY</value>
                        <value>DESCRIPTION</value>
                    </list>
                </property>
            </bean>
        </set>
    </property>
</bean>

Web Page with a Static Organization Chart

Hierarchical reports that show the parent/child relationships between records in the data store are easy to create. One can create a simple organization chart by specifying the data store, the output file and the columns that contain the parent and child names. This can be used to create organization charts or any chart that reflects the hierarchical structure of the data.

The format of the reports is controlled by CSS stylesheets. Each report probably does not need its own stylesheet, since the common CSS stylesheet that gives a common corporate branding will be adequate.

The following parameters are required.

datastorename: Name of the data store to be reported.
filename: Filename of the report file to be created.
stylesheetname: The URL of the CSS stylesheet that the browser will use to format the report. This can be a file on the local workstation or a file on a web server.
reporttitle: Title of the report.
parentcolumnname: The name of the column holding the name of the parent in the relationship.
entitycolumnname: The name of the column holding the name of the child in the relationship.
javascripturi: The location of the file containing the JavaScript that draws the chart. This will normally be "adtransform.artifact-software.com/scripts/viz.js" unless you have made a copy on your workstation or a local server.

<bean id="organizationchart" class="com.artifact_software.adt.plugin.filemanagement.htmlchartfilewriter">
    <property name="pluginid" value="internalorganizationchart"/>
    <property name="filespecifications">
        <set>
            <bean id="organizationcharthtmlspec" class="com.artifact_software.adt.plugin.filemanagement.specification.htmlchartfilespecification">
                <property name="datastorename" value="organizations"/>
                <property name="filename" value="reports/organizationchart.html"/>
                <property name="stylesheetname" value="../config/organizations.css"/>
                <property name="reporttitle" value="Organization Chart"/>
                <property name="entitycolumnname" value="NAME"/>
                <property name="parentcolumnname" value="PARENT_ORG"/>
                <property name="javascripturi" value="adtransform.artifact-software.com/scripts/viz.js"/>
            </bean>
        </set>
    </property>
</bean>

Web Page with a Dynamic Organization Chart

Hierarchical reports that show the parent/child relationships between records in the data store are easy to create. The dynamic chart can be interrogated by the user to show the details of each of the records. This can be used to create organization charts or any chart that reflects the hierarchical structure of the data.

A dynamic organization chart is created by specifying the data store, the output file and the columns that contain the parent and child names. In addition, one needs to specify the data to be shown when the user selects an individual node and requests a detailed view.

The format of the reports is controlled by CSS stylesheets. Each report probably does not need its own stylesheet, since a common CSS stylesheet that gives a common corporate branding will be adequate.

The following parameters are required.

datastorename: Name of the data store to be reported.
filename: Filename of the report file to be created.
stylesheetname: The URL of the CSS stylesheet that the browser will use to format the report. This can be a file on the local workstation or a file on a web server.
reporttitle: Title of the report.
parentcolumnname: The name of the column holding the name of the parent in the relationship.
entitycolumnname: The name of the column holding the name of the child in the relationship.
javascripturi: The location of the file containing the JavaScript that draws the chart. This will normally be "adtransform.artifact-software.com/scripts/getorgchart/getorgchart.js" unless you have made a copy on your workstation or a local server.
labelcolumnnames: The names of the columns to be used as labels on the chart.
detailcolumnnames: The names of the columns to be provided as details when the user clicks on a node.

The example below also sets organizationrootname, which identifies the entity to be placed at the root of the chart.

<bean id="dynamicpersonnelchart" class="com.artifact_software.adt.plugin.filemanagement.simpledynamichtmlchartfilewriter">
    <property name="pluginid" value="internalpersonsdynamicchart"/>
    <property name="filespecifications">
        <set>
            <bean id="dynamicorganizationcharthtmlspec" class="com.artifact_software.adt.plugin.filemanagement.specification.simpledynamichtmlchartfilespecification">
                <property name="datastorename" value="PERSON_DATA"/>
                <property name="filename" value="outputreports/dynamicpersonnelchart.html"/>
                <property name="stylesheetname" value="getorgchart.css"/>
                <property name="reporttitle" value="Personnel Chart"/>
                <property name="entitycolumnname" value="USERNAME"/>
                <property name="parentcolumnname" value="MANAGER"/>
                <property name="javascripturi" value="adtransform.artifact-software.com/scripts/getorgchart/getorgchart.js"/>
                <property name="labelcolumnnames">
                    <list>
                        <value>PERSON_NO</value>
                        <value>LNAME</value>
                        <value>FNAME</value>
                    </list>
                </property>
                <property name="detailcolumnnames">
                    <list>
                        <value>PERSON_NO</value>
                        <value>USERNAME</value>
                        <value>MANAGER</value>
                        <value>HIRED_ON</value>
                        <value>TERMINATED_ON</value>
                    </list>
                </property>
                <property name="organizationrootname" value="lsmogor"/>
            </bean>
        </set>
    </property>
</bean>


Chapter 11: Flow Control

Topics:
Flow Control

The order in which steps in a workflow are executed can be altered dynamically. The order specified by "phaseorder" governs the default sequence in which phases are executed, and the tasks within a phase are normally executed in the order in which they are specified. This can be altered dynamically through the flow control features of ADTransform. The ability to control the flow of a workflow gives ADTransform a great deal of flexibility in handling errors or special cases.

Flow Control

ADTransform workflows can be controlled through tests that can call sub-flows or jump to specific steps in the flow.

ADTransform normally processes plugins in the order specified. It is possible to alter the flow of control using standard plugins or API calls from plugins. The controller exposes functions that allow a plugin to specify the next plugin to run with either a "JumpTo" or a "Call". Execution will continue on from the new step. This allows ADTransform to execute different paths through the sequence of steps.

The "Call" can be used with a "Return" to allow a set of instructions to be injected into the middle of the flow. After the "Return" from the sub-flow, execution will carry on from the step after the one that made the call. For example, a plugin could check to see if a file existed and, if not, execute a series of steps to retrieve the file or create it.

There is also a "HaltNow" that permits the execution to end without coming to the end of the steps defined in the configuration files. This would normally be used with a "JumpTo" to execute a branch of instructions and then stop at the end. For example, a custom plugin could check to see if an exception file had been created and, if so, send it by e-mail and stop; if it did not exist, send the output files to a remote site and send out a successful completion notification as the last step.

A "Sleep" function and plugin allow ADTransform to be set up as a service that performs a sequence of actions and then sleeps for a period. When it wakes up, a "JumpTo" would normally be the command in the next step that takes it back to the correct step to restart the service. This could be used to create a service that downloads some data, creates a report and uploads the report to a website every 10 minutes. This would be more efficient than using a job scheduler to rerun the entire job when there are a number of files to be processed but only one has changed.
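To make the mechanism concrete, the following Java fragment sketches how a custom plugin might branch the flow. It is illustrative only: the WorkflowPlugin interface, the PluginContext type and the call() and jumpTo() methods are assumed names standing in for the actual ADTransform controller API, which is described in the developer documentation.

import java.io.File;

// Illustrative sketch only. WorkflowPlugin, PluginContext, call() and
// jumpTo() are hypothetical names standing in for the ADTransform API.
public class InputFileCheckPlugin implements WorkflowPlugin {

    public void execute(PluginContext context) {
        File input = new File("data/incoming/calls.csv");
        if (!input.exists()) {
            // "Call" a sub-flow that retrieves the file; after its "Return",
            // execution resumes at the step after this one.
            context.call("retrieveInputFile");
        }

        File exceptions = new File("logs/exceptions.csv");
        if (exceptions.exists()) {
            // "JumpTo" a branch that mails the exception file; a "HaltNow"
            // at the end of that branch stops the workflow.
            context.jumpTo("mailExceptionReport");
        }
    }
}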

Chapter 12: Fully Configured Workflows

Topics:
Fully Configured Implementations

Workflows are available for specific tasks. These are tested workflow configurations that each perform a specific task.

Fully Configured Implementations

A number of configurations can be used out-of-the-box to transform and validate common data formats. These can be used as-is or can be further customized to handle your unique requirements. They are described in individual manuals that are available with the configuration files from Artifact Software.

Chapter 13: Writing Custom Plugins

This chapter introduces the features that support the writing of plugins. The ADTransform API used to create custom plugins for inclusion in ADTransform workflows allows a Java programmer to extend the functionality without dealing with the low-level implementation details of how data is stored and how the workflow is managed. The Application Programming Interface provides objects that make it easy to access the rows and columns in the Data Stores. It also provides functions to make entries in the Error Log and the Audit Trail very easily. System Properties can also be written and read to allow information to be passed between plugins.
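As a rough illustration of the shape of a custom plugin, consider the sketch below. The ADTPlugin interface, the DataStore and Row accessors and the audit trail call are assumptions made for this example; the actual types, method names and signatures are defined by the ADTransform API documentation.

// Illustrative sketch only. ADTPlugin, PluginContext, DataStore, Row and
// the audit trail accessor are hypothetical names for the ADTransform API.
public class AudienceTypePlugin implements ADTPlugin {

    public void execute(PluginContext context) {
        // Fetch the data store produced by an earlier step in the workflow.
        DataStore people = context.getDataStore("PERSON_DATA");
        int updated = 0;

        for (Row row : people.getRows()) {
            // Apply a business rule to derive a new column value.
            String department = row.getValue("DEPARTMENT");
            if (department != null && department.startsWith("SALES")) {
                row.setValue("AUDIENCE_TYPE", "SalesStaff");
                updated++;
            }
        }

        // Record what was done so that the run can be audited later.
        context.getAuditTrail().log("AudienceTypePlugin set AUDIENCE_TYPE on "
                + updated + " rows");
    }
}

A plugin written this way never touches the underlying storage or workflow machinery directly; it sees only the rows and columns of the Data Stores, the logs and the System Properties exposed by the API.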


Appendix A: Appendix

Topics:
LMS Implementation Use Case
ADTransform for PBX Reporting

This appendix describes various use cases for ADTransform. You can consult this section when you need detailed information about a specific way of using ADTransform. Because of its flexible architecture, ADTransform can also be applied in many other areas; it can be used in any case where data needs to be extracted, transformed, validated and output. This includes:

HRIS to LMS Use Case: Maintenance of person data in an LMS by extracting current data from a Human Resource Information System (HRIS).
Transfer of training history from an LMS to an HRIS.
Bulk catalogue updates, where a content vendor supplies a catalogue update that needs to be transformed into a format that can be uploaded into an LMS.
History records or transcripts coming from an external supplier of training that need to be transformed into a format that can be uploaded into an LMS.
Certifications and other performance-related data generated within the LMS that can be processed into a format that can be uploaded into the HRIS system.
Transformation of the call log data from PBX systems to produce usage and capacity reports.
Extraction of data from a web service to produce a nicely formatted report, placing it on a server for later download.

LMS Implementation Use Case

Setting up the flow of people and organizational information in an LMS can be done easily with ADTransform. One of the first tasks in any LMS implementation is to set up the flow of people and organizational information from existing HRIS or manual systems. An LMS initially requires foundation data, which typically consists of:

People
Internal and External Organizations
Job Type definitions
Locations
Role definitions

Frequently during LMS implementations, the Human Resources Information System (HRIS) or Payroll data structure or content is not suitable for pushing out training, managing training activities or reporting. To deal with this, the import of Foundation Data (Organizations, Job Types, Locations, People, Roles), extracted from the HRIS, into LMS applications can require custom programming from the client's IT staff.

As a typical instance, problems arise when using Job Titles to generate Job Types. The breakdown of Job Titles does not meet the needs of competency management for any number of reasons: names are too broad or too specific, some types are redundant, or there are simply too many titles. They cannot easily be matched with competencies or training needs. In this case, ADTransform can be used to map the Job Titles into the desired Job Types.

Similar needs frequently arise during the mapping of departmental structures to organizations. You may need to use department codes or cost code information to place a person into the organization structure that makes sense from a training or talent management point of view.

If you need to maintain Audience Types based on various information in the People data extracted from the HRIS, ADTransform can do this through a custom plugin that applies the appropriate rules and creates Audience Type membership entries in the People profile that is uploaded into the LMS.

In addition, a lot of time can be wasted testing the Foundation Data import. During testing of the import, failures due to data structures not being compliant with the format required by the LMS Import functions, or to data not being internally consistent, are frequent. The turnaround time for an entire upload can be quite long, depending on the amount of data to be uploaded and the scheduling mechanism for the processing of uploads in the LMS. To resolve any problems found during the LMS processing, one has to sift through dense and verbose log files to decipher the error codes, check the import files to uncover the cause of errors, make the required modifications and run the Import again until it succeeds.

The more concise and explicit validation messages produced by ADTransform facilitate and accelerate the automation of the validation and conversion process. ADTransform can also allow new data formats required by planned upgrades to the LMS to be tested prior to the upgrade, so that the transition to the new version does not disrupt the data flowing from the HRIS to the LMS.
ADTransform for PBX Reporting

Call Reporting requires integration of data from several sources. These include PBX call records, billing logs from telephone network services, and names and extensions from telephone directories.

Telephone systems usually generate detailed call records for each call made or received. This information can be used in a number of different analysis reports. For example, it can be used for chargeback or cost allocation at different organizational levels, capacity planning or usage monitoring. This often requires that the raw call data be enhanced with information from the phone directory, from a CRM system, from telecom provider logs, etc. The consolidated data may require additional transformations and validations before being sorted and reported in different ways in order to produce meaningful operational or managerial reports.

An ADTransform workflow can be constructed using various plugins to accomplish the required tasks. For example, an organization with 3 regional PBX systems might want to enhance the raw call data with the names of the people and departments making or receiving the calls, match up CRM customer and prospect information based on the external telephone numbers, and use the supplier file to match supplier information with external numbers. The organization might also want to identify the telecom provider for each of the external lines or trunks used. These enhanced call details could then be reported for each division, and a consolidated report by telecom supplier could also be created.
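If a case were not covered by the standard lookup plugins, the enrichment step could be written as a custom plugin along the lines of Chapter 13. The sketch below is purely illustrative, reusing the same hypothetical API names as the earlier example, and shows raw call rows being enhanced with names from a directory data store.

// Illustrative sketch only, reusing the hypothetical API names from the
// custom plugin chapter: ADTPlugin, PluginContext, DataStore and Row.
public class CallRecordEnricher implements ADTPlugin {

    public void execute(PluginContext context) {
        DataStore calls = context.getDataStore("PBX_CALLS");
        DataStore directory = context.getDataStore("PHONE_DIRECTORY");

        // Build an extension-to-name index from the directory data store.
        java.util.Map<String, String> names = new java.util.HashMap<>();
        for (Row entry : directory.getRows()) {
            names.put(entry.getValue("EXTENSION"), entry.getValue("NAME"));
        }

        // Enhance each raw call record with the caller's name, flagging
        // extensions that are missing from the directory for follow-up.
        for (Row call : calls.getRows()) {
            String name = names.get(call.getValue("EXTENSION"));
            if (name != null) {
                call.setValue("CALLER_NAME", name);
            } else {
                context.getErrorLog().log("No directory entry for extension "
                        + call.getValue("EXTENSION"));
            }
        }
    }
}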
