IBM SmartCloud Analytics - Log Analysis Version 1.1

Extending IBM SmartCloud Analytics - Log Analysis


Note
Before using this information and the product it supports, read the information in "Notices".

Edition notice
This edition applies to IBM SmartCloud Analytics - Log Analysis and to all subsequent releases and modifications until otherwise indicated in new editions.

References in content to IBM products, software, programs, services, or associated technologies do not imply that they will be available in all countries in which IBM operates. Content, including any plans contained in content, may change at any time at IBM's sole discretion, based on market opportunities or other factors, and is not intended to be a commitment to future content, including product or feature availability, in any way. Statements regarding IBM's future direction or intent are subject to change or withdrawal without notice and represent goals and objectives only. Refer to the developerWorks terms of use for more information.

© Copyright IBM Corporation
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

About this publication
    Audience
    Publications
        Accessing terminology online
    Accessibility
    Tivoli technical training
    Providing feedback
    Conventions used in this publication
        Typeface conventions

Extending IBM SmartCloud Analytics - Log Analysis
    Overview
        Workflow for creating an Insight Pack
        Prerequisite knowledge
        Overview of IBM SmartCloud Analytics - Log Analysis extension options
    Customizing artifacts
        Custom annotations and splitters
        Custom Annotation Query Language (AQL) rules
        Using Java to create annotators and splitters
        Using Python to create annotators and splitters
        Indexing configuration
            Field configuration
            Data type configuration
        IBM Tivoli Log File Agent Configuration
            Configuring remote monitoring that uses the predefined configuration files
    Steps to create an Insight Pack
        Creating a custom Insight Pack
        Extending an existing Insight Pack
        Upgrading a custom Insight Pack
    Tools for extending IBM SmartCloud Analytics - Log Analysis
        Software prerequisites
        Installing the Insight Pack tooling
    Using the Eclipse tools to create Insight Pack artifacts
        Insight Pack project structure
        Completing the project Overview tab
        Creating an Insight Pack project in Eclipse
        Importing an Insight Pack
        Editing the index configuration
        Creating File Sets
        Creating Rule Sets
        Creating Source Types
        Creating Collections
        Building the Insight Pack Eclipse project
    Using the pkg_mgmt command to manage Insight Packs
        Displaying Insight Pack information
        Installing an Insight Pack
        Upgrading an Insight Pack
        Removing an Insight Pack
    Custom Apps
        Defining a Custom App
        Steps to create a Custom App
        Defining a search filter app
    Search REST API overview
        Query API for search
    Best practices information
        Guidelines for developing AQL

Notices
    Trademarks


About this publication
This guide contains information about how to use IBM SmartCloud Analytics - Log Analysis.

Audience
This publication is for users of the IBM SmartCloud Analytics - Log Analysis product.

Publications
This section provides information about the IBM SmartCloud Analytics - Log Analysis publications. It describes how to access and order publications.

Accessing terminology online
The IBM Terminology Web site consolidates the terminology from IBM product libraries in one convenient location. You can access the Terminology Web site at the following Web address:

Accessibility
Accessibility features help users with a physical disability, such as restricted mobility or limited vision, to use software products successfully. In this release, the IBM SmartCloud Analytics - Log Analysis user interface does not meet all accessibility requirements.

Accessibility features
This information center, and its related publications, are accessibility-enabled. To meet this requirement, the user documentation in this information center is provided in HTML and PDF format, and descriptive text is provided for all documentation images.

Related accessibility information
You can view the publications for IBM SmartCloud Analytics - Log Analysis in Adobe Portable Document Format (PDF) using the Adobe Reader.

IBM and accessibility
For more information about the commitment that IBM has to accessibility, see the IBM Human Ability and Accessibility Center, which is at the following web address: (opens in a new browser window or tab)

Tivoli technical training
For Tivoli technical training information, refer to the IBM Tivoli Education Web site.

Providing feedback
We appreciate your comments and ask you to submit your feedback to the IBM SmartCloud Analytics - Log Analysis community.

Conventions used in this publication
This publication uses several conventions for special terms and actions, operating system-dependent commands and paths, and margin graphics.

Typeface conventions
This publication uses the following typeface conventions:

Bold
v Lowercase commands and mixed case commands that are otherwise difficult to distinguish from surrounding text
v Interface controls (check boxes, push buttons, radio buttons, spin buttons, fields, folders, icons, list boxes, items inside list boxes, multicolumn lists, containers, menu choices, menu names, tabs, property sheets), labels (such as Tip: and Operating system considerations:)
v Keywords and parameters in text

Italic
v Citations (examples: titles of publications, diskettes, and CDs)
v Words defined in text (example: a nonswitched line is called a point-to-point line)
v Emphasis of words and letters (words as words example: "Use the word that to introduce a restrictive clause."; letters as letters example: "The LUN address must start with the letter L.")
v New terms in text (except in a definition list): a view is a frame in a workspace that contains data.
v Variables and values you must provide: ... where myname represents ...

Monospace
v Examples and code examples
v File names, programming keywords, and other elements that are difficult to distinguish from surrounding text
v Message text and prompts addressed to the user
v Text that the user must type
v Values for arguments or command options

Extending IBM SmartCloud Analytics - Log Analysis
This section describes how to extend the features of IBM SmartCloud Analytics - Log Analysis using the guidance and tools provided.

Overview
This section provides an overview of IBM SmartCloud Analytics - Log Analysis and outlines how you can extend it using the tools and techniques described in this guide. You can extend IBM SmartCloud Analytics - Log Analysis to ingest new log data and to develop custom applications to visualize the indexed data. A set of related artifacts for ingesting data or developing applications is packaged together as an installable package called an Insight Pack.

The information in this section is intended for developers who want to understand how to extend IBM SmartCloud Analytics - Log Analysis to provide support for a new log file source, to modify support for an existing log source, or to develop a custom application.

An Insight Pack is a set of artifacts that is packaged together to allow IBM SmartCloud Analytics - Log Analysis to ingest data or to develop custom applications. An Insight Pack contains a complete set of artifacts required to process a log source. You can install, uninstall, or upgrade an Insight Pack as a stand-alone package. The Insight Pack defines:
v The type of log data that is to be consumed.
v How data is annotated. The data is annotated to highlight relevant information.
v How the annotated data is indexed. The indexing process allows you to manipulate search results for better problem determination and diagnostics.
v How to render the data in a chart.

IBM SmartCloud Analytics - Log Analysis includes the following Insight Packs:

WebSphere Application Server Insight Pack
    This Insight Pack includes support for ingesting and performing metadata searches against WebSphere Application Server V7 and V8 log files.
DB2 Insight Pack
    This Insight Pack includes support for ingesting and performing metadata searches against the DB2 version 9.7 and 10.1 db2diag.log files.
Generic Annotator Insight Pack
    This Insight Pack is not specific to any particular log data type. It can be used to annotate tokens so that you can analyze log files for which a log-specific Insight Pack is not available.

Workflow for creating an Insight Pack
This topic outlines the steps that you must complete to create an Insight Pack.

Before you begin
Create a Log Source that uses the IBM SmartCloud Analytics - Log Analysis Generic Annotator Insight Pack to determine whether the default annotations provided by IBM SmartCloud Analytics - Log Analysis are sufficient to process your log file data. If the results are not sufficient for your requirements, you can develop an Insight Pack for your log file type by completing these steps:

Procedure
1. Acquire a representative sample of log files. Choose log files with as many different log record patterns as possible.
2. If you are using the IBM Tivoli Monitoring Log File Agent to push data to IBM SmartCloud Analytics - Log Analysis, create IBM Tivoli Monitoring Log File Agent configuration artifacts for the new data source.
3. Identify the log file record boundaries, patterns, and so on.
4. Identify fields for annotation within logical record patterns.
5. Use the Insight Pack tools to:
   a. Create and test Annotation Query Language (AQL) rules to split log file records and extract the relevant pieces of data that you want to index.
   b. Optional: Create custom logic to perform the split and annotate functions.
   c. Develop the index configuration, which describes the characteristics of the fields to be indexed.
   d. Create the administrative configuration artifact definitions that are installed with the Insight Pack.
   e. Generate the Insight Pack for testing.
6. Use IBM SmartCloud Analytics - Log Analysis to test that log records from the log file type are split, annotated, and indexed correctly.
7. Validate that the data is split, annotated, and indexed, and perform some searches on the indexed fields to verify the results.

Prerequisite knowledge
To create an IBM SmartCloud Analytics - Log Analysis Insight Pack, you must have knowledge and experience in a number of areas. This topic describes the prerequisite skills and knowledge required to develop an Insight Pack.

Before you begin, ensure that you understand the use and workflows for IBM SmartCloud Analytics - Log Analysis. In particular, ensure that you understand how to:
v Configure IBM SmartCloud Analytics - Log Analysis using the Administrative Settings user interface.
v Configure the IBM Tivoli Monitoring Log File Agent, including understanding how to create regular expressions to control the log file records that are sent to IBM SmartCloud Analytics - Log Analysis. Alternatively, configure the REST data collector client.

In addition to these topics, knowledge of one or more of the following might be required:
v IBM InfoSphere BigInsights 2.0 tools for Eclipse
v Annotation Query Language (AQL)
v JavaScript Object Notation (JSON)
v Java Database Connectivity (JDBC)

v Structured Query Language (SQL)
v Java
v Python
v Regular expressions

Note: You can use Java or Python as alternatives to AQL.

Overview of IBM SmartCloud Analytics - Log Analysis extension options
This section describes how data is consumed by IBM SmartCloud Analytics - Log Analysis, the processes that are used to consume the data, and the aspects of those processes that can be customized to create an Insight Pack.

Figure 1 illustrates the flow of data in IBM SmartCloud Analytics - Log Analysis and outlines the extension interfaces that you can use to develop an Insight Pack.

Figure 1. Overview of Insight Pack extension options

Data can be processed only after it has first been consumed by IBM SmartCloud Analytics - Log Analysis. Data can be consumed using one of the following:
v IBM Tivoli Monitoring Log File Agent
v IBM SmartCloud Analytics - Log Analysis Data collector client

Note: The WebSphere Application Server Insight Pack, which is installed when you install IBM SmartCloud Analytics - Log Analysis, is used to illustrate the topics in this guide.

IBM Tivoli Monitoring Log File Agent
The IBM Tivoli Monitoring Log File Agent reads log records and sends them, as Event Integration Facility (EIF) events, to the IBM SmartCloud Analytics - Log Analysis server. You can use multiple remote IBM Tivoli Monitoring Log File Agents to send data to an EIF Receiver running on the same machine as the IBM SmartCloud Analytics - Log Analysis server, or you can consume data using the IBM Tivoli Monitoring Log File Agent that is installed and running on the same machine as the IBM SmartCloud Analytics - Log Analysis server. In each scenario, the IBM Tivoli Monitoring Log File Agent formats the log data as an EIF event and sends it to the EIF Receiver. The EIF Receiver forwards this event to the Data collector on the IBM SmartCloud Analytics - Log Analysis server. You must configure the IBM Tivoli Monitoring Log File Agent EIF record format to include the data that is required by the IBM SmartCloud Analytics - Log Analysis server.

IBM SmartCloud Analytics - Log Analysis Data collector client
The Data collector is a client application that reads log records and sends them directly to the Data collector REST API that is provided by the IBM SmartCloud Analytics - Log Analysis server. You can use the Data collector client application that is provided when you install IBM SmartCloud Analytics - Log Analysis. This application reads a log file and sends the data to the IBM SmartCloud Analytics - Log Analysis server in multiple batches. The batch size can be configured to meet your requirements. You can also create your own client that invokes the Data collector REST API.

Annotation
As data is passed to IBM SmartCloud Analytics - Log Analysis for processing, it is annotated to extract information based on rules or other custom logic that has been developed specifically for the log Source Type. After the key information is extracted, the log record data is indexed using configuration attributes that you provide. These attributes indicate to the indexer how the data can be used in subsequent retrievals. After the data has been indexed, you can search the data to gain more insight for better problem determination.

Customizing artifacts
There are a number of interfaces in IBM SmartCloud Analytics - Log Analysis for which you can create Insight Pack artifacts to provide support for a new log Source Type.

Adding data
There are two ways in which data can be consumed by IBM SmartCloud Analytics - Log Analysis:

IBM Tivoli Monitoring Log File Agent
    When you are using the IBM Tivoli Monitoring Log File Agent to push log file data into IBM SmartCloud Analytics - Log Analysis, configuration files are required to format the Event Integration Facility (EIF) record that is sent to the EIF Receiver. These configuration files ensure that all of the data required by IBM SmartCloud Analytics - Log Analysis is present. The IBM Tivoli Monitoring Log File Agent format configuration can, if required, include a more restrictive expression that selectively passes log record data on to the EIF Receiver component. Include the default configurations for the IBM Tivoli Monitoring Log File Agent in your Insight Pack.

Annotation
    Annotation is the extraction of key pieces of data from unstructured or semi-structured input text. When you develop annotations for IBM SmartCloud Analytics - Log Analysis, you can use Annotation Query Language (AQL) rules, or custom Java or Python logic.

Split/Annotate
There are two steps to the annotation process: split and annotate. During the split stage, specific logic, comprising rules or custom logic, is invoked to determine the logical beginning and end of an input data record. For example, if the logic is written to split log records by timestamp, then all physical records without a timestamp that follow the first physical record with a timestamp are considered part of the current logical record until the next timestamp is detected. After a complete logical record has been established, it is forwarded to the annotate stage, where additional logic is executed. This additional logic annotates, or extracts, the key pieces of information that are to be indexed. The fields that are annotated and subsequently indexed are those that provide the most insight for searches or other higher-level operations performed on the indexed data.

AQL
    Annotation Query Language (AQL) rules can be used to split input data records based on a known boundary and to annotate data from each record so that the records can be indexed. AQL rules included in an Insight Pack are installed into the IBM SmartCloud Analytics - Log Analysis server when the Insight Pack is installed. Tools are provided to assist you with the development of AQL rules.

Custom
    You can write custom logic, in Java or Python, to perform the split and annotate functions. This is useful when you do not want to use or write AQL rules. You can include custom logic in an Insight Pack.

None
    You can choose to exclude split and annotation logic from your Insight Pack. If you choose this option, any data records processed by Collections defined in the Insight Pack are indexed based on the indexing configuration only. In this case, only free-form searches can be performed on the indexed data records.

Index configuration
To allow the fields extracted by the annotation logic to be indexed by IBM SmartCloud Analytics - Log Analysis, you must supply an indexing configuration. The index configuration determines what is indexed and how indexed data can be used in subsequent retrievals. After the data has been indexed, you can perform searches and other higher-level operations to gain greater insight into the data for better problem determination. Tools are provided to allow you to develop an indexing configuration.

Administrative configuration
IBM SmartCloud Analytics - Log Analysis provides a REST API to allow you to create configuration artifacts. As an Insight Pack developer, you can include definitions for various configuration artifacts such as Source Types, Collections, Rule Sets, and so on. These artifacts are created when the Insight Pack is installed. Tools are available to assist you with creating the configuration artifacts.

Custom annotations and splitters
To control how the system processes incoming log file records, you can define custom annotations and splitters for your Insight Pack.

Before IBM SmartCloud Analytics - Log Analysis indexes any data, it can split and annotate the incoming log file records. You can use either Annotation Query Language (AQL) rules or custom logic implemented using technologies such as Java or Python.

Splitting
Splitting describes how IBM SmartCloud Analytics - Log Analysis separates physical log file records into logical records using a logical boundary such as a time stamp or a new line. For example, when a timestamp is used as the logical boundary, all records after the beginning of the first detected timestamp are included in the logical record. The beginning of the next timestamp is used to end the logical record and to start the next logical record.

The logic used by a splitter to determine how to manage incoming data records must adhere to a schema that is required by IBM SmartCloud Analytics - Log Analysis. This is true for both AQL and custom logic splitters.

Splitter logic is used to process batches of records when a complete set of logical log records might not be included in a record batch. The splitter must process partial records that can occur at the start of the batch as well as at the end of the batch. A splitter must distinguish incoming data records that form a complete log record from records that it must buffer and mark as complete when additional records are added. It must also identify records that can be discarded, for example, records that the splitter determines are not going to be part of complete log records.

The splitter logic can process a batch of incoming records and must split them on the defined boundary. It returns split records with a type that indicates to IBM SmartCloud Analytics - Log Analysis how each record is handled. The general schema that is returned by the splitter contains the following attributes:

Log text
    The text that is contained in the log record after it is split.
Timestamp
    The timestamp, if there is one, that is associated with the log record.
Type
    The type is a single character, A, B, or C, that indicates the type of this log record. The possible types are as follows:
    v A: indicates a complete log record. The splitter logic determines that the associated record is complete. The record can be sent to the annotation and indexing processes. In the following example, the first record is a type A record and the second is a type B record, because the second record indicates to the splitter that the first record is complete:

      [9/21/12 14:31:13:117 GMT+05:30] e InternalGener I DSRA8203I: Database product name : DB2/LINUXX8664
      [9/21/12 14:31:13:119 GMT+05:30] e InternalGener I DSRA8204I: Database product version : SQL09070

    v B: indicates that there is a partial log record at the end of the set. For example, the splitter detects the start of a new logical record but cannot determine whether it is complete because the splitter cannot find the next logical record boundary that indicates the start of the next record. The splitter marks the record as type B to indicate to the IBM SmartCloud Analytics - Log Analysis server that this record is a partial record and that it must be buffered until more incoming records are received to complete the logical record. The IBM SmartCloud Analytics - Log Analysis server sends all type A log records for annotation and indexing. It buffers type B records. The buffered type B records are then prefixed to the next batch of input that is sent to the splitter when it receives more input records. For example:

      [9/21/12 14:31:27:882 GMT+05:30] servlet E com.ibm.ws.webcontainer.servlet.ServletWrapper service SRVE0068E: Uncaught exception created in one of the service methods of the servlet TradeAppServlet in application DayTrader2-EE5. Exception created : javax.servlet.ServletException: TradeServletAction.doLogout(...) exception logging out user uid:1
      at org.apache.geronimo.samples.daytrader.web.TradeServletAction.doLogout(TradeServletAction.java:458)
      at org.apache.geronimo.samples.daytrader.web.TradeAppServlet.performTask(TradeAppServlet.java:169)
      at org.apache.geronimo.samples.daytrader.web.TradeAppServlet.doGet(TradeAppServlet.java:78)

    v C: indicates that the text can be discarded. The IBM SmartCloud Analytics - Log Analysis server discards this text. This type of record is not sent for annotation and indexing, and it is not buffered. You must define the splitter so that it marks text as type C only when it is certain that the text is not part of an incomplete log record. For example, suppose a partial log record is detected at the beginning of a batch of records, and then a complete but unrelated logical log record is found. IBM SmartCloud Analytics - Log Analysis can never complete the partial record that was detected first, so the record must be marked as type C and discarded. For example:

      ************ Start Display Current Environment ************
      WebSphere Platform [ND r ] running with process name cldftp48node01cell\cldftp48node01\server1 and process id
      Host Operating System is Linux, version el5
      Java version = 1.6.0, Java Compiler = j9jit24, Java VM name = IBM J9 VM

Annotating
After the log records are split, the logical records are sent to the annotation engine. The engine uses rules that are written in AQL, or custom logic that is written in Java or Python, to extract important pieces of information that are sent to the indexing engine.

IBM SmartCloud Analytics - Log Analysis represents the results from the annotation process in a JavaScript Object Notation (JSON) data structure called annotations. The annotations JSON structure is part of a larger structure that also contains the original log record text (the content key) and the metadata passed into the Data Collector API (the metadata key). You can reference the annotations structure to access the actual values from the annotation result.

For more information, see the following example.

You can reference the annotation results in the source.paths attributes that are contained in the field definitions in the indexing configuration. You use dot notation to indicate where the values of the fields that are indexed are located in the annotations structure. For example, the annotation engine in IBM SmartCloud Analytics - Log Analysis generates the following JSON structure when it processes an AQL rule set against an incoming logical log record:

{
  "annotations" : {
    "annotatorcommon_eventtypeoutput" : [
      {
        "field_type" : "EventTypeWS",
        "span" : { "begin" : 57, "end" : 58, "text" : "E" },
        "text" : "E"
      }
    ],
    "annotatorcommon_logtimestamp" : [
      {
        "span" : { "begin" : 1, "end" : 32, "text" : "03/24/13 07:16:28:103 GMT+05:30" }
      }
    ],
    "annotatorcommon_msgidoutput" : [
      {
        "field_type" : "MsgId",
        "span" : { "begin" : 59, "end" : 68, "text" : "DSRA1120E" },
        "text" : "DSRA1120E"
      }
    ],
    "annotatorcommon_shortnameoutput" : [
      {
        "field_type" : "ShortnameWS",
        "span" : { "begin" : 43, "end" : 56, "text" : "TraceResponse" },
        "text" : "TraceResponse"
      }
    ],
    "annotatorcommon_threadidoutput" : [
      {
        "field_type" : "ThreadIDWS",
        "span" : { "begin" : 34, "end" : 42, "text" : " " },
        "text" : " "
      }
    ],
    "annotatorcommon_msgtext" : [
      {
        "fullmsg" : { "begin" : 59, "end" : 167, "text" : "DSRA1120E: Application did not explicitly close all handles to this Connection. Connection cannot be pooled." },
        "span" : { "begin" : 70, "end" : 167, "text" : "Application did not explicitly close all handles to this Connection. Connection cannot be pooled." }
      }
    ]
  },
  "content" : {
    "span" : { "begin" : 1, "end" : 169, "text" : "[03/24/13 07:16:28:103 GMT+05:30] TraceResponse E DSRA1120E: Application did not explicitly close all handles to this Connection. Connection cannot be pooled.\n" },
    "text" : "[03/24/13 07:16:28:103 GMT+05:30] TraceResponse E DSRA1120E: Application did not explicitly close all handles to this Connection. Connection cannot be pooled.\n"
  },
  "metadata" : {
    "batchsize" : "506",
    "flush" : true,
    "hostname" : "mylogfilehost",
    "inputtype" : "logs",
    "logpath" : "/data/unityadm/ibm/loganalyticsworkgroup/logsources/was/SystemOut.log",
    "logsource" : "WAS system out",
    "regex_class" : "AllRecords",
    "timestamp" : "03/24/13 07:16:28:103 GMT+05:30",
    "type" : "A"
  }
}

In the example, there are three main sections, or keys, that are defined in the JSON data structure:
v annotations: provides access to the annotation results that are created by the annotation engine when it processes an incoming log record according to AQL rules or custom logic.
v content: provides access to the raw logical log record.
v metadata: provides access to some of the metadata that describes the file that the log record was obtained from, for example, the host name or log source. In general, the metadata section contains any name/value pairs sent to the IBM SmartCloud Analytics - Log Analysis server from a client along with the log data.

When you create the indexing configuration, you can set the value of the sourcepaths attribute for each field to a dot notation reference to an attribute within the input JSON data structure. For example, to specify the text value for the annotated field MsgId from the previous example, use the following dot notation reference, which references the actual value DSRA1120E:

annotations.annotatorcommon_msgidoutput.text

The following reference produces the same result:

annotations.annotatorcommon_msgidoutput.span.text

In a similar manner, you can use dot notation references to the content and metadata keys for the sourcepaths attribute value of each field to be indexed. For example:

content.text
metadata.hostname

For more information, see Indexing configuration.

Custom Annotation Query Language (AQL) rules
You can define custom rules for splitting and annotating log records in AQL.

AQL is similar to Structured Query Language (SQL): data generated by executing AQL statements is stored in tuples, and a collection of tuples generated for a given statement forms a view, which is the basic AQL data model. All tuples for a given view must have the same schema. AQL is a feature of the IBM InfoSphere BigInsights platform. For more information, see the AQL overview topic in the IBM InfoSphere BigInsights information center.

You must be aware of the key concepts of AQL. Some of the key concepts are as follows:
v You must add a .aql extension to any file containing AQL statements.
v You can group related AQL files in the same directory on a file system. The directory then becomes an AQL module. Declare the module at the beginning of each .aql file. Then, when you want to reuse the same logic elsewhere, you can import the modules into other AQL files that are in a different directory.
v The text that is sent to the AQL engine in IBM SmartCloud Analytics - Log Analysis for annotation is represented in a specific view that is called Document. The Document view is populated by the engine when it runs. Each AQL statement can access this view and perform operations on it.
v The fields in an AQL tuple must belong to one of the built-in scalar types. The types are Boolean, Float, Integer, List, Span, String, and Text. The Span type represents a contiguous region of text in a text object that is identified by the beginning and ending positions. For examples, see Custom annotations and splitters.
v The following are some of the primary AQL language statements:
  - import, export, and module are used to create, share, and use modules.
  - create table is used to define static lookup tables to augment annotations with additional information.
  - create dictionary is used to define dictionaries that contain words or phrases. The dictionary is used to identify matching terms across input text through extract statements or predicate functions.
  - create view is used to create a view and to define the tuples inside that view.
  - create external view is used to specify additional metadata about a document as a new view. You can use this view alongside the predefined Document view that holds the textual and label content.
  - extract is used to extract useful data from text.
  - select provides a powerful mechanism for constructing and combining sets of tuples based on various specifications.
v AQL also has the following built-in functions that you can use in extraction rules:
  - Predicate functions such as Contains, Equals, and Follows.
  - Scalar functions such as GetLength, GetString, and LeftContext.
  - Aggregate functions such as Avg, Count, Min, and Max.

v You can also add user-defined functions (UDFs) to AQL. For more information, see the user-defined functions topic in the IBM InfoSphere BigInsights information center.

For examples of AQL statements, see the AQL files provided with each of the Insight Packs that are installed with IBM SmartCloud Analytics - Log Analysis. ThreadID.aql contains the views for annotating the thread ID field from a WebSphere log file. The ThreadID.aql file is located in the $UNITY_HOME/unity_content/WAS/WASInsightPack_v1.1.0/extractors/ruleset/annotatorcommon directory.

Requirements for a custom splitter in AQL
If you define your own splitter in AQL, you must name the AQL view LogRecord. You must also define the columns in the AQL view and the corresponding data types as outlined in the following table.

Table 1. LogRecord columns and data types

Column      Data type   Description
logspan     Span        The span of the input document that this log record represents.
logtext     String      The text of the log record.
timestamp   String      The time stamp, if there is any, that is associated with the log record. If the log record does not contain a time stamp, this entry contains an empty string.
type        String      A single character that denotes the type of the log record. The value for this entry is A, B, or C. For more detailed information about these values, see Custom annotations and splitters.

Tooling for custom AQL rules
You use the Eclipse-based tools that are provided by the IBM InfoSphere BigInsights platform to help you develop and test AQL rules. You can use the tools to import sample log file data, write AQL statements that extract the relevant information, and test the AQL statements before you install your custom Insight Pack on the IBM SmartCloud Analytics - Log Analysis server. For more information about how to install the tools, see Tools for extending IBM SmartCloud Analytics - Log Analysis.

Best practices
To help ensure that you write effective and reusable rules, read the best practices section of the documentation before you create your own AQL rules for IBM SmartCloud Analytics - Log Analysis. For more information, see Best practices information.

Reusable Insight Pack components
Common, reusable Annotation Query Language views and dictionaries are installed with the standard Insight Packs included with IBM SmartCloud Analytics - Log Analysis. You can save development time by copying and reusing these components in other Insight Packs.

Common AQL module
The Insight Packs for WebSphere Application Server, DB2, and Generic Annotations each contain a common AQL module containing AQL views and dictionaries that you can use in any Insight Pack. These views contain logic for annotating general concepts such as time and date, IP addresses, host names, and so on from incoming file data.

Some of the AQL files in the common module define functions that utilize User Defined Functions (UDFs), which are implemented in Java. JAR files that contain the UDF classes are also included within the common module. The UDFs expose capabilities through AQL functions for:
v date and time manipulation
v pattern matching
v string utility functions

The common AQL module, including the views, dictionaries, and UDF JAR files, is installed as part of each standard Insight Pack. For example, within the WebSphere Application Server Insight Pack, the common module is located at:

$UNITY_HOME/unity_content/WAS/WASInsightPack_v1.1.0/extractors/ruleset/common

Within the common module, all files ending with the extension .aql contain the AQL views and are located in the common directory.

Dictionaries
All of the dictionaries associated with the common module and referenced by the common module AQL views reside in the dicts subdirectory, and all of the UDF JAR files utilized by the common module AQL views reside in the lib subdirectory. Within the common module, the included dictionaries are the following:
v month.dict - dictionary of month names and abbreviations. See the file Date_BI.aql for an example of how the month dictionary is used within a view.
v timezone.dict - dictionary of timezone and time-related abbreviations. See MacrosForTimeDates.aql for an example of how the timezone dictionary is used within a view.
v tlds-alpha-by-domain.dict - dictionary of top-level domains. See HostName.aql for an example of how the top-level domains dictionary is used within a view.
v wkday.dict - dictionary of weekday names and abbreviations. See MacrosForTimeDates.aql for an example of how the weekday dictionary is used within a view.

Views
Examples of some of the AQL views included within the common AQL module are the following:
v DateTimeOutput (see DateTime-consolidation_BI.aql) - a view that contains date and time stamps extracted from input data. This view can process many different date and time formats based on the underlying and related views on which it was built.
v HostnameOutput (see HostName.aql) - a view that extracts host names that are either fully qualified or followed by a top-level domain name.
v IPAddressOutput (see IPAddress.aql) - a view that extracts IPv4 addresses.
v SingleLine (see logrecordsingleline.aql) - a view that extracts single lines delimited by the newline character from the input document.
v URLOutput (see url_bi.aql) - a view that extracts URLs that begin with https or ftps or that have no protocol.

UDFs
Examples of some of the AQL functions (that utilize UDFs) included within the common AQL module are the following:
v StrCat (see StringUtils.aql) - concatenates a given list of input strings and returns a single string.
v Matches (see PaternMatcherUtils.aql) - determines if a given input string matches any of a given set of patterns.

Reusing views
To reuse views, dictionaries, and functions from the "common" AQL module, do the following:
1. Create a new Insight Pack project using the Eclipse-based tooling.
2. Copy the common directory and everything within it from one of the existing Insight Packs to the src-files/extractors/ruleset directory within your Insight Pack project. After you copy the files, the common directory and its contents should reside under the ruleset directory as follows: src-files/extractors/ruleset/common
3. Utilize the views defined within the common AQL module from within your own AQL files in your project-specific AQL module by doing the following:
   a. Add an import statement at the top of your AQL file in your project-specific AQL module. For example: import module common;
   b. Use a qualifier when referencing the common AQL module views from within your AQL file in your project-specific AQL module. For example: select S.logSpan from common.SingleLine S;
4. Include the location for the common AQL module in your Insight Pack project rule set definition. For example, a rule set defined using the Eclipse tooling can have the following values:
   Name: MyProjectRuleSet
   Type: Annotate
   Rule file directory: extractors/ruleset/common;extractors/ruleset/myaqlmodule

Using Java to create annotators and splitters
You can use Java technology to split and annotate incoming log records.

About this task
You create Java classes that implement the IBM SmartCloud Analytics - Log Analysis interfaces used by the splitter and annotator functions. This method is an alternative to using Annotation Query Language (AQL) rules to create the log splitters and annotators.

Java interfaces for splitters and annotators
The Java interfaces that are included with IBM SmartCloud Analytics - Log Analysis are described here.

The implementation process for the Java-based splitters and annotators is:
1. Create Java classes that implement specific interfaces. You create one class to implement the splitter interface and one class to implement the annotator interface. The JAR file that contains the classes for each of these interfaces is installed with IBM SmartCloud Analytics - Log Analysis.
2. Import the interface JAR files into the Insight Pack project under the lib directory. The JAR files required for compiling are unity-data-ingestion.jar and JSON4J.jar. After successful compilation, the Java splitter and annotator implementation class files are packaged in a JAR file, which is included within the Insight Pack when it is exported from the tooling.
3. Use the pkg_mgmt script utility to install the Insight Pack into the IBM SmartCloud Analytics - Log Analysis server. During the installation, the pkg_mgmt utility copies the implementation JAR to the required location in the IBM SmartCloud Analytics - Log Analysis server.

Splitter interface
The Java splitter interface is defined as follows:

package com.ibm.tivoli.unity.splitterannotator.splitter;

/************************************************************************
 * This interface defines the APIs for Java based Splitters and is used
 * by third party custom Java Splitter developers
 ***********************************************************************/
public interface IJavaSplitter
{
    /******************************************************************
     * Split a batch of log records packaged in the input JSON
     * @param batch
     * @throws JavaSplitterException
     ******************************************************************/
    public ArrayList<JSONObject> split( JSONObject batch ) throws Exception ;

    /*****************************************************************
     * Data section
     *****************************************************************/
    public static final String IBM_COPYRIGHT =
          "Licensed Materials - Property of IBM\n"
        + "LK3T-3580\n"
        + "(C)Copyright IBM Corporation 2002.\n"
        + "All Rights Reserved.\n"
        + "US Government Users Restricted Rights - Use, duplication \n"
        + "or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.\n\n";
}

Input JSON
The input JSON is primarily a batch of raw log records that needs to be split into logical log records according to particular criteria (for example, a timestamp). The class implementing the IJavaSplitter interface provides the logic that performs the splitting for the given criteria. The basic structure of the incoming JSON object is:

{
  "content" : {
    "text" : "..."      // raw text to be split
  },
  "metadata" : {
    // meta data fields, for example hostname, logpath, and other fields passed from the client
  }
}

Output JSON
The class implementing IJavaSplitter must return an ArrayList of JSONObjects. Each JSONObject represents either a complete logical log record or a partial log record (for cases where the splitter was unable to determine that the record was complete), plus metadata that indicates whether the included record is complete. The basic structure of each output JSON object is:

{
  "content" : {
    "text" : "..."      // text for this complete or partial log record
  },
  "metadata" : {
    "type" : "..."      // A = complete log record
                        // B = partial log record at end
                        // C = partial log record at beginning
  },
  "annotations" : {
    "timestamp" : "..." // the timestamp for the record represented in this JSON object
  }
}

Annotator interface
The Java annotator interface is defined as follows:

package com.ibm.tivoli.unity.splitterannotator.annotator;

/************************************************************************
 * This interface defines the APIs for Java based Annotators and is used
 * by third party custom Java Annotator developers
 ***********************************************************************/
public interface IJavaAnnotator
{
    /*****************************************************************
     * Annotate the input log record & return the output with annotations
     * @param input
     * @throws JavaAnnotatorException
     *****************************************************************/
    public JSONObject annotate( JSONObject input ) throws Exception ;

    /*****************************************************************
     * Data section
     *****************************************************************/
    public static final String IBM_COPYRIGHT =
          "Licensed Materials - Property of IBM\n"
        + "LK3T-3580\n"
        + "(C)Copyright IBM Corporation 2002.\n"
        + "All Rights Reserved.\n"
        + "US Government Users Restricted Rights - Use, duplication \n"
        + "or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.\n\n";
}

Input JSON
The input JSON includes a logical log record (formed by the splitter, or a raw record if no split was performed) that is now ready for annotation. The class implementing the IJavaAnnotator interface provides the logic that performs the annotation against the given input record and creates an output JSONObject representing the JSON structure that contains the annotations. The basic structure of the incoming JSON object is:

{
  "content" : {
    "text" : "..."      // logical record to be annotated
  },
  "metadata" : {
    // meta data fields, for example hostname, logpath, and other fields passed from the client
  }
}

Output JSON
The class implementing IJavaAnnotator must return a single JSONObject representing a JSON data structure that contains the original data passed as input plus the annotated fields parsed from the incoming record. The following sample JSON structure depicts the format of the data that is expected to be returned in the object:

{
  "content" : {
    "text" : "..."      // same text as passed in the input JSON object
  },
  "metadata" : {
    // meta data fields, for example hostname, logpath, and other fields passed from the client
  },
  "annotations" : {
    // annotation fields and their values produced by the IJavaAnnotator implementation
  }
}
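As with the splitter, the annotation logic itself is specific to your log type. The following sketch is an illustration only, not product code: the package name, class name, annotation key msgIdOutput, and the message ID pattern are hypothetical, and it again assumes that JSONObject and JSONArray are the JSON4J classes, which can be used like java.util collections.

package com.example.annotator;   // hypothetical package

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import com.ibm.json.java.JSONArray;
import com.ibm.json.java.JSONObject;                                  // assumed JSON4J classes in JSON4J.jar
import com.ibm.tivoli.unity.splitterannotator.annotator.IJavaAnnotator;

public class MessageIdAnnotator implements IJavaAnnotator {

    // Illustrative pattern for WebSphere-style message IDs such as DSRA1120E.
    private static final Pattern MSG_ID = Pattern.compile("\\b([A-Z]{4,5}\\d{4}[EWI])\\b");

    @Override
    public JSONObject annotate(JSONObject input) throws Exception {
        JSONObject content = (JSONObject) input.get("content");
        String text = (String) content.get("text");

        JSONObject annotations = new JSONObject();

        Matcher matcher = MSG_ID.matcher(text);
        if (matcher.find()) {
            JSONObject span = new JSONObject();
            span.put("begin", matcher.start(1));
            span.put("end", matcher.end(1));
            span.put("text", matcher.group(1));

            JSONObject msgId = new JSONObject();
            msgId.put("field_type", "MsgId");
            msgId.put("span", span);
            msgId.put("text", matcher.group(1));

            JSONArray values = new JSONArray();
            values.add(msgId);
            annotations.put("msgIdOutput", values);   // illustrative annotation key
        }

        // Return the original content and metadata plus the new annotations key.
        input.put("annotations", annotations);
        return input;
    }
}

A field that is produced in this way can then be referenced from the indexing configuration with a dot notation source path such as annotations.msgIdOutput.text, in the same way as the product annotations shown earlier.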

Building splitters and annotators in Java
This topic describes how to build custom splitter and annotator classes in Java.

About this task
To build custom Java splitter and annotator classes, complete the following steps.

Procedure
1. Create an Insight Pack project.
2. Import the interface JAR files into the Insight Pack project in the lib directory. The JAR files required for compiling are unity-data-ingestion.jar and JSON4J.jar. These files are located at $UNITY_HOME/wlp/usr/servers/Unity/apps/Unity.war/WEB-INF/lib/unity-data-ingestion.jar and $UNITY_HOME/wlp/usr/servers/Unity/apps/Unity.war/WEB-INF/lib/JSON4J.jar.
3. Create your Java source files that implement the IJavaSplitter and IJavaAnnotator interfaces under the <project name>/src directory of your Insight Pack project.
4. Compile your class files and package them into a JAR file.
   Restriction: The JAR file that contains the custom Java splitter and annotator classes must reside in the <project name>/src-files/extractors/fileset/java directory. Otherwise, the JAR file does not install successfully when you install the Insight Pack on the server.
   Note: The JAR file containing the IJavaSplitter and IJavaAnnotator interfaces, as well as any other JAR files containing classes needed for compilation, must be located within the project under the <project name>/lib directory. These JAR files must be on the classpath for compilation to be successful. To resolve any workspace compilation errors within your Eclipse development environment, you can edit the properties for the Insight Pack project and add the JARs residing under <project name>/lib to the Java Build Path.
   To run the build file externally:
   a. Set the ANT_HOME variable: set ANT_HOME=<your home location for ANT>. The recommended version is Apache ANT version
   b. Set the JAVA_HOME variable: set JAVA_HOME=<your java SDK - home location>. Use the recommended version of the IBM Java SDK at version 1.7.0, which is the JRE installed with Log Analysis.
   c. From the directory in which the build file exists (for example, <workspace>/<project name>), issue the command: ant all
5. Using the Insight Pack editor within the tooling, create two file set definitions: one for the custom splitter and one for the custom annotator. To create a file set using the file set editor, do the following:
   a. Click Add to define a new file set.
   b. Enter a name for the file set (for example, Custom Splitter).
   c. Select the type (Split or Annotate).
   d. Select the file type (Java).

   e. Select the file name. You should see the name of the JAR file containing your custom Java splitter and annotator.
   f. Enter the class name corresponding to the type of file set (split or annotate). Include the full package name (for example, com.mycompany.splitter.MySplitter).
   Repeat this procedure twice: once to define the splitter file set and once to define the annotator file set.
6. Using the editors provided within the tooling, create the other artifacts that you want to include within your Insight Pack (Source Types, Collections, index configuration, and so on).
7. When you are ready to test your custom Java splitter and annotator functions, build an installable Insight Pack from the tooling, then transfer the generated archive file to an IBM SmartCloud Analytics - Log Analysis server and install it.

Using Python to create annotators and splitters
You can use Python technology to split and annotate incoming log records.

About this task
You create Python scripts that implement the IBM SmartCloud Analytics - Log Analysis interfaces used by the splitter and annotator functions. This method is an alternative to using Annotation Query Language (AQL) rules to create the log splitters and annotators.

Python interfaces for splitters and annotators
You can create log splitters and annotators using Python scripts with IBM SmartCloud Analytics - Log Analysis.

The implementation process for the Python-based splitters and annotators is:
1. Create Python scripts that implement specific interfaces. You create separate scripts: one for the splitter and one for the annotator. Create or copy the splitter and annotator scripts to the specific directory for an Insight Pack. When the Insight Pack is packaged and exported from the Log Analysis tooling, it contains the implementation scripts.
2. Use the pkg_mgmt script utility to install the Insight Pack into the IBM SmartCloud Analytics - Log Analysis server. During the installation, the implementation scripts are copied to the required location within the IBM SmartCloud Analytics - Log Analysis server.

Note: The input JSON and output JSON formats described here for the splitters and annotators are the same for both the Java and Python implementations; that is, the logical JSON format is the same for both Java and Python. The formats are included here for completeness. The key difference between Java and Python is how the input and output JSON is passed in and returned. For Java, the JSON data is passed in and returned using objects. For Python, the JSON data is passed in and returned using input and output files for splitters, and stdin and stdout for annotators.

Splitter interface
Use Python to define your log splitter.

Input JSON
The input JSON is primarily a batch of raw log records that needs to be split into logical log records according to particular criteria (for example, a timestamp). The log records are passed to the script using an input file. The basic structure of the incoming JSON data is:

{
  "content" : {
    "text" : "..."      // raw text to be split
  },
  "metadata" : {
    // meta data fields, for example hostname, logpath, and other fields passed from the client
  }
}

Output JSON
The splitter script must return output files in the required JSON format. Each JSON record represents either a complete logical log record or a partial log record (for cases where the splitter was unable to determine that the record was complete), plus metadata that indicates whether the included record is complete. The basic structure of the output files is:

{
  "content" : {
    "text" : "..."      // text for this complete or partial log record
  },
  "metadata" : {
    "type" : "..."      // A = complete log record
                        // B = partial log record at end
                        // C = partial log record at beginning
  },
  "annotations" : {
    "timestamp" : "..." // the timestamp for the record represented in this JSON structure
  }
}

Example splitter script
IBM SmartCloud Analytics - Log Analysis includes a sample splitter script here:

$UNITY_HOME/DataCollector/annotators/scripts/DB2PythonSplitter.py

The DB2PythonSplitter.py script splits the data within the db2diag.log. Develop the Python splitter script to process the input JSON and transfer the output JSON records to a file. You specify the file names when you invoke the splitter script. For example, the splitter script DB2PythonSplitter.py is invoked with the command:

python DB2PythonSplitter.py -i /opt/unitycontent/db2logbatch.json -o /opt/unitycontent/db2logbatchsplitout.json

where db2logbatch.json is the name of the input JSON file and db2logbatchsplitout.json is the name of the output JSON file.

Annotator interface
Use Python to define your log annotator using the input and output JSON records.

Input JSON
The input JSON includes a logical log record (formed by the splitter, or a raw record if no split was performed) that is ready for annotation. The log records are passed to the script using stdin, and the script creates a JSON data structure that contains the annotations and writes it to stdout. The basic structure of the incoming JSON structure is:

{
  "content" : {
    "text" : "..."      // logical record to be annotated
  },
  "metadata" : {
    // meta data fields, for example hostname, logpath, and other fields passed from the client
  }
}

Output JSON
The script implementing the annotator writes a single JSON data structure to stdout that contains the original data passed as input plus the annotated fields parsed from the incoming record. The following sample JSON structure depicts the format of the data that is expected to be written to stdout:

{
  "content" : {
    "text" : "..."      // same text as passed in the input JSON structure
  },
  "metadata" : {
    // meta data fields, for example hostname, logpath, and other fields passed from the client
  },
  "annotations" : {
    // annotation fields and their values produced by the Python script implementation
  }
}

Example annotator script
IBM SmartCloud Analytics - Log Analysis includes a sample annotator script here:

$UNITY_HOME/DataCollector/annotators/scripts/DB2PythonAnnotator.py

The DB2PythonAnnotator.py script annotates the data within the db2diag.log. Develop the Python annotator script to process the input JSON from stdin and transfer the output JSON records to stdout. For example, the annotator script DB2PythonAnnotator.py is invoked with the command:

python DB2PythonAnnotator.py

Building splitters and annotators in Python
This topic describes how to build custom splitter and annotator scripts with Python.

Before you begin
Install the tools for extending IBM SmartCloud Analytics - Log Analysis.

About this task
To build an Insight Pack that contains custom splitters and annotators implemented in Python, complete the following steps.

Procedure
1. Create a Log Analysis Insight Pack project.
2. Create your Python script files that implement the splitter and annotator functions under the <project name>/src-files/extractors/fileset/script directory of your Insight Pack project. The files must be located in this directory or they will not install successfully.
3. Using the Insight Pack editor within the tooling, create two file set definitions: one for the custom splitter and one for the custom annotator. To create a file set definition, open the File sets tab in the Insight Pack editor and complete the following steps:
   a. Click Add to define a new file set.
   b. Enter a name for the file set (for example, Custom Splitter).
   c. Select the type (Split or Annotate).
   d. Select the file type (Script).
   e. Select the file name. The scripts containing your custom Python splitter and annotator are listed in the drop-down list.
   Repeat this procedure twice: once to define the splitter file set and once to define the annotator file set.
4. Using the editors provided within the Log Analysis tooling, create the other artifacts that you want to include within your Insight Pack (Source Types, Collections, index configuration, and so on).
5. When you are ready to test your custom Python splitter and annotator functions, build an installable Insight Pack from the tooling, then transfer the generated archive file to an IBM SmartCloud Analytics - Log Analysis server and install it.

Indexing configuration
To control how IBM SmartCloud Analytics - Log Analysis indexes records from a log file, you can create indexing settings for your Insight Pack.

The indexing configuration settings specify the data type for each field that is indexed. The settings also specify a set of indexing attributes for each field. The index processing engine uses these attributes to define how a field is processed. One configuration is defined for each Source Type that is contained in an Insight Pack. For more information about Source Types, see the topic about Source Types in the IBM SmartCloud Analytics - Log Analysis Administration Guide.

The index configuration settings are defined in the JavaScript Object Notation (JSON) format. To edit the index configuration settings, use the Eclipse-based tooling that is provided with IBM SmartCloud Analytics - Log Analysis. For more information about how to edit the index configuration settings, see Editing the index configuration.

To edit the index configuration settings, use the Eclipse-based tooling that is provided with IBM SmartCloud Analytics - Log Analysis. For more information about how to edit the index configuration settings, see Editing the index configuration on page 43.

The indexing configuration specification consists of the following attributes:
v indexconfigmeta contains basic metadata about the indexing configuration itself. This information includes the following attributes:
  name specifies the name of the indexing configuration. For example, WAS SystemOut Config.
  description specifies the description of the indexing configuration. For example, WAS SystemOut indexing config.
  version specifies the version of the indexing configuration. For example, 1.0.
  lastmodified specifies the last modified date. For example, 01/11/2013.
v fields defines the field descriptions for each record to be indexed. IBM SmartCloud Analytics - Log Analysis uses the following field descriptions to define the data for each field that is indexed:
  fieldname specifies the name of the field to be indexed.
  datatype specifies the data type of the field to be indexed. The data type can be TEXT, LONG, DOUBLE, or DATE.
  indexingattributes are five attributes that contain Boolean values. IBM SmartCloud Analytics - Log Analysis uses the five attributes to indicate how the field is processed. The five attributes are:
  - retrievable
  - retrievebydefault
  - sortable
  - filterable
  - searchable
  For more information about field configuration, see Field configuration on page 26.

IBM SmartCloud Analytics - Log Analysis also uses an attribute that is called source during indexing. The source attribute is structured as follows:

{
  "indexconfigmeta" : { ... },
  "timezone" : "...",
  "fields" : {
    "<field name>" : {
      "datatype" : "<data type>",
      // list of indexing attributes such as sortable, searchable, and so on
      "source" : {
        "paths" : [json_path1, json_path2, ..., json_pathN],
        "dateformats" : [date_format1, date_format2],
        "combine" : // one of two possible values, ALL or FIRST
      }
    }
  }
}

The source attribute consists of three other attributes:

paths
  The paths attribute contains an array of one or more JSON path expressions.

dateformats
  The dateformats attribute is relevant only for fields that use the DATE type. It specifies format strings that determine how date values that are entered in this field are parsed.
  Attention: The number of elements in the array must be the same for both the paths and dateformats attributes.

combine
  The combine attribute determines how the values that are returned by the paths and dateformats attributes are used. The combine attribute has two possible values, ALL or FIRST. ALL is the default value.

If combine is set to ALL, all the non-null values from all the paths are added to the content of the corresponding field. This setting allows an index field to be populated from multiple attributes in the JSON record that you specify. For example, consider a scenario where you want to index all the host names that are associated with each record into a single indexed field. The host names can be part of the structured metadata that belongs to an incoming log record, or they can be extracted by analytics from a log message. For example, IBM SmartCloud Analytics - Log Analysis generates the following JSON structure after the annotation is complete:

{
  "logrecordid" : ...,
  "hostname" : "host1.ibm.com",
  "message" : "Server failed to ping host2.ibm.com and host3.ibm.com",
  "Annotations" : {
    "hosts" : [
      { "name" : "host2.ibm.com", "begin" : 22, "end" : 35 },
      { "name" : "host3.ibm.com", "begin" : 40, "end" : 53 }
    ]
  }
}

To ensure that the value for the field that is indexed includes both of the host names that are related to the annotated record, you use the following source attribute definition in the indexing configuration:

"source" : {
  "paths" : [ "hostname", "Annotations.hosts.name" ],
  "combine" : "ALL"
}

If combine is set to FIRST, the JSON path expressions are evaluated individually in the order in which they are listed in the array. The first path expression that returns a non-null and non-empty string value is used, and the subsequent expressions are ignored. If the first path expression that returns a non-null and non-empty string value returns multiple values, IBM SmartCloud Analytics - Log Analysis uses all of the values to populate the indexed field. For example, imagine that you want to index a field that stores the host names that are included in the log message. However, IBM SmartCloud Analytics - Log Analysis cannot extract the host name from some log records. In this case, you want to use the host name that is associated with the overall log record as a substitute. You use the following source attribute to do this:

"source" : {
  "paths" : [ "Annotations.hosts.name", "hostname" ],
  "combine" : "FIRST"
}
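The following sketch, written in Python, is intended only to make the ALL and FIRST behavior concrete; it is not the product's indexing engine. The evaluate_path helper and its simplified dot-separated path handling are invented for this illustration.

# Illustrative sketch of the combine semantics; not the product implementation.
# A "path" here is a simplified dot-separated key path that is evaluated over
# nested dicts and lists, for example "Annotations.hosts.name".

def evaluate_path(record, path):
    # Return the list of non-null, non-empty values found at the given path.
    values = [record]
    for key in path.split("."):
        next_values = []
        for value in values:
            if isinstance(value, list):
                next_values.extend(item.get(key) for item in value if isinstance(item, dict))
            elif isinstance(value, dict):
                next_values.append(value.get(key))
        values = [v for v in next_values if v not in (None, "")]
    return values

def combine_paths(record, paths, combine="ALL"):
    if combine == "ALL":
        # ALL: gather the non-null values from every path.
        result = []
        for path in paths:
            result.extend(evaluate_path(record, path))
        return result
    # FIRST: use the first path that yields any non-null, non-empty value.
    for path in paths:
        values = evaluate_path(record, path)
        if values:
            return values
    return []

record = {
    "hostname": "host1.ibm.com",
    "message": "Server failed to ping host2.ibm.com and host3.ibm.com",
    "Annotations": {"hosts": [{"name": "host2.ibm.com"}, {"name": "host3.ibm.com"}]},
}

print(combine_paths(record, ["hostname", "Annotations.hosts.name"], "ALL"))
# ['host1.ibm.com', 'host2.ibm.com', 'host3.ibm.com']
print(combine_paths(record, ["Annotations.hosts.name", "hostname"], "FIRST"))
# ['host2.ibm.com', 'host3.ibm.com']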

Example

The following example shows an abbreviated example of the indexing configuration for WebSphere Insight Pack:

{
  "indexconfigmeta" : {
    "description" : "Index Mapping Configuration for WAS SystemOut logs",
    "lastmodified" : "11/01/2013",
    "name" : "WAS SystemOut Config",
    "version" : "0.4"
  },
  "timezone" : "UTC",
  "fields" : {
    "classname" : {
      "datatype" : "TEXT",
      "filterable" : true,
      "retrievable" : true,
      "retrievebydefault" : true,
      "searchable" : true,
      "sortable" : false,
      "source" : {
        "paths" : [ "annotations.annotatorcommon_classnameoutput.span.text" ]
      },
      "tokenizer" : "literal"
    },
    "timestamp" : {
      "datatype" : "DATE",
      "filterable" : true,
      "retrievable" : true,
      "retrievebydefault" : true,
      "searchable" : true,
      "sortable" : true,
      "source" : {
        "combine" : "FIRST",
        "dateformats" : [ "MM/dd/yy HH:mm:ss:SSS Z", "MM/dd/yy HH:mm:ss:SSS Z" ],
        "paths" : [ "annotations.annotatorcommon_logtimestamp.span.text", "metadata.timestamp" ]
      },
      "tokenizer" : "literal"
    }
  }
}

Field configuration

IBM SmartCloud Analytics - Log Analysis uses the attributes that are listed in the table to configure individual fields during indexing. The indexing configuration is a file in the JavaScript Object Notation (JSON) format. The attributes are set up as the key-value pairs in the indexing configuration file and the resulting record is mapped to the appropriate field name. The JSON record key for each attribute is listed in the first column. The possible values that are associated with this key and default values that are used when the key is missing are shown in the second and third columns. The symbols true and false refer to the corresponding JSON Boolean values. All other values, unless otherwise specified, are JSON strings.

Table 2. Field configuration Attribute key Possible value Default Description datatype TEXT, LONG, DOUBLE and DATE TEXT Specifies the type of data that is stored in this field.

33 Table 2. Field configuration (continued) Attribute key Possible value Default Description retrievable true or false false Determines whether the contents of this field are stored for retrieval. When set to false, the content is not stored in the index. When set to true, the content is stored and available for retrieval. The retrievebydefault value controls how and when the content of this field is included in search results. retrievebydefault true or false false When set to true, the contents of the field is always returned as part of any search response. When set to false, the field is not part of the default response. However, when required, the content of the field can be explicitly requested using the appropriate parameters that are supported by the search run time. The retrieveable flag must be set to true for this attribute to work. sortable true or false false Enable or disable the field for sorting and range queries filterable true or false false Enable or disable facet counting and filtering on this field searchable true or false true Controls whether the field is enabled for searching/matching against it enablewildcard true or false false Controls whether the field is enabled for wildcard matching Extending IBM SmartCloud Analytics - Log Analysis 27
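The table above also defines the default value that applies when an attribute key is omitted from a field entry. The following short sketch only illustrates how those defaults fill in; the product applies them itself when it reads the index configuration, and the severity_field example is hypothetical.

# Illustrative sketch: apply the Table 2 defaults to a field configuration
# entry when attribute keys are omitted. Not part of the product tooling.
FIELD_DEFAULTS = {
    "datatype": "TEXT",
    "retrievable": False,
    "retrievebydefault": False,
    "sortable": False,
    "filterable": False,
    "searchable": True,
    "enablewildcard": False,
}

def with_defaults(field_config):
    # Return a copy of field_config with missing attributes set to the defaults.
    merged = dict(FIELD_DEFAULTS)
    merged.update(field_config)
    return merged

# A hypothetical field that sets only the attributes it cares about.
severity_field = {"datatype": "TEXT", "filterable": True, "retrievable": True}
print(with_defaults(severity_field))
# {'datatype': 'TEXT', 'retrievable': True, 'retrievebydefault': False,
#  'sortable': False, 'filterable': True, 'searchable': True, 'enablewildcard': False}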

34 Data type configuration You can include custom data type definitions in your custom Insight Pack. You can create data type configurations for each of the following entities: Collections You use a Collection to group log data from different log sources that have the same Source Type. The Collection definition depends on the Source Type definition that specifies how the IBM SmartCloud Analytics - Log Analysis Server splits, annotates, and indexes the incoming data records. You must define values for the following properties in the Collection definition: Name Specify a unique name that is used to identify the Collection Source Type Specify the name of the Source Type that is associated with the log records in the Collection Source Types A Source Type defines how data of a particular type is split, annotated, and indexed by IBM SmartCloud Analytics - Log Analysis. The Source Type specifies the Rule Sets and, if you want to implement custom processing, the File Sets that the IBM SmartCloud Analytics - Log Analysis Server uses to split and annotate the log records for the particular log Source Type. The Source Type specifies the index configuration settings that the IBM SmartCloud Analytics - Log Analysis uses to index the log records for the particular log Source Type. You must define values for the following properties in the Source Type definition: Name Specify a unique name that is used to identify the Source Type. Enable splitter Select this flag to enable the splitter function that splits the log records during processing. Splitter Rule Set name Specify the name of the Annotation Query Language (AQL) rule set that governs how log records are split. Splitter File Set name Specify the name of a file that you created that contains custom splitter logic that you defined, for example Java or Python script, that governs how log records are split. This is an alternative to the Rule Sets. Enable annotator Select this flag to enable the annotator function that annotates the log records during processing. Annotator Rule Set name Specify the name of AQL rule set used to perform annotator function. Annotator File Set name Specify the name of a file that you created that contains custom 28 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

35 annotator logic that you defined, for example Java Archive (JAR) or Python script, that governs how log records are annotated. This is an alternative to the Rule Sets. Deliver data on annotator execution failure Set this indicator to enable indexing even when the annotation fails. By default, indexing is stopped if the annotation fails. Index configuration Specify the name of index configuration JSON file that you use in your custom Insight Pack. Rule Sets A Rule Set is a collection of files that contain rules that are written in the Annotation Query Language (AQL). IBM SmartCloud Analytics - Log Analysis uses the AQL rules to split logical log records according to a known boundary or to extract the data from fields in log records that contain structured or semi-structured data. You must define the following properties in the Rule Set definition: Name Specify a unique name that is used to identify the Rule Set. Type Specify whether you want the Rule Set to split or annotate the log records. Rule file directory Specify the paths for the directories that contain the AQL rule files that the Rule Set uses. The paths must be relative to the src-files directory path that is defined in your custom Insight Pack. For example, extractors/ruleset/common;extractors/ruleset/ splittersystemout. File Sets A File Set is a collection of files that contain the custom logic that you defined to split or annotate log data. You can use either Java or Python to create the custom logic. You must define the following properties in the File Set definition: Name Specify a unique name that is used to identify the File Set. Type Specify whether the File Set is used to split or annotate data. File type Specify whether the file is Java or script. File name Specify the name of the file that contains the custom logic that you defined. For example, if you use Java, this file is a Java Archive (JAR) file. Class name If you use Java, specify the name of the main Java class name. Note: Data sources, such as log source definitions, are not defined as part of a custom Insight Pack because data sources require specific information, such as host name, log path, and service topology information that is dependent on the server and environment. This information varies depending on where IBM SmartCloud Extending IBM SmartCloud Analytics - Log Analysis 29

36 Analytics - Log Analysis is installed. As a result, when you define a custom Insight Pack, you only need to define data types such as Collections, Source Types, and Rule and File Sets. After you install your custom Insight Pack, you must define the required data sources. For more information about how to create data sources, see the Administering IBM SmartCloud Analytics - Log Analysis section. IBM Tivoli Log File Agent Configuration You can use either the IBM Tivoli 6.3 Log File Agent or the REST client in the data collector to load data into the IBM SmartCloud Analytics - Log Analysis. For detailed information about how to configure the loading of data into IBM SmartCloud Analytics - Log Analysis, see the topic about loading data into IBM SmartCloud Analytics - Log Analysis in the Configuring IBM SmartCloud Analytics - Log Analysis section. For more information about how to use the REST client to load data into IBM SmartCloud Analytics - Log Analysis, see the topic about using the REST client to load log file information in the Configuring IBM SmartCloud Analytics - Log Analysis section of the IBM SmartCloud Analytics - Log Analysis documentation. If you use the IBM Tivoli 6.3 Log File Agent to load data into the IBM SmartCloud Analytics - Log Analysis server, you must install the configuration files into the agent. This configuration ensures that the agent knows where the log files for a log source are located, how to process the records in the log file, and the server to which records are sent. When you define your custom Insight Pack, include the LFA configuration files in the lfa folder within the project. When you install the custom Insight Pack, the files are installed into the LFA that is installed with IBM SmartCloud Analytics. The files are installed in the../config/lo subdirectory under the root directory where the LFA is installed. For example, /home/unityadm/ibm/ LogAnalyticsWorkgroup/IBM-LFA-6.30/config/lo. The LFA configuration for a particular log source is defined in the following files: v A <name>.conf file that contains the properties that are used by the Log File Agent (LFA) for processing the log files. v A <name>.fmt file that contains an expression and format that is used by the agent to identify matching log file records and to identify the properties to include in the Event Integration Format (EIF) record. The EIF is sent from the agent to the receiving server. The receiving server is the server where the IBM SmartCloud Analytics server is installed. The <name>.fmt file uses a regular expression to determine matching records in the log file and to send each matching record to the IBM SmartCloud Analytics server in an EIF event. If you want to use the LFA to send your log files to IBM SmartCloud Analytics server, you must customize the regular expression and define your own stanza in the <name>.fmt file to capture the log records that are to be sent. The event record format must include the host name, file name, log path, and text message. The IBM SmartCloud Analytics server uses these values to process the logs. For more information about the IBM Tivoli 6.3 Log File Agent and the configuration files and properties, see Tivoli Log File Agent User's Guide. 30 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

The file names must be identical for both files. For example, lfawas.conf and lfawas.fmt.

LFA configuration file examples

The following example shows the files that are installed as part of the WebSphere Insight Pack that is included as standard with IBM SmartCloud Analytics - Log Analysis.

The lfawas.conf file contains many properties, including the following examples:

# Files to monitor. The single file /tmp/regextest.log, or any file like /tmp/foo-1.log or /tmp/foo-a.log.
LogSources=/home/unityadm/IBM/LogAnalyticsWorkgroup/logsources/was/*

# Our EIF receiver host and port.
ServerLocation=<EIF Receiver host name>
ServerPort=5529

The lfawas.fmt file contains the following regular expression that matches any record within a monitored log file. In this example, the regular expression matches all the log records in the file and sends each record to the SmartCloud Analytics server as an EIF event. The EIF event contains the host name where the agent is running, the file name of the log file, the log file path of the log file, and the log file record itself.

// Matches records for any Log file:
//
REGEX AllRecords
(.*)
hostname LABEL
-file FILENAME
logpath PRINTF("%s",file)
text $1
END

Configuring remote monitoring that uses the predefined configuration files

Before you can remotely monitor log files, you must modify the IBM Tivoli Monitoring Log File Agent (LFA) configuration files.

About this task

This procedure describes how to use the predefined files that are delivered with IBM SmartCloud Analytics. The files are in the UNITY_HOME/IBM/LogAnalyticsWorkgroup/IBM-LFA-6.30/config/lo directory. This directory includes configuration and format files for WebSphere Application Server, DB2, and the Generic Annotator Pack.

To enable remote monitoring, you edit the relevant LFA configuration files for your custom Insight Pack. These files use either the .fmt or .conf file formats. The following files are installed for the Generic Annotator Pack. For example, if you use these files in a custom Insight Pack, you must edit one or both of these files to enable remote monitoring:
v GAInsightPack-lfageneric.conf
v GAInsightPack-lfageneric.fmt

38 Procedure 1. Open the configuration file that you want to use for remote monitoring. 2. Define the following settings that are required for remote monitoring: LogSources Specify the Log Source that you want to monitor. If you are specifying multiple Log sources, they must be comma-separated and without spaces. SshAuthType You must set this value to either PASSWORD or PUBLICKEY. If you set this value to PASSWORD, IBM SmartCloud Analytics - Log Analysis uses the value that is entered for SshPassword as the password for Secure Shell (SSH) authentication with all remote systems. If you set this value to PUBLICKEY, IBM SmartCloud Analytics - Log Analysis uses the value that is entered for SshPassword as pass phrase that controls access to the private key file. SshHostList You use the SshHostList value to specify the hosts where the remotely monitored log files are generated. IBM SmartCloud Analytics - Log Analysis monitors all the log files that are specified in the LogSources or RegexLogSources statements in each remote system. If you specify the local machine as a value for this parameter, the LFA monitors the files directly on the local system. If you specify that the localhost SSH is not used to access the files on the system, IBM SmartCloud Analytics - Log Analysis reads the files directly. SshPassword If the value of the SshAuthType parameter is PASSWORD, enter the account password for the user that is specified in the SshUserid parameter as the value for the SshPassword parameter. If the value of the SshAuthType parameter is PUBLICKEY, enter the pass phrase that decrypts the private key that is specified in the SshPrivKeyfile parameter. SshPort You specify the TCP port that is used for SSH connections. If you do not enter anything, this value is defaulted to 22. SshPrivKeyfile If the value of the SshAuthType parameter is set to PUBLICKEY, enter the directory path to the file that contains the private key of the user that is specified in the SshUserid parameter as the value for this parameter. If the value of the SshAuthType parameter is not set to PUBLICKEY, this value is not required. SshPubKeyfile If the value of the SshAuthType parameter is set to PUBLICKEY, enter the directory path to the file that contains the public key of the user that is specified in the SshUserid parameter as the value for this parameter. If the value of the SshAuthType parameter is not set to PUBLICKEY, this value is not required. 32 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

39 SshUserid Enter the user name from the remote system that the agent uses for SSH authentication. 3. Save your changes. Example For example: =============== SshHostList=host1,host2,host3 SshUserid=loguser SshAuthType=PASSWORD SshPassword=<password> ===================== SshHostList=host1,host2,host3 SshUserid=loguser SshAuthType=PUBLICKEY SshPrivKeyfile = <SshUserid_Private_Key_File_Path> (Or) SshPubKeyfile = <SshUserid_Private_Key_File_Path> ====================== where <password> is the password that you want to use. <SshUserid_Private_Key_File_Path> is the full path for the file that contains the private key of the user that is specified in the SshUserid user. For example, if you save the password to a file called password.txt in the <UNITY_HOME>/utilities directory, the full path is as follows: SshPrivKeyfile = <UNITY_HOME>/utilities/password.txt Steps to create an Insight Pack This section describes the key concepts and requirements that you must know if you want to create your own Insight Pack. To create your own custom Insight Pack, you must complete the following steps: 1. If you want to create a new Insight Pack, see Creating a custom insight pack. 2. If you want to create a new Insight Pack based on an existing Insight Pack, see Extending an existing insight pack. 3. If you want to update an Insight Pack, see Updating a custom Insight Pack. Creating a custom Insight Pack Create a new Insight Pack. About this task To create a new insight pack, follow these steps. Procedure 1. Create an Insight Pack project. In the Log Analysis Tooling, click File > New Project. 2. Expand the Application Analytics node and select Insight Pack Project. Click Next. Extending IBM SmartCloud Analytics - Log Analysis 33

40 3. Enter a name for your project and click Finish. 4. Create rules or files used for splitting and annotating to meet your requirements. The AQL rules and Java or Python files are in the src-files/extractors directory. For more information about developing in Annotated Query Language (AQL), Java and Python, see Custom annotations and splitters. 5. Configure the index configuration for your Insight Pack. The index configuration is created automatically. You must configure it to meet your requirements. For more information about indexing configuration, see Editing the index configuration. 6. Create the additional artifacts that are required for your Insight Pack using the Insight Pack Editor. The artifacts are as follows: v Collection v Source Type v Rule Set v File Sets v Package.properties file v IBM Tivoli Monitoring Log File Agent files (optional) For more information about artifacts, see Using the Eclipse tools to create Insight Pack artifacts. 7. After you create the artifacts, click File > Save to save the project. 8. To build the Insight Pack, right-click the project in the Project explorer and select Build Insight Pack For more information about builds and build components, see Using the Eclipse tools to create Insight Pack artifacts. 9. Copy the Insight Pack archive from C:\<$ECLIPSE_WORKSPACE\<project_name>\ dist\<project_name_version>.zip to the $UNITY_HOME/unity_content/ <product>/ directory on your IBM SmartCloud Analytics - Log Analysis system. 10. Use the pkg_mgmt command to install the Insight Pack. For more information about the pkg_mgmt command, see Installing an Insight Pack. 11. To verify that the Insight Pack works as expected, create a log source on the UI and use your Insight Pack to ingest data. 12. If the Insight Pack does not work as expected, refer to Scenario 1 in Updating an Insight Pack. For more information, see Updating a custom Insight Pack. Extending an existing Insight Pack You can create new Insight Packs by extending an existing Insight Pack. About this task To create an Insight Pack by extending an existing Insight Pack, complete the following steps: Procedure 1. Create an Insight Pack project. In the Log Analysis Tooling UI, click File > New > Project. 2. Expand the Log Analysis node and select Insight Pack Project. Click Next. 3. Enter a name for your project. You can also specify a directory location if you want to, this setting is optional. Click Finish. 34 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

41 4. Import the Insight Pack you plan to use as the base for your custom Insight Pack into your Eclipse environment. Insight Pack zips can be found in the $UNITY_HOME/unity_content/<product> directory of the server. The rules and files are in the src-files/extractors directory. 5. Copy any AQL modules, Java files, or Python files used for splitting and annotating from the base Insight Pack project to your new custom Insight Pack project. Important: Do not update or overwrite an Insight Pack that was supplied to you by IBM as this action might cause errors in future upgrades to that Insight Pack. 6. Update any modules or files that are used for splitting and annotating to meet your requirements. The files are in the $UNITY_HOME/unity_content/<product>/ <insight_pack>/extractors directory. where $UNITY_HOME is the directory where you installed IBM SmartCloud Analytics - Log Analysis. For more information about developing custom annotations and splitters, see Custom annotations and splitters on page Configure the index configuration for your Insight Pack. The index configuration is created automatically. You must configure it to meet your requirements. For more information about indexing configuration, see Editing the index configuration on page To create any additional artifacts that you require for your Insight Pack, use the Insight Pack Editor. The possible artifacts are as follows: v Collections v Source types v Rule Sets v File Sets v Package.properties file v IBM Tivoli Monitoring Log File Agent configuration files (optional) For more information about artifacts, see Using the Eclipse tools to create Insight Pack artifacts on page After you create the artifacts, click File > Save to save the project. 10. To build the Insight Pack, right-click the project in the Project explorer and select Build Insight Pack For more information about builds and build components, see Building the Insight Pack Eclipse project on page Copy the Insight Pack archive file, <project_dir>/dist/ <project_name_version>.zip, tothe$unity_home/unity_content/<product>/ <insight_pack> directory. where <project_name_version> is the name of the project and <project_dir> is the directory to which you saved your project. $UNITY_HOME is the name of the directory where you installed IBM SmartCloud Analytics - Log Analysis is installed. For example: v On Windows operating systems, copy the C:/<project_dir>/dist/ <project_name_version>.zip to the $UNITY_HOME/unity_content/<product>/ <insight_pack> directory. Extending IBM SmartCloud Analytics - Log Analysis 35

42 v On Linux operating systems, copy the /<project_name>/dist/ <project_name_version>.zip file to the $UNITY_HOME/unity_content/ <product>/<insight_pack> directory. 12. Use the pkg_mgmt command to install the Insight Pack. For more information about the pkg_mgmt command, see Installing an Insight Pack on page To verify that the Insight Pack works as expected, use the IBM SmartCloud Analytics - Log Analysis UI to create a log source and use your Insight Pack to ingest data. If the Insight Pack does not work as expected, complete the steps for updating an Insight Pack, starting with step 2. For more information, see Upgrading a custom Insight Pack Upgrading a custom Insight Pack Upgrade a custom Insight Pack. About this task Do not upgrade an Insight Pack that was supplied to you by IBM as this action might cause errors in future upgrades to that Insight Pack. This topic outlines how to upgrade a custom Insight Pack that you have created. There are two scenarios where upgrading an Insight Pack may be required. The first scenario is a destructive upgrade where the Insight Pack collections and stored data are deleted. The benefit of following this path is that the limitations on what can upgraded are avoided. This scenario will probably be most valuable to a content developer who is actively developing an Insight pack. In this scenario, the old content pack should be uninstalled, then the new content pack should be installed. The second scenario is an in place upgrade where the Insight Pack collections and stored data are preserved. The benefit of this scenario is the data is preserved, but the negative is the limitations on upgrade restrict what changes can be made to the Insight Pack. This scenario will probably be most valuable to an administrator who is upgrading the insight pack in a production environment. In this scenario, the content developer must not modify any old collections or source types. They must create new source types and collections to accommodate the required changes. The new Insight Pack must be installed using the upgrade option. Scenario 1: Destructive upgrade Upgrade a custom Insight Pack in a destructive upgrade where the Insight Pack collections and stored data are deleted.. Procedure 1. Import the Insight Pack into Eclipse. 2. Update any rules or files used for splitting and annotating to meet your requirements. The AQL rules and Java or Python files are in the src-files/extractors directory. 3. To add or remove configuration artifacts, open the Insight Pack Editor. For each artifact, click the associated tab and make any edits that you require. 4. In this scenario you can upgrade any artifact. Old artifacts will first be uninstalled to avoid the limitations placed on the upgrade procedure. 36 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

43 5. After you complete any edits that you want to make, click File > Save. 6. To build the Insight Pack, right-click the project in the Project explorer and select Build Insight Pack 7. Copy the Insight Pack archive from C:\<$ECLIPSE_WORKSPACE\<project_name>\ dist\<project_name_version>.zip to the $UNITY_HOME/unity_content/ <product>/ directory on your IBM SmartCloud Analytics system. 8. Use the pkg_mgmt command to uninstall the old Insight Pack and install the new Insight Pack. 9. To verify that the Insight Pack works as expected, create a log source in the user interface and use your Insight Pack to ingest data. If the Insight Pack does not work as expected, repeat this procedure. Scenario 2: In place upgrade Upgrade a custom Insight Pack in an place upgrade where the Insight Pack collections and stored data are preserved. Procedure 1. Import the Insight Pack into Eclipse. 2. Update any rules or files used for splitting and annotating to meet your requirements. The AQL rules and Java or Python files are in the src-files/extractors directory. 3. To add or remove configuration artifacts, open the Insight Pack Editor. For each artifact, click the associated tab and make the changes that you require. In this scenario you should not edit Collections or Source Types. You can add Collections and Source Types to accommodate new features. 4. After you complete any edits that you want to make, click File > Save. 5. To build the Insight Pack, right-click the project in the Project explorer and select Build Insight Pack. 6. Copy the Insight Pack archive from C:\<$ECLIPSE_WORKSPACE\<project_name>\ dist\<project_name_version>.zip to the $UNITY_HOME/unity_content/ <product>/ directory on your IBM SmartCloud Analytics system. 7. Use the pkg_mgmt command to upgrade the Insight Pack. 8. To verify that the Insight Pack works as expected, create a log source in the user interface and use your Insight Pack to ingest data. If the Insight Pack does not work as expected, repeat this procedure. Tools for extending IBM SmartCloud Analytics - Log Analysis This section outlines the tools provided to allow you to create and update an IBM SmartCloud Analytics - Log Analysis Insight Pack. Software prerequisites This section describes the software prerequisites that you must install before you install the IBM SmartCloud Analytics - Log Analysis Eclipse tools. Use the IBM SmartCloud Analytics - Log Analysis Eclipse tools to create an Insight Pack. The software requirements for creating an Insight Pack are: v Download and install the Runtimes for Java Technology, Version from the IBM Fix Central website: options?selection=software%3bibm%2fwebsphere%3bibm %2fIBM+SDKs+for+Java+Technology%2fJava+Standard+Edition+%28Java+SE%29 Extending IBM SmartCloud Analytics - Log Analysis 37

44 v v v Download and install the Eclipse IDE for Java EE Developers Version from the Eclipse website: Download the IBM InfoSphere BigInsights Version 2.0 Eclipse tools. These tools are available from Passport Advantage: lotus/passportadvantage/ and are located in the IBM SmartCloud Analytics - Log Analysis Log Analysis Workgroup Edition Version 1.1 media package (CBB88EN). Download and install an Eclipse JSON editor plug-in. You can choose any editor you wish. Set up Java If you have more than one version of Java on machine where you want to install the plug-in, you must point the Eclipse to the correct version by modifying the eclipse.ini file. 1. Change to the directory where you installed Eclipse. For example: \Dev\Tools\Helios\eclipse\ 2. In a text editor, open the file: eclipse.ini. 3. Add the following lines just before -vmargs. These two entries must be on separate lines. -vm java_location\sdk\bin\javaw.exe 4. Save and close the file. Installing the Insight Pack tooling Before you can create an Insight Pack, you must install the Log Analysis Tooling plug-in into Eclipse. Before you begin Ensure that you downloaded and installed all of the prerequisite software. Copy the IBM SmartCloud Analytics - Log Analysis Eclipse plug-in from the $UNITY_HOME/unity_content/tools directory to your Insight Pack environment. This archive name that you require has the naming convention SmartCloudLogAnalyticsTooling_<version>.<timestamp>.zip. Uninstall the Eclipse Data Tools Platform (DTP) if you previously installed Version 1.9 or higher. Procedure To install the tooling plug-in: 1. Launch Eclipse. 2. Install the IBM InfoSphere BigInsights Version 2.0 Eclipse tools: a. From the menu bar, choose Help > Install New Software. b. In the Install dialog, click Add. c. You can install the IBM InfoSphere BigInsights Version 2.0 Eclipse tools from a.zip file or an ISO image. To install the IBM InfoSphere BigInsights Version 2.0 Eclipse tools from an ISO image, complete the following steps: 1) In the Add Repository dialog, type a name for the Repository and click Local. 38 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

45 2) Navigate to the location where you stored the IBM InfoSphere BigInsights Version 2.0 Eclipse tools. Select the IBM InfoSphere BigInsights Version 2.0 Eclipse tools and click OK. To install the IBM InfoSphere BigInsights Version 2.0 Eclipse tools from the.zip file, complete the following steps: 1) In the Add Repository dialog, type a name for the Repository and click Archive. 2) Navigate to the location where you stored the IBM InfoSphere BigInsights Version 2.0 Eclipse tools. This archive file name is BigInsightsEclipseTools.zip. Select the IBM InfoSphere BigInsights Version 2.0 Eclipse tools. Click Open and click OK. d. Click Select All and then Next. Complete the remaining steps in the wizard and allow the installation to complete. 3. Install the IBM SmartCloud Analytics - Log Analysis Eclipse plug-in: a. From the menu bar, choose Help > Install New Software. b. In the Install dialog, click Add. c. In the Add Repository dialog, type a name for the Repository and click Archive. d. Navigate to the location where you stored the IBM SmartCloud Analytics - Log Analysis Eclipse plug-in. Click Open and then click OK. e. Click Select All and then Next. Complete the remaining steps in the wizard and allow the installation to complete. f. Accept the option to allow Eclipse to restart. Results Eclipse installs the plug-in. Using the Eclipse tools to create Insight Pack artifacts After you have installed Log Analysis Tooling, you can then create the artifacts that form your Insight Pack. Insight Pack project structure When you create an Insight Pack project, Eclipse creates a directory structure to contain system and project files. The following directory structure is created: v project_name Contains the following files: build_filesetjar.xml: A sample ANT script from which you can create a script to compile your Java JAR file required when creating a Java-based File Set. indexconfig_spec.ucdk: The index configuration resource file. insightpack_spec.ucdkt: The Insight Pack configuration resource file. v project_name/src Any Java code generated during the Insight Pack creation process is written to this directory. v project_name/jre System Library Contains Java system files. Extending IBM SmartCloud Analytics - Log Analysis 39

46 v project_name/logsamples Location for sample log files. v project_name/metadata Contains the following files and directories: lfa: The location of the CONF and FMT files. A lfa.conf file contains the properties that are used by the Log File Agent (LFA) for processing the log files. The lfa.fmt file contains an expression and format that is used by the agent to identify matching log file records and to identify the properties to include in the Event Integration Format (EIF) record. The EIF is sent from the agent to the receiving server. The receiving server is the server where the Log Analysis Tooling server is installed. The lfa.fmt file uses a regular expression to match all the log records in the file and to send each record to an EIF event. collections.json: The generated collections file. filesets.json: The generated file sets file. indexconfig.json: The generated index configuration file. package.properties: Defines information used by the Insight Pack installer. rulesets.json: The generated rule sets file. sourcetypes.json: The generated Source Types file. v project_name/meta-inf Contains the MANIFEST.MF Insight Pack metadata file. v project_name/unity_apps Contains files and directories for Custom Applications. Custom Applications allow you to run your custom code within IBM Log Analytics Workgroup Edition. For more information about Custom Applications, see the Custom Applications topic. apps: Location for the Application runtime file (.app file). templates: Location for the Application template files. A template file defines a set of custom scripts. chartspecs: Location for the Application chart specification files. v project_name/src-files/extractors/fileset/java Location for the Java code files that define the splitter or annotator. This directory can also contain code that defines the splitter or annotator, or both. v project_name/src-files/extractors/fileset/script Location for scripts that define the splitter or annotator, or both. v project_name/src-files/extractors/ruleset/annotator Contains the main.aql module file. v project_name/src-files/extractors/ruleset/annotator/dicts Location for dictionary files used by the annotator. v project_name/src-files/extractors/ruleset/annotator/lib Location for libraries used by the annotator. v project_name/src-files/extractors/ruleset/annotator/tables Location for tables used by the annotator. v project_name/src-files/extractors/ruleset/splitter Contains the main.aql module file. v project_name/src-files/extractors/ruleset/splitter/dicts Location for dictionary files used by the splitter. 40 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

47 v v project_name/src-files/extractors/ruleset/splitter/lib Location for libraries used by the splitter. project_name/src-files/extractors/ruleset/splitter/tables Location for tables used by the splitter. When you install the Insight Pack, the project directory structure is replicated in IBM SmartCloud Analytics - Log Analysis. Completing the project Overview tab The Overview tab sets the project name, the project version, and the version of IBM SmartCloud Analytics - Log Analysis to which the Insight Pack applies. Before you begin Before you begin, you must create an Insight Pack project. Procedure 1. Open the Insight Pack editor. 2. Click the Overview tab. 3. Enter the General Information details: Name Enter a name for your Insight Pack. The value entered in this field is combined with the value in the version field to determine if the Insight Pack is currently installed. This field must not be longer than 30 characters and must not include special characters except underscores, full stops, and hypens. The name must not contain spaces. Version Enter a version for your Insight Pack. The value entered in this field is combined with the value in the name field to determine if the Insight Pack is currently installed. The value you enter must be a sequence of 3 integers separated with full stops. For example, Framework version Specify your IBM SmartCloud Analytics - Log Analysis version. The value you enter must be a sequence of 3 integers separated with full stops. For example, Click File > Save to save the overview details. Proceed to add any artifacts that you require. Creating an Insight Pack project in Eclipse Create an Eclipse project in which to build your Insight Pack. About this task Before you can create an Insight Pack, you must create an Eclipse project in which to build the Insight Pack. Procedure To create an Insight Pack project: 1. From the menu bar, choose File > New > Project. 2. In the New Project dialog, expand the Log Analysis type and select the Insight Pack Project wizard. Then, click Next. Extending IBM SmartCloud Analytics - Log Analysis 41

48 3. Enter a project name and location. If you want to enter a different location, clear the Use the default location checkbox. Enter the new location and choose the file system that you want to use. 4. Click Finish. 5. Optional: If you want to use an existing Java project as an Insight Pack project, right-click the project folder in the Navigator pane and choose Add Insight Pack Nature and from the menu. You can also choose Add BigInsights nature...? from the same menu if required. Results Eclipse creates an Insight Pack project. The Insight Pack Editor opens automatically. Note: If you applied the Add Insight Pack Nature to an existing Java project and the project includes Insight Pack JSON metadata under the project root, the metadata is imported automatically. Importing an Insight Pack Import an existing Insight Pack into the Eclipse workspace. About this task You can import an Insight Pack ZIP archive that was previously built in Eclipse with the Log Analysis Tooling plug-in or was built by another tool. Insight Pack archive naming: The archive file name contains the Insight Pack package name and the package version. The import uses the Insight Pack package name as the imported Eclipse project name. For example, an Insight Pack zip archive has the naming convention: my_insight_pack_v1.2.0.zip my_insight_pack is the Insight Pack package name. v1.2.0 is the package version. When you import the archive, the new project name is my_insight_pack. If you do not follow this Insight Pack naming convention when you create the archive, the import generates a random project name, for example ImportedProject If you want to rename the project after you import, right-click on the project name, and select Refactor -> Rename. The Rename option changes only the project name, not the files within the Insight Pack. To import an Insight Pack archive: Procedure 1. If the archive is not on your IBM SmartCloud Analytics - Log Analysis system, download the archive to your IBM SmartCloud Analytics system. 2. From the Eclipse workspace menu bar, choose File -> Import. 3. In the Import dialog, choose Log Analysis > Existing Insight Pack into Workspace For example choose: v For a project built by the Log Analysis tooling, choose Log Analysis > Existing Insight Pack into Workspace. 42 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

49 v For a project built outside of the tooling, choose an option such as: File -> Import -> General -> Archive file or File -> Import -> BigInsights -> Text Analytics Results There are numerous options on the import menu. 4. Click Next 5. In the Select dialog, click Browse to select the Insight Pack archive you want to import. You can also enter the full path of the archive by hand. 6. Click Finish to import the archive. If the import is successful, a new project is created in Eclipse workspace and the Insight Pack Editor opens automatically. If the import fails, an error message displays in the Problems tab at the bottom of the screen or in the console. If the Problems tab is not visible, go to Eclipse toolbar and choose Window -> Show View -> Problems to display the tab. Editing the index configuration Edit the basic metadata and add the indexing fields that comprise the index configuration for a project. Before you begin If you want to use an AQL module to define an index configuration, import the module before you start editing the index configuration. About this task When you create an Insight Pack project in Eclipse, the index configuration file (metadata\indexconfig.json) and the index configuration resource file (indexconfig_spec.ucdk) are automatically created. Use the Index Configuration Editor to edit the index configuration resource file. You can also create new index configuration instances and edit them in the Index Configuration Editor. Note: If you manually edit the metadata\indexconfig.json for a project that you have opened in the Log Analysis Tooling, any changes you make are not displayed and are overwritten by changes made within the Tooling. Procedure To edit the index configuration: 1. In the Eclipse Navigator pane, double-click the indexconfig_spec.ucdk file to start the Index Configuration Editor. Alternatively, right-click on the file and choose Open from the context menu. 2. In the Index Configuration Editor Overview tab, you can view the basic metadata for the default project Index Configuration. The metadata fields are represented as strings in the index configuration file. The following metadata fields are available: Extending IBM SmartCloud Analytics - Log Analysis 43

50 Table 3. Index configuration data Metadata JSON string Description Name name Required. Specifies the name of the index configuration. The default is <projectname>_indexconfig. Version version Required. Specifies the index configuration version and must be three digits. The default is Description description Required. Contains a description of the index configuration. The default description is Index configuration - <projectname>. 3. Optional: If you want to create another index configuration instance, click Add. You can also delete an instance by highlighting the instance name and clicking Remove. In the NewIndexConfig dialog, complete the basic metadata index configuration for the new instance and click Finish. 4. The Index Configuration Editor is also used to configure the fields within the Index Configuration instance. The following fields are required for all index configuration instances; they are created automatically, with default values, when a new index configuration instance is created. v timestamp v logrecord To add one or more indexing fields to the index configuration: a. Open the Field Configuration tab. If you have more than one Index configuration instance, choose the instance you want to edit from the Index configuration instances list. b. Click Add to add an indexing field to the index configuration. c. In the New Field Configuration dialog, select the field attributes that you require. When you select a check box to choose an attribute, the attribute is assigned the value true in the index configuration file. Unselected attributes are assigned the value false. The attributes are represented as strings in the index configuration file. The following attributes are available: Table 4. Index configuration attributes Attribute JSON string Description Name User-defined. Required. Specifies the field name. Data type datatype Specifies the field data type. You can choose one of the following data types: v TEXT v LONG v DOUBLE v DATE The default is TEXT. 44 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

51 Table 4. Index configuration attributes (continued) Attribute JSON string Description Retrievable retrievable Optional. Determines whether the contents of the field are stored in the index for later retrieval. When you select this attribute, the contents of the field are not directly searchable but they are returned within query results that match any other searchable field in a log record. The default is false. Retrieve by default retrievebydefault Optional. Determines whether the contents of the field are returned by default as part of any search response. This attribute is only available when you select the Retrievable attribute. If you do not select this attribute, the contents of the field are still returned in search responses when explicitly requested. The default is false. Sortable sortable Optional. Determines whether the contents of the field can be sorted and included in range queries. The default is false. Filterable filterable Optional. Determines whether facet counting and filtering is enabled for the contents of the field. The default is false. Searchable searchable Optional. Determines whether the contents of the field are returned by search queries. The default is true. d. When you are finished selecting attributes, click Next. e. Click Add to add source details for the field. You can also Edit or Remove source details. f. In the New Field Source dialog, enter the source path. Note: The paths must be prefixed with metadata, content. orannotations. For example, metadata.logsource or content.text You create the field path in the following format: annotations.<modulename>_<viewname>.<viewfieldname> where <modulename> is the AQL module name and <viewname> is the name of an output view in the AQL module. <viewfieldname> is a field in the output view. This field must be a text type. For example, consider the following AQL sample: module Unityannotator; create view UnityLog as extract regex /.*\d\d\s\[(.*)\]\s([a-z]*)\s*-\s*([a-za-z]*)\s*:\s*(.*)/ on D.text return group 1 as ThreadNo and group 2 as Severity and group 3 as msgclass and Extending IBM SmartCloud Analytics - Log Analysis 45

52 group 4 as Message from Document D; export view UnityLog; You create the following source paths for this AQL sample: annotations.unityannotator_unitylog.threadno.text annotations.unityannotator_unitylog.severity.text annotations.unityannotator_unitylog.msgclass.text annotations.unityannotator_unitylog.message.text In the above source paths, the AQL module is Unityannotator. The view name is UnityLog. The view field names are ThreadNo.text, Severity.text, msgclass.text, and Message.text. g. If you select Date as the data type, you must select a date format. The Date format field is only displayed for the Date data type. Any date format consistent with the Java Simple Date Format is valid. Examples of supported date formats include the following, but are not limited to these example formats: WebSphere Application Server logs date format [MM/dd/YY HH:MM:ss:SS z] Generic annotation log date format [YYYY/MM/dd HH:mm:ss,z] DB2 date format YYYY-MM-dd-HH-mm.ss.SSS Z For more information about date/time masks and format specifiers, see: %2Fcom.ibm.eglce.lr.doc%2Ftopics%2Fregl_core_date_format.html h. Click Finish. i. Choose a source combination. The available options are ALL and FIRST. The Combine field only becomes available when you have already created two or more sources. j. In the New Field Configuration dialog, click Finish. The field attributes are displayed in an Attributes pane in the Field Configuration tab. After you create a field, you can modify the attributes in the Attributes pane. 5. Create more indexing fields, if required. 6. To remove a field, select it from the list of fields in the Fields pane and click Remove. 7. Save the index configuration. Results The index configuration file is updated with the new metadata and field details. Creating File Sets You can display existing File Sets and create new ones in the Insight Pack editor. 46 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

53 Before you begin Before you create a File Set, you must complete the following prerequisite tasks: v You must create an Insight Pack project. If you use the Java fileset type, you must complete the following tasks: 1. Create the Java files in the src directory. 2. Create the Java Archive (JAR) file that contains the relevant compiled Java classes. Save the JAR file in the src-files/extractors/fileset/java directory. IBM SmartCloud Analytics - Log Analysis includes a sample Apache Ant build file that is called build_filesetjar.xml. This file is for reference only. You can find the file in the root of the project folder. The file is configured to create the HelloWorld.jar file from the src/com.ibm.tivoli.unity.content.helloworld.java file. The Apache Ant file compiles the classes into a build directory and builds the JAR file. For more information about Apache Ant, see If you use the Script file set type, create the script and save it in the src-files/extractors/fileset/script directory. About this task You use a File Set to define the criteria that are used to split or annotate a log record that belongs to a specified data type. Note: If you manually edit the metadata\filesets.json for a project that you have opened in the Log Analysis Tooling, any changes you make are not displayed and are overwritten by changes made within the Tooling. Procedure 1. Open the Insight Pack editor. 2. To open the File Set tab, click File Set. 3. To create a File Set, click Add and complete the following fields: a. Enter a name for the File Set. b. Select Split or Annotate from the Type list. c. Select a file type. You select either Java or Script. Java is the default value. d. Select a file name. If the file type is Java, select a.jar file. If the file type is script, select.py. e. If you select the Java file type, enter the class name. 4. Save the File Set. Creating Rule Sets You can display existing Rule Sets and create new ones in the Insight Pack editor. Before you begin Before you create a Rule Set, you must complete the following prerequisite tasks: v You must create an Insight Pack Eclipse project. v You must import the Annotated Query Language (AQL) rules and save them in the /src-files/extractors/ruleset directory. Extending IBM SmartCloud Analytics - Log Analysis 47

54 About this task You use a Rule Set to define the rules that are used to split or annotate a log record that belongs to a specified data type. Note: If you manually edit the metadata\rulesets.json for a project that you have opened in the Log Analysis Tooling, any changes you make are not displayed and are overwritten by changes made within the Tooling. Procedure 1. Open the Insight Pack editor. 2. To open the Rule Set tab, click Rule Set. 3. To create a Rule Set, click Add and complete the following fields: a. Enter a name for the Rule Set. b. Select Split or Annotate from the Type list. c. Enter a directory in the Rule file directory field. This directory denotes the path relative to the to the main AQL module and related modules located in the src-files directory. To delimit each module, add a semicolon (;). For example, you enter the following directory path to denote the splitter directory: extractors/ruleset/splitter 4. Save the Rule Set. Creating Source Types Source Types define how a particular kind of data is split, annotated, and indexed so that it can be searched using IBM SmartCloud Analytics - Log Analysis. You must create a Source Type before you can create a Log Source. Before you begin Before you create a Source Type, you must complete the following prerequisite tasks: v You must create an Insight Pack Eclipse project. v v You must also define the Rule Sets or File Sets that are used to split and annotate the Source Type that you are creating. You must define an index configuration. The index configuration determines how data of that Source Type is indexed. Index configuration is specified using JSON configuration notation. Note: If you manually edit the metadata\sourcetypes.json for a project that you have opened in the Log Analysis Tooling, any changes you make are not displayed and are overwritten by changes made within the Tooling. Procedure 1. Open the Insight Pack editor. 2. To open the Source Types tab, click Source Types. 3. To create a Source Type, click Add and complete the following fields: a. Enter a name for the Source Type. b. Select a splitter that you want to use. This list is populated with the Split Rule Sets and File Sets that you created previously. 48 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

   c. Select the annotator that you want to use. This list is populated with the Annotate Rule Sets and File Sets that you created previously.
   d. (Optional) Select the Post data on annotator execution failure option if you want data records that fail during annotation to be added.
   e. Select an index configuration for your Source Type from the Index config list.
4. Click Finish and then save the Source Type.

Creating Collections

Collections group together data from different Log Sources that have the same Source Type. For example, you might want to assign all the Log Sources for a WAS cluster into a single Collection so that you can search them as a group.

Before you begin

Before you create a Collection, you must complete the following prerequisite tasks:
v You must create an Insight Pack Eclipse project.
v You must also define the Source Type for the Collection that you are creating.

About this task

Note: If you manually edit the metadata\collections.json file for a project that you have opened in the Log Analysis Tooling, any changes you make are not displayed and are overwritten by changes made within the Tooling.

Procedure
1. Open the Insight Pack editor.
2. To open the Collections tab, click Collections.
3. Click Add and complete the following fields:
   a. Enter a name for the Collection.
   b. Select a Source Type.
4. Click Finish and then save the Collection.

Building the Insight Pack Eclipse project

Build an Insight Pack archive for an edited Eclipse project.

About this task

After you edit the contents of an Insight Pack Eclipse project, you must rebuild the Insight Pack.

Procedure

To build an Insight Pack for a modified project:
1. From the menu bar, click Window > Show View > Other > General > Console to turn on the Eclipse console before you build the package.
2. To open the Project Explorer, click Window > Show View > Project Explorer.
3. Select the Insight Pack project that you want to build, using one of these methods:
   v Click the project name in the Eclipse Project Explorer and click Build Insight Pack on the toolbar.

   v Right-click the project name in the Eclipse Project Explorer and select Build Insight Pack from the menu that appears.

Results

Eclipse creates an Insight Pack archive (a zip file) for the project. If the build is successful, the Eclipse Console displays a message such as:
Insight Pack build is successful. C:\ayu\projects\mycontentpackproject\dist\mycontentpackproject_v1.0.0.zip
If the build is not successful, the Eclipse Console displays error messages, for example when the *.json files in the metadata folder are not well formed or have syntax errors.

What to do next

After you create the package, copy the Insight Pack archive to a directory on your system. Install the archive as described in Installing an Insight Pack.

Using the pkg_mgmt command to manage Insight Packs

Use the pkg_mgmt command and parameters described in this section to manage your Insight Packs.

Displaying Insight Pack information

Use the pkg_mgmt command to list the artifacts in an Insight Pack. This includes artifacts installed with the Insight Pack and any additional artifacts, related to the Insight Pack, that you add after installation. You can also use this command to list all of the Insight Packs that you have installed.

Displaying Insight Pack contents

To list the contents of an Insight Pack, execute the pkg_mgmt command with these parameters:
$UNITY_HOME/utilities/pkg_mgmt.sh -list list_options insight_pack -U username -P password
where insight_pack is the path to the Insight Pack for which you want to list the contents. The options for the list parameter are:

all
   Lists all of the artifacts related to an Insight Pack
rulesets
   Lists all of the Rule Sets related to an Insight Pack
filesets
   Lists all of the File Sets related to an Insight Pack
sourcetypes
   Lists all of the Source Types related to an Insight Pack
collections
   Lists all of the Collections related to an Insight Pack
logsources
   Lists all of the Log Sources related to an Insight Pack

These additional parameters can be defined:

-U
   (Optional) The username for a user with administrative access rights. This parameter is not necessary if you have not changed the default unityadmin password.
-P
   (Optional) The password for the username that you have specified. This parameter is not necessary if you have not changed the default unityadmin password.

Displaying a list of installed Insight Packs

To display a list of installed Insight Packs, execute the command:
$UNITY_HOME/utilities/pkg_mgmt.sh -list

Displaying changes to an Insight Pack

You can use the diff parameter to display a list of the changes that have been implemented to an Insight Pack after installation. This parameter allows you to list artifacts that have been added to the Insight Pack. Examples of these artifacts are Log Sources and Source Types. To display a list of artifacts added to an Insight Pack, execute the command:
$UNITY_HOME/utilities/pkg_mgmt.sh -diff insight_pack -U username -P password
where insight_pack is the path to the Insight Pack for which you want to list the differences. These additional parameters can be defined:
-U
   (Optional) The username for a user with administrative access rights. This parameter is not necessary if you have not changed the default unityadmin password.
-P
   (Optional) The password for the username that you have specified. This parameter is not necessary if you have not changed the default unityadmin password.

Installing an Insight Pack

You can download an Insight Pack to extend the capabilities of IBM SmartCloud Analytics - Log Analysis from Service Management Connect. This topic outlines how to install an Insight Pack.

About this task

After you have downloaded the Insight Pack, install it by completing these steps:

Procedure
1. Download the Insight Pack archive and copy it to the $UNITY_HOME/unity_content directory on your IBM SmartCloud Analytics - Log Analysis system.
2. Execute the command to complete the installation:
$UNITY_HOME/utilities/pkg_mgmt.sh -install insight_pack.zip -U username -P password
where insight_pack is the path to the Insight Pack that you want to install. These additional parameters are also defined:

-U
   (Optional) The username for a user with administrative access rights. This parameter is not necessary if you have not changed the default unityadmin password.
-P
   (Optional) The password for the username that you have specified. This parameter is not necessary if you have not changed the default unityadmin password.

Deploying IBM Tivoli Monitoring Log File Agent configuration files

An Insight Pack can contain IBM Tivoli Monitoring Log File Agent configuration files such as FMT and CONF files. You can use the pkg_mgmt.sh command to deploy these files.

About this task

The IBM Tivoli Monitoring Log File Agent might be on the same server as IBM SmartCloud Analytics - Log Analysis and monitoring a local directory. In this scenario, the pkg_mgmt.sh command completes all of the configuration required. If the IBM Tivoli Monitoring Log File Agent is on the same server as IBM SmartCloud Analytics - Log Analysis but monitoring remote directories, some additional configuration is required.

This procedure copies the configuration files to the $UNITY_HOME/IBM-LFA-6.30/config/lo or the $LFA_HOME/config/lo directory and adds the Insight Pack name as a prefix to each configuration file name. If you want to monitor log files on remote servers, you must configure some specific settings. For more information about these settings, see Configuring remote monitoring that uses the predefined configuration files on page 31.

To deploy the configuration files, complete the following steps:

Procedure
1. To deploy the IBM Tivoli Monitoring Log File Agent configuration files, run the following command:
$UNITY_HOME/utilities/pkg_mgmt.sh -deploylfa insight_pack.zip
where insight_pack is the path to the Insight Pack that contains your configuration files. You can, if required, add an extra parameter, -f, to the command. This parameter removes all prompts and is intended for advanced users who want to complete an installation that is similar to a silent installation. For example:
$UNITY_HOME/utilities/pkg_mgmt.sh -deploylfa insight_pack.zip -f
Note: To remove the configuration files, use the same command but replace the -deploylfa parameter with -undeploylfa. This parameter removes any LFA configuration files that are already deployed for the Insight Pack.
2. (Optional) A message is displayed that indicates that the IBM Tivoli Monitoring Log File Agent process is being stopped. Enter Y to continue. If you add the -f parameter to the command, the message is not displayed.
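As an illustration, the following sequence installs a hypothetical Insight Pack, lists its Source Types, and then deploys its Log File Agent configuration files. The pack name SampleInsightPack_v1.0.0.zip is an assumption used only for this example; substitute the archive that you downloaded. The -U and -P parameters are needed only if you changed the default unityadmin password.

$UNITY_HOME/utilities/pkg_mgmt.sh -install $UNITY_HOME/unity_content/SampleInsightPack_v1.0.0.zip -U unityadmin -P password
$UNITY_HOME/utilities/pkg_mgmt.sh -list sourcetypes $UNITY_HOME/unity_content/SampleInsightPack_v1.0.0.zip -U unityadmin -P password
$UNITY_HOME/utilities/pkg_mgmt.sh -deploylfa $UNITY_HOME/unity_content/SampleInsightPack_v1.0.0.zip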

Upgrading an Insight Pack

You can upgrade an Insight Pack that you have previously installed. This topic outlines how to upgrade an existing Insight Pack.

About this task

If the Insight Pack that you want to upgrade is not installed, you can choose to complete a full installation of the Insight Pack. In addition to upgrading existing artifacts and installing any artifacts added to the Insight Pack, this command removes unused artifacts that have been excluded from the upgraded Insight Pack.

Upgrade an Insight Pack by completing these steps:

Procedure
1. Download the Insight Pack archive and copy it to the $UNITY_HOME/unity_content directory on your IBM SmartCloud Analytics - Log Analysis system.
2. Execute the command:
$UNITY_HOME/utilities/pkg_mgmt.sh -upgrade insight_pack.zip -U username -P password -f
where insight_pack is the path to the Insight Pack that you want to upgrade. These additional parameters are also defined:
-U
   (Optional) The username for a user with administrative access rights. This parameter is not necessary if you have not changed the default unityadmin password.
-P
   (Optional) The password for the username that you have specified. This parameter is not necessary if you have not changed the default unityadmin password.
-f
   (Optional) This parameter can also be used to install the Insight Pack, if it is not already installed.
3. (Optional) If the Insight Pack is not installed and you have not specified the -f parameter, a message is displayed indicating that the Insight Pack is not installed. If you want to proceed, enter Y.

Removing an Insight Pack

You can remove an Insight Pack that you have previously installed. This topic outlines how to remove an Insight Pack.

About this task

Any IBM SmartCloud Analytics - Log Analysis artifacts that you create using the items in an Insight Pack are removed when you remove the Insight Pack.

Remove an Insight Pack by completing these steps:

Procedure
1. Execute the command:
$UNITY_HOME/utilities/pkg_mgmt.sh -uninstall insight_pack -U username -P password -f
where insight_pack is the path to the Insight Pack that you want to remove. These additional parameters are also defined:

-U
   (Optional) The username for a user with administrative access rights. This parameter is not necessary if you have not changed the default unityadmin password.
-P
   (Optional) The password for the username that you have specified. This parameter is not necessary if you have not changed the default unityadmin password.
-f
   (Optional) Allows you to automatically remove any artifacts created using artifacts contained in the Insight Pack. If you add this parameter, you are not warned before the removal of these artifacts.
2. Unless you have specified the -f parameter, a message is displayed listing the artifacts that you have created using the items in the Insight Pack. This message indicates that these items are being removed. Specify Y and allow the removal to complete.

Custom Apps

You can use Custom Apps to extract data from logs loaded into IBM SmartCloud Analytics - Log Analysis and present that data in a useful format such as a chart. A Custom App must include a script to generate data, any parameters required by the script, and a specification to define how that data is displayed. There are two types of output that are generated by Custom Apps:

Dashboard data
   Create an IBM SmartCloud Analytics - Log Analysis dashboard to display charts based on the data generated by a Custom App. You can also use a dashboard to display HTML content.
Search filters
   Populate a Search Filters widget to drill down through a search.

When using the Insight Pack tooling in Eclipse, there is a folder called src-files/unity_apps in the Insight Pack project, specifically for custom app files. There are subfolders for apps, chartspecs, and templates to contain application definitions, chart specifications, and template definitions, respectively.

Defining a Custom App

To create a Custom App you must create an application file and also create a script that extracts data to be displayed in your Custom App. Where appropriate, you can also create a template file that contains an array of JSON objects. Templates define the structure for Custom Apps and also allow you to share common components across Custom Apps.

Prerequisite knowledge

Creating a Custom App requires that you have knowledge of these areas:
v Coding in one of the accepted formats: Python, shell scripting, or Java programming
v JSON development

Custom App components

These are the components that are required for a Custom App:

Application
   The application file is a JSON file that contains the configuration for your Custom App. The JSON application file is saved with a .app file extension and must be located in the $UNITY_HOME/AppFramework/Apps/ directory or in a directory within this directory. Any directory structure you create is reflected in the Custom Apps pane in the Search workspace. The application file contains:
   1. A reference to the script that defines your Custom App.
   2. Parameters that must be passed to the script to generate data, such as user credentials and host name.
   3. Specifications that define the layout of your Custom App and how the charts are displayed, including the chart type, and/or the HTML to be displayed.
Script
   The script that you run to define your Custom App can be a shell script, a Python script, or a Java program. If you are running a Java program, you must bundle it as an executable JAR file. The script performs these actions:
   1. Generates an HTTP POST request to an IBM SmartCloud Analytics - Log Analysis URL.
   2. Extracts the JSON that contains the data generated by the search in the HTTP POST request.
   3. Generates a JSON string representing HTML and/or data that the charts can process.
   The script must be saved to the same folder as the application file and is called by the application file.
Templates
   Templates contain JSON objects that are similar to the JSON used for the application file, but each template contains an additional key named "template" with the value set to "true". Each template can reference one or more custom scripts. Templates are saved with a file extension of .tmpl to the $UNITY_HOME/AppFramework/Templates/ directory or a directory within this directory. You must store the script files referenced by the template in the same folder as the template. If an application file specifies a template, the script located in the same folder as the template is executed. Templates can be used to create a common structure for a group of applications and also to share common configuration elements across applications. Script parameters can be marked as mandatory by adding the parameter required and setting the value to true. You can also set a default value for any parameter.

Steps to create a Custom App

Complete these steps to create a Custom App.

Procedure
1. The source log file you want to use must be loaded and processed by IBM SmartCloud Analytics - Log Analysis.

2. Using Python, shell scripting, or Java, create a script. The script and its corresponding application or template file must reside in the same directory: $UNITY_HOME/AppFramework/Apps. If no value is specified for the type parameter at the top of the application file, the application file runs the script from the same folder as the application file.
   Insight Packs and applications: If you want to include the application in an Insight Pack project, the script and the associated application file must reside in the same folder within the project: src-files/unity_apps/apps under the project folder.
   In the script, you need to specify the following actions:
   a. Connect to IBM SmartCloud Analytics - Log Analysis.
   b. Use an HTTP POST to run a search request to an IBM SmartCloud Analytics URL for JSON data. The query uses the same method as the queries you can run in the Search workspace. For the query, use the JSON format for Search requests as described in the Search runtime API. If you need to narrow down the returned data further, you can parse the data within the script.
   c. Format the returned JSON data into a structure that can be consumed by the charts.
3. Create a JSON application file and save it to the $UNITY_HOME/AppFramework/Apps/ directory or a sub-directory within this directory. If you want to include the application in an Insight Pack project, create the JSON application file in the src-files/unity_apps/apps folder. The application file references your script and specifies the chart type and parameters for the dashboard display. A minimal skeleton of such an application file follows this procedure.
   Note: If you are using the Insight Pack Tooling, build the Insight Pack containing the application, and install the Insight Pack on your IBM SmartCloud Analytics system using the pkg_mgmt utility.
4. From the Custom Apps pane in the Search workspace, run your application and determine whether it displays your data as you intended. Amend the script and application file as required.
   Refresh Custom Apps: If you do not see your application listed, refresh the Custom Apps pane in the Search workspace to list the newly installed Custom App.
   Note: If you close a chart portlet, you must run the Custom App again to reopen the chart.
5. (Optional) If you want to create a template for your application, create a template in the directory: $UNITY_HOME/AppFramework/Templates. If a template name is specified in the type field, the application file references that template and executes the script in the same directory as that template. That is, the script must be located in the same directory as the template file.
   Insight Packs and templates: If you want to include the application in an Insight Pack project, the script, the application, and the template file must reside in the same folder within the project: src-files/unity_apps/templates under the project folder.
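As a minimal sketch, the pairing created in steps 2 and 3 might look like the following application file. The file names my_first_app.py and my_first_app.app, and the data element id mychartdata, are assumptions used only for illustration; the full set of application file parameters is described in the Application files section later in this topic.

{
   "name": "My First App",
   "description": "Minimal Custom App skeleton",
   "customlogic": {
      "script": "my_first_app.py",
      "description": "Runs a search and returns chart data",
      "parameters": []
   },
   "output": {
      "type": "Data",
      "visualization": {
         "dashboard": {
            "columns": 1,
            "charts": [
               {
                  "type": "Pie Chart",
                  "title": "Events by severity",
                  "data": { "$ref": "mychartdata" },
                  "parameters": { "xaxis": "count", "yaxis": "severity" }
               }
            ]
         }
      }
   }
}

The script my_first_app.py must write a JSON structure that contains a data element whose id is mychartdata, as described in the JSON output from script section later in this topic.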

Related concepts:
Search REST API overview on page 87
The Search REST interface can be used to execute search queries. Search can be executed through an HTTP POST request on /Search.
Related reference:
Facet requests on page 92
Different types of facet requests are supported by USR, along with the JSON format used to specify each type of facet request.
JSON Format for Search Request on page 87
The input JSON for a search request is a single JSON object.

Scripts

The script that is executed can be written as a Python script, shell script, or a Java application packaged as an executable JAR file.

When you execute the application, the parameters are written to a temporary file in JSON format, and the file name is passed to the script. The script reads the parameters from the file and generates the output in the standard JSON format required for the dashboard specification.

Within an Insight Pack project, scripts must reside in the same folder as the application or template that references it, either in src-files/unity_apps/apps or src-files/unity_apps/templates.

In the script, use an HTTP POST request to query an IBM SmartCloud Analytics URL for JSON data. The query uses the same method as the queries you can run in the Search workspace. For the query, use the JSON format for Search requests as described in the Search REST API. If you need to narrow down the returned data further, you can parse the data within the script.

Setting environment and connection in Python scripts

If you are using Python, you can import convenience methods in unity.py for interacting with the IBM SmartCloud Analytics server. unity.py is shipped with IBM SmartCloud Analytics and can be found in PYTHONPATH. It contains methods for connecting to the SmartCloud Analytics servers, as well as making HTTP requests that include the required Cross Site Request Forgery (CSRF) tokens.

To use unity.py, simply include it in an import statement. For example:

from unity import UnityConnection, get_response_content
try:
    import json
except ImportError:
    import simplejson as json
import sys

To connect to the SmartCloud Analytics server, create an instance of UnityConnection() and call the UnityConnection.login() method. The UnityConnection() object takes the following parameters:
v url - the base URL to the SmartCloud Analytics server; if you changed the default port number during installation, you should also change it here
v username - the username used for authentication
v password - the password used for authentication
For example:

# Create a new UnityConnection to the server and login
unity_connection = UnityConnection('unityadmin', 'password')
unity_connection.login()

The parameters to the Python script are written to a file in JSON format, and the file name is passed to the script. For example, the parameters can be passed in a file containing the following:

{
   "parameters": [
      {
         "name": "search",
         "type": "SearchQuery",
         "value": {
            "filter": {
               "range": {
                  "timestamp": {
                     "from": "01/01/ :00: EST",
                     "to": "01/01/ :00: EST",
                     "dateFormat": "MM/dd/yyyy HH:mm:ss.SSS Z"
                  }
               }
            },
            "logsources": [
               {
                  "type": "logsource",
                  "name": "SystemOut"
               }
            ]
         }
      }
   ]
}

Here is an example of how the parameters can be retrieved and parsed:

# Get the parameters
if len(sys.argv) > 1:
    filename = str(sys.argv[1])
    fk = open(filename, "r")
    data = json.load(fk)
    parameters = data['parameters']
    for i in parameters:
        if i['name'] == 'search':
            search = i['value']
            for key in search.keys():
                if key == 'filter':
                    filter = search['filter']
                elif key == 'logsources':
                    logsource = search['logsources']

To make an HTTP request that uses the Search runtime API, use the post() and get_response_content() methods. For example:

# logsource and filter are the values taken from the parameters
request = {
    "logsources": logsource,
    "query": "*",
    "filter": filter
}

response = unity_connection.post('/Search', json.dumps(request),
    content_type='application/json; charset=utf-8')
content = get_response_content(response)

# parse the results found in content
...

Lastly, close the connection at the end of the Python script:

# close the connection
unity_connection.logout()

Related reference:
JSON Format for Search Request on page 87
The input JSON for a search request is a single JSON object.
Facet requests on page 92
Different types of facet requests are supported by USR, along with the JSON format used to specify each type of facet request.

Example search request:

The search request retrieves JSON data and is the body of the HTTP POST request. This is an example of a JSON search string that is the body of the HTTP POST request.

{
   "start": 0,
   "results": 1,
   "filter": {
      "range": {
         "timestamp": {
            "from": "5/31/2012 5:37: ",
            "to": "5/31/2013 5:37: ",
            "dateFormat": "MM/dd/yyyy HH:mm:ss.SSS Z"
         }
      }
   },
   "query": "severity:(W OR E)",
   "sortkey": [ "-timestamp" ],
   "getattributes": [ "timestamp", "severity" ],
   "facets": {
      "datefacet": {
         "date_histogram": {
            "field": "timestamp",
            "interval": "hour",
            "outputdateformat": "MM-dd HH:mm",
            "nested_facet": {
               "severityFacet": {
                  "terms": {
                     "field": "severity",
                     "size": 10
                  }
               }
            }
         }
      }
   },
   "logsources": [
      {
         "type": "logsource",
         "name": "/SystemOut"
      }
   ]
}

The request specifies:
v The number of records returned, in this case "results":1. It is helpful to set the number of records to a low number when you are building and testing your script.
v The date/time range, which is from May 31, 2012 to May 31, 2013: "timestamp":{"from":"5/31/2012 5:37: ","to":"5/31/2013 5:37: ","dateFormat":"MM/dd/yyyy HH:mm:ss.SSS Z"}
v The query, which searches for Warnings or Errors. The results are sorted by timestamp, and only the timestamp and severity attributes are returned in the results: "query":"severity:(W OR E)", "sortkey":["-timestamp"], "getattributes":["timestamp","severity"]
v The logsource, which is set as the WebSphere Application Server SystemOut log: "logsources":[{"type":"logsource","name":"/SystemOut"}]
v (Optional) Facet requests that you can use to create sums, time intervals, or other statistical data for a given field/value pair. In this example there is a date_histogram facet with a nested terms facet. Within each time interval returned by the date_histogram facet, the terms facet, called severityFacet, counts the number of each type of severity: "facets":{"datefacet":{"date_histogram":{"field":"timestamp","interval":"hour","outputdateformat":"MM-dd HH:mm","nested_facet":{"severityFacet":{"terms":{"field":"severity","size":10}}}}}}

1,000 or more results and Custom Apps: When a query in a Custom App returns more than 1000 records, you get only 1000 results back. The search result returned includes a field totalResults which shows the total number of matching results. Another field numResults gives the number of records returned. You can check these values in the Custom App script and handle the results accordingly.

Example JSON output from search request:

Run your script and review the JSON output. Adjust the script to get the output you need.

Search request output

When you post the HTTP Search request, it returns a JSON string in raw format. It is helpful to open the raw output in a JSON editor. These samples show the output from the example search in raw format and in the tabbed format displayed in JSON editors. This sample is a segment of the returned JSON in raw format.

"searchrequest":"start":0,"results":1,"filter":"and": ["range":"timestamp":"from":"5\/31\/2012 5:37: ", "to":"5\/31\/2013 5:37: ", "dateformat":"mm\/dd\/yyyy HH:mm:ss.SSS Z","or":["phrase": "logsource":"systemout"], "range":"_writetime":"dateformat":"yyyy-mm-dd T HH:mm:ss.SSSZ","from": " T00:00: ","to":" T05:48: "], "query":"severity:(w OR E)", "sortkey":["-timestamp"],"getattributes":["timestamp","severity"], "facets":"datefacet":"date_histogram":"field":"timestamp","interval":"hour",

67 "outputdateformat":"mm-dd HH:mm","nested_facet":"severityFacet": "terms":"field":"severity","size":10," usr_fast_date_query":"utc", "logsources":["type":"logsource","name":"\/systemout"],"collections": ["WASSystemOut-Collection1"],"totalResults":805,"numResults":1,"executionInfo": "processingtime":33,"searchtime":54,"searchresults":[ "resultindex":0,"attributes": "msgclassifier":"tcpc0003e","threadid":" ","message": "TCP Channel TCP_2 initialization failed. The socket bind failed for host SystemOut.log SystemOut.logOneLiners SystemOut.logOneLinersNoTS unity_populate_was_log.sh unity_search_pattern_insert.sh was_search_pattern.txt x and port The port may already be in use._log_2_", "_writetime":"05\/27\/13 12:39:03: ","logsourceHostname": "fit-vm12-230","logrecord":"[10\/21\/12 19:57:02:313 GMT+05:30] TCPPort E TCPC0003E: TCP Channel TCP_2 initialization failed. The socket bind failed for host SystemOut.log SystemOut.logOneLiners SystemOut.logOneLinersNoTS unity_populate_was_log.sh unity_search_pattern_insert.sh was_search_pattern.txt x and port The port may already be in use._log_2_", "timestamp":"10\/21\/12 14:27:02: ","severity":"E","logsource" :"SystemOut"], "facetresults":"datefacet":["label":" UTC","low":" :00", "high":" :00","count":425,"nested_facet":"severityFacet":"counts": ["term":"w","count":221,"term":"e","count":204],"total":2,"label": " UTC","low":" :00","high":" :00","count":380, "nested_facet":"severityfacet":"counts":["term":"e","count":197, "term":"w","count":183],"total":2], "metadata":"msgclassifier":"filterable":true,"datatype":"text","sortable":false,... ""exceptionmethodname":"filterable":false,"datatype":"text","sortable":false, "severity":"filterable":true,"datatype":"text" This sample shows the same results formatted in a JSON editor. "searchrequest": "start": 0, "results": 1, "filter": "and": [ "range": "timestamp": "from": "5/31/2012 5:37: ", "to": "5/31/2013 5:37: ", "dateformat": "MM/dd/yyyy HH:mm:ss.SSS Z", "or": [ "phrase": "logsource": "SystemOut" ], "range": "_writetime": "dateformat": "yyyy-mm-dd T HH:mm:ss.SSSZ", "from": " T00:00: ", "to": " T05:48: " Extending IBM SmartCloud Analytics - Log Analysis 61

68 ], "query": "severity:(w OR E)", "sortkey": [ "-timestamp" ], "getattributes": [ "timestamp", "severity" ], "facets": "datefacet": "date_histogram": "field": "timestamp", "interval": "hour", "outputdateformat": "MM-dd HH:mm", "nested_facet": "severityfacet": "terms": "field": "severity", "size": 10, " usr_fast_date_query": "UTC", "logsources": [ "type": "logsource", "name": "/SystemOut" ], "collections": [ "WASSystemOut-Collection1" ], "totalresults": 805, "numresults": 1, "executioninfo": "processingtime": 33, "searchtime": 54, "searchresults": [ "resultindex": 0, "attributes": "msgclassifier": "TCPC0003E", "threadid": " ", "message": "TCP Channel TCP_2 initialization failed. The socket bind failed for host SystemOut.log SystemOut.logOneLiners SystemOut.logOneLinersNoTS unity_populate_was_log.sh unity_search_pattern_insert.sh was_search_pattern.txt x and port The port may already be in use._log_2_", "_writetime": "05/27/13 12:39:03: ", "logsourcehostname": "fit-vm12-230", "logrecord": "[10/21/12 19:57:02:313 GMT+05:30] TCPPort E TCPC0003E: TCP Channel TCP_2 initialization failed. The socket bind failed for host SystemOut.log SystemOut.logOneLiners SystemOut.logOneLinersNoTS unity_populate_was_log.sh unity_search_pattern_insert.sh was_search_pattern.txt x and port The port may already be in use._log_2_", "timestamp": "10/21/12 14:27:02: ", 62 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

69 "severity": "E", "logsource": "SystemOut" ], "facetresults": "datefacet": [ "label": " UTC", "low": " :00", "high": " :00", "count": 425, "nested_facet": "severityfacet": "counts": [ "term": "W", "count": 221, "term": "E", "count": 204 ], "total": 2,..., "logsource": "filterable": true, "datatype": "TEXT", "sortable": true Note: The TotalResults value shows the total number of matching results. The numresults value shows the number of returned results. The search request specified the return of one result. ("results":1). JSON output from script: You can specify a Custom App with multiple charts with each chart pointing to a different data element. Data elements can be in the form of data or HTML. The script returns this JSON structure which contains the data used to populate the charts, or HTML to be displayed. Data elements There are three parameters required for data elements: id Specify an identifier for the data element. The Custom App uses this value to determine the data that is displayed. The Custom App uses this value to determine the data that is displayed for a particular chart. fields Specify an array containing field descriptions. Each field that you specify in the array has an ID, a label, and a type. rows Using the rows array to specify a value for each of the fields that you have specified. Extending IBM SmartCloud Analytics - Log Analysis 63

HTML elements

There are two parameters required for HTML elements:
id
   Specify an identifier for the data element. The Custom App uses this value to determine the data that is displayed. In the application file, the chart specifications use this id to specify the data element used for the chart.
htmltext
   Specify the HTML that you want to display. An HTML portlet is used to display the HTML that you specify.

Example

This is a sample JSON output which contains both types of output: chart data and HTML output:

{
   "data": [
      {
         "id": "ErrorsWarningsVsTime",
         "fields": [
            { "id": "timestamp", "label": "date", "type": "text" },
            { "id": "severity", "label": "severity", "type": "text" },
            { "id": "count", "label": "count", "type": "long" }
         ],
         "rows": [
            { "timestamp": " :00", "severity": "W", "count": 221 },
            { "timestamp": " :00", "severity": "E", "count": 204 }
         ]
      },
      {
         "id": "htmldata",
         "htmltext": "<!DOCTYPE html><html><body><div><h1>Sample HTML</h1></div></body></html>"
      }
   ]
}

In this example, the data id ErrorsWarningsVsTime is defined in the script:

chartdata.append({'id': 'ErrorsWarningsVsTime', 'fields': facetfields, 'rows': facetrows})

71 In the application file chart specification, the id ErrorsWarningsVsTime is referenced to specify the data set used in the chart. Example Script: "type": "Tree Map", "title": "Messages by Hostname - Top 5 over Last Day", "data": "$ref": "ErrorsWarningsVsTime", "parameters": "level1": "date", "level2": "count", "level3": "severity", "value": "count" The full script example shown here contains all the required elements. In this particular script, the value for some elements are defined as variables inside the script. For example, the elements of the search request such as the logsource and query are defined as variables. from unity import UnityConnection, get_response_content from urllib import urlencode import CommonAppMod try: import json except ImportError: import simplejson as json import sys import re from datetime import datetime, date, time, timedelta import UnityAppMod # Create a new Unity connection to the server and login unity_connection = UnityConnection( unityadmin, unityadmin )unity_connection.login() response_text = get_response_content(unity_connection.get ( /RetentionConfiguration/latest )) #print response_text retention_period = json.loads(response_text)[ retentionperiod ] #print The retention period was %d. % retention_period ########################################################## # Calculates the curren time, relative time, and forms timestamps ########################################################## currenttimestamp = datetime.now() # Currently chosen relative timestamp newtimestamp = currenttimestamp - UnityAppMod.getLastYear() # Format the timestamp currentformatted = UnityAppMod.formatTimestamp(currentTimestamp) newformatted = UnityAppMod.formatTimestamp(newTimestamp) # Define the logsource for the search logsource = "logsources":["type":"logsource","name":"/systemout"] # Define the output data chartdata = [] timeunit = hour Extending IBM SmartCloud Analytics - Log Analysis 65

72 timeunitformat = MM-dd HH:mm timestamp = "timestamp":"from":" + newformatted + ","to":" + currentformatted + ","dateformat":"mm/dd/yyyy HH:mm:ss.SSS Z" # Parameter defaults start = 0 results = 1 ########################################################## def getsearchdata(query, facet, sortkey, attributes): #Search query to be used as content for POST body = "start": + str(start) +,"results": + str(results) + \,"filter":"range": + timestamp +, + \ query +, + \ sortkey +, + \ attributes +, + \ facet #print body if logsource: body = body +, + logsource body=body+ #print body # Post the search request response = unity_connection.post( /Search, body, content_type= application/json; charset=utf-8 ); content = get_response_content(response) # Convert the response data to JSON data = try: data = json.loads(content) #print json.dumps(data, sort_keys=false,indent=4, separators=(,, : )) except: pass if result in data: result = data[ result ] if status in result and result[ status ] == failure : msg = result[ message ] print >> sys.stderr, msg sys.exit(1) return data ########################################################## ########################################################## def geterrorswarningsvstime(chartdata): # Define the query for the search query = "query":"severity:(w OR E)" # Define the facet to be used for the search facet = "facets":"datefacet":"date_histogram":"field":"timestamp", "interval":" + timeunit + ","outputdateformat":" + timeunitformat + ","nested_facet": "severityfacet":"terms":"field":"severity","size":10 #Define the sortkey to be used for the search results sortkey = "sortkey":["-timestamp"] 66 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

73 # get the facetfields facetfields = [ "id":"date", "label":"timestamp", "type":"text", "id":"severity", "label":"severity", "type":"text", "id":"count", "label":"count", "type":"long" ] facetrows = [] # Define the attributes attributes = "getattributes":["timestamp","severity"] # do the query data = getsearchdata(query, facet, sortkey, attributes) if facetresults in data: # get the facet results facetresults = data[ facetresults ] #print json.dumps(facetresults, sort_keys=false,indent=4, separators=(,, : )) if datefacet in facetresults: # get the facetrows datefacet = facetresults[ datefacet ] CommonAppMod.dateSort(dateFacet) for daterow in datefacet: for severityrow in daterow[ nested_facet ][ severityfacet ] [ counts ]: facetrows.append("date":daterow[ low ], "severity":severityrow [ term ], "count":severityrow[ count ] ); #print facetrows chartdata.append( id : ErrorsWarningsVsTime, fields :facetfields, rows :facetrows) return chartdata ########################################################## # Define the timestamp for the search #timestamp = "timestamp":"from":" + newformatted + ","to":" + currentformatted + ","dateformat":"mm/dd/yyyy HH:mm:ss.SSS Z" # Define the logsource for the search #logsource = "logsources":["type":"logsource","name":"/systemout"] geterrorswarningsvstime(chartdata) unity_connection.logout() # # Build the final output data JSON # # build the chart data dt = data :chartdata #Print the JSON to system out #print( json.dumps(dt, sort_keys=false,indent=4, separators=(,, : ))) print json.dumps(dt, sort_keys=false,indent=4, separators=(,, : )) Extending IBM SmartCloud Analytics - Log Analysis 67
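Because the framework passes a script a single argument that names a JSON parameters file (see the Scripts section earlier in this topic), a script that reads its parameters in that way can be exercised from the command line before it is packaged. This is a sketch only; the file name test_params.json and the script name my_custom_app_script.py are assumptions for illustration, and the parameter values must match what your script expects.

Example parameters file (test_params.json):

{
   "parameters": [
      {
         "name": "search",
         "type": "SearchQuery",
         "value": {
            "logsources": [ { "type": "logsource", "name": "SystemOut" } ]
         }
      }
   ]
}

Example invocation:

python my_custom_app_script.py test_params.json

The script prints its JSON output to standard out, which you can inspect to confirm that the data element ids, fields, and rows are what your application file expects.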

74 Application files The application file JSON can be created by implementing the structure provided in the sample JSON outlined in this topic. If you are using the Insight Pack Tooling, the application file JSON can be created in the Insight Pack project in the subfolder src-files/unity_apps/apps. The parameters defined for application files are: name Specify the name of the application file. This is the name of the.app file that displays in the Custom Apps pane in the Search workspace. type (Optional) Where applicable, specify the name of the template used. The name you specify in this field is the name of the template file excluding the file extension. description Specify the purpose of the application. customlogic Specify the details for the script including any parameters and outline the details for the output that is to be displayed. These parameters are defined for customlogic: script Specify the name of the script that you want to execute. The script can be a Python or shell script, or a Java application packaged as an executable JAR file. If you are going to use a template, the script you execute must be in the same directory as the template. If you are not using a template, save the script to the same directory as the application file. description Provide a description for the script. parameters Complete a JSON array of parameters that are passed to the script. This array can be empty. The parameters you pass depend on the requirements of the script that you are executing. In the first set of parameters in the sample, the parameters for a search query are passed. output This section outlines the nature of the output that you want to display for your Custom App. type Specify Data as the value for this parameter. visualization Add dashboard or searchfilters as a sub-parameter for this parameter. This determines what is displayed when you execute a Custom App. The dashboard requires that you specify the layout for the dashboard that you are creating. Under the Dashboard parameter, add a columns sub-parameter and specify a numeric value to indicate the number of columns that you require. Add a chart as an additional sub-parameter for the dashboard parameter and add an array of JSON objects to indicate what charts you want to include in your dashboard. Include only valid 68 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

chart types and also include the required parameters for each chart. For more information, see the Charts section of this document.
   The searchfilters parameter is for the search filter app and requires a relationalquery as the input parameter. The relational query is a JSON object that consists of JSON keys. For more information, see Search filter app.

Example

Dashboard visualizations

This example references the ExecuteSearchWithParameters.py script and specifies a bubble chart based on data from the chartdata01 data set. The chartdata01 data set id is defined in the ExecuteSearchWithParameters.py script and referenced in the chart specification.

{
   "name": "Search Results from python script- Parameters",
   "type": "SearchApps",
   "description": "Captures long running queries in maximo logs",
   "customlogic": {
      "script": "ExecuteSearchWithParameters.py",
      "description": "View chart on search results",
      "parameters": [
         {
            "name": "search",
            "type": "SearchQuery",
            "value": {
               "start": 0,
               "results": 10,
               "filter": {
                  "range": {
                     "timestamp": {
                        "from": "12/03/ :21: India Standard Time",
                        "to": "12/03/ :21: India Standard Time",
                        "dateFormat": "dd/MM/yyyy HH:mm:ss.SSS Z"
                     }
                  }
               },
               "logsources": [ { "type": "tag", "name": "*" } ],
               "query": "*"
            }
         },
         {
            "name": "url",
            "type": "String",
            "value": "http: // : 9988/Unity/"
         }
      ]
   },
   "output": {
      "type": "Data",
      "visualization": {
         "dashboard": {
            "columns": 1,
            "charts": [
               {
                  "type": "Bubble Chart",
                  "title": "Severity against Time",
                  "data": { "$ref": "chartdata01" },
                  "parameters": { "xaxis": "timestamp", "yaxis": "severity" }
               }
            ]
         }
      }
   }
}
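The link between the two sides is the data element id: the value that the script appends to its output must match the value that the application file references in "$ref". For the example above, a script built along the lines of the earlier parameter-parsing sketch would emit the chartdata01 element like this (a sketch, assuming that the fields and rows lists have already been built in the script):

chartdata.append({'id': 'chartdata01', 'fields': fields, 'rows': rows})
print json.dumps({'data': chartdata}, sort_keys=False, indent=4, separators=(',', ': '))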

76 Related tasks: Creating the search filter app on page 84 You can create the search filter app to create a customized search query in IBM SmartCloud Analytics - Log Analysis. Modifying the search filter app core logic on page 86 If you want to extend the default implementation, the core logic of the search filter app is described here, Charts: The chart types specified here are supported by IBM SmartCloud Analytics - Log Analysis. The chart specifications are contained in the $UNITY_HOME/AppsFrameowrk/ chartspecs directory. Displaying a chart: To display a chart, you execute your Custom App from the Search workspace. If you close a chart portlet, you must run the Custom App again to reopen the chart. These parameters are defined for all charts: type title data Specify the type of chart required. The value must match the ID of a chart specification contained in the $UNITY_HOME/AppFramework/chartspecs directory. The supported chart types are outlined in this section. Specify the chart title. Specify the ID for the data element that is represented in the chart. This ID specified must match to the ID provided in the dashboard specifications. parameters Fields to be displayed in the chart. Line chart The line chart is defined with these limitations: v Chart name: Line Chart v Parameters: xaxis and yaxis v Chart Specification: v "type": "Line Chart", "title": "Line Chart ( 2 parameters )", "data": "$ref": "searchresults01", "parameters": "xaxis": "timestamp", "yaxis": "throughput" Aggregation To aggregate the throughput parameter, define the following code: "type": "Line Chart", "title": "Line Chart ( 2 parameters )", "data": "$ref": "searchresults01", "summarizedata": "column": "throughput", "function": "sum" 70 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

77 , "parameters": "xaxis": "timestamp", "yaxis": "throughput" The summarizedata key determines whether aggregation is to be performed or not. column is the name of numeric column that uses the LONG or DOUBLE data type to perform aggregation on. sum is the aggregation function to be applied. Supported functions are sum, min, max. Bar chart The bar chart is defined with these limitations: v Chart name: Bar Chart1 v Parameters: xaxis, yaxis, and categories v Limitations: Only integer values are supported for the yaxis parameter. v Chart Specification: Point chart "type": "Bar Chart", "title": "Bar Chart ( 3 parameters )", "data": "$ref": "searchresults01", "parameters": "xaxis": "timestamp", "yaxis": "CPU", "categories": "hostname" The point chart is defined with these limitations: v Chart name: Point Chart v Parameters: xaxis and yaxis v Chart Specification: Pie chart "type": "Point Chart", "title": "Point Chart ( 2 parameters )", "data": "$ref": "searchresults01", "parameters": "xaxis": "timestamp", "yaxis": "errorcode" The pie chart is defined with these limitations: v Chart name: Pie Chart v Parameters: xaxis and yaxis v Chart Specification: "type": "Pie Chart", "title": "Pie Chart ( 2 parameters )", Extending IBM SmartCloud Analytics - Log Analysis 71

78 "data": "$ref": "searchresults03", "parameters": "xaxis": "count", "yaxis": "severity" Cluster bar chart The cluster bar chart is defined with these limitations: v Chart name: Cluster Bar v Parameters: xaxis, yaxis, and sub-xaxis v Chart Specification: Bubble chart "type": "Cluster Bar", "title": "Cluster Bar ( 3 parameters )", "data": "$ref": "searchresults02", "parameters": "xaxis": "hostname", "yaxis": "errorcount", "sub-xaxis": "msgclassifier" The bubble chart is defined with these limitations: v Chart name: Bubble Chart v Parameters: xaxis, yaxis, and categories v Chart Specification: "type": "Bubble Chart", "title": "Bubble Chart ( 3 parameters )", "data": "$ref": "searchresults01", "parameters": "xaxis": "timestamp", "yaxis": "CPU", "categories": "errorcode" The size of the bubble on the graph depends on the number of items in the parameter that is being represented. In some cases, for example if you have a large bubble and a small bubble, the large bubble may cover the smaller one. Tree map chart The tree map chart is defined with these limitations: v Chart name: Tree Map v Parameters: level1, level2, level3, and value v Chart Specification: 72 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

79 "type": "Bubble Chart", "title": "Bubble Chart ( 3 parameters )", "data": "$ref": "searchresults01", "parameters": "xaxis": "timestamp", "yaxis": "CPU", "categories": "errorcode" Two-series line chart The two-series line chart is defined with these limitations: v Chart name: Two Series Line Chart v Parameters: xaxis, yaxis1, and yaxis2 v Chart Specification: "type": "Two Series Line Chart", "title": "Two Series Line Chart ( 3 parameters)", "data": "$ref": "searchresults01", "parameters": "xaxis": "timestamp", "yaxis1": "throughput", "yaxis2": "ResponseTime" Stacked bar chart The stacked bar chart is defined with these limitations: v Chart name: Stacked Bar Chart v Parameters: xaxis, yaxis, and categories v Chart Specification: "type": "Stacked Bar Chart", "title": "Stacked Bar Chart ( 3 parameters )", "data": "$ref": "searchresults01", "parameters": "xaxis": "hostname", "yaxis": "CPU", "categories": "severity" Stacked line chart The stacked line chart is defined with these limitations: v Chart name: Stacked Line Chart v Parameters: xaxis, yaxis, and categories v Chart Specification: Extending IBM SmartCloud Analytics - Log Analysis 73

80 Heat map "type": "Stacked Line Chart", "title": "Stacked Line Chart ( 3 parameters )", "data": "$ref": "searchresults01", "parameters": "xaxis": "threadid", "yaxis": "timestamp", "categories": "MBO" The heat map chart is defined with these limitations: v Chart name: Heat map v Parameters: xaxis, yaxis, and count v Chart Specification: "type": "Heat Map", "title": "Heat Map ( 3 parameters )", "data": "$ref": "searchresults01", "parameters": "xaxis": "messageclassifier", "yaxis": "hostname", "count": "throughput" Example application file: This example shows an application file that references the 3Params_was_systemout.py script example and specifies how multiple charts display the output JSON from the script. Example application file and chart dashboard This example shows the application file and the dashboard of charts it creates when you run the application. The ErrorsWarningsVsTime and ExceptionByHost data sets are defined in the 3Params_was_systemout.py script and referenced in the chart specifications in this application file. "name": "WAS Errors and Warnings Dashboard", "description": "Display a dashboard of charts that show WAS errors and warnings", "customlogic": "script": "3Params_was_systemout.py", "description": "View charts based on search results", "parameters": [ "name": "search", "type": "SearchQuery", "value": "logsources": [ "type": "logsource", "name": "SystemOut" ] 74 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

81 , "name": "relativetimeinterval", "type": "string", "value":"lastday", "name": "timeformat", "type": "data", "value": "timeunit": "hour", "timeunitformat": "MM-dd HH:mm", "name": "hostnamefield", "type": "string", "value": "logsource_hostname" ], "output": "type": "Data", "visualization": "dashboard": "columns": 2, "charts": [ "title": "Message Counts - Top 5 over Last Day", "parameters": "xaxis": "date", "yaxis": "count", "sub-xaxis": "severity", "data": "$ref": "ErrorsWarningsVsTime", "type": "Cluster Bar", "type": "Heat Map", "title": "Message Counts - Top 5 over Last Day", "data": "$ref": "ErrorsWarningsVsTime", "parameters": "xaxis": "date", "yaxis": "count", "category": "severity", "type": "Bubble Chart", "title": "Java Exception Counts - Top 5 over Last Day", "data": "$ref": "ErrorsWarningsVsTime", "parameters": "xaxis": "date", "yaxis": "count", "size": "severity", "type": "Two Series Line Chart", "title": "Error and Warning Total by Hostname - Top 5 over Last Day", Extending IBM SmartCloud Analytics - Log Analysis 75

82 "data": "$ref": "ErrorsWarningsVsTime", "parameters": "xaxis": "date", "yaxis1": "count", "yaxis2": "severity",, "type": "Tree Map", "title": "Messages by Hostname - Top 5 over Last Day", "data": "$ref": "ErrorsWarningsVsTime", "parameters": "level1": "date", "level2": "count", "level3": "severity", "value": "count" "type": "Stacked Bar Chart", "title": "Java Exception by Hostname - Top 5 over Last Day", "data": "$ref": "ExceptionByHost", "parameters": "xaxis": "date", "yaxis": "count", "categories": "hostname" ] This sample shows a chart created from the example script and application files. 76 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

83 Templates A template file defines a set of custom scripts. For each custom script, it specifies the parameters with the type, name, and the default value. If you are using the Insight Pack Tooling, the template file JSON can be created in an Insight Pack project in the subfolder src-files/unity_apps/templates. Any script files referenced by the template must also reside in this folder. In addition to the fields included for applications files, these additional parameters are included in the template: template This is a boolean value that specifies that the file is a template. Specify true as the value for this parameter. parameter Although this parameter is specified in the application file. Additional values are required in a template file for this parameter. required If this is set to true, the parameter is required. Custom Apps that use the template must include this parameter. default Specify the default parameter value. For parameters that are not required, this value is used where the application does not specify a value. Example "name": "SearchApps", "type": "SearchApps", "template": true, "description": "SearchApps Template", "customlogic": [ Extending IBM SmartCloud Analytics - Log Analysis 77

84 "script": "ExecuteSearch.py", "description": "View chart on search results", "parameters": [], "output": "type": "Data", "visualization": "dashboard": "charts": ["type": "Pie Chart","title": "Errors/Warnings for last 7 days","yaxis": "msgclassifier", "type": "Bar Chart","title": "Errors/Warnings for each host", "xaxis": "hostname","yaxis": "count"], "script": "ExecuteSearchWithParameters.py", "description": "View chart on search results", "parameters": [ "name": "search", "type": "SearchQuery", "default": "start": 0,"results": 10,"filter": "range": "timestamp": "from": "12/03/ :21: India Standard Time", "to": "12/03/ :21: India Standard Time", "dateformat": "dd/mm/yyyy HH:mm:ss.SSS Z", "logsources": ["type": "tag","name": "*"], "query": "*", "required": false, "name": "url", "type": "String", "default": "http: // : 9988/Unity/", "required": false ], "output": "type": "Data", "visualization": "dashboard": "charts": [ "type": "Pie Chart","title": "Errors/Warnings for last 7 days","yaxis": "msgclassifier", "type": "Bar Chart","title": "Errors/Warnings for each host","xaxis": "hostname","yaxis": "count"] ] Building and Installing an Insight Pack with Custom Apps The Insight Pack Tooling creates an archive file for the Insight Pack using the Build Insight Pack option from the project's context menu. Before you begin The script and application definition files must be created in the Insight Pack project subfolder src-files/unity_apps/apps. If you are using a template, the script, application definition, and template files must be created in the Insight Pack project subfolder src-files/unity_apps/templates. 78 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis
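As an illustration, an Insight Pack project that packages one stand-alone app and one template-based app might contain the following files under src-files/unity_apps, using the example file names from this section:

src-files/unity_apps/
   apps/
      Performance_Msgs.app
      Performance_Msgs.py
   templates/
      SearchApps.tmpl
      ExecuteSearch.py
      ExecuteSearchWithParameters.py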

85 Procedure v v Any custom app definitions, scripts, and template files found in the src-files/unity_apps folder are included in the archive file. When the Insight Pack archive file is installed using the pkg_mgmt utility, the application definition, scripts, and template files are installed to the appropriate directories under $UNITY_HOME/AppFramework. You may need to refresh the Custom Apps pane in the Search workspace in order to display newly installed custom apps. Example of full Custom App This example shows a complete script, application file, and the resulting display for a custom application named Performance_Msgs. Performance_Msgs.py script This example shows the script Performance_Msgs.py, which collects data on performance messages. For details, read the coding notes within the script. ######################################################### COPYRIGHT-TOP ### # Licensed Materials - Property of IBM # "Restricted Materials of IBM" # 5725-K26 # # (C) Copyright IBM Corp All Rights Reserved. # # US Government Users Restricted Rights - Use, duplication, or # disclosure restricted by GSA ADP Schedule Contract with IBM Corp. ######################################################### COPYRIGHT-END ### #Install simple json before trying the script #Installing simplejson # Download the rpm from /python-simplejson el5.x86_64.rpm/download/ # Run the command rpm -i python-simplejson el5.x86_64.rpm from datetime import datetime, date, time, timedelta import sys from unity import UnityConnection, get_response_content try: import json except ImportError: import simplejson as json # # getsearchdata() # def getsearchdata(logsource, filter): # Build the request parameters using the Search Request APIs. # The logsource and filter values used in the query were passed # in as parameters to the script. request = "start": 0, "results": 1, "filter": filter, "logsources": logsource, "query": "*", "sortkey":["-timestamp"], "getattributes":["timestamp","perfmsgid"], "facets": "datefacet": "date_histogram": Extending IBM SmartCloud Analytics - Log Analysis 79

86 "field":"timestamp", "interval":"hour", "outputdateformat":"mm-dd HH:mm", "nested_facet": "dlfacet": "terms": "field":"perfmsgid", "size":20 # Post the search request response = connection.post ( /Search, json.dumps(request), content_type= application/json; charset=utf-8 ); content = get_response_content(response) #convert the response data to JSON data = try: data = json.loads(content) except: pass if result in data: result = data[ result ] if status in result and result[ status ] == failure : msg = result[ message ] print >> sys.stderr, msg sys.exit(1) return data # # datesort() # def datesort(datefacet): # This function parses the UTC label found in the datefacet in the # format "mm-hh-ddd-yyyy UTC" # and returns an array in the form [yyyy, DD, hh, mm] def parsedate(datelabel): adate = map(int, datelabel.split(" ")[0].split("-")) adate.reverse() return adate # call an in-place List sort, using an anonymous function lambda # as the sort function datefacet.sort(lambda facet1, facet2: cmp(parsedate(facet1[ label ]), parsedate(facet2[ label ]))) return datefacet # # Main script starts here # # define the URLs used for the http request baseurl = connection = UnityConnection(baseurl, unityadmin, unityadmin ) connection.login() 80 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

87 # initialize variables filter = logsource = chartdata = [] # Get the script parameters which were passed in via a temporary # file in JSON format. The name of the temporary file is in argv[1] if len(sys.argv) > 1: filename = str(sys.argv[1]) fk = open(filename,"r") data = json.load(fk) parameters = data[ parameters ] for i in parameters: if i[ name ] == search : search = i[ value ] for key in search.keys(): if key == filter : filter = search[ filter ] elif key == logsources : logsource = search[ logsources ] # # get the data to be returned # # define the fields to be returned in the chart data # each row will contain the date, perfmsgid, and count fields = [ "id":"date", "label":"timestamp", "type":"text", "id":"perfmsgid", "label":"perfmsgid", "type":"text", "id":"count", "label":"count", "type":"long" ] rows = [] # call getsearchdata() to post the search request and retrieve the data data = getsearchdata(logsource, filter) if facetresults in data: # get the facet results facetresults = data[ facetresults ] #print json.dumps(facetresults, sort_keys=false,indent=4, separators=(,, : )) if datefacet in facetresults: # get the datefacet rows datefacet = facetresults[ datefacet ] # the results of the datefacet are not sorted, so call datesort() datesort(datefacet) # iterate through each row in the datefacet for daterow in datefacet: for msgrow in daterow[ nested_facet ][ dlfacet ][ counts ]: # create a row which includes the date, perfmsgid, and count rows.append("date":daterow[ low ], "perfmsgid":msgrow[ term ], "count":msgrow[ count ] ); #print rows # create chart data with the id perfmsgidcountsovertime chartdata.append( id : perfmsgidcountsovertime, fields :fields, rows :rows) # close the connection Extending IBM SmartCloud Analytics - Log Analysis 81

connection.logout()

#
# Create the HTML data to be returned
#
html = "<!DOCTYPE html><html><body><h1>Hello World!</h1></body></html>"
chartdata.append({"id": "htmldata", "htmltext": html})

#
# Build the final output data JSON
#
# build the JSON structure containing the chart data which will be the
# output of this script
appdata = {'data': chartdata}

# Print the JSON to system out
print json.dumps(appdata, sort_keys=False, indent=4, separators=(',', ': '))

Performance_Msgs.app file

This sample shows the Performance_Msgs.app application file that references the Performance_Msgs.py script and specifies the chart to display for the Custom App.

{
    "name": "Performance Messages",
    "description": "Displays charts showing performance messages over time",
    "customlogic": {
        "script": "Performance_Msgs.py",
        "description": "View chart on search results",
        "parameters": [
            {
                "name": "search",
                "type": "SearchQuery",
                "value": {
                    "filter": {
                        "range": {
                            "timestamp": {
                                "from": "01/01/ :00: EST",
                                "to": "01/01/ :00: EST",
                                "dateformat": "mm/dd/yyyy HH:mm:ss.SSS Z"
                            }
                        }
                    },
                    "logsources": [
                        {"type": "logsource", "name": "MyTest"}
                    ]
                }
            }
        ],
        "output": {"type": "Data"},
        "visualization": {
            "dashboard": {
                "columns": 2,
                "charts": [
                    {
                        "type": "Stacked Bar Chart",
                        "title": "Performance Message ID Counts - Last Year",
                        "data": {"$ref": "perfmsgidcountsovertime"},
                        "parameters": {
                            "xaxis": "date",

89 , "yaxis": "count", "categories": "perfmsgid" "type": "html", "title": "My Test - Expert Advice", "data": "$ref": "htmldata", "parameters": "html": "text" ] Performance_Msgs chart The chart shows the counts of each message for each day. Defining a search filter app You can use the search filter app to create a customized search query in IBM SmartCloud Analytics - Log Analysis. A search filter app is a custom search request that: v searches databases for strings and timestamps v creates "configured patterns" from the found strings v uses the minimum and maximum timestamps found in the database to frame the search time range. IBM SmartCloud Analytics - Log Analysis includes two files in the $UNITY_HOME/AppFramework/Apps/searchFilter directory that you can use to implement a customized search filter app: Extending IBM SmartCloud Analytics - Log Analysis 83

Defining a search filter app

You can use the search filter app to create a customized search query in IBM SmartCloud Analytics - Log Analysis.

A search filter app is a custom search request that:
v searches databases for strings and timestamps
v creates "configured patterns" from the found strings
v uses the minimum and maximum timestamps found in the database to frame the search time range.

IBM SmartCloud Analytics - Log Analysis includes two files in the $UNITY_HOME/AppFramework/Apps/searchFilter directory that you can use to implement a customized search filter app:
v searchfilter.jar is a self-executing JAR file that contains Java classes. The classes support the core functions of the app.
v SearchFilter.sh is a wrapper script that starts the searchfilter.jar file after the appropriate class path is set.

Supported databases

The following databases are supported by default:
v Derby 10.8
v DB2 9.7

To add databases that are not supported by default, you must locate the JAR file that contains the type 4 JDBC driver for the database and change the location of the drivers in the classpath section of the searchfilter.sh file:
1. Download the appropriate type 4 JDBC driver file for your database.
2. Copy the driver file to the $UNITY_HOME/AppFramework/Apps/searchFilter directory.
3. Add the location of the new driver to the classpath parameter in the searchfilter.sh file.

At the time of publication, Oracle Database 11g Release is the only additional database that has been tested.

Creating the search filter app

You can create the search filter app to create a customized search query in IBM SmartCloud Analytics - Log Analysis.

About this task

To create a search filter app:

Procedure
1. SearchFilter definition: Use this app definition to define your own search filter:
{
    "name": "Example app for search filter",
    "description": "Example app for search filter",
    "customlogic": {
        "script": "searchfilter.sh",
        "description": "Example app for search filter",
        "parameters": [
            {
                "name": "relationalquery",
                "type": "relationalquery",
                "value": {
                    "datasource": DATASOURCE_DETAILS,
                    "SQL": "SQL_QUERY"
                }
            }
        ],
        "output": {"type": "searchfilters"},
        "visualization": {"searchfilters": {}}
    }
}
2. Update the datasource field with your datasource details.

For example:
"value": {
    "datasource": {
        "schema": "ITMUSER",
        "jdbcdriver": "com.ibm.db2.jcc.DB2Driver",
        "jdbcurl": "jdbc:db2:// :50000/wale",
        "datasource_pk": 1,
        "username": "itmuser",
        "password": "tbsm4120",
        "name": "datasource"
    },
3. Update the SQL field with your query. For example:
"SQL": "select * from ITMUSER.test_OLSC"

Results

After you run the app, the customized search is displayed in the configured pattern section of the user interface and the log source and time filters are set. The keywords, if any, and the count information are also displayed.

Note:
1. If the app output contains log sources, the values that match the search criteria are returned on the user interface. If no log sources are returned, the existing selections are retained and used.
2. If the app output contains time filters, the values that match the search criteria are returned on the UI. If no time filters are returned, the existing selections are retained and used.
3. If the app output contains keywords, IBM SmartCloud Analytics searches the log sources and time filters that are returned and displays the keywords and the search hit count in the configured patterns widget UI.

Example

The following code example demonstrates the logic that is used for the search filter app:
{
    "name": "Sample Search filter app",
    "description": "App for getting search filter",
    "customlogic": {
        "script": "searchfilter.sh",
        "description": "App for getting search filter",
        "parameters": [
            {
                "name": "relationalquery",
                "type": "relationalquery",
                "value": {
                    "datasource": {
                        "schema": "ITMUSER",
                        "jdbcdriver": "com.ibm.db2.jcc.DB2Driver",
                        "jdbcurl": "jdbc:db2:// u:50000/wale",
                        "datasource_pk": 1,
                        "username": "itmuser",
                        "password": "tbsm4120",
                        "name": "datasource"
                    },
                    "SQL": "select * from ITMUSER.test_OLSC"
                }
            }
        ],

92 "output": "type": "searchfilters", "visualization": "searchfilters": Note: The output > visualization sub parameter must be set to searchfilters. The search filter app uses relationalquery as the input parameters. The relational query is a JSON object that consists of the following JSON keys: v v datasource is the value of the data source key that defines database connection attributes such as JDBC connection details, schema name, user credentials. SQL is the value of SQL key that defines the query that we want to run against the database Note: If you are trying to run search filter against a data source registered with IBM SmartCloud Analytics, you can provide datasource name instead of datasource details. All details of the corresponding datasource are fetched from IBM SmartCloud Analytics. The datasource is created with the Database connections option in the Data Sources workspace. "name": "relationalquery", "type": "relationalquery", "value": "datasource": "mydatasourcename", "SQL": "select * from MYDB.MYTABLE" where mydatasourcename is the name of a data source that is defined in IBM SmartCloud Analytics - Log Analysis. Related concepts: Application files on page 68 The application file JSON can be created by implementing the structure provided in the sample JSON outlined in this topic. Modifying the search filter app core logic If you want to extend the default implementation, the core logic of the search filter app is described here, Procedure 1. Create a connection to the datasource or database that you specified in the input file using the Data Sources workspace. 2. Run the SQL query that is part of the input file for the app. 3. Tokenize the output of the SQL query and remove common words such as verbs. 4. Pick out the timestamp column and pick up starttime as minimum timestamp and endtime as maximum timestamp. 5. If you already know the log source against which you want to use this search filter, you can use the logsource name in the output parameters. If you do not know this, use an asterisk (*). 6. Construct output in the JSON format. For example: 86 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

93 "keywords": [ "keyword1", "keyword2",... ], "timefilters": "type": "absolute", "starttime": "24/07/ :30:00", "endtime": "24/07/ :30:00", "lastnum": 7, "granularity": "day", "logsources": [ "type": "tag", "name": "*" ] Note: The source code of the default implementation (in Java) of the search filter app is in included with IBM SmartCloud Analytics - Log Analysis in the $UNITY_HOME/AppFramework/Templates/SearchFiltersdirectory. You can update the default implementation based on your requirements. Related concepts: Application files on page 68 The application file JSON can be created by implementing the structure provided in the sample JSON outlined in this topic. Search REST API overview The Search REST interface can be used to execute search queries. Search can be executed through a HTTP POST request on /Search. The Search REST interface is the primary search method that is used to execute search queries. Searching over log sources The input search request and the response back from USR are both modeled as JSON objects. In the following section, the structure of these input and output JSON objects are described. Related tasks: Steps to create a Custom App on page 55 Complete these steps to create a Custom App. Related reference: Facet requests on page 92 Different types of facet requests are supported by USR, along with the JSON format used to specify each type of facet request. JSON Format for Search Request The input JSON for a search request is a single JSON object. JSON Format for Search Request The input JSON for a search request is a single JSON object. The input JSON object has the structure: Extending IBM SmartCloud Analytics - Log Analysis 87

Related concepts:
Application files on page 68
The application file JSON can be created by implementing the structure provided in the sample JSON outlined in this topic.

Search REST API overview

The Search REST interface can be used to execute search queries. Search can be executed through an HTTP POST request on /Search. The Search REST interface is the primary search method that is used to execute search queries.

Searching over log sources

The input search request and the response back from USR are both modeled as JSON objects. The structure of these input and output JSON objects is described in the following sections.

Related tasks:
Steps to create a Custom App on page 55
Complete these steps to create a Custom App.
Related reference:
Facet requests on page 92
Different types of facet requests are supported by USR, along with the JSON format used to specify each type of facet request.

JSON Format for Search Request

The input JSON for a search request is a single JSON object. The input JSON object has the structure:
{
    "logsources": [logsource1, logsource2, ...],
    "query": "the search query string entered by the user",
    "filter": { // filter query (see Filter query) },
    "querylang": "the language of the search query",
    "start": 0,
    "results": 10,
    "getattributes": [attribute1, attribute2, ...],
    "sortkey": [key1, key2, ...],
    "facets": {
        "facetid1": { // facet request (see Facet requests) },
        "facetid2": { // facet request (see Facet requests) },
        ...
    },
    "advanced": {
        "rankinghint": "a query expression that provides a ranking hint for Gumshoe",
        "explain": false,
        "interpretations": true,
        "catchallonly": false,
        "rules": {
            "allrules": true,
            "categoryrules": true,
            "rewriterules": true,
            "ignorecategories": [cat1, cat2, ...]
        },
        "grouping": [groupingtype1, groupingtype2, ...],
        "highlighting": true
    }
}

1,000 or more results and Custom Apps: When a query in a Custom App returns more than 1,000 records, you get only 1,000 results back. The search result that is returned includes a field totalresults, which shows the total number of matching results. Another field, numresults, gives the number of records returned. You can check these values in the Custom App script and handle the results accordingly.
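Before each parameter is described in detail, the following sketch shows a minimal request posted from Python, using the UnityConnection helper that the sample scripts in this chapter import from unity. The server URL, credentials, and log source name are assumptions that you would replace with your own values.

import json
from unity import UnityConnection, get_response_content

# Assumed server URL and credentials; replace them with your own.
connection = UnityConnection("http://myserver:9988/Unity/", "unityadmin", "unityadmin")
connection.login()

# A minimal request: the first 10 results from one log source, newest first.
request = {
    "start": 0,
    "results": 10,
    "logsources": [{"type": "logsource", "name": "MyTest"}],
    "query": "*",
    "sortkey": ["-timestamp"],
    "getattributes": ["timestamp", "msgclassifier"]
}

response = connection.post("/Search", json.dumps(request),
                           content_type="application/json; charset=utf-8")
data = json.loads(get_response_content(response))

# As described in the note above, compare totalresults (all matches) with
# numresults (records returned in this response) when results are capped.
print("total matches: %s, returned: %s" % (data.get("totalresults"), data.get("numresults")))
connection.logout()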

The following table lists the semantics of the search request attributes.

Table 5. Search request input parameters

logsources
    Description: Log sources against which the search request must be performed.
    Default value: Required.
    Comments: A list of log sources. This should be a JSON array. Each entry can be a log source name or a tag name. If a tag name is specified, all log sources under the tag are included in the search. For example:
    "logsources": [
        {"type": "tag", "name": "/ipo"},
        {"type": "logsource", "name": "/daytraderlogsource"}
    ]

query
    Description: Search query.
    Default value: Required.
    Comments: Typically, the value of this parameter is whatever is entered by the end user in a search box. However, any valid query string as per the Velocity query syntax (excluding range queries) is permitted.

filter
    Description: Application filter.
    Default value: No filter.
    Comments: A valid JSON object as described in "Filter query". This parameter is intended for applications to pass in filters in addition to the user search query. Conceptually, the overall query processed by USR is query AND filter. The separation into two parts allows for additional manipulation of the query part when advanced search capabilities are implemented.

start
    Description: Offset of the first result to return (Integer).
    Default value: 0.
    Comments: If the specified value is negative, the value is defaulted to 0; if the specified value is greater than the number of results, no results are returned.

results
    Description: Number of results desired (Integer).
    Default value: 10.
    Comments: The minimum value is 1 (values <= 0 default to 1); there is also a maximum value.

getattributes
    Description: Attributes to be returned for each result entry.
    Default value: Not required.
    Comments: When this parameter is not specified in the request, the engine returns only the set of attributes marked as retrievebydefault in the indexing configurations associated with the log sources in question. If this parameter is specified and is a non-empty array, then the attributes listed in the array are fetched. Finally, if the parameter is specified but is an empty array, then ALL retrievable attributes across all log sources are returned for each result entry.

sortkey
    Description: One or more fields on which to sort the result.
    Default value: Relevance order.
    Comments: A valid value for this parameter is a comma-separated list of field names, with each field name prefixed by + or -. Each field name appearing in the list must have been declared to be a sortable field at index build time. The first field in the list is treated as the primary sort key, the second field (if present) as the secondary sort key, and so on. The + (resp. -) prefix is an instruction to sort the corresponding field values in ascending (resp. descending) order. For queries involving multiple log sources, sort keys must exist in all log sources involved in the query, and sort keys must have the same type across log sources.

outputtimezone
    Description: Time zone in which DATE field results should be formatted.
    Default value: Collection time zone (for single log source queries and multi-log source queries where the log sources have identical time zones). Server time zone (for multi-log source queries where the log sources have different time zones).

outputdateformat
    Description: SimpleDateFormat string that specifies how DATE field results should be formatted.
    Default value: UNITY_DATE_DISPLAY_FORMAT, read from unitysetup.properties.

Related concepts:
Search REST API overview on page 87
The Search REST interface can be used to execute search queries. Search can be executed through an HTTP POST request on /Search.
Scripts on page 57
The script that is executed can be written as a Python script, shell script, or a Java application packaged as an executable JAR file. When you execute the application, the parameters are written to a temporary file in JSON format, and the file name is passed to the script. The script reads the parameters from the file and generates the output in the standard JSON format required for the dashboard specification.
Related tasks:
Steps to create a Custom App on page 55
Complete these steps to create a Custom App.

Filter query

A filter query is a Boolean query specified as a nested JSON record. In its simplest form, a Boolean query consists of a basic query. A basic query can be a term query, wildcard query, phrase query, or range query. Basic queries can also be combined using arbitrarily nested conjunctions (AND queries), disjunctions (OR queries), and negations (NOT queries) to form complex Boolean queries.

Basic filter queries

Term query

A term query specifies a field name and a term. It matches all documents for which the field contains the term. A term query is specified as follows:
"term": {
    "myfield": "termvalue"
}

Wildcard query

A wildcard query specifies a field name and a wildcard expression. It matches all documents for which the field matches the wildcard expression. A wildcard query is specified as follows:
"wildcard": {
    "myfield": "wildcardexpression"
}

Phrase query

A phrase query specifies a field name and a phrase. It matches all documents for which the field contains the phrase. A phrase query is specified as follows:
"phrase": {
    "myfield": "phrasevalue"
}

Range query

A range query specifies a field name along with a lower bound (inclusive) and an upper bound (exclusive) for the field value. A range query is applicable only to numeric and date fields. For date fields, an additional date format must be provided. A range query is specified as follows:
"range": {
    "myfield": {
        "from": lower-bound,       // value will be included in the search
        "to": upper-bound,         // value will be excluded in the search
        "dateformat": date-format  // only for date fields
    }
}

Complex filter queries

Complex filter queries can be constructed by combining one or more queries using AND, OR, or NOT queries.

AND query

An AND query consists of two or more sub-queries. Sub-queries can be either a basic query or another complex query. A document satisfies an AND query only if it satisfies all of its sub-queries. An AND query is specified as follows:
"and": [
    {query1: ...},
    {query2: ...},
    ...
    {queryn: ...}
]

OR query

An OR query consists of two or more sub-queries. Sub-queries can be either a basic query or another complex query. A document satisfies an OR query if it satisfies at least one of its sub-queries. An OR query is specified as follows:
"or": [
    {query1: ...},
    {query2: ...},
    ...
    {queryn: ...}
]

NOT query

A NOT query consists of a single sub-query. The sub-query can be either a basic query or a complex query. A document satisfies a NOT query if it does not satisfy the contained sub-query. A NOT query is specified as follows:
"not": {
    query: ...
}
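For example, a Custom App script might pass a complex filter in the filter parameter of a search request. The following sketch combines a range query on the timestamp field with a term query on a severity field; the field names, date values, and log source selection are assumptions that you would adjust to your index configuration.

# A filter that matches entries in a time range AND with a given severity.
filter_query = {
    "and": [
        {
            "range": {
                "timestamp": {
                    "from": "01/01/2013 00:00:00.000 EST",
                    "to": "01/02/2013 00:00:00.000 EST",
                    "dateformat": "MM/dd/yyyy HH:mm:ss.SSS Z"
                }
            }
        },
        {"term": {"severity": "E"}}
    ]
}

# The filter is passed alongside the user query; conceptually the engine
# processes query AND filter.
request = {
    "start": 0,
    "results": 10,
    "query": "*",
    "filter": filter_query,
    "logsources": [{"type": "tag", "name": "*"}]
}

Post the request to /Search in the same way as the earlier sketches.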

Facet requests

Different types of facet requests are supported by USR, along with the JSON format used to specify each type of facet request.

Each facet request is specified as a JSON key-value pair, with the key being the facet id and the value being a JSON record. The type of facet being computed determines the structure of this JSON record. The supported facet types and their corresponding JSON request formats are described here.

Term Facets

"mytermfacet": {
    "terms": {
        "field": "myfield",
        "size": N
    }
}

Facet counting is performed on the field myfield and the top N most frequent facet values (for some positive integer N) are returned.

The next two facets (histogram and statistical) apply only to numeric fields. In other words, the field on which these facets are being computed must have been configured with datatype=long or datatype=double in the indexing specification associated with the IBM SmartCloud Analytics collection(s) over which the facet request is being processed.

Histogram Facets

"myhistofacet": {
    "histogram": {
        "field": "myfield",
        "interval": 100
    }
}

Performs facet counting with buckets determined based on the interval value.

Statistical Facets

"mystatfacet": {
    "statistical": {
        "field": "myfield",
        "stats": ["min", "max", "sum"]
    }
}

Performs simple aggregate statistics on a facet field. Currently, only three statistics are supported: max, min, and sum. The stats attribute specifies which of these statistics should be computed for the given facet request.
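The facet types can be combined in a single request under the facets key. In the following sketch the field names (msgclassifier and responsetime) are assumptions; histogram and statistical facets require a numeric field in your index configuration.

request = {
    "start": 0,
    "results": 1,
    "query": "*",
    "logsources": [{"type": "tag", "name": "*"}],
    "facets": {
        # term facet: the 10 most frequent message identifiers
        "topmessages": {"terms": {"field": "msgclassifier", "size": 10}},
        # histogram facet on a numeric field, with buckets of width 100
        "responsebuckets": {"histogram": {"field": "responsetime", "interval": 100}},
        # statistical facet on the same numeric field
        "responsestats": {"statistical": {"field": "responsetime", "stats": ["min", "max", "sum"]}}
    }
}

The response contains one entry per facet identifier under facetresults, in the formats shown in "JSON Format for Search Response".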

Date Histogram Facets

"mydatehistofacet": {
    "date_histogram": {
        "field": "myfield",
        "interval": "day",
        "outputdateformat": "yyyy-mm-dd 'T' HH:mm:ssZ"
    }
}

A version of the histogram facet specialized for date fields. The value of the interval attribute can be one of the string constants year, month, week, day, hour, or minute. The value of outputdateformat is any valid date format string as per the Java SimpleDateFormat class. This format string is used to represent the histogram boundaries in the response JSON coming out of USR.

For single-collection date histogram facets, boundaries are based on the collection time zone (either from the index configuration, or from unitysetup.properties if it is missing in the index configuration). For multi-collection facets, boundaries are based on the collection time zone if the time zone of all collections is identical. For multi-collection facets where collection time zones differ, boundaries are based on the server time zone.

Note: The results returned from the date histogram facet are not sorted. If you are plotting the resulting time intervals in a chart, you need to sort the JSON returned by the date histogram facet. For example, in Python, if your search request is the following:
request = {
    "start": 0,
    "results": 1,
    "filter": {
        "range": {
            "timestamp": {
                "from": "01/01/ :00: EST",
                "to": "01/01/ :00: EST",
                "dateformat": "mm/dd/yyyy HH:mm:ss.SSS Z"
            }
        }
    },
    "logsources": [{"type": "logsource", "name": "MyTest"}],
    "query": "*",
    "sortkey": ["-timestamp"],
    "getattributes": ["timestamp", "perfmsgid"],
    "facets": {
        "datefacet": {
            "date_histogram": {
                "field": "timestamp",
                "interval": "hour",
                "outputdateformat": "mm-dd HH:mm",
                "nested_facet": {
                    "dlfacet": {
                        "terms": {
                            "field": "perfmsgid",
                            "size": 20
                        }
                    }
                }
            }
        }
    }
}

First, retrieve the datefacet from the JSON returned by the HTTP request and then call the datesort() function.
response = connection.post('/Search', json.dumps(request),
                           content_type='application/json; charset=utf-8')

content = get_response_content(response)
# convert the response data to JSON
data = json.loads(content)
if 'facetresults' in data:
    # get the facet results
    facetresults = data['facetresults']
    if 'datefacet' in facetresults:
        # get the datefacet rows
        datefacet = facetresults['datefacet']
        # the results of the datefacet are not sorted,
        # so call datesort()
        datesort(datefacet)

where datesort() is defined as follows:
#
# datesort()
#
def datesort(datefacet):
    # This function parses the UTC label found in the datefacet in
    # the format "mm-hh-ddd-yyyy UTC"
    # and returns an array in the form [yyyy, DD, hh, mm]
    def parsedate(datelabel):
        adate = map(int, datelabel.split(" ")[0].split("-"))
        adate.reverse()
        return adate
    # call an in-place List sort, using an anonymous function
    # lambda as the sort function
    datefacet.sort(lambda facet1, facet2: cmp(parsedate(facet1['label']),
                                              parsedate(facet2['label'])))
    return datefacet

Nested Facets

A facet request can be nested inside another facet request by specifying a nested_facet key. You can nest facets to any number of levels; the only restriction is that a statistical facet cannot be included within a nested facet query. The following is a valid nested facet query, with a terms facet query nested inside a date_histogram facet query:
"facets": {
    "datefacet": {
        "date_histogram": {
            "field": "timestamp",
            "interval": "hour",
            "outputdateformat": "mm-dd HH:mm",
            "nested_facet": {
                "severityfacet": {
                    "terms": {
                        "field": "severity",
                        "size": 10
                    }
                }
            }
        }
    }
}
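The Performance_Msgs.py script shown earlier in this chapter consumes exactly this kind of nested structure. The following condensed sketch shows only the traversal, assuming that datefacet has already been retrieved from facetresults and sorted with datesort() as above, and that the nested facet id is dlfacet as in that example.

# Walk the sorted date buckets and the term counts nested inside each bucket.
# 'low' is the bucket label; each nested entry carries 'term' and 'count'.
rows = []
for daterow in datefacet:
    for msgrow in daterow['nested_facet']['dlfacet']['counts']:
        rows.append({
            "date": daterow['low'],
            "perfmsgid": msgrow['term'],
            "count": msgrow['count']
        })
for row in rows:
    print("%(date)s  %(perfmsgid)s  %(count)d" % row)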

Related concepts:
Search REST API overview on page 87
The Search REST interface can be used to execute search queries. Search can be executed through an HTTP POST request on /Search.
Scripts on page 57
The script that is executed can be written as a Python script, shell script, or a Java application packaged as an executable JAR file. When you execute the application, the parameters are written to a temporary file in JSON format, and the file name is passed to the script. The script reads the parameters from the file and generates the output in the standard JSON format required for the dashboard specification.
Related tasks:
Steps to create a Custom App on page 55
Complete these steps to create a Custom App.

JSON Format for Search Response

The search results from USR are also packaged as a single JSON object. The JSON object has the structure:
{
    "searchrequest": { ... },  // copy of the entire input JSON object that generated this response
    "totalresults": ...,       // integer value representing the total number of results for the query
    "numresults": ...,         // number of top-level results sent back within this JSON ("top-level"
                               // because a grouped/clustered result is counted as 1)
    "executioninfo": {
        "processingtime": ...  // time (in ms) measured from the receipt of the search
                               // request by USR to the point when USR begins to construct the result
    },
    "interpretations": ...,    // for advanced search
    "postrules": ...,          // for advanced search
    "searchresults": [
        // Array of JSON objects, one per top-level result entry.
        // The size of this array will be the value of the numresults attribute.
    ],
    "facetresults": {
        "facetid1": ...,       // Object with facet information for facetid1
        "facetid2": ...,       // Object with facet information for facetid2
        ...
    }
}

Each element of the searchresults array has the structure shown below:
{
    "resultindex": ...,  // a number that denotes the position of this result in the
                         // overall result set for this query
    "attributes": {
        "field1": "value1",
        "field2": "value2",
        ...
        // one key-value pair for each field of the result entry; the set of fields
        // is determined by the semantics of the getattributes parameter
    }
}

The JSON structure for the facet results depends on the specific type of facet request.
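Given a /Search response that has been parsed into a Python dictionary named data, as in the earlier sketches, the top-level fields and the searchresults array can be consumed as follows. The attribute names printed here are assumptions based on the earlier examples.

# Iterate over the individual result entries in a parsed /Search response.
print("showing %s of %s matching entries" % (data.get("numresults"),
                                             data.get("totalresults")))
for result in data.get("searchresults", []):
    attrs = result.get("attributes", {})
    # Only attributes requested through getattributes (or marked
    # retrievebydefault in the index configuration) are present.
    print("%s  %s" % (attrs.get("timestamp"), attrs.get("msgclassifier")))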

Term Facets

"facetid": {
    "total": ...,  // total number of distinct terms in the field used for generating this facet
    "counts": [
        {"term": "term1", "count": 10},
        {"term": "term2", "count": 5},
        ...
    ]
}

Histogram Facets

"facetid": [
    {"low": 50, "high": 150, "count": 10},
    {"low": 150, "high": 250, "count": 25},
    ...
]

Statistical Facets

"facetid": {
    "max": ...,  // max value
    "min": ...,  // min value
    "sum": ...   // sum value
}

In general, all three aggregates do not have to be present. Only the aggregates listed in the stats attribute of the corresponding facet request are included.

Date histogram Facets

Identical to the output of the histogram facets, except that the low and high attributes are represented according to the format string specified in the input date histogram facet request. For example, the output may look like the following:
"facetid": [
    {"low": " :00:00", "high": " :00:00", "count": 10, "label": " UTC"},
    {"low": " :00:00", "high": " :00:00", "count": 10, "label": " UTC"},
    ...
]

Nested Facets

If the outermost facet is a term facet, the response will be as follows:
{
    "total": ...,  // total number of distinct terms in the field used for generating this facet
    "counts": [
        {"term": "term1", "count": 10, "nested_facet": ...},  // nested facet result
        {"term": "term2", "count": 5, "nested_facet": ...},   // nested facet result
        ...
    ]
}

Query API for search

The /query API works exactly like the /Search API, the only difference being the structure of the response JSON. This API returns the data in tabular format instead of hierarchical format.

Search Request

The search request structure is the same as described above for the /Search API. The only additional field is name, which is optional. The name is used as the id of the data set in the results. If the name is not specified, searchresults is used as the id.

103 "name":"allerrors", "start": 0, "results": 10, "filter":, "logsources": [... ], "query": "*", "getattributes": [... ], "facets": "facetid1" :...,... Other search parameters like search filters, facets, nested facets, logsources, query, getattributes, start, results etc are described above in section 4 Search Results The search results are in a tabular format, which can be consumed by custom applications. The key data in results points to a array of data sets. Each data set has id, fields and rows. Results includes one data set for search results and one for each facet specified in the request. Search results data set uses the name specified in the request as the id. If not specified searchresults is used as id. Facet results use the facet id used in the request as the id for the data set. In case of term, histogram and date-histogram facets, count is added to the fields along with the specified fields. For statistical facets (max, min & sum), field id is generated by combining field name and the function name. For example for min it s fieldname-min where fieldname is the field included in the statistical facet. Similarly for max and sum it s fieldname-max and fieldname-sum. "data": [ "id": "AllErrors", "fields": [ "label": "fieldlabel1", "type": "TEXT", "id": "fieldid1", "label": "fieldlabe2", "type": "LONG", "id": "fieldid2" ], "rows": [ "fieldid1": "value1", "fieldid2": "value2" ], "id": "facetid1", "rows": [ "fieldid1": "value1", "fieldid2": "value2", "count": value3, "fieldid1": "value1", "fieldid2": "value2", "count": value3 ], Extending IBM SmartCloud Analytics - Log Analysis 97

104 "fields": [ "label": "fieldlabel1", "type": "LONG", "id": "fieldid1", "label": "fieldlabel2", "type": "LONG", "id": "fieldid2", "label": "Count", "type": "LONG", "id": "count" ] ] Search request and results This example shows a search request and response with sample data. Search request "start": 0, "results": 100, "name":"allerrors", "logsources": [ "type": "tag", "name": "*" ], "query": "*", "facets": "termfacet01": "terms": "field": "msgclassifier", "size": 419 Search results "data": [ "fields": [ "label": "msgclassifier", "type": "TEXT", "id": "msgclassifier", "label": "classname", "type": "TEXT", "id": "classname", "label": "logsource", "type": "TEXT", "id": "logsource" ], "rows": [ "msgclassifier": "SRVE0250I", "classname": "com.ibm.ws.wswebcontainer.virtualhost", "logsource": "WASLogSource", "msgclassifier": "SECJ0136I", "classname": "com.ibm.ws.wswebcontainer.virtualhost", "logsource": "WASLogSource" ], 98 IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

Best practices information

Guidelines for developing AQL

This section provides guidelines to apply when you are developing Annotation Query Language (AQL) for your IBM SmartCloud Analytics - Log Analysis Insight Pack. Implementing these guidelines ensures that you create effective and reusable code.

These guidelines specify how to create output and import statements, how to document your code, and how you can develop and organize your annotation rules and modules.

AQL is the primary language used by the InfoSphere BigInsights Text Analytics component. AQL is used to build extractors that extract structured information from unstructured or semistructured text. The log files that you want to search using IBM SmartCloud Analytics - Log Analysis are semi-structured. Therefore, the best practice guidelines provided in this section focus on the specific requirements of IBM SmartCloud Analytics - Log Analysis rather than the wider set of guidelines provided by BigInsights.

Reference videos and documentation

InfoSphere BigInsights provides documentation and video content to allow you to understand Annotation Query Language (AQL) concepts. This documentation also outlines how you can develop AQL to meet your requirements. The list provided here is intended to provide you with guidance in identifying topics that are of particular relevance to IBM SmartCloud Analytics - Log Analysis.

For videos about AQL concepts, see the BigInsights Text Analytics section of the BigInsights Video Guide wiki located at: mydeveloperworks/wikis/home/wiki/biginsights/page/video%20guide?lang=en.

For more information about text analytics using InfoSphere BigInsights, see com.ibm.swg.im.infosphere.biginsights.text.doc/doc/biginsights_textanalytics_intro.html

Development guidelines

These guidelines allow you to create Annotation Query Language (AQL) that can be consumed and reused as necessary. Because AQL is compiled when IBM SmartCloud Analytics - Log Analysis is started, these guidelines reduce the time taken to start IBM SmartCloud Analytics - Log Analysis. They also reduce the time taken for compiling when you are using the BigInsights tool. These guidelines are intended to support the development of an Insight Pack.

Outputs & imports

Create a main.aql file in each module that outputs views. For example, the annotatorsystemout module contains a main.aql file. However, because the annotatorcommon module does not output views, it does not contain a main.aql file. The annotatorcommon module only exports views, which can be imported by other modules such as annotatorsystemout. The main.aql file imports any required modules and outputs views. Do not include any additional code in this file.

AQL Doc

AQL Doc comments are a way to describe a module or object (such as a view, dictionary, table, or function) in plain language, and in an aspect-rich manner for contextual comprehension by other users. BigInsights provides guidance that describes how to apply AQL Doc comments. These comments provide hover help and descriptive text to developers using imported elements in the BigInsights Eclipse tools. For information about AQL Doc, see infocenter/bigins/v2r0/index.jsp?topic=/com.ibm.swg.im.infosphere.biginsights.text.doc/doc/biginsights_aqlref_ref_aql-doccomments.html and index.jsp?topic=/com.ibm.swg.im.infosphere.biginsights.text.doc/doc/biginsights_aqlref_ref_aqlmodule.html

Module structure

Structure your Insight Pack modules so that they contain one splitter module and one annotator module for each log file type in the Insight Pack. Additional modules are required for common annotator code and code common to the splitter and annotator. Each field that you want to annotate must have a separate AQL file. Within this file, the basic, candidate generation, and consolidation rules should be identified by comments. In addition to the field AQL files, ensure that a main.aql file is included for the annotator and splitter modules for each log file type.

Common Modules

Modules that are meant to be reused or that contain common code should only export views and should not have a main.aql file. A common module should not output any views. The calling module should import the common module in its main.aql, and output only those views that it needs.

Dictionaries

Ensure that you create large dictionaries within their own modules. Because dictionaries are tokenized at compile time, containing dictionaries within their own modules avoids the requirement to recompile unchanged dictionaries.

UDF

Create user-defined functions (UDF) in a separate module. This confines the JAR file to a single location rather than spreading it across the individual modules. The AQL modules that call the user-defined functions can import the separate UDF module in their main.aql files.

Data loading best practice

Follow the data loading best practices for the DB2 and WebSphere Application Server Insight Packs. For more information, see the DB2 and WebSphere Application Server Insight Packs topic under Configuring IBM SmartCloud Analytics - Log Analysis.


109 Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-ibm product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY U.S.A. For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: Intellectual Property Licensing Legal and Intellectual Property Law IBM Japan, Ltd , Shimotsuruma, Yamato-shi Kanagawa Japan The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement might not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-ibm Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. Copyright IBM Corp

110 IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: IBM Corporation 2Z4A/ Burnet Road Austin, TX U.S.A. Such information may be available, subject to appropriate terms and conditions, including in some cases payment of a fee. The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurement may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-ibm products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-ibm products. Questions on the capabilities of non-ibm products should be addressed to the suppliers of those products. All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. If you are viewing this information in softcopy form, the photographs and color illustrations might not be displayed. Trademarks IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at Copyright and trademark information at IBM SmartCloud Analytics - Log Analysis: Extending IBM SmartCloud Analytics - Log Analysis

111 Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, other countries, or both. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Other product and service names might be trademarks of IBM or other companies. Notices 105


More information

Getting Started using the SQuirreL SQL Client

Getting Started using the SQuirreL SQL Client Getting Started using the SQuirreL SQL Client The SQuirreL SQL Client is a graphical program written in the Java programming language that will allow you to view the structure of a JDBC-compliant database,

More information

How to Configure the Workflow Service and Design the Workflow Process Templates

How to Configure the Workflow Service and Design the Workflow Process Templates How-To Guide SAP Business One 9.0 Document Version: 1.0 2012-11-15 How to Configure the Workflow Service and Design the Workflow Process Templates Typographic Conventions Type Style Example Description

More information

Business Intelligence Tutorial

Business Intelligence Tutorial IBM DB2 Universal Database Business Intelligence Tutorial Version 7 IBM DB2 Universal Database Business Intelligence Tutorial Version 7 Before using this information and the product it supports, be sure

More information

Tivoli Log File Agent Version 6.2.3 Fix Pack 2. User's Guide SC14-7484-03

Tivoli Log File Agent Version 6.2.3 Fix Pack 2. User's Guide SC14-7484-03 Tivoli Log File Agent Version 6.2.3 Fix Pack 2 User's Guide SC14-7484-03 Tivoli Log File Agent Version 6.2.3 Fix Pack 2 User's Guide SC14-7484-03 Note Before using this information and the product it

More information

VMware vrealize Log Insight User's Guide

VMware vrealize Log Insight User's Guide VMware vrealize Log Insight User's Guide vrealize Log Insight 2.5 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new

More information

IBM Unica emessage Version 8 Release 6 February 13, 2015. User's Guide

IBM Unica emessage Version 8 Release 6 February 13, 2015. User's Guide IBM Unica emessage Version 8 Release 6 February 13, 2015 User's Guide Note Before using this information and the product it supports, read the information in Notices on page 403. This edition applies to

More information

Catalog Web service and catalog commerce management center customization

Catalog Web service and catalog commerce management center customization Copyright IBM Corporation 2008 All rights reserved IBM WebSphere Commerce Feature Pack 3.01 Lab exercise Catalog Web service and catalog commerce management center customization What this exercise is about...

More information

WebSphere Business Monitor V7.0: Clustering Single cluster deployment environment pattern

WebSphere Business Monitor V7.0: Clustering Single cluster deployment environment pattern Copyright IBM Corporation 2010 All rights reserved WebSphere Business Monitor V7.0: Clustering Single cluster deployment environment pattern What this exercise is about... 2 Exercise requirements... 2

More information

WebSphere Business Monitor

WebSphere Business Monitor WebSphere Business Monitor Dashboards 2010 IBM Corporation This presentation should provide an overview of the dashboard widgets for use with WebSphere Business Monitor. WBPM_Monitor_Dashboards.ppt Page

More information

New Features... 1 Installation... 3 Upgrade Changes... 3 Fixed Limitations... 4 Known Limitations... 5 Informatica Global Customer Support...

New Features... 1 Installation... 3 Upgrade Changes... 3 Fixed Limitations... 4 Known Limitations... 5 Informatica Global Customer Support... Informatica Corporation B2B Data Exchange Version 9.5.0 Release Notes June 2012 Copyright (c) 2006-2012 Informatica Corporation. All rights reserved. Contents New Features... 1 Installation... 3 Upgrade

More information

Authoring for System Center 2012 Operations Manager

Authoring for System Center 2012 Operations Manager Authoring for System Center 2012 Operations Manager Microsoft Corporation Published: November 1, 2013 Authors Byron Ricks Applies To System Center 2012 Operations Manager System Center 2012 Service Pack

More information

ERserver. iseries. Work management

ERserver. iseries. Work management ERserver iseries Work management ERserver iseries Work management Copyright International Business Machines Corporation 1998, 2002. All rights reserved. US Government Users Restricted Rights Use, duplication

More information

E-mail Listeners. E-mail Formats. Free Form. Formatted

E-mail Listeners. E-mail Formats. Free Form. Formatted E-mail Listeners 6 E-mail Formats You use the E-mail Listeners application to receive and process Service Requests and other types of tickets through e-mail in the form of e-mail messages. Using E- mail

More information

Using Actian PSQL as a Data Store with VMware vfabric SQLFire. Actian PSQL White Paper May 2013

Using Actian PSQL as a Data Store with VMware vfabric SQLFire. Actian PSQL White Paper May 2013 Using Actian PSQL as a Data Store with VMware vfabric SQLFire Actian PSQL White Paper May 2013 Contents Introduction... 3 Prerequisites and Assumptions... 4 Disclaimer... 5 Demonstration Steps... 5 1.

More information

Tivoli Enterprise Portal

Tivoli Enterprise Portal IBM Tivoli Monitoring Version 6.3 Tivoli Enterprise Portal User's Guide SC22-5447-00 IBM Tivoli Monitoring Version 6.3 Tivoli Enterprise Portal User's Guide SC22-5447-00 Note Before using this information

More information

iway iway Business Activity Monitor User's Guide Version 6.0.2 Service Manager (SM) DN3501982.1209

iway iway Business Activity Monitor User's Guide Version 6.0.2 Service Manager (SM) DN3501982.1209 iway iway Business Activity Monitor User's Guide Version 6.0.2 Service Manager (SM) DN3501982.1209 Cactus, EDA, EDA/SQL, FIDEL, FOCUS, Information Builders, the Information Builders logo, iway, iway Software,

More information

PTC Integrity Eclipse and IBM Rational Development Platform Guide

PTC Integrity Eclipse and IBM Rational Development Platform Guide PTC Integrity Eclipse and IBM Rational Development Platform Guide The PTC Integrity integration with Eclipse Platform and the IBM Rational Software Development Platform series allows you to access Integrity

More information

CA Spectrum and CA Service Desk

CA Spectrum and CA Service Desk CA Spectrum and CA Service Desk Integration Guide CA Spectrum 9.4 / CA Service Desk r12 and later This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter

More information

Tutorial: BlackBerry Object API Application Development. Sybase Unwired Platform 2.2 SP04

Tutorial: BlackBerry Object API Application Development. Sybase Unwired Platform 2.2 SP04 Tutorial: BlackBerry Object API Application Development Sybase Unwired Platform 2.2 SP04 DOCUMENT ID: DC01214-01-0224-01 LAST REVISED: May 2013 Copyright 2013 by Sybase, Inc. All rights reserved. This

More information

000-420. IBM InfoSphere MDM Server v9.0. Version: Demo. Page <<1/11>>

000-420. IBM InfoSphere MDM Server v9.0. Version: Demo. Page <<1/11>> 000-420 IBM InfoSphere MDM Server v9.0 Version: Demo Page 1. As part of a maintenance team for an InfoSphere MDM Server implementation, you are investigating the "EndDate must be after StartDate"

More information

<Insert Picture Here> Oracle SQL Developer 3.0: Overview and New Features

<Insert Picture Here> Oracle SQL Developer 3.0: Overview and New Features 1 Oracle SQL Developer 3.0: Overview and New Features Sue Harper Senior Principal Product Manager The following is intended to outline our general product direction. It is intended

More information

FileMaker 11. ODBC and JDBC Guide

FileMaker 11. ODBC and JDBC Guide FileMaker 11 ODBC and JDBC Guide 2004 2010 FileMaker, Inc. All Rights Reserved. FileMaker, Inc. 5201 Patrick Henry Drive Santa Clara, California 95054 FileMaker is a trademark of FileMaker, Inc. registered

More information

IBM Tivoli Enterprise Console. Adapters Guide. Version 3.9 SC32-1242-00

IBM Tivoli Enterprise Console. Adapters Guide. Version 3.9 SC32-1242-00 IBM Tivoli Enterprise Console Adapters Guide Version 3.9 SC32-1242-00 IBM Tivoli Enterprise Console Adapters Guide Version 3.9 SC32-1242-00 Note Before using this information and the product it supports,

More information

User's Guide. Product Version: 2.5.0 Publication Date: 7/25/2011

User's Guide. Product Version: 2.5.0 Publication Date: 7/25/2011 User's Guide Product Version: 2.5.0 Publication Date: 7/25/2011 Copyright 2009-2011, LINOMA SOFTWARE LINOMA SOFTWARE is a division of LINOMA GROUP, Inc. Contents GoAnywhere Services Welcome 6 Getting Started

More information

Kofax Export Connector 8.3.0 for Microsoft SharePoint

Kofax Export Connector 8.3.0 for Microsoft SharePoint Kofax Export Connector 8.3.0 for Microsoft SharePoint Administrator's Guide 2013-02-27 2013 Kofax, Inc., 15211 Laguna Canyon Road, Irvine, California 92618, U.S.A. All rights reserved. Use is subject to

More information

Java 7 Recipes. Freddy Guime. vk» (,\['«** g!p#« Carl Dea. Josh Juneau. John O'Conner

Java 7 Recipes. Freddy Guime. vk» (,\['«** g!p#« Carl Dea. Josh Juneau. John O'Conner 1 vk» Java 7 Recipes (,\['«** - < g!p#«josh Juneau Carl Dea Freddy Guime John O'Conner Contents J Contents at a Glance About the Authors About the Technical Reviewers Acknowledgments Introduction iv xvi

More information

Application Interface Services Server for Mobile Enterprise Applications Configuration Guide Tools Release 9.2

Application Interface Services Server for Mobile Enterprise Applications Configuration Guide Tools Release 9.2 [1]JD Edwards EnterpriseOne Application Interface Services Server for Mobile Enterprise Applications Configuration Guide Tools Release 9.2 E61545-01 October 2015 Describes the configuration of the Application

More information

IBM Tivoli Monitoring for Virtual Environments: Dashboard, Reporting, and Capacity Planning Version 7.2 Fix Pack 2. User s Guide SC14-7493-03

IBM Tivoli Monitoring for Virtual Environments: Dashboard, Reporting, and Capacity Planning Version 7.2 Fix Pack 2. User s Guide SC14-7493-03 IBM Tivoli Monitoring for Virtual Environments: Dashboard, Reporting, and Capacity Planning Version 7.2 Fix Pack 2 User s Guide SC14-7493-03 IBM Tivoli Monitoring for Virtual Environments: Dashboard,

More information

Exam Name: IBM InfoSphere MDM Server v9.0

Exam Name: IBM InfoSphere MDM Server v9.0 Vendor: IBM Exam Code: 000-420 Exam Name: IBM InfoSphere MDM Server v9.0 Version: DEMO 1. As part of a maintenance team for an InfoSphere MDM Server implementation, you are investigating the "EndDate must

More information

IBM Tivoli Netcool Performance Manager Wireline Component January 2012 Document Revision R2E1. Pack Upgrade Guide

IBM Tivoli Netcool Performance Manager Wireline Component January 2012 Document Revision R2E1. Pack Upgrade Guide IBM Tioli Netcool Performance Manager Wireline Component January 2012 Document Reision R2E1 Pack Upgrade Guide Note Before using this information and the product it supports, read the information in Notices

More information

HP Operations Orchestration Software

HP Operations Orchestration Software HP Operations Orchestration Software Software Version: 9.00 HP Service Desk Integration Guide Document Release Date: June 2010 Software Release Date: June 2010 Legal Notices Warranty The only warranties

More information

Windows Azure Pack Installation and Initial Configuration

Windows Azure Pack Installation and Initial Configuration Windows Azure Pack Installation and Initial Configuration Windows Server 2012 R2 Hands-on lab In this lab, you will learn how to install and configure the components of the Windows Azure Pack. To complete

More information

etrust Audit Using the Recorder for Check Point FireWall-1 1.5

etrust Audit Using the Recorder for Check Point FireWall-1 1.5 etrust Audit Using the Recorder for Check Point FireWall-1 1.5 This documentation and related computer software program (hereinafter referred to as the Documentation ) is for the end user s informational

More information

LogLogic General Database Collector for Microsoft SQL Server Log Configuration Guide

LogLogic General Database Collector for Microsoft SQL Server Log Configuration Guide LogLogic General Database Collector for Microsoft SQL Server Log Configuration Guide Document Release: Septembere 2011 Part Number: LL600066-00ELS100000 This manual supports LogLogic General Database Collector

More information

Telelogic DASHBOARD Installation Guide Release 3.6

Telelogic DASHBOARD Installation Guide Release 3.6 Telelogic DASHBOARD Installation Guide Release 3.6 1 This edition applies to 3.6.0, Telelogic Dashboard and to all subsequent releases and modifications until otherwise indicated in new editions. Copyright

More information

Upgrading to Document Manager 2.7

Upgrading to Document Manager 2.7 Upgrading to Document Manager 2.7 22 July 2013 Trademarks Document Manager and Document Manager Administration are trademarks of Document Logistix Ltd. TokOpen, TokAdmin, TokImport and TokExRef are registered

More information

Document Management User Guide

Document Management User Guide IBM TRIRIGA Version 10.3.2 Document Management User Guide Copyright IBM Corp. 2011 i Note Before using this information and the product it supports, read the information in Notices on page 37. This edition

More information

IBM Network Performance Insight 1.1.0 Document Revision R2E1. Configuring Network Performance Insight IBM

IBM Network Performance Insight 1.1.0 Document Revision R2E1. Configuring Network Performance Insight IBM IBM Network Performance Insight 1.1.0 Document Revision R2E1 Configuring Network Performance Insight IBM Note Before using this information and the product it supports, read the information in Notices

More information

SAP HANA Core Data Services (CDS) Reference

SAP HANA Core Data Services (CDS) Reference PUBLIC SAP HANA Platform SPS 12 Document Version: 1.0 2016-05-11 Content 1 Getting Started with Core Data Services....4 1.1 Developing Native SAP HANA Applications....5 1.2 Roles and Permissions....7 1.3

More information

IBM BPM V8.5 Standard Consistent Document Managment

IBM BPM V8.5 Standard Consistent Document Managment IBM Software An IBM Proof of Technology IBM BPM V8.5 Standard Consistent Document Managment Lab Exercises Version 1.0 Author: Sebastian Carbajales An IBM Proof of Technology Catalog Number Copyright IBM

More information

ThirtySix Software WRITE ONCE. APPROVE ONCE. USE EVERYWHERE. www.thirtysix.net SMARTDOCS 2014.1 SHAREPOINT CONFIGURATION GUIDE THIRTYSIX SOFTWARE

ThirtySix Software WRITE ONCE. APPROVE ONCE. USE EVERYWHERE. www.thirtysix.net SMARTDOCS 2014.1 SHAREPOINT CONFIGURATION GUIDE THIRTYSIX SOFTWARE ThirtySix Software WRITE ONCE. APPROVE ONCE. USE EVERYWHERE. www.thirtysix.net SMARTDOCS 2014.1 SHAREPOINT CONFIGURATION GUIDE THIRTYSIX SOFTWARE UPDATED MAY 2014 Table of Contents Table of Contents...

More information

Initializing SAS Environment Manager Service Architecture Framework for SAS 9.4M2. Last revised September 26, 2014

Initializing SAS Environment Manager Service Architecture Framework for SAS 9.4M2. Last revised September 26, 2014 Initializing SAS Environment Manager Service Architecture Framework for SAS 9.4M2 Last revised September 26, 2014 i Copyright Notice All rights reserved. Printed in the United States of America. No part

More information

Chapter 24: Creating Reports and Extracting Data

Chapter 24: Creating Reports and Extracting Data Chapter 24: Creating Reports and Extracting Data SEER*DMS includes an integrated reporting and extract module to create pre-defined system reports and extracts. Ad hoc listings and extracts can be generated

More information

Sonatype CLM Enforcement Points - Continuous Integration (CI) Sonatype CLM Enforcement Points - Continuous Integration (CI)

Sonatype CLM Enforcement Points - Continuous Integration (CI) Sonatype CLM Enforcement Points - Continuous Integration (CI) Sonatype CLM Enforcement Points - Continuous Integration (CI) i Sonatype CLM Enforcement Points - Continuous Integration (CI) Sonatype CLM Enforcement Points - Continuous Integration (CI) ii Contents 1

More information

SW5706 Application deployment problems

SW5706 Application deployment problems SW5706 This presentation will focus on application deployment problem determination on WebSphere Application Server V6. SW5706G11_AppDeployProblems.ppt Page 1 of 20 Unit objectives After completing this

More information

Application Developer Guide

Application Developer Guide IBM Maximo Asset Management 7.1 IBM Tivoli Asset Management for IT 7.1 IBM Tivoli Change and Configuration Management Database 7.1.1 IBM Tivoli Service Request Manager 7.1 Application Developer Guide Note

More information

IBM FileNet eforms Designer

IBM FileNet eforms Designer IBM FileNet eforms Designer Version 5.0.2 Advanced Tutorial for Desktop eforms Design GC31-5506-00 IBM FileNet eforms Designer Version 5.0.2 Advanced Tutorial for Desktop eforms Design GC31-5506-00 Note

More information