PI Server System Management Guide

PI Server System Management Guide
PI3 Server Version
OSIsoft, Inc. All rights reserved.

OSIsoft, Inc.
777 Davis St., Suite 250
San Leandro, CA USA
Telephone: (01) (main phone); (01) (fax); (01) (support phone)

North American Offices: Houston, TX; Johnson City, TN; Mayfield Heights, OH; Phoenix, AZ; Savannah, GA; Seattle, WA; Yardley, PA

Worldwide Offices: OSIsoft Australia (Perth, Australia; Auckland, New Zealand); OSI Software GmbH (Altenstadt, Germany); OSIsoft Canada ULC (Montreal, Canada); OSIsoft Japan KK (Tokyo, Japan); OSIsoft Mexico S. De R.L. De C.V. (Mexico City, Mexico); OSI Software Asia Pte Ltd. (Singapore); OSIsoft, Inc. Representative Office (Shanghai, People's Republic of China)

Sales Outlets and Distributors: Brazil; Middle East / North Africa; Republic of South Africa; Russia / Central Asia; South America / Caribbean; Southeast Asia; South Korea; Taiwan

Revised: January 2006

Send documentation requests, comments, and corrections to customerfeedback@osisoft.com.

OSIsoft, Inc. is the owner of the following trademarks and registered trademarks: PI System, PI ProcessBook, Sequencia, Sigmafine, grecipe, srecipe, and RLINK. All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Any trademark that appears in this book that is not owned by OSIsoft, Inc. is the property of its owner, and its use herein in no way indicates an endorsement, recommendation, or warranty of such party's products or any affiliation with such party of any kind.

Restricted Rights Legend: Use, duplication, or disclosure by the Government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013.

Copyright Notice: Unpublished -- rights reserved under the copyright laws of the United States.

PREFACE: USING THIS GUIDE

About this Guide

The PI Server System Management Guide is an in-depth manual that provides the information and instructions that system administrators need to operate the PI System. This guide includes comprehensive instructions to assist you in:

- Using PI Server command-line tools such as piconfig, pidiag, and piartool
- Configuring and tuning your PI Server for optimum performance
- Monitoring system health, and ensuring data preservation and integrity
- Managing the Snapshot, Event Queue, and Data Archive
- Troubleshooting and repair

To effectively manage a PI System, follow the recommendations and procedures for each of the following topics:

- Starting and Stopping PI Systems
- Backing Up PI Servers
- Managing Archives
- Managing Interfaces
- Managing Security
- Monitoring PI System Health
- Moving, Copying, or Merging PI Servers

For troubleshooting and performance issues, this guide includes appendices listing the error messages that appear in the System Message Log, and an extensive list of performance measurements (statistics) you can use to optimize the system.

The PI Server Documentation Set

The PI Server Documentation Set includes seven user guides, described below.

Tip: Updated user guides, which provide the most up-to-date information, may be available for download from the OSIsoft Technical Support Web site.

Introduction to PI System Management: A guide to the PI Server for new users and administrators. It explains PI System components, architecture, data flow, utilities, and tools. It provides instruction for managing points, archives, backups, interfaces, security and trusts, and performance. It includes a glossary and resource guide.

PI Server Installation and Upgrade Guide: A guide for installing, upgrading, and removing PI Servers on Windows and UNIX platforms, including cluster and silent installations.

PI Server System Management Guide: An in-depth administration guide for the PI Server, covering starting and stopping systems; managing the Snapshot, Event Queue, and Data Archive; monitoring system health; managing backups, interfaces, and security; and moving and merging servers. Includes comprehensive instructions for using the command-line tools piconfig, pidiag, and piartool, and in-depth troubleshooting and repair information.

PI Server Reference Guide: A comprehensive reference guide for the system administrator and advanced management tasks, including: databases; data flow; PI point classes and attributes; class edit and type edit; exception reporting; compression testing; security; the SQL Subsystem; PI time format; and overviews of the PI API, the PI SDK, and the System Management Tools (SMT).

Auditing the PI Server: An administration guide that explains the Audit Database, which provides a secure audit trail of changes to PI System configuration, security settings, and Archive data. It includes administration procedures to enable auditing, set the subsystem auditing mode, create and archive database files, and export audit records.

PI Server Applications User Guide: A guide to key add-on PI Server applications: Performance Equations (PE), Totalizer, Recalculator, Batch, Alarm, and Real-Time SQC (Statistical Quality Control). Includes a reference guide for Performance Equations and steam calculation functions.

PINet and PIonPINet User Guide: A systems administration guide, including installation, upgrade, and operations, for PINet for OpenVMS and PIonPINet, which support migration and interoperability between PI2 and PI3 Systems.

Conventions Used in this Guide

This guide uses the following formatting and typographic conventions.

Title Case, Italic text: Used for PI client tools, PI System elements, and PI Server subsystems; files, directories, and paths; emphasis; new terms; fields; and references to a chapter or section. Examples:
- Use the client tool, PI ProcessBook, to verify that all data has been recovered.
- All incoming data is queued in the Event Queue by the Snapshot Subsystem.
- The backup script is located in the \PI\adm directory.
- Archive files can be either fixed or dynamic. The archive receiving current data is called the Primary Archive.
- See Section 4.2, Create a New Primary Archive.

Bold Italic text: Used for references to a publication. Example:
- See the PI Server Reference Guide.

Bold text: Used for system and application components (subsystems; tools and utilities; processes, scripts, and variables; arguments, switches, and options; parameters, attributes, and values; properties, methods, events, and functions), for procedures and key commands, and for interface components (menus and menu items; icons, buttons, and tabs; dialog box titles and options). Examples:
- The Archive Subsystem, piarchss, manages data archives. Piarchss must be restarted for changes to take effect.
- On UNIX, invoke the site-specific startup script, pisitestart.sh; on Windows, invoke pisrvsitestart.bat.
- Three Point Database attributes affect compression: CompDev, CompMin, and CompMax. These are known as the compression specifications.
- On the Tools menu, click Advanced Options.
- Press CTRL+ALT+DELETE to reboot.
- Click Tools > Tag Search to open the Tag Search tool. Click the Advanced Search tab. Use the search parameters PImean Value = 1.

Monospace type ("Consolas" font): Used for code examples; commands to be typed on the command line (optionally with arguments or switches); and system input or output, such as excerpts from log files and other data displayed in ASCII text. Bold Consolas is used in the context of a paragraph. Example:
- To list current Snapshot information every 5 seconds, use the piartool -ss command.

Light Blue, Underlined: Used for links to URLs / Web sites, and e-mail addresses. Example:
- support@osisoft.com

Related Documentation

OSIsoft provides a full range of documentation to help you understand and use the PI Server, PI Server interfaces, and PI client tools. Each interface has its own manual, and each client application has its own online help and/or user guide.

The UniInt End User Manual describes the OSIsoft Universal Interface (UniInt) and is recommended reading for PI Server system managers. Many PI interfaces are based upon UniInt, and this guide provides a deeper understanding of the principles of interface design.

Using PI Server Tools

The PI Server provides two sets of powerful tools that allow system administrators and users to perform system administration tasks and data queries.

The PI Server includes many command-line tools, such as pidiag and piartool. The PI Server Documentation Set provides extensive instruction for performing PI Server administrative tasks using command-line tools.

PI System Management Tools (SMT) is an easy-to-use application that hosts a variety of plug-ins providing all the basic tools you need to manage a PI System. You access this set of tools through a single host application, sometimes referred to as the SMT Host but more commonly called System Management Tools or SMT. You can download the latest version of SMT from the OSIsoft Technical Support Web site.

In addition to extensive online help that explains how to use all of its features, SMT includes the Introduction to PI System Management User Guide.

QUICK TABLE OF CONTENTS

Chapter 1. Starting and Stopping PI
Chapter 2. Monitoring PI System Health
Chapter 3. Managing Archives
Chapter 4. Backing up the PI Server
Chapter 5. Managing Interfaces
Chapter 6. Managing Security
Chapter 7. Moving PI Servers
Chapter 8. Copying a PI Server
Chapter 9. Merging Two PI Servers
Chapter 10. The piconfig Utility
Chapter 11. PI Troubleshooting and Repair
Chapter 12. Finding and Fixing Problems: the pidiag Utility

TABLE OF CONTENTS

Preface: Using this Guide
Table of Tables
Table of Figures

Chapter 1. Starting and Stopping PI
  Starting PI
    Starting PI on Windows Systems
    Starting PI on UNIX Systems
  Stopping PI
    Stopping PI on Windows Systems
    Stopping PI on UNIX Systems
  Automatic Startup
    Automatic Startup and Shutdown on UNIX Systems
  Shutting Down an Individual Subsystem

Chapter 2. Monitoring PI System Health
  Checking Key System Indicators
  Viewing System Messages
    Available Log History
    Viewing System Messages with pigetmsg
    Viewing Messages When the Message Subsystem Goes Down
    Viewing Message Log Files Generated on other Servers
    Interpreting Error Messages (pidiag)
    Subsystem Healthchecks (RPC Resolver Error Messages)
  Monitoring Snapshot Data Flow
    Listing Current Snapshot Information with piartool -ss
  Monitoring the Event Queue (piartool -qs)
  Monitoring the Archive (piartool -as)
  Monitoring the Update Manager
    Adjusting the Pending Update Limit
  System Date and Time

Chapter 3. Managing Archives
  Tasks for Managing Archives
  About Archives
    About Archive Shifts
    About Archive Files
    About Primary Archives
    About Fixed and Dynamic Archives
    About Read-Only Archives
  Tools for Managing Archives
    Using the piartool Utility
    Using the Offline Archive Utility (piarchss)
  Listing the Registered Archives
    Determining an Archive Sequence Number from a Listing
    Listing Archive Record Details
    Performing an Archive Walk with piartool -aw
    Estimating Archive Utilization
  Editing Archives
  Creating Archives
    Naming Archives
    Choosing an Archive Size
    Selecting an Archive Type: Fixed or Dynamic
    Specifying the Number of Points in the Archive
    Specifying the Maximum Archive Size
    Creating Data Archives Prior to the Installation Date
    Creating a New Primary Archive
  Registering an Archive
  Unregistering an Archive
  Deleting an Archive
  Moving an Archive
  Managing Archive Shifts
    Archive Shift Enable Flag
    Forcing Archive Shifts
  Combining and Dividing Archives
    Combining Archives into a Single Archive
  Event Queue Recovery

Chapter 4. Backing up the PI Server
  Planning for Backups
    Choosing the Backup Platform (VSS vs. Non-VSS)
    Choosing a Backup Strategy
    Other Backup Considerations
    Guidelines for VSS Backups
    Guidelines for Non-VSS Backups
    Guidelines for Backing Up Before Upgrading
  Automating PI Backups
    Automating Backups on Windows
    Do a Test Backup
    Do a Test Restore
    Automating Backups on Windows with a 3rd-Party Backup Application
    Automating Backups on a Windows Cluster
    Automating Backups on UNIX
  How the PI Backup Subsystem Works
    Principles of Operation
    Selecting Files or Components for Backup
    The Backup Components of PI
    The Files and Components for Each Subsystem
    Lifetime of a Backup
    Launching Non-VSS Backups with piartool -backup <path>
  Managing Backups with piartool
    Backup Command Summary
    piartool -backup -query
    piartool -backup -identify
    Timeout Parameters
  Troubleshooting Backups
    Log Messages
    VSSADMIN

Chapter 5. Managing Interfaces
  Introduction
  General Interface Principles
    About PI Interfaces
    About PI Interface Nodes
    About Data Buffering
    About the PI API
    About the PI SDK
    About UniInt-Based Interfaces
  Basic Interface Node Configuration
    Install the PI SDK and/or the PI API
    Connect to PI with apisnap.exe
    Connect with AboutPI-SDK.exe
    Configure PI Trusts for the Interface Node
    Install the Interface
    Set the Interface Node Time
    Connect to the PI Server with the Interface
    Configure Points for the Interface
    Configure Buffering
    Configure Interface for Automatic Startup
    Configure Site-Specific Startup Scripts
    Configure PointSource Table
    Monitor Interface Performance
    Configure the PI Interface Status Utility
    Configure Auto Point Synchronization

Chapter 6. Managing Security
  Physical Security
  Network Security
  Operating System Security
  PI Server Security
    Running Applications on the System Console
    Running Applications Remotely
  Firewall Security
    Firewall Database
  Database Security
    PIDBSEC
    PIARCADMIN
    PIARCDATA
    PIBatch
    PICampaign
    PIDS
    PIHeadingSets
    PIModules
    PIPOINT
    PITransferRecords
    PIUSER
    Individual Subsystem Tokens
  Point Security
    Point Data Access
    Point Attribute Access
    Access Algorithm
    Assigning and Changing Ownership and Access Permissions
    How to Make All Points Accessible
    Security for Default Points in the PI Server
  User Security
    Group Database
    User Database
  Trust Login Security
    Connection Credentials
    Defining Trust Records in the Trust Database
    Evaluating the Match Between Trust Records and Connection Credentials
    New PI Server Installations
    Trust Changes on System Startup
    Multiple Network Cards
    Examples
    Resolving Ambiguities

Chapter 7. Moving PI Servers
  Preparation
  Different Computer
  Same Computer

Chapter 8. Copying a PI Server
  Understanding the Server ID
  Situations Where the Server ID Must Be Addressed

Chapter 9. Merging Two PI Servers
  Server Merging: Procedures
  Server Merge Procedure: Example
    Shut Down the Retired System
    Add Retired Point Configuration to Destination System
    Add PI Batch Unit Configuration to Destination System
    Convert the Archives
    Merge the PI Batches

Chapter 10. The piconfig Utility
  PI TagConfigurator & PI SMT Point Builder Plug-in
  A Note to Pidiff Users
  Key Concepts for Using Piconfig
    Starting and Stopping Piconfig
    Interactive Session vs. Batch Method
    Selecting PI Tables
    Table Attributes
    Records
  Piconfig Commands
    Mode
    Structure
    Type
    Select
    Command Operators
    Wildcards
    The Ellipsis (...) Construct for Repeating Attributes
    Endsection
    Exit
    Batch Methods
  Security on Piconfig Sessions
    Remote Piconfig Sessions
  Piconfig Commands and Tables
    Point Database Table
    PI Attribute Set Table
    Point Class Database
    Point Source Database
    Digital States Table
    Snapshot and Snapshot2 Tables
    Archive Table
    PI Batch Unit Table
    PI Batch Alias Table
    PI Batch Table
    PI Subsystem Table
    PI Subsystem Statistics Table
    PINet Manager Statistics Table
    PI Users Table
    PI Group Table
    PI Thread Table
    PI Database Security Table
    PI Trust Database
    PI General Table
    Interface
  Helpful Hints
    Abbreviations
    Case-sensitivity
    Command Input Files
    Input Line Length
    Using Quoted Strings
    Sending Values as Strings
    Boolean Values
    Configuration Persistence
    Command Line Parameters
    Changing Special Characters
    Convert Mode Example
    Converting Point Database Information from PI2 to PI3
    Hexadecimal and Octal Numbers

Chapter 11. PI Troubleshooting and Repair
  Troubleshooting Checklist
  Verifying PI Processes
    Verifying PI Processes on Windows Systems
    Verifying PI Processes on UNIX Systems
    Communication with PINet Manager
    Pimsgss
    PI Update Manager
    PI Base Subsystem
    Snapshot Subsystem
    Archive Subsystem
    Pishutev
    Random Interface
    RampSoak Interface
    Running PI Processes Independently
  UNIX Process Quotas
    IBM AIX
    Compaq Tru64
    HP-UX
    Sun Solaris
  PI Server Data Files
  Identifying Update Manager Issues: the pilistupd Utility
  Repairs
    Recovering Data from Corrupted Archives
    Restoring a Complete Server from Backup
    Restoring Archives from Backup
    Restoring Subsystem Databases from Backup
    Correcting Archive Event Timestamps
    Repairing the Archive Registry
    How to Repair the Snapshot
    Recovering from Accidental Change to System Time
    How to Repair the Point Database
    How to Repair the Module Database
  Tuning the PI Server
    Communications Layer of the PI Server
    Resolving Excessive CPU Usage by Utilities
    Identifying Abusive Usage
  Solving Other Problems
    Failed Backups
    Slow Reverse Name Lookup
    Slow Domain Controller Access
    Flatline in a PI ProcessBook Trend
  COM Connectors
    Redirector Troubleshooting
    COM Connector Troubleshooting

Chapter 12. Finding and Fixing Problems: the pidiag Utility
  General Information
    Version Information
    Error Code Translation
  Time Utilities
    Time Translations
    Time Zone
  File-base Utilities
    Dump File-base Data File
    Compact a File-base Data File
    Recover File-base Data File Index
  Archive Management
    Dump the Archive Manager Data File
    Automatic Recovery of the Archive Manager Data File
    Manual Recovery of the Archive Manager Data File
    Information about Unregistered Archives
    Verify the Integrity of Archive Files
    Downgrade Archive File to Older Versions
  Server ID Utilities
  Performance Counter Utilities (Windows Only)
    Get Performance Counter Path
    Uninstall Performance Counters
    Get Performance Counter Values
  Miscellaneous
    Wait for Passed Milliseconds
    Testing Crash Dump Capability of an OS
    Reset Password to Blank
    Display Network Definitions
    Register a COM Component
    Replaceable Parameter
    GUID Generation
    Machine-specific Programming Information

Appendix A: PI Messages
Appendix B: PI Performance Counters
Technical Support and Resources
Index of Topics

TABLE OF TABLES

Table 3-1. Options for Use with piartool
PI Tables Accessible Through piconfig
Piconfig Commands
Snapshot and Snapshot2 Tables Attributes
Archive Table Attributes
PI Batch Unit Table Attributes
PI Batch Alias Table Attributes
PI Batch Table Attributes
Subsystem Table Attributes
PI Subsystem Statistics Table Attributes
PINet Manager Statistics Table Attributes
PI Users Table Attributes
PI Group Table Attributes
PI Thread Table Attributes
PI Thread Table Actions
PI Database Security Table Attributes
Trust Table Attributes
PI Timeout Table Attributes
PI Firewall Table Attributes
Tables A-1 through A-15. Error Codes, with Messages
Table B-16. PI Performance Counters

TABLE OF FIGURES

Figure 6-1. Establishing PI Performance Monitor as a Windows Service
Distributed COM Configuration Properties
Windows Task Manager Processes
Application Log Properties
COM Connector Loading

Chapter 1. STARTING AND STOPPING PI

The PI Server includes several separate processes that participate in startup and shutdown. These are referred to as PI processes or PI subsystems. This chapter describes the relationship between the processes and the startup and shutdown scripts used to control them.

PI should be started or stopped only by the PI System Manager. Remember that stopping PI affects all client applications, performance equation calculations, and the archiving of data.

PI Server startup is performed with a pair of scripts: a generic startup script starts all PI processes and then calls a site-specific script to start interfaces and other site-specific programs. Modify only the site-specific script, because the generic startup script may be replaced during a PI Server upgrade. The actual file names of these scripts vary with the operating system; platform-specific details are provided in the following subsections.

In general, the PI Server shutdown scripts work the same way in reverse: a generic shutdown script calls a site-specific script to shut down interfaces and site-specific programs, and then shuts down all PI processes.

Note: The only exception is shutting down PI processes running in individual command windows. In this case, you must bring each window into focus and type <CTRL-C>. Running PI subsystems in command windows is very rare, and usually only encountered in troubleshooting scenarios.

It is important to configure shutdown events. Generally, points collected by interfaces running on the PI Server node should be configured for shutdown events; points collected on remote, buffered nodes usually are not. See Stopping PI on page 3.

Production systems are usually configured so that PI starts automatically when the computer is powered on. See Automatic Startup on page 6 for more information.

1.1 Starting PI

The PI subsystems may take several minutes to start. Remote or PI API-based interfaces and other applications are blocked from connecting until the core PI subsystems have completed startup. The connection blocking is accomplished by opening the TCP/IP listener only when the core subsystems are ready to service requests. The following message is posted to the PI Server log when the listener is opened:

    >> TCP/IP connection listener opened on port: 5450

Connection attempts before the listener is opened will fail. Interfaces retry until a connection is made.

1.1.1 Starting PI on Windows Systems

On Windows systems, the System Manager can be logged into any account that allows full access to the PI Server files and enough privileges to start services. To start PI, change to the PI\adm directory. PI normally runs as Windows services. For diagnostic purposes, you can also start PI in interactive mode.

Starting PI as Windows Services

To start PI as Windows services:

1. Log in to a Windows account that has full access to the PI Server files and permission to start PI services.
2. Open a Windows command prompt window.
3. Change to the PI\adm directory.
4. Use the pisrvstart.bat script to start PI as Windows services:

    pisrvstart.bat [-nosite] [-base]

-nosite is an optional parameter indicating that the site-specific script file should not be called; as a result, the interfaces are not started. -base starts only the core subsystems and is used for troubleshooting.

The pisrvstart.bat script starts all the PI Server processes, and then calls pisrvsvrappsstart.bat if PI Server Applications are installed. Then it calls pisrvsitestart.bat to start all of the interfaces.

Starting PI in Interactive Mode

To start PI in interactive mode:

    pistart.bat [-nosite] [-stdout] [-base]

The pistart.bat script calls pisitestart.bat. -nosite is an optional parameter indicating that the site-specific script file should not be called; the interfaces are not started. -base starts only the core subsystems and is used for troubleshooting. -stdout is an optional parameter indicating that all messages from the processes should be sent to standard output instead of to the message log. When you use this parameter, the PI Message Subsystem is not started.

The error "Control service unknown error 5 opening Service Control Manager" is returned if PI startup is attempted by a user without sufficient privileges.
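For example, a typical service startup from the command line, followed by a check that the listener has opened; this is a minimal sketch, assuming PI is installed under \PI (the pigetmsg utility is described in Chapter 2):

    cd /d \PI\adm
    pisrvstart.bat
    rem Confirm that the core subsystems are ready to service requests:
    pigetmsg -st "*-10m" -et "*" -msg "*listener*"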

Some interfaces on Windows cannot be run as services. Check the interface documentation.

1.1.2 Starting PI on UNIX Systems

On UNIX systems, the manager should be logged in as root or piadmin. For a UNIX system, type:

    pistart.sh [-nosite] [-stdout] [-base] [M]

This starts all the PI Server processes, calls piappstart.sh if PI Server Applications are installed, and then starts the interfaces and programs that are listed in the site-specific script file, pisitestart.sh.

-nosite is an optional parameter indicating that the site-specific script file should not be called; the interfaces are not started. -base starts only the core subsystems and is used for troubleshooting. -stdout is an optional parameter indicating that all messages from the processes should be sent to standard output instead of the message log; when you use this parameter, the PI Message Subsystem is not started. M starts only PI Network Manager, PI License Manager, and the PI Message Subsystem.

1.2 Stopping PI

The procedure for stopping PI differs slightly depending on whether you are running on Windows or UNIX.

1.2.1 Stopping PI on Windows Systems

To stop PI on a Windows system, if PI processes are running as Windows services, type:

    pisrvstop.bat

This stops all of the interfaces and programs listed in pisrvsitestop.bat, and then the PI processes.

PI services are shut down automatically when you shut down your system; the order of the shutdown in this case is determined by the operating system. Windows has a registry entry that defines the maximum wait for a service to exit. On PI Systems with large point counts, this maximum wait time may need to be increased to give PI enough time to shut down properly. The PI installation procedure increases the default value of 20,000 milliseconds to 300,000 milliseconds (5 minutes), which is generally enough time for proper shutdown on systems with fewer than 50,000 points. Larger systems may require more time. To determine the required time, manually stop the PI Server using pisrvstop.bat and record the time to shut down. If it is longer than 5 minutes, set the registry entry

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\WaitToKillServiceTimeout

to reflect the actual shutdown time. Failing to allow proper shutdown of the PI Server can result in lost data or corrupted data files.
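For example, the value can be changed from a command prompt with the Windows reg utility; a minimal sketch, assuming a measured shutdown time of about 8 minutes, so the timeout is raised to 600,000 milliseconds (10 minutes) for margin. Back up the registry before changing it.

    rem WaitToKillServiceTimeout is a REG_SZ value expressed in milliseconds
    reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v WaitToKillServiceTimeout /d 600000 /f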

To shut down an interactively started PI Server, type <CTRL-C> in each of the command windows corresponding to the PI processes. These should be stopped in the following order:

1. Utilities (for example, piconfig)
2. Interfaces (for example, Ramp Soak and Random)
3. pinetmgr (when you instruct pinetmgr to stop, the remaining processes are told to exit in the proper order and, finally, pinetmgr itself stops)

1.2.2 Stopping PI on UNIX Systems

To stop PI on a UNIX system, change to the PI/adm directory and type:

    pistop.sh

This stops all the interfaces and programs that are listed in the site-specific script file pisitestop.sh, and then stops all the PI processes. If some processes do not stop successfully, run the pistop.sh script again. If you still have problems stopping individual programs, use the UNIX kill command.

Distributed systems use PINet and Interface Nodes as data source machines. Normally these systems buffer data if the PI Server is not available, and the buffered data is sent to the PI Server when it becomes available. There are a few instances in which data is not buffered when the home node is down. Examples include cases where an interface is loaded on the PI Server machine and the PI Server is shut down, or where the data is generated locally with performance equations. In these cases, you may wish to record the intervals when the PI Server on the home node was shut down. This is accomplished by inserting a shutdown event.

A shutdown event is a timestamp and a digital state, typically Shutdown, written to one or more points. The digital state prevents the interpolation of data over periods with possible missing data, and also records system status, providing a clear indication of a gap in the data. The PI Server provides a utility, pishutev, to insert shutdown events for points that are configured to receive them. The pishutev utility uses a configuration file, shutdown.dat, to determine which points should receive events.

Note: Interfaces may also be configured to record instances when no data is received from an interface.

Shutdown Flag

The shutdown point attribute in the Point Database is set to TRUE (1) by default. If the shutdown attribute for a point is set to TRUE, the point is able to receive shutdown events if the shutdown.dat file targets the point.
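To see which points currently have the shutdown flag set, and would therefore be selected by the default shutdown.dat, you can query the Point Database with the piconfig utility (covered in Chapter 10). A minimal sketch of an interactive session:

    @table pipoint
    @mode list
    @ostr tag, shutdown
    @select tag=*, shutdown=1
    @ends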

pishutev

Shutdown events are written at system startup by the pishutev interface. The utility reads a configuration file to determine which tags should receive shutdown events. It also supplies a configurable timestamp for the PI Server shutdown. Unlike most PI processes, pishutev exits after completion. The startup command line for pishutev is located in the PI startup files: PI/adm/pistart.sh (UNIX), and PI\adm\pistart.bat and PI\adm\pisrvstart.bat (Windows).

By default, the configuration file is PI\dat\shutdown.dat. It is sometimes useful to create additional shutdown configuration files with additional point selections. These files are processed by starting another instance of the interface. The new shutdown configuration file name is passed as a parameter using the -f flag on the pishutev command line. For example:

    pishutev -f myshutdown.dat

This command line should be added to the PI site-specific startup files PI/adm/pisitestart.sh (UNIX) and PI\adm\pisitestart.bat (Windows). Starting a second instance of pishutev to process an additional shutdown configuration file is not supported when starting PI as Windows services.

By default, pishutev determines the shutdown event timestamp from a file written by the Snapshot Subsystem. This file is updated whenever the Snapshot is flushed to disk, usually every 10 minutes. Hence, in the event of a power failure, the timestamp of the shutdown event is accurate to within 10 minutes. When PI is shut down normally, the timestamps are the actual shutdown time. If the file written by the Snapshot Subsystem is missing, the shutdown events interface uses the current time to timestamp the shutdown events. The default time may be overridden by a user-specified time, using the -t flag and passing the time as a parameter on the command line. For example:

    pishutev -t 11:00

By default, the digital state Shutdown is written for each shutdown event. To write a digital state other than Shutdown, use the -s flag and pass the digital state as a parameter on the command line. The specified state must already be configured in the System Digital State Set in the Digital State Table. For example:

    pishutev -s SpecialState

The -f, -t, and -s flags may be used in any combination.
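For example, to process a site-specific configuration file, stamp the events at 11:00, and write the SpecialState digital state instead of Shutdown (all values here are taken from the examples above):

    pishutev -f myshutdown.dat -t 11:00 -s SpecialState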

Shutdown.dat

The points receiving shutdown events are specified using the file PI\dat\shutdown.dat. The PI Server is delivered with a shutdown.dat file that selects all points whose shutdown flag is TRUE. This file is shown below. You may wish to edit the file to restrict shutdown events to certain groups of points.

    ! default shutdown events file
    ! login info - only localhost is supported
    localhost
    ! tag mask
    *
    ! attrib selection
    ! add here point attributes,value to select points receiving shutdown
    shutdown,1
    ! pointsource,r*
    ! location1, 1
    ! etc...

To specify more than one tag name, use a tag mask. The tag mask may contain the wildcards * and ?. The symbol * matches all possibilities with any number of characters. The symbol ? matches a single character and may be used any number of times.

Caution: Do not specify additional tags by appending comma-separated tag masks or by using additional lines. Only one tag mask may be specified. If you do not specify a tag mask, the interface exits with an error. To prevent all shutdown events, specify a tag mask that does not match any tag.

Other point attributes and values may be used in addition to, or instead of, the shutdown flag. These conditions are logically ANDed together. For example, the following configuration file selects only tags that start with s, have the Location1 attribute set to 0, and have PointSource set to H. No other tags will receive shutdown events.

    ! tag mask
    s*
    ! point attributes
    location1,0
    pointsource,h

If no point attributes are specified, all tags specified by the tag mask are selected to receive shutdown events.

Caution: The pishutev process is used to execute post-installation procedures; therefore, it should not be disabled or removed from the normal startup procedures. Use the shutdown.dat file or the point shutdown attribute to prevent writing shutdown events.
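After a restart, you can spot-check that shutdown events were actually written by inspecting the snapshot of a point selected by shutdown.dat. A sketch using the pisnap script that Chapter 2 lists as the tag-data check (it prompts for a tag name; the point's current value should show the Shutdown digital state until new data arrives):

    cd /d \PI\adm
    pisnap.bat

On UNIX, use pisnap.sh instead.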

1.3 Automatic Startup

The procedure to configure PI for automatic startup depends on the operating system. The PI Server on Windows is normally run as a collection of services. The installation procedure provides a dialog to optionally set automatic startup on reboot. The reboot startup behavior of PI can also be set using the Control Panel Services applet.

1.3.1 Automatic Startup and Shutdown on UNIX Systems

Shutdown and startup for UNIX systems vary with platform, but there are generally two varieties: BSD-style systems use one file, /etc/inittab, which specifies what scripts to run in each run level, while System V flavors of UNIX use scripts that reside in a set of directories called rcn.d, where n is a number ranging from 0 on up. Usually these subdirectories are in the /etc directory, but they can be in others (HP-UX 10 has them under /sbin, for example). This section gives examples of how to configure each of the supported systems.

No matter which UNIX platform you are using, if you want PI to start automatically when the system boots or reboots, and shut down when the system shuts down, you MUST ensure that the .profile or .login file for piadmin does not require interaction with a terminal. This means that if you use, for example, tset to set the TERM variable, you must first check whether there is a terminal attached to the current process. One way to do this is:

    if tty -s ; then
        tset ...
    fi

Automatic Startup/Shutdown for Solaris 2.x

Solaris recognizes numerous run levels:

    Run Level    Purpose
    s or S       single user (used for administering the system)
    2            standard level, non-networked
    3            default level
    4            standard level, user-defined

Which processes run at each level is determined by a set of scripts, /etc/rcS through /etc/rc6. These scripts look in the directories /etc/rc#.d (where # = 0, 2, 3, and so on) for scripts that begin with a K or an S. The K scripts are used to kill processes, and the S scripts are used to start processes. When the system moves from level 3 to level 2, the K scripts in /etc/rc2.d are executed first, then the S scripts. The scripts are run in ASCII-collated sequence, so K02stop is run before K04metoo. On shutdown or reboot, the system runs the scripts in /etc/rc0.d.

If you are using the default run level, you will need to put the PI startup in /etc/rc3.d. The shutdown script for PI needs to be in /etc/rc2.d, so PI will be shut down if the system goes to non-networked mode, and also in /etc/rc0.d, so PI will be shut down if the system is halted or rebooted.

Note: Solaris systems should be restarted using shutdown -i 6. This causes the shutdown scripts to be run before rebooting. The reboot command simply restarts the kernel, without taking the time to shut down processes properly. The command shutdown -i 5 is for powering down.

Script files are actually kept in /etc/init.d, with links placed in the /etc/rc#.d directories. So, in /etc/init.d, you need to create the following three files: PI, PIstart, and PIstop. Be sure to change the lines that specify where to find PI on your system.

    #!/bin/ksh
    #
    # Filename: /etc/init.d/pi
    #
    # become piadmin to start/stop PI
    PATH=/sbin:/usr/sbin:/usr/bin
    export PATH
    case "$1" in
    'start')
            if [ -f /etc/init.d/pistart ] ; then
                    su - piadmin -c "/etc/init.d/pistart" > /dev/console 2>&1
            fi
            ;;
    'stop')
            if [ -f /etc/init.d/pistop ] ; then
                    su - piadmin -c "/etc/init.d/pistop" > /dev/console 2>&1
            fi
            ;;
    *)
            echo "usage: $0 {start|stop}"
            ;;
    esac

    #!/bin/ksh
    #
    # Filename: /etc/init.d/pistart
    # Make sure to set the directory in these
    # files to your PIHOME directory, in all
    # places
    #
    if [ -f /opt/pi/adm/pistart.sh ] ; then
            cd /opt/pi/adm
            nohup ksh pistart.sh
    fi

    #!/bin/ksh
    #
    # Filename: /etc/init.d/pistop
    # Make sure to set the directory in these
    # files to your PIHOME directory, in all
    # places
    #
    if [ -f /opt/pi/adm/pistop.sh ] ; then
            cd /opt/pi/adm
            ksh pistop.sh
    fi

Next, set the owner, group, and permissions on these files to match the other files in this directory:

    root> chgrp sys /etc/init.d/pi*
    root> chmod 755 /etc/init.d/pi*

Then you'll need to set the links in the rc#.d directories. Make sure that the S## number on the startup file is higher than the S## number for inetd, and that the K## number for the kill file is lower than the K## number for inetd (in both directories).

    root> ln -s /etc/init.d/pi /etc/rc3.d/S96pi
    root> ln -s /etc/init.d/pi /etc/rc2.d/K04pi
    root> ln -s /etc/init.d/pi /etc/rc0.d/K04pi

Verify that these files have the same owner, group, and permissions as other startup files in those directories. Finally, test your scripts before you restart your machine. To stop PI:

    root> sh /etc/rc0.d/K04pi stop

Verify that PI processes shut down. Then:

    root> sh /etc/rc3.d/S96pi start

Verify that PI starts properly. If there is any problem with stopping or restarting PI, remove the links in the /etc/rc#.d directories until you've debugged and fixed the problems. The files in the /etc/init.d directory will not affect your system as long as there are no links in the /etc/rc#.d directories.

Automatic Startup/Shutdown for HP-UX 10

HP-UX 10 recognizes numerous run levels:

    Run Level    Purpose
    s or S       single user (used for administering the system)
    1-6          standard levels (1 is single-user, 2 is default, 3 and up are user-defined)

Which processes run at each level is determined by the /sbin/rc script. This script looks in the directories /sbin/rc#.d (where # = 1, 2, 3, and so on) for scripts that begin with a "K" or an "S". The "K" scripts are used to kill processes, and the "S" scripts are used to start processes. When moving down from a higher level to a lower level, all "K" scripts in all the intervening levels are run. When moving up to a higher level, all "S" scripts in all intervening levels are run. So when the system moves from level 4 to level 2, the "K" scripts in /sbin/rc3.d are executed, then the "K" scripts in /sbin/rc2.d. The scripts are run in ASCII-collated sequence, so K002stop is run before K004metoo.

If you are using the default run level of 2, you must put the PI startup in /sbin/rc3.d. The shutdown script for PI needs to be in /sbin/rc%.d, where % is the default run level minus 1, so PI will be shut down if the system goes out of the default run level to any other level, or if the system is halted or rebooted. To determine your default run level, check the line in /etc/inittab that reads "init:#:initdefault:". The # is the default run level for the system.

Script files are actually kept in /sbin/init.d, with links placed in the /sbin/rc#.d directories. So, in /sbin/init.d, you need to create the following three files: PI, PIstart, and PIstop. Be sure to change the lines that specify where to find PI on your system.

    #!/bin/ksh
    #
    # Filename: /sbin/init.d/pi
    #
    # become piadmin to start/stop PI
    PATH=/sbin:/usr/sbin:/usr/bin
    export PATH
    case "$1" in
    'start')
            if [ -f /sbin/init.d/pistart ] ; then
                    su - piadmin -c "/sbin/init.d/pistart" > /dev/console 2>&1
            fi
            ;;
    'stop')
            if [ -f /sbin/init.d/pistop ] ; then
                    su - piadmin -c "/sbin/init.d/pistop" > /dev/console 2>&1
            fi
            ;;
    'start_msg')
            echo "Starting PI"
            ;;
    'stop_msg')
            echo "Shutting down PI, please wait"
            ;;
    *)
            echo "usage: $0 {start|stop}"
            ;;
    esac

    #!/bin/ksh
    #
    # Filename: /sbin/init.d/pistart
    # Make sure to set the directory in these
    # files to your PIHOME directory, in all
    # places
    #
    if [ -f /opt/pi/adm/pistart.sh ] ; then
            cd /opt/pi/adm
            nohup ksh pistart.sh
    fi

    #!/bin/ksh
    #
    # Filename: /sbin/init.d/pistop
    # Make sure to set the directory in these
    # files to your PIHOME directory, in all
    # places
    #
    if [ -f /opt/pi/adm/pistop.sh ] ; then
            cd /opt/pi/adm
            ksh pistop.sh
    fi

Next, set the owner, group, and permissions on these files to match the other files in this directory:

    root> chgrp sys /sbin/init.d/pi*
    root> chmod 755 /sbin/init.d/pi*

Then you'll need to set the links in the rc#.d directories. Make sure the S### number on the startup file is higher than the S### number for inetd, and the K### number for the kill file is lower than the K### number for inetd (in both directories). Note that HP-UX uses three digits in the link file names, as opposed to the two digits used by other varieties of UNIX. Here, run level 3 is our default run level, so we're putting the start script in /sbin/rc3.d and the kill script in /sbin/rc2.d:

    root> ln -s /sbin/init.d/pi /sbin/rc3.d/S960pi
    root> ln -s /sbin/init.d/pi /sbin/rc2.d/K004pi

Verify that these files have the same owner, group, and permissions as other startup files in those directories. Finally, test your scripts before you restart your machine. To stop PI:

    root> sh /sbin/rc2.d/K004pi stop

Verify that PI processes shut down. Then:

    root> sh /sbin/rc3.d/S960pi start

Verify that PI starts properly. If there is any problem with stopping or restarting PI, remove the links in the /sbin/rc#.d directories until you've debugged and fixed the problems. The files in the /sbin/init.d directory will not affect your system as long as there are no links in the /sbin/rc#.d directories.

Automatic Startup/Shutdown for PI on Compaq Tru64 and Tru64 UNIX 4.0x

This operating system was originally known as DEC UNIX. Compaq Tru64 UNIX recognizes four run levels:

    Run Level    Purpose
    0            shutting down
    s            single user (used for administering the system)
    2            local (non-networked)
    3            default, networked

Which processes run at each level is determined by a set of scripts, /sbin/rc0, /sbin/rc2, and /sbin/rc3. These scripts look in the directories /sbin/rc#.d (where # = 0, 2, 3) for scripts that begin with a "K" or an "S". The "K" scripts are used to kill processes, and the "S" scripts are used to start processes. When the system moves from level 3 to level 2, the "K" scripts in /sbin/rc3.d are executed first if the system is not coming from single-user mode, then the "S" scripts are run (remember that the system goes to single-user on boot, then goes to the default run level). The scripts are run in ASCII-collated sequence, so K02stop is run before K04metoo. On shutdown or reboot, the system runs the scripts in /sbin/rc0.d.

You should put the PI startup in /sbin/rc3.d. The shutdown script for PI must be in /sbin/rc2.d, so PI will be shut down if the system goes to non-networked mode, and also in /sbin/rc0.d, so PI will be shut down if the system is halted or rebooted.

Note: Compaq Tru64 systems should be restarted using shutdown to go to single-user mode, then shutdown -r to reboot or shutdown -h to halt. This causes the shutdown scripts to be run. Using shutdown -r or shutdown -h from run level 3 or 2 bypasses the scripts and simply kills all processes, without taking the time to shut down processes properly.

Script files are actually kept in /sbin/init.d, with links placed in the /sbin/rc#.d directories. So, in /sbin/init.d, you need to create the following three files: PI, PIstart, and PIstop. Be sure to change the lines that specify where to find PI on your system.

    #!/bin/ksh
    #
    # Filename: /sbin/init.d/pi
    #
    # become piadmin to start/stop PI
    PATH=/sbin:/usr/sbin:/usr/bin
    export PATH
    case "$1" in
    'start')
            if [ -f /sbin/init.d/pistart ] ; then
                    su - piadmin -c "/sbin/init.d/pistart" > /dev/console 2>&1
            fi
            ;;
    'stop')
            if [ -f /sbin/init.d/pistop ] ; then
                    su - piadmin -c "/sbin/init.d/pistop" > /dev/console 2>&1
            fi
            ;;
    *)
            echo "usage: $0 {start|stop}"
            ;;
    esac

    #!/bin/ksh
    #
    # Filename: /sbin/init.d/pistart
    # Make sure to set the directory in these
    # files to your PIHOME directory, in all
    # places
    #
    if [ -f /opt/pi/adm/pistart.sh ] ; then
            cd /opt/pi/adm
            nohup ksh pistart.sh
    fi

    #!/bin/ksh
    #
    # Filename: /sbin/init.d/pistop
    # Make sure to set the directory in these
    # files to your PIHOME directory, in all
    # places
    #
    if [ -f /opt/pi/adm/pistop.sh ] ; then
            cd /opt/pi/adm
            ksh pistop.sh
    fi

Next, set the owner, group, and permissions on these files to match the other files in this directory (check the settings on your system):

    root> chown bin /sbin/init.d/pi*
    root> chgrp bin /sbin/init.d/pi*
    root> chmod 755 /sbin/init.d/pi*

Then you'll need to set the links in the rc#.d directories. Make sure that the S## number on the startup file is higher than the S## number for inet, and that the K## number for the kill file is lower than the K## number for inet (in both directories).

    root> ln -s /sbin/init.d/pi /sbin/rc3.d/S96pi
    root> ln -s /sbin/init.d/pi /sbin/rc2.d/K04pi
    root> ln -s /sbin/init.d/pi /sbin/rc0.d/K04pi

Verify that these files have the same owner, group, and permissions as other startup files in those directories. Finally, test your scripts before you restart your machine. To stop PI:

    root> sh /sbin/rc2.d/K04pi stop

Verify that PI processes shut down. Then:

    root> sh /sbin/rc3.d/S96pi start

Verify that PI starts properly. If there is any problem with stopping or restarting PI, remove the links in the /sbin/rc#.d directories until you've debugged and fixed the problems. The files in the /sbin/init.d directory will not affect your system as long as there are no links in the /sbin/rc#.d directories.

Automatic Startup/Shutdown for IBM AIX

IBM AIX recognizes numerous run levels:

    Run Level         Purpose
    s (S) or m (M)    single user (used for administering the system)
    2                 default multi-user run level
    3-9               user-defined levels

The system determines what processes should run at each level by reading the /etc/inittab file. The lines in this file tell the system what scripts to run at what run levels. The line with initdefault in it (init:#:initdefault:, where # is some number 2-9) tells the system the default run level.

Each line with this number after the first colon is executed when entering the default run level. Thus, we need to add a line to the /etc/inittab file:

    pisystem:2:once:su - piadmin -c /etc/rc.pistart > /dev/console 2>&1 # Start PI

Caution: Before editing /etc/inittab, you must save the original under another name. If you do not, and you make an error while editing, your system will not boot.

If your initdefault level is not 2, replace the 2 with the appropriate number from initdefault.

For shutdown and reboot, the system uses the /etc/rc.shutdown script. If this file does not exist, you should create it; otherwise, just add the last line below to your current file. If you have to create the /etc/rc.shutdown file, give it the same owner, group, and permissions as the /etc/rc file. As before, you are strongly advised to save a copy of the original file under another name before editing this file.

    #! /bin/ksh
    #
    # Filename: /etc/rc.shutdown
    #
    su - piadmin -c "/etc/rc.pistop" > /dev/console 2>&1

Now you'll have to create the two files you've told the system to use, /etc/rc.pistart and /etc/rc.pistop. Be sure to change the directory in these files to the PI Server directory on your system.

    #!/bin/ksh
    #
    # Filename: /etc/rc.pistart
    #
    if [ -f /usr/pi/adm/pistart.sh ] ; then
            cd /usr/pi/adm
            nohup ksh pistart.sh
    fi

    #!/bin/ksh
    #
    # Filename: /etc/rc.pistop
    #
    if [ -f /usr/pi/adm/pistop.sh ] ; then
            cd /usr/pi/adm
            ksh pistop.sh
    fi

You'll need to change the permissions on these files so that piadmin can run them:

    root> chmod 755 /etc/rc.pi*

Finally, test your scripts before you restart your machine. To stop PI:

    root> su - piadmin -c "/etc/rc.pistop" > /dev/console 2>&1

Verify that PI processes shut down.

Then:

    root> su - piadmin -c /etc/rc.pistart > /dev/console 2>&1

Verify that PI starts properly. If there is any problem with stopping or restarting PI, you'll need to restore the previous versions of /etc/inittab and /etc/rc.shutdown until you can find and fix the problem.

Automatic Startup/Shutdown for HP-UX 9

The PI Server is no longer supported on HP-UX Version 9.x. The information in this section is provided as a reference in case you are still running a previous version of PI. Be aware that Hewlett-Packard no longer supports HP-UX 9.x and recommends upgrading.

HP-UX 9 uses the /etc/rc file to control system startup. An individual site may also have additional scripts specified in the /etc/inittab file, to stop and start processes when moving from one run level to another. If so, the PI System Manager needs to determine which run levels should have PI running and which should not, and should put suitable entries in the scripts to stop and start PI when moving from one run level to another. Here, we'll just deal with the basic system, which uses only the standard /etc/rc file.

Caution: Before editing /etc/rc, save the original under another name. If an error is made while editing, the system will not boot.

In /etc/rc, there is a section intended for use by the local PI System Manager, "localrc()". In this section, add the following lines (be sure to add them before vuelogin, if you have it):

    #
    # start PI
    #
    su - piadmin -c /etc/rc.pistart > /dev/console 2>&1

There's another directory, /etc/shutdown.d, which holds the shutdown scripts for applications on the system. This directory may be empty. In any case, you should create a file, /etc/shutdown.d/pistop, that looks like this:

    #! /bin/ksh
    #
    # Filename: /etc/shutdown.d/pistop
    # Stop PI gracefully
    su - piadmin -c "/etc/rc.pistop" > /dev/console 2>&1

This file should have the same owner, group, and permissions as the /etc/rc file. For our systems, that means running these commands:

    root> chown bin /etc/shutdown.d/pistop
    root> chgrp bin /etc/shutdown.d/pistop
    root> chmod 555 /etc/shutdown.d/pistop

Next, create the two files in /etc. Make sure you set the directories in these files to point to your PI Server directory.

    #!/bin/ksh
    #
    # Filename: /etc/rc.pistart
    #
    if [ -f /export/pi/adm/pistart.sh ] ; then
            cd /export/pi/adm
            nohup ksh pistart.sh
    fi

    #!/bin/ksh
    #
    # Filename: /etc/rc.pistop
    #
    if [ -f /export/pi/adm/pistop.sh ] ; then
            cd /export/pi/adm
            ksh pistop.sh
    fi

Again, you need to set the owner, group, and permissions:

    root> chown bin /etc/rc.pi*
    root> chgrp bin /etc/rc.pi*
    root> chmod 555 /etc/rc.pi*

Then test these scripts before restarting your machine:

    root> sh /etc/shutdown.d/pistop

Verify that PI shuts down.

    root> su - piadmin -c "/etc/rc.pistart" > /dev/console 2>&1

Verify that PI starts up properly. If there is any problem with stopping or restarting PI, you will need to restore your original /etc/rc file and remove the new file from the /etc/shutdown.d directory.

1.4 Shutting Down an Individual Subsystem

The procedure for shutting down an individual subsystem depends on the operating system. On Windows, look in the file adm\pisrvstop.bat for the shutdown procedure; on UNIX, look in adm/pistop.sh. For the restart procedure, check adm\pisrvstart.bat on Windows and adm/pistart.sh on UNIX.
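On Windows, these batch files stop and start each subsystem as an individual service, so a single subsystem can be cycled the same way. A minimal sketch, assuming the Archive Subsystem runs as a service named piarchss (confirm the service names on your system in adm\pisrvstop.bat):

    net stop piarchss
    net start piarchss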

Chapter 2. MONITORING PI SYSTEM HEALTH

2.1 Checking Key System Indicators

Each day, check the key elements of your PI System to make sure PI is working efficiently and correctly. By checking the PI System daily, you can catch problems quickly, and you also familiarize yourself with the normal state of the system. This makes it easier to interpret system metrics (such as I/O rates) and to find abnormal messages when they occur.

Backups: Have PI System backups been run? (Tool: piartool -al)

Message Log: Check the System Message Log to see whether unusual events have occurred. (Tool: pigetmsg)

Update Manager: Are the clients connecting to the PI Server normally? (Tool: pilistupd)

Tag Data: Does the Archive data for a reference tag look normal? (Tool: pisnap.bat or pisnap.sh)

Snapshot data flow: Is the Snapshot data flow normal? (Tool: piartool -ss)

Archive data flow: Is the Archive data flow normal? (Tools: piartool -as and piartool -qs)

Archive Shift: Verify that the expected archives are registered and that you have prepared for the next archive shift. (Tool: piartool -al)

Interface Logs: Check the interface logs to see whether unusual events have occurred.

IO Rate Counters: Interfaces support writing snapshot rates to PI points. The IO rate values and timestamps are a good indicator of interface health. (Tools: PI DataLink or PI ProcessBook, to view or trend the IO rate points)

Performance Counters (Windows): All subsystems publish key performance counters to Windows. Subsystem counters are discussed in this chapter. (Tools: Windows Performance Monitor; PI Performance Monitor Interface)

License limits and usage: Check the usage statistics and point counts for your system. Anticipate license increase needs. (Tool: piartool -lic)
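The daily checks above are easy to gather into a small script; a minimal sketch for Windows, assuming PI is installed under \PI (all commands are taken from the table above):

    @echo off
    rem Daily PI Server health check
    cd /d \PI\adm
    piartool -al
    piartool -ss
    piartool -qs
    piartool -as
    piartool -lic
    rem Scan the last day of the message log for errors
    pigetmsg -st "*-1d" -et "*" -msg "*error*"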

2.2 Viewing System Messages

During normal operation, the PI Message Subsystem maintains a central log file for messages from all PI subsystems. PI creates a new log every day, based on Coordinated Universal Time (UTC). PI puts the log files in the PI\log directory and names each log according to the date. The file names are of the form pimsg_yyymmdd.dat, where:

- YYY is the number of years since 1900 (for example, if the year is 2000, YYY is 100)
- MM is the month (for example, if the month is January, MM is 01)
- DD is the day (for example, if it is the fifth of the month, DD is 05)

For example, the log file for January 5, 2000 is named pimsg_1000105.dat.

PI Message log files are binary files that you can view using the pigetmsg utility. The pigetmsg utility lets you view messages according to time, subsystem, or sender's identity. The pigetmsg utility requires PI to be running.

2.2.1 Available Log History

The number of files left on the system determines the amount of log history available. PI creates a new log file every day and keeps log files for 35 days, at which point it automatically purges them from the system. If you want to keep the log files longer than 35 days, back them up before PI deletes them. If necessary, you can restore the backed-up files at a later date for investigation.

Note: The message log can be written (but not read) using function calls in the PI API. It can be both read and written using function calls in the PI SDK. You can also view the log files from PI SMT.
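For example, a scheduled job can copy each day's log file to another location before the 35-day purge; a minimal sketch for Windows (the destination folder D:\pilogbackup is an assumption for illustration):

    rem Copy PI message logs to a backup folder; /D copies only newer files
    xcopy \PI\log\pimsg_*.dat D:\pilogbackup\ /D /Y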

Using pigetmsg in Non-Interactive Mode

When you use pigetmsg in non-interactive mode, you specify all necessary parameters on the command line when you call pigetmsg. In this mode, there are no defaults for the start time (-st), end time (-et), or max count (-mc) options, because the utility requires that at least two of these three parameters be defined:

If start time and end time are specified, the utility returns messages from the start time to the end time.
If start time and max count are specified, the utility returns the number of messages specified by the max count, beginning from the start time.
If end time and max count are specified, the utility returns the number of messages specified by the max count, up to and including the end time.
If start time, end time, and max count are all specified, the utility returns messages beginning at the start time and finishing when either the number of messages specified in the max count or the end time is reached.

All the command line options for pigetmsg are listed in the following table.

-st     Start time. This should be entered in PI time format.
-et     End time. This should be entered in PI time format.
-mc     Max count. This is an integer: the total number of messages to return.
-id     An integer that represents a specific message identification number, or 0 for free-format messages. The default is all messages.
-pn     The message source: either a specific program name (pinetmgr) or a wildcard mask (* for all programs, *arc* for all archive-related sources, and so on). The default is all programs.
-msg    A string mask selection for the message text. The default is * (show everything).
-dc     Number of messages to display at one time. The default is to display all messages.

Using pigetmsg in Interactive Mode

When you run pigetmsg without specifying at least two of the required parameters (-st, -et, and -mc), the pigetmsg utility goes into interactive mode. In interactive mode, you are prompted to enter these parameters. The standard defaults for the parameters are obtained by pressing <Enter> after each prompt. In interactive mode, there is a default for the start time, end time, and max count. The default for the start time is *-15m, which is 15 minutes prior to the current time. The default end time is *, which is the current time, and the max count default is no limit.

Searching and Sorting the Messages

To list all messages received today from a specific subsystem, such as the Totalizer, use:

    pigetmsg -st t -et "*" -pn pitotal

On UNIX systems, * on the command line is expanded to perform a directory search. Thus, for pigetmsg it must be specified either as \* or "*". The same is true for any mask containing *.
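For example, to pull only the free-format (ID 0) messages from the last eight hours, you might combine the options in the table above (a sketch; the time window is arbitrary):

    pigetmsg -st "*-8h" -et "*" -id 0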

To list the last 100 messages that have the word error as part of the message text, and to display these messages 10 at a time, use:

    pigetmsg -et "*" -mc 100 -msg "*error*" -dc 10

To send pigetmsg results to a file, use the standard output redirection operator (>):

    pigetmsg -st "*-1h" -et "*" > myfile.txt

Using pigetmsg Remotely

The pigetmsg utility also supports remote logins to other PI Server systems. An interactive remote pigetmsg session is initiated with the -remote argument:

    pigetmsg -remote

pigetmsg prompts the user for the required connection details: node name, TCP/IP port, user name, and password. The term node refers to the TCP/IP host name or TCP/IP address of the PI Server. Alternatively, you can initiate a remote session by passing all arguments on the command line:

-username    Remote PI Server username
-port        TCP/IP port number
-node        Remote PI Server node name
-password    Remote PI Server password

For example, to begin an interactive session as the user piadmin, with the password buddy, on a remote NT host named Samson, use:

    pigetmsg -remote -node Samson -username piadmin -port 5450 -password buddy

Viewing Messages When the Message Subsystem Goes Down

If the Message Subsystem is not available, messages are written to the Windows event log (or to standard output on UNIX). On Windows, use the Event Viewer to see these messages when PI is running as services, or check the command windows if PI is running interactively. The PI Server messages that went to the event log are stored back to the PI Message Log on system startup and at regular 3-minute intervals. On UNIX, you will find a log file for each subsystem in the PI\log directory. On both platform types, some startup messages may be written to these locations before the PI Message Subsystem is active.

Viewing Message Log Files Generated on Other Servers

There are times when it is useful to read message log files that were generated on a different PI Server. The PI Message Subsystem executable can be run in an offline mode for this purpose. Running the PI Message Subsystem in offline mode is very similar to running pigetmsg; the significant difference is that the log file must be specified.

Message log files are created daily; each file covers one day. The file naming convention contains the year, month, and date. The log files are created in the PI\log directory. Even though all messages are read from the log file, pinetmgr must be running.

The PI Message Subsystem executable, pimsgss, is located in the PI\bin directory. Here is an example of running pimsgss offline to view messages from February 27, 2003:

    D:\PI\bin>pimsgss -file ..\log\pimsg_1030227.dat
    Message ID [A], (0-n) (A)ll (H)ead (T)ail (Q)uit (?):

Once the log file is specified, the behavior is identical to pigetmsg. Only messages in the time period covered by the specified file can be viewed.

Offline use also allows specifying all arguments on the command line, just like pigetmsg. Messages that match the command line arguments are immediately displayed. Here is an example:

    D:\PI\bin>pimsgss -file ..\log\pimsg_1030227.dat -st "27-feb-03 12:00" -et "27-feb-03 15:00"
    0 pinetmgr 27-Feb-03 12:07:16
    >> Connection accepted: Process name: piconfig(1696) ID: 88
    0 pinetmgr 27-Feb-03 12:11:54
    >> Deleting connection: Piconfig(1696), Shutdown request received from piconfig(1696) (8), ID: 88
    0 pinetmgr 27-Feb-03 12:35:20
    >> Connection accepted: Process name: Piconfig(1484) ID: 89
    0 pinetmgr 27-Feb-03 12:38:08
    >> Deleting connection: Piconfig(1484), GetQueuedCompletionStatus error: Broken Pipe [109] The pipe has been ended. Connection: Piconfig(1484) (14, 109), ID: 89
    0 pinetmgr 27-Feb-03 13:24:08
    >> PInet accepted TCP/IP connection, cnxn ID 90 Hostname: olive.osisoft.com

Interpreting Error Messages (pidiag)

Sometimes the message log includes error messages. Use the pidiag utility to interpret error codes:

    pidiag -e errorcode

This displays the associated error message. For example, if the error code is -10722, you would type:

    pidiag -e -10722
    [-10722] PINET: Timeout on PI RPC or System Call

You can also use the pidiag utility to translate operating system error codes, which are always positive numbers. Note that the error code translation takes place on the operating system running pidiag. For example, on Windows, you could issue the command:

    pidiag -e 2
    [2] The system cannot find the file specified.
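When a log excerpt contains several error codes, you can translate them in one pass from a Windows command prompt. A minimal sketch (use %%c instead of %c inside a batch file; the codes shown are examples from this chapter):

    for %c in (-10722 -10731 -10733) do pidiag -e %c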

Avoid reading error codes from the PI Server message log on one operating system and translating them with pidiag -e on another.

Subsystem Healthchecks (RPC Resolver Error Messages)

Every few minutes, pinetmgr sends a healthcheck message to each of the PI subsystems. If one doesn't respond within a given amount of time, pinetmgr reports the following message and the subsystem (RPC resolver) is marked offline:

    >> Deleting connection: pisnapss, Subsystem Healthcheck failed.

If an RPC is made to a subsystem that is marked offline, the following message is generated:

    [-10733] PINET: RPC Resolver is Off-Line

The following message indicates that the first part of a message, which contains the message length, was retrieved, but a timeout occurred when pinetmgr attempted to retrieve the rest of the message:

    >> Deleting connection: pisnapss, Connection lost(1), [-10731] PINET: incomplete message

2.3 Monitoring Snapshot Data Flow

The Windows-based PI Server exposes the Snapshot data displayed by piartool -ss as Windows Performance Counters. These counters may be viewed with the Windows Performance Monitor or recorded to the PI Server with the OSI Performance Monitor Interface.

Listing Current Snapshot Information with piartool -ss

The piartool -ss command lists the current Snapshot information every 5 seconds until you type <CTRL-C>. This utility is a quick way to view the overall state of the PI System. It shows the total number of events received and sent to the archive. (The third column gives the difference in the counters over each 5-second period.) Under normal steady-state conditions, the Snapshot event count as well as archive posts should increase regularly. Events in overflow queues and the Event Queue record count should be zero. Output of piartool -ss should look similar to the listing below.

    $ piartool -ss
    Counters for 7-Aug-03 14:35:56
    Point Count:
    Snapshot Events:
    Out of Order Snapshot Events:
    Snapshot Event Reads:
    Events Sent to Queue:
    Events in Queue:                 0    0
    Number of Overflow Queues:       0    0
    Total Overflow Events:           0    0
    Primary Capacity Remaining:

Each of the counters in the output is explained in the following sections.

Note: The piartool utility can monitor a remote PI Server by specifying the -remote argument after all other arguments. piartool prompts the user for remote connection information.

Point Count

The Point Count is the number of points that are currently defined in the Point Database. It is incremented when a point is created and decremented when a point is deleted.

Snapshot Events Counter

An event is the fundamental PI data element. It represents a value or status of a unique data source at a specific time. Specifically, an event is a value, a timestamp, and a point ID. Most events come from PI API- or PI SDK-based interfaces. The PI subsystems ("applications") PI Batch, PI Performance Equations, PI Total, and PI Alarm, as well as manual input and laboratory systems, are also event sources. Every Snapshot event increments the Snapshot Events Counter. The PI Snapshot Subsystem applies a compression algorithm to every event. The compression algorithm determines whether the previous Snapshot event is passed on to the archive.

Out of Order Snapshot Events Counter

Events older than the current Snapshot event are out-of-order events. These events bypass compression and are sent directly to the archive. This counter shows the number of times this has occurred. The Archive Subsystem maintains a similar counter; see the Monitoring the Archive section in this chapter. Large out-of-order event counts might indicate a problem with the PI Server and may lead to poor performance. A common cause is an erroneous system clock change on the server machine or a data source machine.

To identify the tags receiving out-of-order data, use:

    piartool -ooo

This gives a list of all the tags with any out-of-order events since the Snapshot Subsystem started or since the out-of-order flags were reset. To reset the flags, use:

    piartool -ooo -r

When the -r flag is used, only tags that received an out-of-order event since the last piartool -ooo query are listed. The utility can run repeatedly, with or without the -r flag, by specifying a wait time (in seconds) between repeats. This is useful for catching the offending tags as new events come in:

    piartool -ooo 10

Whenever a repeat time is specified, a current timestamp appears in the output each time the utility writes data. When using repeats, the utility is stopped with <CTRL-C>.
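To keep a running record of the offending tags, you might redirect a repeating scan to a file (a sketch; the 60-second interval and file name are arbitrary):

    piartool -ooo -r 60 > ooo-tags.txt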

Here is a sample of the output with a 10-second repeat:

    $ piartool -ooo -r 10
    7-Aug-03 14:42:12> The following points had out-of-order Snapshot events:
    15: pitot1
    17: pitot3
    18: pitot4
    19: pitot5
    20: pitot5run
    21: pitot5ramp
    22: pitot5est
    23: pitot_avg
    24: pitot_max
    25: pitot_min
    26: pitot6count
    27: pitot6time
    28: pitot6timene
    29: pitot_1
    7-Aug-03 14:42:22> No out-of-order Snapshot events found
    7-Aug-03 14:42:32> No out-of-order Snapshot events found

Snapshot Event Reads Counter

Count of all Snapshot reads. This is a simple measurement of how many Snapshot values are read by all applications.

Events Sent to Queue Counter

Events that pass compression, or are out of order, are sent to the Event Queue, and thus increment this counter. Under normal operating conditions this count indicates the number of events that passed the compression test (that is, the events were different from existing events and could not be eliminated) and are being sent to the archive.

The ratio of Snapshot Events to Events Sent to Queue is the system aggregate compression ratio. This ratio gives a quick view of overall system compression tuning. Ratios less than 2:1 indicate low compression; a compression tuning evaluation should be performed. Ratios greater than 10:1 indicate over-compression; a compression tuning evaluation should also be performed.

Three Point Database attributes affect compression: CompDev, CompMin, and CompMax. These are known as the compression specifications. If a point has its Compressing point attribute set to FALSE, all new events are sent to the Archive Subsystem.

Events in Queue Counter

Events passed to the Event Queue are put in first-in-first-out order. The Events in Queue Counter is incremented when an event is put in the queue; it is decremented when the Archive Subsystem successfully retrieves and processes an event. When the system is shut down, the Event Queue is preserved in the file PI\dat\pimapevq.dat. This assures no data loss when the system shuts down or when the Archive Subsystem is not processing events at the same rate as they come in.
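As a concrete check of the aggregate ratio described under Events Sent to Queue above (the numbers are illustrative, not from a real system): if Snapshot Events grows by 1,000 over a 5-second interval while Events Sent to Queue grows by 400, the aggregate compression ratio is 1000/400 = 2.5:1, which falls inside the recommended 2:1 to 10:1 band.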

Number of Overflow Queues Counter

If the queue PI\dat\pimapevq.dat becomes completely full, a new queue is created. This should not occur under normal circumstances, and this number will be 0. However, if the archive is not processing events, a number of such queues (up to 65536) can be created. This counter shows how many queues were created. These additional queues are automatically deleted after the Archive Subsystem processes them.

Note: When multiple Event Queues exist, the file pimapevq.dat is renamed to pimq0000.dat, and additional queues are named pimq<id>.dat, where id is the queue number in hexadecimal representation (from 0000 to FFFF). The piartool -qs command always shows information from the queue to which the Snapshot Subsystem is writing (the primary queue).

Total Overflow Events Counter

This is the total number of events in all overflow queues. The sum of this counter and the Events in Queue counter is the total number of events not yet processed by the archive.

Primary Capacity Remaining Counter

This counter shows the estimated number of additional events that can be placed in the primary queue.

2.4 Monitoring the Event Queue

After installation, and regularly after significant changes, the System Manager should verify the proper sizing and functioning of the Event Queue. The piartool -qs command allows you to look at internal counters and statistics about the queue activity.

piartool -qs

The piartool -qs command lists the Event Queue statistics every 5 seconds until you type <CTRL-C>. The column at the right margin gives the difference in the count since the previous 5 seconds. The counters are reset to 0 when the Archive Subsystem is started.

    $ piartool -qs
    Counters for 7-Aug-03 17:22:45
    Physical File Size (MB):        64    0
    Page Size (KB):
    Total Data Pages:               63    0
    Write Page Index:                0    0
    Read Page Index:                 0    0
    Total Page Shifts:               0    0
    Available Pages:                63    0  (100.0%)
    Average Events per Page:
    Estimated Remaining Capacity:             (2.2 hr)
    Total Bytes Written (MB):        0    0
    Total Event Writes:                       (579/sec)
    Total Event Reads:                        (579/sec)
    Current Queue Events:            0    0
    Overflow Queues:                 0    0
    Total Overflow Events:           0    0
    Current Queue Id:                0    0

Queue Size

The Physical File Size shows the current size of the Event Queue on disk (the file pimapevq.dat or any overflow queues). The Page Size is the portion of the file that is loaded into memory for faster access. The Event Queue is a circular buffer of pages, and each page is a circular buffer of events. When a page is full, the Snapshot Subsystem tries to write into the next one, and the Archive Subsystem reads the pages in the same order they were written. Total Data Pages shows the number of pages, obtained by dividing the queue size by the page size (minus one for the queue header).

Page Activity

The Write Page Index shows the page the Snapshot Subsystem is currently writing to. Similarly, the Read Page Index indicates the page from which the Archive Subsystem is reading. Under normal conditions, these two numbers are identical. If the Archive Subsystem is not reading at the same pace the Snapshot Subsystem is writing, page shifts occur and the Total Page Shifts counter increments. At any time, the Available Pages counter shows how many free pages are left in the current queue.

Queue Capacity

Based on the average size of all events, the Snapshot Subsystem maintains the Average Events per Page figure. From this it derives an Estimated Remaining Capacity in number of events (also shown by piartool -ss). Total Bytes Written shows the volume of data that has transited through the Event Queue since the Snapshot Subsystem was last started.

Note: A System Manager should try to configure the queue so that it is big enough to hold a few days' worth of data. To configure the queue size, see Tuning the PI Server in Chapter 3, Troubleshooting and Repair.

Event Rates

Every time the Snapshot Subsystem sends an event to the archive, the Total Event Writes counter is incremented. Similarly, when events are read by the Archive Subsystem, Total Event Reads is incremented. The difference between these counters is the Current Queue Events (total events per queue).

Overflow Queues

When the current queue is entirely full, the Snapshot Subsystem creates additional queue files of the same size. The Overflow Queues counter and Total Overflow Events (the same as in piartool -ss) indicate how many queues exist and how many events they hold. The Current Queue Id shows the sequence number of the primary queue (always 0 under normal conditions).
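The capacity and rate counters can be cross-checked with simple arithmetic (illustrative, using the rates in the sample listing above): at 579 events per second, a remaining capacity of about 2.2 hours corresponds to roughly 579 × 3600 × 2.2 ≈ 4.6 million events. Conversely, dividing the Estimated Remaining Capacity by the write rate tells you how long the queue can absorb data if the Archive Subsystem stops processing.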

2.5 Monitoring the Archive

On a daily basis, the System Manager should look at the internal counters for the Archive Subsystem. This enables you to predict the next archive shift, as well as to monitor ongoing system behavior and performance. You can use piartool -as and piartool -al for this purpose. Other piartool commands are discussed in the chapter Managing Archives.

Note: The Windows-based PI Server exposes the Archive data displayed by piartool -as as Windows Performance Counters. These counters may be viewed with the Windows Performance Monitor or recorded to the PI Server with the OSI Performance Monitor Interface. This subject is covered in detail later in this chapter.

piartool -as

The piartool -as command lists the Archive Subsystem (piarchss) internal counters every 5 seconds until you type <CTRL-C>. The column at the right margin gives the difference in the count since the previous 5 seconds. The counters are reset to 0 when the Archive Subsystem is started.

    $ piartool -as
    Counters for 7-Aug-03 14:51:10
    Archived Events:
    Out of Order Events:              0    0
    Events Cascade Count:             0    0
    Events Read:                      5    0
    Read Operations:                  0    0
    Cache Record Count:               0    0
    Cache Records Created:            6    0
    Cache Record Memory Reads:        5    0
    Cache Clean Count:                0    0
    Archive Record Disk Reads:
    Archive Record Disk Writes:
    Unflushed Events:
    Unflushed Points:
    Point Flush Count:
    Primary Archive Number:           5    0
    Archive Shift Prediction (hr):    1    0
    Archiving Flag:                   1    0
    Archive Backup Flag:              0    0
    Archive Loaded Flag:              1    0
    Shift or System Backup Flag:      0    0
    Failed Archive Shift Flag:        0    0
    Overflow Index Record Count:      0    0
    Overflow Data Record Count:

These counters are explained below.

The piartool utility can also run against a remote server by specifying additional parameters on the command line, as described in Table 3-1, Options for Use with piartool, on page 37.

Archived Events Counter

The Archived Events counter is incremented for every new event written to the archive (via the archive cache). This count includes delete and edit events.

Out-of-Order Events Counter

The Archive Subsystem receives events from the Snapshot Subsystem. If the timestamp of an event is older than the last event in the target record, it is considered an out-of-order event and is added to this counter. Excessive out-of-order events might lead to system problems such as excess CPU consumption, excessive disk I/O, and archives filling faster than expected.

Events Cascade Count

Out-of-order events are inserted into the target record. The insert requires moving other events within the record. If the record is full, one or more events are forced out of the record into the adjacent record. This counter is incremented each time an insertion forces an event out of a record. It is an indication of the impact of out-of-order events on the archive.

Events Read Counter

Number of events read by all applications. For example, a trending application requests an array of events over a specified time period. This counter is incremented for each event returned.

Read Operations Counter

Number of archive read requests. Each archive read request increments this counter once, regardless of the number of events returned.

Archive Memory Cache Counters

The Archive Subsystem uses a memory cache when handling events sent to the archive disk file. During routine operation, the cache is automatically flushed to disk at least every 15 minutes, so abrupt system shutdowns, such as power loss, should lose no more than the last 15 minutes of data. This interval may be changed via a configurable timeout table parameter.

The data archive cache architecture provides large performance gains over reading and writing directly to disk. The cache even provides significant performance gains over the operating system file cache. As with all file cache designs, the disk image will often be slightly inconsistent; therefore archive backup cannot be performed without coordination with the Archive Subsystem. The utility piartool -bs places the archive in a safe, consistent state for backups; piartool -be returns the archive to normal operation. This is discussed in detail in the system backup section in Chapter 3, Troubleshooting and Repair.
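A minimal sketch of a manual archive backup wrapped in those two calls (the copy command, archive name, and destination are placeholders for whatever backup tool and paths you actually use):

    piartool -bs
    rem ...copy the archive files with your backup tool, for example:
    xcopy E:\PI\arc\piarch-2gb.3 F:\PIBackup\ /Y
    piartool -be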

The cache consists of archive records loaded into memory. Records are added as needed; they are deleted when unused for a certain length of time. Cache Record Count yields the current count. Cache Records Created is incremented when memory is allocated for a new record.

When archive data is requested (for example, the user is trying to trend a point in PI ProcessBook), the Archive Subsystem goes to the cache to retrieve the event data. If the record is not there, the Archive Subsystem loads the record from disk into the cache, and Archive Record Disk Reads is incremented.

When events are written to the archive, they are stored first in memory. The Unflushed Events counter indicates the total number of events not yet flushed to disk. The Unflushed Points counter indicates the number of points with any number of events not yet flushed. Archive Record Disk Writes is incremented each time a record is written to disk. This occurs during the regular cache flush every 15 minutes; it also occurs when the number of unflushed events for a point exceeds the configured maximum.

Cache Record Memory Reads is incremented for each read access. Cache Clean Count indicates the number of records that have been removed from the cache. The archive cache contains a finite number of records; old or low-use records are removed from memory to make room for the most recently accessed records.

Primary Archive Number

The Primary Archive Number is an internal identifier and should be ignored. It is not to be confused with the sequence number of the archive, as listed by piartool -al.

Archive Shift Prediction

Archive Shift Prediction (hr) estimates the time to the next archive shift. Use piartool -al to list the target archive file for the shift. The target archive will be initialized on shift; if it contains data, make sure it is backed up. If this data must remain online, a new archive of adequate size should be created and registered.

When the current archive is less than 20% full, the estimate is 0. To determine whether a zero estimate means the archive is nearly full or not, run piartool -al. A message tells you if there is not enough data for a prediction:

    Shift Time: Not enough information for prediction

The shift prediction in piartool -as differs slightly from the one in piartool -al. The piartool -al figure is calculated when called; piartool -as shows the latest 10-minute average. The latter number is available as a Windows Performance Counter.

Archiving Flag

Indicates whether or not events may be written to the archive: a value of 1 indicates that events may be written; a value of 0, that they may not. The Archiving Flag is set to 1 when there is a mounted Primary Archive. A Primary Archive may be registered but not mounted, for example during an archive shift; in this case, the Archiving Flag is set to 0. This flag is also set to 0 when in backup mode.

All registered archives may be viewed using piartool -al. The Archiving Flag is also set to 0 if the Primary Archive becomes full and there is no other archive file available into which to shift. Note that the Primary Archive will never overwrite itself.

Archive Backup Flag

This flag is set to 1 when the archive is in backup mode. Backup mode indicates the archive file is in a consistent, unlocked state and may be backed up. The value is 0 when the archive is available for normal access. Backup mode is entered and exited by running piartool -bs and piartool -be, respectively.

Archive Loaded Flag

This flag is 1 when a valid Primary Archive is mounted, and 0 if the Primary Archive is not mounted.

Shift or System Backup Flag

This flag is 1 when the archive is in shift mode or the Archive Subsystem has been placed in backup mode. Shifts occur automatically or can be forced via piartool -fs. System backup mode is entered with piartool -systembackup.

Failed Archive Shift Flag

Set to 1 when a shift should occur but no shiftable archive exists. Under normal conditions this flag is 0.

Overflow Index Record Count

Number of index records. Index records speed up access to overflow records. Index records are created when two overflow records for a point are full and a third one is being created. This counter is a measurement of archive file consumption.

Overflow Data Record Count

Number of non-primary data records. Each archive has a primary record for each point. When this record is full, data is written to overflow records. This counter gives a measurement of archive consumption.

2.6 Monitoring the Update Manager

The Update Manager provides change notification of Snapshot events, Point Database, Module Database, Batch Database, and Archive changes for applications such as PI ProcessBook, interfaces, and other PI API- or PI SDK-based applications. For example, a trend application can request Snapshot update notification on the points being trended. The Update Manager queues all new Snapshot events for these points. Periodically, the trend application retrieves and plots the new events.
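The pilistupd utility (listed in the daily-check table at the start of this chapter) shows the update consumers registered with the Update Manager. A sketch of the simplest invocation, run from the PI\adm directory:

    pilistupd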

Adjusting the Pending Update Limit

By default, the PI Update Manager maintains up to 4095 pending updates per consumer. Similarly, the TotalUpdateQueue parameter sets the maximum number of events queued in the entire Update Manager database; the default is 100,000. If either of these limits is reached, a message is sent to the PI Message Log. Another message is sent when the level drops back below 99% of the limit and queuing is resumed. Messages for consumers using less than 0.1% of the TotalUpdateQueue limit (100 for the default) are still queued even when the total limit is reached.

It is possible to change the per-consumer limit by adding an entry named MaxUpdateQueue to the PITimeout table using piconfig:

    C:\PI\adm>piconfig
    * (Ls - ) @table pi_gen,pitimeout
    * (Ls - PI_GEN) @mode create
    * (Cr - PI_GEN) @istr name,value
    * (Cr - PI_GEN) maxupdatequeue,6000
    *> maxupdatequeue,6000

When to Change the Pending Update Limit

The default is suitable for most systems, with the following exceptions:

The number should be increased on systems with very large physical memory, a high frequency of updates (normally Snapshot events), and applications that might retrieve these updates slowly. Changes to the MaxUpdateQueue parameter take effect only after the PI Server restarts.

If a PINet node or a PItoPI interface is connected to the PI Server, the default MaxUpdateQueue value is too small. It should be increased to at least the point count of the PI Server. This value ensures that all point updates requested by PINet can be queued on the PI Server if a system manager performs an edit operation on every point.

How to Change the Maximum Number of Events in the Queue

The TotalUpdateQueue parameter in the PITimeout table sets the maximum number of events queued in the entire Update Manager database. The default is 100,000. You can use piconfig to change this limit.
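A sketch of such a change, mirroring the MaxUpdateQueue session above (the value 200000 is purely illustrative; as with MaxUpdateQueue, assume the new value takes effect only after a PI Server restart):

    C:\PI\adm>piconfig
    * (Ls - ) @table pi_gen,pitimeout
    * (Ls - PI_GEN) @mode create
    * (Cr - PI_GEN) @istr name,value
    * (Cr - PI_GEN) totalupdatequeue,200000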

2.7 System Date and Time

The PI Server uses the Windows clock, including the time zone and Daylight Savings Time (DST) settings, to track time. If the system clock isn't right, the PI data isn't right either. You might even lose data if the system clock is wrong. As the PI System Manager, you must:

Check the system clock regularly
Adjust the clock toward the correct time
Adjust the clock only in small increments (for example, one second per minute)
Keep a record of all adjustments you make

Internally, PI stores all data in UTC time and translates back to local time when serving the data to a client application. If you set all the Windows machines involved to the correct time, time zone, and DST settings, PI can seamlessly handle clients in multiple time zones, as well as DST transitions.

Challenges a PI System administrator may face when configuring the clock on a PI Server include irreconcilable differences among the clocks of the PI Server, the data systems from which the data is being collected, and the clocks of the PI users on the corporate LAN or WAN. Complications most commonly arise when data is collected from legacy systems with clocks that have been configured inaccurately or allowed to drift. The best thing to do in this case is to set all clocks to the correct time. If this is not possible, you need to decide on a workaround; one may be available depending on the data collection interface being used. The most common workaround is for the interface process to read the current values from the legacy system and send them to the PI Server with the current PI Server time as the timestamp.
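On Windows, one quick way to compare the PI Server's clock with other machines on the domain is the operating system's w32tm utility (a sketch; the available switches vary by Windows version):

    w32tm /monitor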

Chapter 3. MANAGING ARCHIVES

3.1 Tasks for Managing Archives

PI Archive management includes significant tasks for the PI System Manager, including:

Archive creation and deletion
Archive sizing
Archive shifting
Archive backups
Archive splitting, combining, and compressing
Archive repair

3.2 About Archives

The PI System stores your data in archives, which are simply files for holding PI data. Archive files can be either fixed or dynamic. Fixed archive files are always the same size, regardless of how much data they contain. Dynamic archive files grow in size as they receive data. (See About Fixed and Dynamic Archives for a complete explanation.)

The archive receiving current data is called the Primary Archive. When the Primary Archive becomes full, an Archive Shift occurs and the next available archive becomes the new Primary Archive.

About Archive Shifts

PI actually performs the archive shift before the Primary Archive is completely full. This leaves some extra space in the archive file so that you can add data later, if you need to.

For an archive file to be eligible to be the new Primary Archive, it must be writeable, large enough to handle the current size of the Point Database, and registered. Registering an archive file is how you make an archive file accessible to PI. When an archive file is registered, it is made visible to the PI Server. The data in unregistered archives is not accessible to the PI Server or its client applications. See Registering an Archive on page 56 for more information.

If no eligible archives are available for an Archive Shift, PI uses the oldest available filled archive as the new Primary Archive, overwriting the data in the old archive. For example, after a shift from piarch.003 to piarch.004, if no empty registered archives are left on the server and the System Manager does not create new archives, then at the next archive shift piarch.001 becomes the Primary Archive and the PI Server overwrites the existing data in that archive.

It takes PI a few minutes to complete an Archive Shift. During that time, you are not allowed to add, edit, or delete points. PI stores incoming data in the Event Queue until the shift is complete, and then writes the queued events into the new Primary Archive.

About Archive Files

Each archive file contains events for a time period specified by the archive start time and end time. The archive files on each PI Server should cover all time ranges, without overlapping time ranges or gaps in time. Archives range in size from 2 megabytes to 2 terabytes (2,048 gigabytes) on Windows NT. On UNIX, the maximum size is 2 gigabytes.

About Archive Gaps

Archive gaps are times for which no archive file exists. If an event is sent to the archive and no archive file with the appropriate time range exists, the event is discarded and an error is logged. If data retrieval is attempted for a time range that is not covered by the set of registered archives, an error or a "no data" status is returned.

About Archive Records

PI archive files store events as data records. Data records are either primary records or overflow records. Each point in the Point Database has one primary record allocated in every archive file. Primary records are stored at the very beginning of the archive file. When the primary record for a point fills up, the data for that point goes to an overflow record in the archive file.

Overflow records start at the end of the archive file and are filled backwards. Each record is one kilobyte. A point can have as many overflow records as needed. Points that receive events at a slow rate might never need to allocate an overflow record, whereas points that receive events at a fast rate might need to allocate many overflow records. (PI uses index records to keep track of multiple overflow data records for a point.)

Note: When the archive allocates a new overflow record for a point, it writes the new record to disk immediately, along with any existing records that reference the new record.

About Index Records

Index records index a point's data records by time. After one overflow record is full for a point, PI creates an index record for the point, along with a new overflow record. An index record holds a fixed number of pointers to data records. When an index record is full, PI creates a second index record, and the index records are chained. Archive access for points with chains of index records is marginally slower than for points with zero or one index record.

About Primary Archives

The Primary Archive is the archive file that covers the current time range. The Primary Archive has a defined start time but no defined end time (the end time is always assumed to be "now"). The end time for the Primary Archive is defined when an archive shift occurs.

An archive shift is the process of replacing the Primary Archive with a new or cleared archive. If one exists, an empty archive is selected to be the new Primary Archive. If no empty archive exists, then the oldest archive becomes the Primary Archive and its existing data is overwritten. Another option is automatic creation of archives; if this is in effect, a new archive file with the same characteristics as the current Primary Archive is created during the shift.

PI ensures that some space is still available in the archive at the time of the shift. This way, out-of-order events can still be stored in the archive after it is no longer the Primary Archive. For more information, see the Archive Shift section in this chapter.

About Fixed and Dynamic Archives

There are two types of archive files, fixed and dynamic.

Fixed Archives

The default archives that are installed with a PI System are fixed archives. They have the following characteristics:

Fixed archive files have all of their disk space allocated at creation time. An empty archive and a full archive take the same amount of disk space.
Fixed archives may or may not participate in archive shifts, depending on the ratio of point count to archive size and the state of the shift and write flags.
New points may be added to a primary fixed archive.

Use fixed archives for all normal operations.

Filling up Fixed Archives

It is possible for a fixed (non-primary) archive to become completely full. Once an archive is full, incoming data events for that time range have nowhere to go and are discarded. This can occur if a large quantity of old data is added to the Data Archive and goes to an old archive that is already full. In such cases it is best to stop the Archive Subsystem to prevent any further data loss. Then create a new, larger fixed archive with the same time range as the full archive, and copy the contents of the full archive into the new, larger archive using the Offline Archive Utility. Once the new archive is registered in place of the full archive, incoming data events for that time range are no longer discarded.

Dynamic Archives

Dynamic archives have the following characteristics:

Like fixed archives, dynamic archives are for a specific time range.
Dynamic archives that already contain data are not eligible for Archive Shift. Newly-created dynamic archives participate in the standard shift algorithm, if they are registered.
Dynamic archives do not contain unallocated space waiting to be used for overflow records. Rather, the file grows as overflow records are added.
Dynamic archives have a maximum archive size (specified at archive creation). The default maximum archive size is 1 GB, or 10% less than the maximum available disk space, whichever is less.
Dynamic archives are initialized with a fixed number of primary point records. Once a dynamic archive is created, the number of primary records cannot grow beyond the initial allocation, even if there is space in the file.

Note: You can create dynamic archives with a number of primary records higher than the current number of points. This allows users to create additional points in a dynamic primary archive. Users can add new points as long as the number of points does not exceed the number of primary records specified when you created the dynamic archive. To create this type of dynamic archive, use the piarcreate -acd command.
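A sketch of the enlarge-and-replace procedure described under Filling up Fixed Archives, using utilities documented in section 3.3 (all paths and the 512 MB size are hypothetical):

    rem Unregister the full archive, copy it into a larger fixed file, re-register
    piartool -au D:\PI\arc\piarch.002
    piarchss -if D:\PI\arc\piarch.002 -of E:\PI\arc\piarch.002.big -f 512
    piartool -ar E:\PI\arc\piarch.002.big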

Using Dynamic Archives

To understand the usage of dynamic archives, consider this example. A PI System Manager wishes to combine the data in two old archives into a single archive file. The Offline Archive Utility is run twice: once to copy data from the earlier archive, and again to add data from the second archive. Assuming that the Offline Archive Utility is allowed to create the archive file on its first pass (or piarcreate was used to create a dynamic archive in advance), the resulting archive contains data from the two input archives and has no free records. This potentially makes the new archive smaller than the combined size of the input archives, yet able to accommodate additional data as long as the maximum growth size is not exceeded.

About Read-Only Archives

Archive files that have a read-only file-system attribute, or files on a read-only device (such as a CD-ROM), are mounted as read-only. Their status shows up on the piartool -al display as not writable. Read-only files cannot participate in archive shifts. New events in the time range of such an archive go into the archive cache, but when they are flushed to disk, an error message is logged for each event. This includes attempts to edit, delete, or annotate events in a read-only archive.

3.3 Tools for Managing Archives

There are three main command line tools for managing archives:

piartool: The main archive management utility. You can use it to create, register, and unregister archives, force archive shifts, list details of archive files, and much more. You can use piartool only when PI is running.

piarchss: The Archive Subsystem executable, piarchss, includes an Offline Archive Utility. You use this utility to process existing offline archives: for example, to combine multiple archives into one, to divide large archives into multiple archives, or to recover corrupted archives.

piarcreate: Use the piarcreate utility to create new archives.

Using the piartool Utility

The following table lists the command line options for the piartool utility. You can only use piartool when PI is running.

Table 3-1. Options for Use with piartool

-aag (Archive Activity Grid)
    Enable or disable the archive activity grid, and list the archive access information.

-ac (Archive create)
    Create an archive for the specified period.

-acd (Dynamic archive create)
    Create a dynamic archive for the specified period.

-ads (Archive disable shift)
    Remove the specified archive from shift participation.

-aes (Archive enable shift)
    Add the specified archive to shift participation.

-al (Archive list)
    List all registered archives.

-ar <path> (Archive register)
    Register the specified archive.

-as (Archive statistics)
    Archive Subsystem activity monitor and statistics.

-au <path> (Archive unregister)
    Unregister the specified archive.

-aw (Archive walk)
    List details of the records in an archive file.

-Backup ['path'] [-component <comp>] [-numarch <number>] (Perform a System Backup)
    Start, end, or query a backup of the PI System. The path points to where the resulting backup files are placed. The optional component specifies which part of the PI System is backed up. The numarch option specifies how many archives (starting from the current archive and working backwards) are backed up; by default, all archives are backed up. Additional options include:
    -query [-verbose]: Inquire about the current backup status.
    -abort: Abort a running backup.
    -identify [-numarch <number>] [-cutoff <date>] [-verbose]: Identify files able to be backed up. In verbose mode, the individual components are listed. The numarch and cutoff options override defaults until the next backup.
    -test <freeze,component,thaw>: Test, but do not actually perform, a PI System backup.

-block (Block)
    Wait for a specified subsystem to become responsive. Used in PI start scripts.

-cad 'tagname' [-reset] (Archive cache diagnostics)
    Archive cache diagnostics for a specified point. Displays the events sitting in the read and write caches.

-cas ['tagmask'] (Archive cache summary)
    Display a summary of the contents of the archive cache, including the number of events in the read and write caches for every point that matches the tagmask.

-cs (PINetmgr connection stress test) [For troubleshooting only]
    PI Network Manager connection stress test.

-de <path> [-pt tagname] [recno] (Dump event queue)
    Dump the specified Event Queue file. Optionally select a specific tag and/or a specific record in the file.

-disconnect -subsystem <name> [-id <id>] (Force Subsystem Disconnect) [For troubleshooting only]
    Force the specified subsystem to disconnect from pinetmgr or, if pinetmgr is specified, instruct pinetmgr to disconnect the connection based on the connection ID passed. The -graceful option causes a disconnection notice to be sent first by pinetmgr or the target subsystem.

-fs (Force shift)
    Force an archive shift.

-idci <in file> -idco <outfile> (ID conversion file creation)
    Create an ID conversion file from the specified input file.

-lic (Licensing Information)
    Usage: list options for viewing license information.
    Def: list all licenses.
    User: list all licenses in use.
    Lic: list all licenses and users.
    AllowedApps <-List <type,type,...> | -Check <app,app,...>>: list the types of licenses, or check whether a specific feature is licensed.

-mpt <0|1> (Message protocol trace) [For troubleshooting only]
    Log all communication coming to and from the server. To enable tracing, run with -mpt 1. Call a second time with -mpt 0 to stop tracing. The resulting output file appears in the \pi\temp directory with the .mpt extension. The file can be read with the mptconsolveview utility, which OSIsoft provides on request, for example:
    mptconsolveview .\24-Aug-05_<time>.mpt

-msg "message" [-pro "procname"] [-nrep m] [-nbuf l] [-nmps n] (Message Subsystem Test) [For troubleshooting only]
    Send a series of test messages to the PI Message Log. Can emulate sending messages from any process. nrep sets how many messages are sent, nbuf sets how many messages are sent at a time, and nmps attempts to throttle how many messages are sent per second.

-msgtest <startsize in bytes> <endsize in bytes> (PINetmgr Communications Test) [For troubleshooting only]
    Send a series of test messages to pinetmgr. Message size increases in one-byte increments from startsize to endsize.

-netstress [-SendBlocks 1] [-RecvBlocks 1] [-loops 1] (PINetmgr stress test) [For troubleshooting only]
    Test the pinetmgr subsystem by sending and receiving the specified 4k blocks.

-ooo <-r> <sec> (Out of order snapshot events)
    Show tags with out-of-order events, with optional reset and repeat.

-qs (Queue statistics)
    Monitor Event Queue activity and statistics.

-re [-subsystem <name> | -pid <#>] (Raise Exception) [For troubleshooting only]
    Raise an exception in the specified subsystem (force a crash). This call only works locally; remote use is not supported.

-remote (Remote system)
    Run the utility against a remote system. When this argument is included as the last argument in any valid command, the utility prompts for remote system login information. If the login is successful, the utility runs against the remote system.

-rpctest <subsystem> <count> (Inter-process Communication Test) [For troubleshooting only]
    Time the RPC round trip to the specified subsystem.

-ss (Snapshot statistics)
    Snapshot Subsystem activity monitor and statistics.

-standalone <n> (Standalone mode)
    Place the PI Server in standalone mode. Possible values for n are: 1, enter standalone mode; 0, exit standalone mode; 2, query the current state.

-systembackup (System Backup)
    Start/end a backup for a specified subsystem. Deprecated in favor of -backup.

-thread 'subsystem' (Subsystem Thread Control)
    -info: list a subsystem's threads.
    -id <Thread ID> <-disable|-enable|-suspend|-resume|-terminate|-hang|-add|-break|-priority <Direction>>: perform an action on a particular thread.

-v (Version)
    Get version and build information.
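For instance, to force an archive shift on another server, you can combine two of the options above; -remote must come last, and piartool then prompts for the remote login details (a sketch):

    piartool -fs -remote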

Using the Offline Archive Utility (piarchss)

The Offline Archive Utility is actually the same Archive Subsystem executable, piarchss, that is part of a running PI System. To use the archive utility functions of piarchss, you run it in console mode using special command line arguments. The Offline Archive Utility can be used even while PI is running.

You typically use the piarchss utility to work with archive files outside the context of a running PI Server. This enables you to continue archiving current data on your PI Server while manipulating other archives offline. Typical applications of the offline tool include:

Combining a number of archives together
Dividing a big archive file into smaller archives
Extracting a specific time period from an archive
Recovering a corrupted archive
Recovering events from an Event Queue file
Converting a PI2 archive file to the PI3.x format

Note: The archives that are created by the Offline Archive Utility may be either fixed or dynamic. Their format is no different from any other archives created by piartool -ac.

Running the Offline Archive Utility

When you run piarchss as the Offline Archive Utility, you give it an input archive file and an output archive file, along with the relevant command parameters. The basic format is:

    piarchss -if inputpath -of outputpath

where inputpath is the full path and file name of the input archive file, and outputpath is the full path and file name of the output archive file. The piarchss utility takes the input file, processes it according to the command parameters, and then writes the processed file to the location you specified. The piarchss utility does not modify the input file under any circumstances.

The piarchss Utility Command-Line Parameters

This section lists all the command line parameters for the piarchss Offline Archive Utility. Some of these options are discussed in more detail in the following subsections. The parameters may be specified in any order.

-if <full path name> (Input Archive File)
    Required. The full path, including the drive letter, is required. This is true for all file arguments passed.

-of <full path name> (Output Archive File)
    Required.

-ost <option> (Output file Start Time)
    Options: Input, Event, <time>, NFE. See Specifying a Start Time for the Output File (-ost).

-oet <option> (Output file End Time)
    Options: Input, Event, <time>, NFE, Primary, NoChange. See Specifying an End Time for the Output File (-oet).

-f <size in Mbytes> (Make output archive a fixed archive)
    If size = 0, the input file size is used. The default is a dynamic archive.

-tzf <full path name> (TimeZone specification file)
    Use when the PI2 input differs from standard DST; see also the PI Server Reference Guide, Appendix D.

-filter <start> <end> (Filter)
    Process events only within the time range specified. Both timestamps must be provided. See Time Filtering (-filter).

-dup (Duplicate Records)
    Allow input archive records with duplicate times. By default, duplicates are ignored.

-tfix (Time Fix)
    Apply a specified time transformation to the input data. See Transforming Event Timestamps (-tfix).

-silent (Silent Mode)
    Suppress warning messages.

-vah (Validate Annotation Handles)
    Apply a validation algorithm. Multiple events referencing a single annotation are detected and fixed. Batch Database annotations are checked for consistency.

Specifying a Start Time for the Output File (-ost)

The -ost flag specifies the start time for the output file. The usage is as follows:

    -ost option

where option is one of the following:

input
    Sets the start time to the start time of the input file. This is the default behavior.

event
    Sets the start time to the time of the first event in the input file.

time (specified in absolute PI time format)
    Sets the start time to the specified time string. Times are specified in absolute PI time format; relative times are not supported. Times containing spaces must be enclosed in double quotes. If only a date is specified, the time defaults to 00:00:00 (midnight). For example: 22-JAN-02 23:59:59. Output file start and end times must differ by at least one second.

NFE
    Sets the start time to the time of the first event that passes the time filter.
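For example, to produce a copy of an archive whose start time coincides exactly with its first stored event, you might run (a sketch; both paths are hypothetical):

    piarchss -if D:\PI\arc\piarch.002 -of E:\PI\arc\piarch.002.trim -ost event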

Specifying an End Time for the Output File (-oet)

The -oet flag specifies the end time for the output file. The usage is as follows:

    -oet option

where option is one of the following:

input
    Sets the end time to the end time of the input file. This is the default behavior.

event
    Sets the end time to the time of the last event in the input file.

time (specified in absolute PI time format)
    Sets the end time to the specified time string. Times are specified in absolute PI time format; relative times are not supported. Times containing spaces must be enclosed in double quotes. If only a date is specified, the time defaults to 00:00:00 (midnight). For example: 22-JAN-02 23:59:59. Output file start and end times must differ by at least one second.

NFE
    Sets the end time to the time of the last event that passes the time filter.

primary
    Sets the end time to indicate that the archive is a Primary Archive.

NoChange
    The end time is not altered.

Time Filtering (-filter)

The -filter flag specifies a time range (inclusive) beyond which events are discarded. The usage is as follows:

    -filter starttime endtime

The start time must be before the end time.

Specifying an ID Conversion File (-id)

Use the -id option to specify an ID conversion file (used to move a PI archive to a different PI Server). The ID conversion file is a binary file that maps the source archive point IDs to the target system point IDs. When an ID conversion file is used, only the points included in this file are converted. This is always necessary when archives are migrated from a PI2 system, or when data is brought from another PI3 system. The binary file is created from an input text file using the piartool utility:

    piartool -idci <id conversion input file> -idco <id conversion binary file>

The ID conversion input file is the full path and file name of the input text file. The ID conversion binary file is the full path and file name of the binary file to be created. piartool reports any point in the input file that does not exist in the target system.
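Putting the pieces together, moving one archive to a new server might look like this (a sketch; all paths are hypothetical, and the input text file format is described below):

    rem Build the binary ID map, then load the archive through it
    piartool -idci D:\PI\adm\idmap.txt -idco D:\PI\adm\idmap.bin
    piarchss -if D:\PI\arc\piarch.002 -of E:\PI\arc\piarch.002.new -id D:\PI\adm\idmap.bin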

ID Conversion Input File Format

Every record of the input file must have this format:

    Point ID, Recno, TagName

On a foreign PI3 system you can create this file with piconfig, as follows:

    e:\pi\adm>piconfig
    * (Ls - ) @table pipoint
    * (Ls - PIPOINT) @ostr pointid, recno, tag

Note: The piartool -idci input file does not allow for comment characters. The comment character (*) generated by piconfig must be removed.

Transforming Event Timestamps (-tfix)

Offsets, as a function of time, are defined in a time conversion file. This can be used to apply corrections to timestamps on systems that recorded incorrect times due to run-time library problems or non-standard DST settings. The transformation adds a given time offset to every event:

    -tfix <conversion file>

Time Conversion File Format

Lines starting with # are comments. Empty lines and white space are ignored. Data lines have the format:

    Starttime, offset

The start time may be expressed in UTC, as seconds since 1/1/70, or as a PI local timestamp string (for example, dd-mmm-yyyy hh:mm:ss). UTC timestamps and strings cannot be intermingled; the first format found is assumed for all entries. The offset is a number of seconds added to the timestamp of every event within the time range; fractional seconds are not supported. An offset applies from its start time up to, but not including, the next start time.

Time Conversion File Examples

Move an entire archive ahead by 1 hour (UTC format):

    0,3600
    1262304000,3600

Move an entire archive ahead by 1 hour (timestamp string format):

    01-Jan-70 00:00:00,3600
    01-Jan-10 00:00:00,3600

Apply a missed DST conversion to an archive that covers the summer of 1997:

    01-Jan-97 00:00:00,0
    06-Apr-97 02:00:00,3600
    26-Oct-97 02:00:00,0
    31-Dec-97 23:59:59,0

A file of UTC start times and offsets, one pair per line, follows the same Starttime,offset pattern.

Tips for Using the Offline Archive Utility

If you're working with the piarchss Offline Archive Utility, note the following:

The full path name of the input archive must be specified. (Note that piartool -al lists only registered archives.)
If the input file is registered, the Offline Archive Utility unregisters it when processing begins.
If the input archive is the Primary Archive, it cannot be unregistered. To work around this, force an archive shift using piartool -fs, or temporarily shut down the Archive Subsystem.
If the output file does not exist, the utility creates it. If a full path name is not specified for the output archive, the utility places the output archive in the current directory.
At the end of processing, neither the input nor the output archive is registered.
By default, the piarchss Offline Archive Utility creates dynamic archives. Use the -f argument to specify a fixed archive. Note that dynamic archives become non-shiftable once a single overflow record is generated, but remain shiftable if no overflow records are generated.
You can run the piarchss Offline Archive Utility while the PI System, including piarchss itself, is running. At a minimum, the pinetmgr, pibasess, and pisnapss subsystems must be running, because the utility needs to access the Point Database during offline operations.

How the Offline Utility Works

During processing, two passes are made through the input archive file. On the first pass, the utility checks all records in the input archive file. Every record containing valid data, and within the specified time range, is added to a sorted list. The list is indexed by time and point ID; this assures loading in chronological order. The point ID of each input record is verified, and any required point ID conversion is performed using the specified conversion table. For example, conversion is required when migrating archives from a PI2 system or from another PI3 system to your PI Server.

system to your PI Server. An error message is issued for every record for which a point could not be found in the local PI Server; these messages can be suppressed if desired. Statistics are displayed after every 200 records are processed, at the end of the first pass, and at the end of the second pass.

During the second pass, records from the sorted list are written to the output file. The input data is optionally filtered or modified. If the output archive file does not exist, it is created and its archive header is initialized based on the command-line specifications. If the output file already exists, the input data is added to it. If a primary record does not exist for a given point ID, the data for that point ID is discarded. The input data is converted to the output point type, if different from the input point type. All auxiliary data, such as index records and record chaining, are recreated during the load. Only actual data are read from the input; thus, any errors in the input file's auxiliary data are repaired.

Piarchss Offline Archive Utility Exit Codes

To facilitate batch file processing, the offline utility returns an exit code to the operating system (see the sketch after this table):

    Code   What it Means
    0      No errors; at least one input record processed
    1      Errors during the input phase
    2      No processing errors, but 0 records processed (possibly an empty input file)
    <0     An error returned from the output processing; check the log messages
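A minimal batch sketch that branches on the exit code; the archive paths are hypothetical:

    @echo off
    rem Run the offline utility and branch on its exit code (paths are examples).
    piarchss -if D:\pi\dat\oldarch.dat -of D:\pi\dat\merged.dat
    if %ERRORLEVEL% LSS 0 echo Output processing error - check log messages & exit /b 1
    if %ERRORLEVEL% EQU 1 echo Errors during the input phase & exit /b 1
    if %ERRORLEVEL% EQU 2 echo No errors, but 0 records were processed & exit /b 0
    echo At least one input record processed successfully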

3.4 Listing the Registered Archives

Archives must be registered before the PI Server can use them; if an archive is not registered, its data are not accessible. The piartool -al command lists the registered archives. Archives are listed in reverse chronological order, archives with newer data before archives with older data. The first archive listed is the Primary Archive, which covers the current time range. The dates spanned by each archive are also shown. There cannot be an overlap in dates between archives; attempting to register an archive that overlaps an already registered archive will fail.

Unused archives have start and end times shown as Current Time. The display order of unused archives is arbitrary and may change.

Here is a sample archive listing:

    D:\PI\adm>piartool -al
    Archive shift prediction:
    Shift Time: 5-Oct-05 19:42:01
    Target Archive: e:\pi\arc\piarch-2gb.1
    Archive[0]: e:\pi\arc\piarch-2gb.3 (Used: 53.4%)
    PIarcfilehead[$Workfile: piarfile.cxx $ $Revision: 101 $]::
    Version: 7 Path: e:\pi\arc\piarch-2gb.3
    State: 4 Type: 0 Write Flag: 1 Shift Flag: 1
    Record Size: 1024 Count: Add Rate/Hour:
    Offsets: Primary: / Overflow: /
    Annotations: 1/65535 Annotation File Size: 2064
    Start Time: 5-Oct-05 06:11:09 End Time: Current Time
    Backup Time: Never Last Modified: 5-Oct-05 13:26:21

The piartool -al command displays the following information for every currently mounted archive:

Shift Prediction: Provides the estimated time for the next shift and the target archive. This is important for backup planning.
Used: Percentage of archive records used. This is an estimate, as only empty records are considered in the calculation.
Version: Identifier of the archive's internal architecture. This allows the PI Server to mount and upgrade archives from older versions of PI.
Path: Full path of the archive file.
State: Current condition of the archive. In piartool -al this is always displayed as 4, which means mounted and ready. All other states correspond to unmounted states, in which case the archive is not visible in piartool -al.
Type: 0 = Fixed, 1 = Dynamic.
Write Flag: 1 = Archive is currently writeable, 0 otherwise.
Shift Flag: 1 = Archive is potentially a target for archive shift, 0 otherwise.
Record Size: Size in bytes of one record. This is always 1024.
Count: Number of records in the archive file.
Add Rate: Average number of overflow records added per hour, over the archive lifetime.
Offsets: Primary: Start and end record numbers for primary records. The end record number is always 1/2 of the record count.
Offsets: Overflow: Start and end record numbers for overflow records. The archive shifts when this number is reached.
Annotations: Number of annotations used and the maximum number available.

3.4.1 Determining an Archive Sequence Number from a Listing

Some piartool commands require an archive sequence number; for example, archive backup (piartool -backup) and archive walk (piartool -aw). The archive sequence number is

chronologically assigned, with zero being the primary archive. The archive sequence number is shown with the archive list command (piartool -al); it is the number in the brackets immediately following Archive. Here is a sample archive listing:

    C:\pi\adm>piartool -al
    Archive shift prediction:
    Shift Time: 27-Sep-05 14:46:56
    Target Archive: g:\pi\arc\piarc.144
    Archive[0]: g:\pi\arc\piarc.045 (Used: 72.2%)
    PIarcfilehead[$Workfile: piarfile.cxx $ $Revision: 101 $]::
    Version: 7 Path: g:\pi\arc\piarc.045
    State: 4 Type: 0 Write Flag: 1 Shift Flag: 1
    Record Size: 1024 Count: Add Rate/Hour:
    Offsets: Primary: 19273/98304 Overflow: 55751/
    Annotations: 10/65535 Annotation File Size: 1623
    Start Time: 11-Aug-05 12:59:35 End Time: Current Time
    Backup Time: Never Last Modified: 9-Sep-05 22:26:55
    Archive[1]: g:\pi\arc\piarc144.arc (Used: 16.2%)
    PIarcfilehead[$Workfile: piarfile.cxx $ $Revision: 101 $]::
    Version: 7 Path: g:\pi\arc\piarc144.arc
    State: 4 Type: 0 Write Flag: 1 Shift Flag: 1
    Record Size: 1024 Count: Add Rate/Hour:
    Offsets: Primary: 19273/65536 Overflow: /
    Annotations: 1/65535 Annotation File Size: 1552
    Start Time: 11-Aug-05 09:12:35 End Time: 11-Aug-05 12:59:35
    Backup Time: Never Last Modified: 16-Aug-05 19:08:48
    Archive[2]: g:\pi\arc\piarc145.arc (Used: 99.8%)
    PIarcfilehead[$Workfile: piarfile.cxx $ $Revision: 101 $]::
    Version: 7 Path: g:\pi\arc\piarc145.arc
    State: 4 Type: 0 Write Flag: 1 Shift Flag: 1
    Record Size: 1024 Count: Add Rate/Hour: 77.9
    Offsets: Primary: 19273/65536 Overflow: 19511/
    Annotations: 1/65535 Annotation File Size: 1552
    Start Time: 2-Jun-05 09:21:00 End Time: 11-Aug-05 09:12:35
    Backup Time: Never Last Modified: 7-Sep-05 09:41:50
    Archive[3]: g:\pi\arc\piarch.011 (Used: 99.8%)
    PIarcfilehead[$Workfile: piarfile.cxx $ $Revision: 101 $]::
    Version: 7 Path: g:\pi\arc\piarch.011
    State: 4 Type: 0 Write Flag: 1 Shift Flag: 1
    Record Size: 1024 Count: Add Rate/Hour: 36.8
    Offsets: Primary: 19473/98304 Overflow: 19740/
    Annotations: 1/65535 Annotation File Size: 1552
    Start Time: 5-Jan-05 08:15:06 End Time: 2-Jun-05 09:21:00

    Backup Time: Never Last Modified: 7-Sep-05 09:41:50
    Archive[4]: g:\pi\arc\piarc.144 (Used: 99.3%)
    PIarcfilehead[$Workfile: piarfile.cxx $ $Revision: 101 $]::
    Version: 7 Path: g:\pi\arc\piarc.144
    State: 4 Type: 0 Write Flag: 1 Shift Flag: 1
    Record Size: 1024 Count: Add Rate/Hour:
    Offsets: Primary: 18472/65536 Overflow: 19440/
    Annotations: 1/65535 Annotation File Size: 1552
    Start Time: 2-Jan-05 10:43:06 End Time: 5-Jan-05 08:15:06
    Backup Time: Never Last Modified: 7-Sep-05 09:41:50
    C:\pi\adm>

Archive sequence numbers are arbitrarily assigned to unused archives. Unused archives can be recognized by both start and end times being set to Current Time. When unused archives are unregistered or specified for a backup, the assigned number will likely change on a subsequent re-register or at backup end. Generally, there is no reason to back up unused archives.

3.5 Listing Archive Record Details

Use piartool -aw to read the contents of an archive directly from the file. The key to reading archive data this way is to understand that every PI point has a unique record number (RecNo), which corresponds to a primary record in the archive. A point's RecNo can be found through piconfig or the PI-SMT tools, as in the sketch below.
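For example, a piconfig session along the following lines lists a point's record number. This is a sketch: the tag name sinusoid is only an example, and the @table, @mode, @ostr, and @select directives follow the usual piconfig listing pattern; verify the attribute names against your piconfig documentation.

    C:\pi\adm>piconfig
    @table pipoint
    @mode list
    @ostr tag, recno
    @select tag=sinusoid
    @ends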

When a new archive is created, data for each point flows into its own primary record. (Archives are divided into fixed-size records, the number of which is either static or can grow dynamically.) When a primary record fills up, an overflow record is set aside for the data to flow into. The primary record points to the first overflow record, which can point to a second, and so forth; following this chain of records is referred to as an archive walk. Finally, when the number of free overflow records in an archive drops below a certain threshold, an automatic archive shift occurs.

3.5.1 Performing an Archive Walk with piartool -aw

This command gives a detailed listing of archive records. After issuing this command, you are prompted for the target archive number and the target record. The record is displayed, and you are then prompted to select another archive record to view. You can determine the archive number by issuing piartool -al; archives are listed in order, starting with 0 for the most current data.

Example: Displaying Archive Records with Record Headers Only

The example below demonstrates how to use piartool to display archive record headers.

    C:\pi\adm>piartool -aw
    Enter Archive Number: 0
    Enter Record Number: 40
    Point ID: 18 Record Number: 40
    Chain Record Number - Next: Prev: 0 Index: 0
    Record Version: 3 Data type: 11 Zero: 600 Span: 500
    Flags - Index:0 Step:0 Del:0 Dirty:0
    Sizes - Record 1024 Data: 998 Parent Archive Data Head: 26
    Event Count: 214
    Storage (bytes) - Available: 990 Used: 987
    Enter Record #, <CR> next rec (p)rev (e)vents (a)rchive # (q)uit:

3.5.2 Understanding Event Data Displayed by piartool -aw

By default, the piartool -aw command displays only the record header. If you wish to view the data in the record, enter the letter "e" when prompted for the next record ID; this toggles on the display of event data. To toggle the display off, enter "h" at the same prompt. Event data is displayed as shown in the example below. Every archive record must contain at least one event.

    Enter Record #, <CR> next rec (p)rev (e)vents (a)rchive # (q)uit: e
    PIarcrecord[$Workfile: piarrec.cxx $ $Revision: 68 $]::
    Point ID: 4 Record Number:
    Chain Record Number - Next: 0 Prev: Index: 4
    Record Version: 3 Data type: 101 Digital State Set: 3
    Flags - Index:0 Step:1 Del:0 Dirty:0
    Sizes - Record 1024 Data: 998 Parent Archive Data Head: 26
    Event Count: 121
    Storage (bytes) - Available: 994 Used:
    Event(s):
    0: 9-Sep-05 18:57:04, S,O,A,S,Q [3,1,0,0,0]:
    1: 9-Sep-05 18:58:14, S,O,A,S,Q [3,2,0,0,0]:
    2: 9-Sep-05 18:59:24, S,O,A,S,Q [3,3,0,0,0]:
    3: 9-Sep-05 19:00:34, S,O,A,S,Q [3,2,0,0,0]:
    4: 9-Sep-05 19:01:44, S,O,A,S,Q [3,1,0,0,0]:
    5: 9-Sep-05 19:05:14, S,O,A,S,Q [3,2,0,0,0]:
    6: 9-Sep-05 19:06:24, S,O,A,S,Q [3,3,0,0,0]:
    etc.

The S,O,A,S,Q array indicates values as follows:

S: StateSet
O: Offset in StateSet; 248 corresponds to "No Data"
A: Annotated (0=no, 1=yes)
S: Substitute (0=no, 1=yes)
Q: Questionable (0=no, 1=yes)

Index shows whether the values in the record are data values or pointers to data records, where 0 indicates that it is not an index record. If they are pointers, the pointers are record numbers corresponding to the start time. When events for a point exceed two records in a single archive, an index record is created. An index record holds about 150 pointers to data records.

RecNo (record number) is a read-only point attribute that contains a pointer to the point's primary record in the archive. This is useful when using tools such as piartool -aw to examine the archives. Do not confuse RecNo with the PointID attribute: if a point is deleted, its RecNo is reused, but its PointID is not.

Data Type: Meaning
8: Int32 (all records that have the index flag set also show data type 8)
12: Float
101: Digital
102: Blob

Broken Pointers

In rare cases of hardware failure, record chains can become inconsistent. The archive check utility, pidiag -archk 'path', can be used to examine archive integrity. For more details on this pidiag command, refer to Verify the Integrity of Archive Files on page 272. The archive offline utility repairs any chaining problems.

3.6 Estimating Archive Utilization

The space that a fixed archive uses is completely allocated at archive creation time. Use piartool -al to list the registered archives; for each archive, an estimate of the used space is displayed.

3.7 Editing Archives

The piconfig utility, the PI API, and the PI-SDK may be used to add, edit, or delete archive values; a sketch follows the note below.

Note: Contrary to the title Editing Archives, all archive edits are first handled by the Snapshot Subsystem. The Snapshot Subsystem performs some security and data checks and then, if appropriate, forwards the edit events to the Archive Subsystem.
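As an illustration of a small-scale edit, a piconfig session might look like the following. This is a sketch only: the piarc table name and its tag, time, value column list are assumptions, and the tag and value are hypothetical; check the piconfig documentation for the archive table layout on your version.

    C:\pi\adm>piconfig
    @table piarc
    @mode create
    @istr tag, time, value
    sinusoid, 10-Oct-05 08:00:00, 42.5
    @ends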

For large-scale changes and repair, use the Offline Archive Utility (piarchss). For example, inadvertently moving the computer system clock ahead may cause considerable problems.

You can configure a time limit on the insertion and editing of events; the Snapshot rejects events with timestamps earlier than the limit. By default there is no limit, which is consistent with earlier versions of PI. To configure the limit, add an entry to the PITimeout table:

    EditDays, nnn

where nnn is the number of days (prior to current time) in which changes and additions are allowed. The Snapshot Subsystem loads PI Timeout table parameters on startup; therefore, this subsystem must be restarted for the change to take effect. A piconfig sketch follows.
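A minimal piconfig sketch for adding the parameter; the pitimeout table name and its name, value columns are assumptions based on the entry format above (and 30 days is just an example), so verify against your piconfig documentation:

    C:\pi\adm>piconfig
    @table pitimeout
    @mode create
    @istr name, value
    EditDays, 30
    @ends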

3.8 Creating Archives

To create a new archive, use piarcreate, piartool -ac, or piartool -acd. These utilities allow you to name the new archive, specify its location, create it, and initialize it. In general, use piarcreate for all archive creation, unless you need to specify start and end times.

With piarcreate, you may specify the size of the archive, but you will need a second step to register it. This utility creates a dynamic archive if you use the -d flag.

With piartool -ac, the created archive size matches the current Primary Archive size, and registration is automatic. The piartool utility allows you to optionally specify the start and end times of the new archive. Option -ac creates a fixed-size archive; -acd creates a dynamic archive.

Every archive has a parallel annotation file with the extension .ann. The file is created automatically by either utility, and it must remain in the same directory as its archive file at all times.

Naming Archives

There are no limitations on the archive file name, other than being a valid file name for the underlying operating system. The default archive file names are piarch.xxx, where xxx is 001, 002, 003, and so on. The location must have sufficient disk space. The associated annotation file has the same full path name as its archive file with .ann appended. For example, the annotation file for the piarch.001 archive file would be piarch.001.ann.

Choosing an Archive Size

Archives have a minimum and a maximum size:

- Minimum archive size: determined by point count. The minimum archive size, in megabytes, is twice the number of points divided by 1000. For example, to allow for 20,000 configured points, the minimum archive size is (20,000 x 2) / 1000 = 40 MB.
- Maximum archive size: 2 Terabytes (2000 Gigabytes).

The archive size affects backups, the frequency of shifting, and the total number of points allowed. The general rule is to size your archives such that they last three to five weeks before shifting.

How Archive Size Affects Shifting Frequency

If the PI Server has larger archive files, shifts occur less frequently. To decide what archive size is optimal for your system, consider the backup device, the available disk space, and the average incoming data rate; these determine the shift frequency.

How Archive Size Limits Point Count

The archive size limits the number of points that may be created: no more than 1/2 of the records of a fixed archive can be primary records. If the allotment of primary records is used up, you will get an error when you try to create an additional point, even though the primary archive is not full. To resolve this, force the archives to shift into a larger archive in order to create additional points. An archive shift can be forced with the command piartool -fs.

Selecting an Archive Type: Fixed or Dynamic

By default, a fixed archive is created. If you specify the -d parameter, a dynamic archive is created instead. Dynamic archives grow as they fill, up to the specified maxsize, but never above two Terabytes. The difference between fixed and dynamic archives is discussed in the section About Fixed and Dynamic Archives.

Specifying the Number of Points in the Archive

This number, specified by the maxpoints parameter, should be taken from the piartool -al listing for the Primary Archive. The primary archive is always listed first. The point count is equal to the number of used primary records; in the example below it would be 253,063. One half of all records are reserved for primary records; this also determines the maximum number of points that can be created. The listing below is for a 2048 MB archive, so the maximum number of configurable points for the archive is 1,048,576.

    D:\PI\adm>piartool -al
    Archive shift prediction:
    Shift Time: 5-Oct-05 19:42:01
    Target Archive: e:\pi\arc\piarch-2gb.1
    Archive[0]: e:\pi\arc\piarch-2gb.3 (Used: 53.4%)
    PIarcfilehead[$Workfile: piarfile.cxx $ $Revision: 101 $]::
    Version: 7 Path: e:\pi\arc\piarch-2gb.3
    State: 4 Type: 0 Write Flag: 1 Shift Flag: 1
    Record Size: 1024 Count: Add Rate/Hour:
    Offsets: Primary: / Overflow: /
    Annotations: 1/65535 Annotation File Size: 2064
    Start Time: 5-Oct-05 06:11:09 End Time: Current Time
    Backup Time: Never Last Modified: 5-Oct-05 13:26:21

Specifying the Maximum Archive Size

The maxsize parameter is usually set equal to the size of the Primary Archive, in megabytes, but it does not have to be the same. A dynamic archive will not grow larger than maxsize.

3.9 Creating Data Archives Prior to the Installation Date

Some sites may find it useful to make data available in PI that was collected prior to the PI installation. Several applications can be used for backfilling, including a PI API or PI-SDK application, piconfig, or the batch file interface; the choice depends mainly on how the data to be entered into PI is currently stored.

Backfilling Data with Compression

The installation procedure is the following:

1. Install PI, start PI, create all points, and stop PI.
2. Isolate the PI Server from all incoming process data. This means shutting down PI interfaces on all PI API and PINet nodes. Another way to do this is to disallow all PI API connections at the server. To do this, start piconfig without starting PI (ignoring messages about failure to connect) and issue the following:

    @table pifirewall
    @mode edit
    @istr hostmask, value
    "*.*.*.*", DISALLOW
    @ends

Note: Entries that allow access to specific names or addresses override the above disallow; edit all other entries to Disallow. Local connections are not affected by pifirewall entries, so make sure that applications that may write data are not running.

3. Start PI with the -base parameter. This ensures that PI starts up with only the minimum required subsystems. On Windows hosts, issue the command:

    pisrvstart.bat -base

4. Create and register archive files for the backfilling period (using piartool -ac or -acd).
5. Insert one event for every point at the earliest time on-line.
6. Delete all the PointCreated events from the Snapshot. This brings the old events into the Snapshot. Steps 5 and 6 can be done with a PI API or PI-SDK program or with the piconfig utility. Make sure that the old event is in the Snapshot.

7. Backfill the archives by reading in the data in time-sequential order. This way the data is compressed.

Caution: The Archive Subsystem assumes the archive end time is the most recent timestamp written to any point. It is important to keep all current data sources from writing to the PI Server. This is why Random, Ramp Soak, Performance Equations, PI Total, PI Alarm, or any other interfaces cannot be running.

8. If you used the technique of modifying the PIFIREWALL table in Step 2 above, run piconfig to either change the hostmask value to Allow or to delete the above hostmask altogether.
9. Start the interfaces using pisitestart.sh or pisrvsitestart.bat.

Backfilling Data without Compression

1. Install the PI Server and create all points. The points that you want to backfill must be created prior to the archive initialization; otherwise the archive has no primary records for these points.

Note: An archive can be created with a start time prior to point creation time, as long as the point exists when that archive is created. Reprocessing an old archive with the offline utility adds all existing points to that archive, while preserving all the old data.

2. Use piartool -ac to create and initialize backfill archives. The start and end times must be specified (that is why piarcreate cannot be used). The start time should be the timestamp of the oldest data to be backfilled; the end time should be just prior to the oldest archive time shown by piartool -al. (See the sketch after this list.)
3. Backfill the data. The data that you backfill will not be compressed, since it is prior to the Snapshot time.
4. If the backfill archive fills before all of the backfill data is entered, you must delete the backfill archive and either create two backfill archives, dividing the target time range between them, or create a single larger archive, and then retry the data backfilling.
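For step 2, the command might look like the following. The path and times are examples, and the argument order shown here (path, then start time, then end time) is an assumption to verify against the piartool usage text:

    piartool -ac e:\pi\arc\piarch_backfill.dat 1-Jan-04 "31-Dec-04 23:59:59"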

3.10 Creating a New Primary Archive

In some situations it is useful to create and register a primary archive with a specific start time: for example, when recovering from setting the time into the future, when backfilling archives, or after using pidiag -ar to recover from a situation of corrupted archives. To create a new primary archive, use piartool -ac and specify the start time as required and the end time as *. Note the following restrictions:

- Registering a new primary archive will fail whenever there is a current valid primary archive registered.
- A valid primary archive must have a specified start time and a null end time, which signifies current time.

Registering an Archive

The PI Server Archive Registry contains the name, location, size, record count, and record size for each archive file. This information is stored in the binary file PI\dat\piarstat.dat.

    piartool -ar <path>

This command allows new or old archives to be added to the list of registered archives. Once an archive is registered, it is available to the system, participating in shifts and in the storage and retrieval of event data. The specified path must be a complete (not relative) path to an existing archive file.

Unregistering an Archive

    piartool -au <path>

Use this command to drop an archive from the list of registered archives. Any archive may be unregistered except the Primary Archive (archive number 0), which stores the current data.

Deleting an Archive

To delete an archive, first unregister it using the piartool -au command, discussed above. Then delete the archive using the normal file deletion commands available on your operating system, and delete the related annotation file as well.

Moving an Archive

To move an archive, you must unregister it, move it to a new directory, and then re-register it. Remember to move the associated annotation file as well.

Moving the primary archive requires some additional steps, since you cannot unregister it while the PI archive process is running. To move the primary archive, do the following:

1. If any empty archives exist in the original directory, move them to the new directory and register them there. One of these archives will become the new primary archive.
2. Verify that there is at least one empty archive registered in the new directory (create one if there is not).
3. Force an archive shift using piartool -fs. This results in a new primary archive, which is one of the empty archives in the new directory.
4. Move the previous primary archive to the new directory by the usual method: unregister it, copy it to the new directory, and then re-register it.

Note: Renaming archive files is generally not necessary. If you must do this, remember to rename the annotation file as well. Check the release notes for a description of issues associated with archive file renaming.

3.11 Managing Archive Shifts

When the primary archive file becomes full, the next empty (shiftable, writeable) archive is used to store new data. If none of the archives are empty, the archive file with the oldest data (which is also shiftable and writeable) is cleared and used for new data. The start time of this archive is set to the end time of the archive file that just became full. The process of clearing the oldest archive and preparing it to be the new primary archive is called an archive shift. It is important that all eligible archives are backed up prior to the archive shift, to ensure that no data is lost.

When an archive is assigned a start time, it is initialized. Archives are only initialized at installation and during an archive shift. The archive shift process takes a few minutes to initialize a new archive file. Adding, editing, or deleting points is not allowed during an archive shift. Events sent to the Archive during an archive shift are queued; when the shift is complete, the queued events are written to the Archive.

Which Archives are Shiftable

For an archive file to be eligible to become the new primary archive, it must be registered, shiftable, writeable, and large enough to handle the current size of the Point Database. If an archive does not meet these criteria, it is skipped during an archive shift. By making an archive non-shiftable or read-only, you can exclude it from the shift cycle.

Predicting Time of Next Archive Shift

The command piartool -as is used to monitor archive activity and performance, and it predicts the time of the next archive shift. The prediction is based on the average number of archive records consumed per hour, plus the rate of consumption. If the primary archive is less than 20% full, the prediction is based on the previous archive's rates.

Archive Shift Enable Flag

Fixed archives of varying sizes can be shifted. However, archives that are too small to accommodate the number of points in the Point Database are automatically excluded from archive shifts. Used dynamic archives are never shiftable. Both fixed and dynamic archives have a shift-enable flag; if the flag is 0, the archive will not participate in archive shifts. You can view or set this flag using the piartool utility.

Failed Archive Shifts

If an archive shift fails for any reason, all further shifts are disabled and a message is sent to the log. When the cause of the failure is resolved, restart the Archive Subsystem to enable archive shifts. If the cause of the failure was a lack of an available archive to shift into, then

registering a new empty archive automatically resolves the situation, and a shift into the new archive will occur.

Failed shifts do not cause any data loss, since the archive goes into read-only mode; all incoming data is queued in the Event Queue by the Snapshot Subsystem.

Archive Shift Enable Flag

Each archive has a Shift Enable Flag. If the Shift Enable Flag is set to 1, the archive is eligible to participate in archive shifts. The status of the flag may be viewed using piartool -al. Archives can be excluded from shift participation by running piartool -ads 'path'; shift-disabled archives can be re-enabled by running piartool -aes 'path'. Dynamic archives that contain data do not participate in archive shifts; that is, newly created dynamic archives may be shifted into, but they are shift-disabled after the first archive event is written. piartool -aes does not re-enable dynamic archives for shifting. (See the sketch below.)
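For example, to take an archive out of the shift cycle before maintenance and later put it back (the path is hypothetical; the change is visible in the Shift Flag field of piartool -al):

    piartool -ads e:\pi\arc\piarch.002
    piartool -al
    piartool -aes e:\pi\arc\piarch.002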

Archive shifts happen automatically whenever the Primary Archive nears full. An archive shift normally begins when the Archive Subsystem detects that the primary archive is almost full. It dismounts the last empty archive, or the oldest shiftable archive if no empty archives are available, and then initializes this archive. This can take some time, depending on the current point count; messages are written to the log during the initialization to indicate progress. Once the new archive is cleared, it is initialized to start at the current time and is mounted as the primary archive.

Note: The oldest shiftable archive is the oldest writeable archive large enough for the current point count. Also, dynamic archives containing data are not shiftable.

Forcing Archive Shifts

The command piartool -fs forces an immediate archive shift. During normal operations, the piartool -fs command should not be used. However, it may be useful to force an archive to shift while testing your system or to do advance archive management. For example, this command is sometimes useful if you are building a large number of new points: since primary records may occupy no more than one half of the records in an archive file, it may be necessary to build a larger primary archive. You can then force an archive shift to make the larger archive your primary archive.

For systems with large point counts, archive shifts may require a long time. As soon as the shift starts, messages are written to the PI message log, such as:

    0 piarcmgr 2-Apr-03 14:32:39
    >> Forced archive shift requested.
    0 piarcmgr 2-Apr-03 14:32:39
    >> Beginning Archive Shift. Current Primary Archive: d:\pi\dat\piarch.001
    0 piarcmgr 2-Apr-03 14:32:39
    >> Target Archive for Shift: d:\pi\dat\piarch.003
    0 piarsrv 2-Apr-03 14:32:39
    >> Clear and Initialize archive file: d:\pi\dat\piarch.003
    0 piarsrv 2-Apr-03 14:32:48
    >> Archive clear progress: records cleared.
    0 piarsrv 2-Apr-03 14:32:58
    >> Archive clear progress: records cleared.
    0 piarsrv 2-Apr-03 14:33:08
    >> Archive clear progress: records cleared.
    0 piarsrv 2-Apr-03 14:33:19
    >> Archive clear progress: records cleared.
    0 piarsrv 2-Apr-03 14:33:28
    >> Archive successfully cleared records
    0 piarsrv 2-Apr-03 14:33:28
    >> Archive successfully initialized points.
    0 piarsrv 2-Apr-03 14:33:28
    >> Archive file Clear and Initialize completion status[0] Success
    0 piarcmgr 2-Apr-03 14:33:28
    >> Completing Archive Shift. Current Primary Archive: d:\pi\dat\piarch.003
    0 piarcmgr 2-Apr-03 14:33:28
    >> Archive d:\pi\dat\piarch.001 shifted successfully. New Primary Archive is d:\pi\dat\piarch.003

Do not shut down the PI Server until the shift has completed. To determine when this has occurred, check the message log for a message like:

    0 piarcmgr 2-Apr-03 14:33:28
    >> Archive d:\pi\dat\piarch.001 shifted successfully. New Primary Archive is d:\pi\dat\piarch.003

3.12 Combining and Dividing Archives

From a user perspective, archive files are organized according to their size and the time ranges they span. It is sometimes useful to change the initial file organization of the archives. The Offline Archive Utility can be used to accomplish each of the following tasks:

- Combining archives with overlapping dates into one archive
- Combining archives with adjacent time ranges into one archive
- Dividing an archive into smaller archives, each covering a portion of the original time span

Combining Archives into a Single Archive

To combine several archives, invoke the Offline Archive Utility once for each input file, using the same output file for all of the inputs. Start from the oldest input, going in ascending time order.

Note: The Offline Archive Utility will not work in descending or random time order.

The end time of the output file can be moved forward as required, but the start time cannot be changed after creation. Archives with an unknown end time should be processed into a new archive to determine the actual end time; the resulting archive can then be merged chronologically. Merging a series of archives with overlapping dates requires processing the archive with the oldest start time first, and then processing the remaining archives in chronological order based on their end times.

Example of Combining Archives

    piarchss -if D:\pi\dat\oldest.dat -of D:\pi\dat\bigfile.dat
    piarchss -if D:\pi\dat\newer.dat -of D:\pi\dat\bigfile.dat
    piarchss -if D:\pi\dat\newest.dat -of D:\pi\dat\bigfile.dat

In this example, bigfile.dat does not exist prior to the operation. It is created in the first session, and events are added to it in the second and third sessions. It is created as a dynamic archive by default. After it is created, it may be registered using piartool -ar, and then events may be added to the archive through the Snapshot Subsystem. Any of the three input files that were registered prior to the operation are unregistered during the operation; when the operation is complete, they may be registered again using piartool -ar. Dynamic archives, the default type created by the offline utility, are not shiftable.

Dividing an Archive into Smaller Archives

To break a single archive into smaller archives, invoke the Offline Archive Utility once for each output file, using the same input file for all of the outputs. Each time, specify a different start and end time. These times are specified in absolute PI time format.

Example of dividing an archive into two smaller archives:

    piarchss -if D:\pi\dat\bigfile.dat -of D:\pi\dat\january.dat -filter "1-jan" "31-jan-02 23:59:59" -ost "1-jan" -oet "31-jan-02 23:59:59"
    piarchss -if D:\pi\dat\bigfile.dat -of D:\pi\dat\february.dat -filter "1-feb" "28-feb-02 23:59:59" -ost "1-feb" -oet "28-feb-02 23:59:59"

In this example, january.dat and february.dat do not exist prior to the operation. They are created as dynamic archives by default. After they are created, they may be registered using piartool -ar, and then events may be added to the archive files in the usual way. Neither output archive is shiftable, because both were created by the Offline Archive Utility as dynamic archives.

The filter start time of january.dat is specified as 1-jan, which defaults to 1-jan of the current year at 00:00:00. The filter end time is enclosed in double quotes because of the embedded space character. The output archive start and end times are explicitly specified; failing to include the -ost and -oet flags will get the default behavior. See the above discussion of output archive time settings for more information.

If the input file was registered prior to the operation, it is unregistered during the operation. When the operation is complete, it may be registered again using piartool -ar.

Event Queue Recovery

It might sometimes be desirable to remove an Event Queue file from the system, for example, when the system cannot manage the load of a large backlog of events. To do this:

1. Stop the Snapshot and Archive Subsystems.
2. Rename PI\dat\pimapevq.dat to PI\dat\pimapevq.save.
3. Restart the Snapshot and Archive Subsystems.

Later, the renamed Event Queue file can be loaded into an offline archive. The input file is the saved Event Queue data file; the argument -evq indicates that the input file is an Event Queue. The resulting output archive might have dates that overlap existing archives. Offline processing, as discussed above, is required to combine these archives.

Here is an example command line using piarchss to recover an Event Queue:

    piarchss -if D:\pi\dat\pimapevq.save -of D:\pi\dat\piarch099.dat -evq

Note: In most cases the Event Queue is the above file, but it is possible to have multiple Event Queues. The utility piartool -qs will indicate whether your system uses multiple queues. If multiple queues are used, the queue naming convention is pimapNNNN.dat, where NNNN is a four-digit integer. Prior to version 3.4, the Event Queue was pi\dat\pieventq.dat; PI 3.4 and newer does not support processing this format, so these files should be processed before upgrading. Also, the memory-mapped file approach introduced in version 3.4, along with other enhancements, allows PI to handle out-of-order data much more efficiently, so in most cases offline processing of the Event Queue will not be necessary.


Chapter 4. BACKING UP THE PI SERVER

It's important to back up the PI Server at least once a day, so that you don't lose data and configuration information if something goes wrong with your equipment. All backups of PI that are done while the PI System is running are managed by the PI Backup Subsystem (PI\bin\pibackup.exe).

4.1 Planning for Backups

Choosing the Backup Platform (VSS vs. Non-VSS)

Volume Shadow Copy Services (VSS) is available on Windows 2003 Server and Windows XP. The role of the PI Backup Subsystem during an actual backup depends on whether a VSS or a non-VSS backup is being performed. Non-VSS backups are the only option on UNIX, Windows NT Server, and Windows 2000 Server.

During a VSS backup, the PI Backup Subsystem takes appropriate actions by responding to VSS events, but the actual files are backed up by a separate application such as NTBackup (NTBackup.exe). During a non-VSS backup, the Backup Subsystem itself backs up the files of the PI System.

Windows XP can be used to test VSS backups for staging purposes, but you should always run PI on a server platform. For Windows 2003 Server, it is highly recommended to upgrade to Windows 2003 Server Service Pack 1, which includes a large number of bug fixes related to backups.

Choosing a Backup Strategy

The easiest backup strategy is to set up PI to automatically run the backup scripts every day (see Automating PI Backups, on page 66). You can also run the scripts manually (see Customize Your Backup, on page 68). The backup script initiates backups via NTBackup on platforms that support VSS, and/or with the piartool -backup command on platforms that do not support VSS. It is highly recommended that you run PI on a platform that supports VSS, because VSS backups cause minimal disruption to the operation of PI.

On Windows 2003 Server, as an alternative to using the backup scripts, you can back up PI with any third-party backup application that supports VSS. Third-party backup applications may support features that are not supported by NTBackup. For example, NTBackup currently supports only non-component mode backups (see Selecting Files or Components for Backup, on page 75).

4.2 Other Backup Considerations

- While PI is running, PI cannot be backed up with standard operating system commands such as copy (Windows) or cp (UNIX), because PI opens its databases with exclusive read/write access; the copy commands will simply fail. PI prevents access by the operating system because much of the information needed to back up the PI databases is in memory, and a simple file copy would most likely produce a corrupt backup. When PI is not running, PI can be backed up with standard operating system commands such as copy or cp.
- Do not try to include the PI folder in your daily system backup. The PI archives typically consist of a large number of huge files that undergo frequent small changes. The PI backup scripts are designed to back up the archive files efficiently.
- Make sure you have enough space on the disk where PI creates the backup files, and check the disk space regularly.
- Run a trial backup and restore to make sure everything works correctly, and test your backups this way periodically. See the section Restoring Archives from Backup.
- To avoid losing incoming data while backups are running, turn on PI API buffering for your interfaces wherever possible. On VMS PINet nodes, buffering is done automatically, so it does not need to be turned on.
- After PI Server installations or upgrades, shut down the PI Server and make a complete backup of all PI directories and archives. Note that archives may not be located under the PI\dat directory. On Windows, include the registry entries under HKEY_LOCAL_MACHINE/SOFTWARE/PI System/PI. On UNIX, include the /etc/piparams.default file.
- When you make a major change to PI, such as a major edit of the Point Database or User Database, consider immediately making a backup that includes those changes, rather than waiting for the automated backup.

Guidelines for VSS Backups

- All archives to be backed up must be on the PI Server node. The VSS backup will fail if an archive to be backed up is on a remote drive, such as a mapped network drive.
- Once a subsystem registers for backup, the subsystem must remain online during the next VSS backup or the backup will fail. The subsystems that are currently registered for backup can be listed with piartool -backup -query. The list of registered subsystems is reset when the Backup Subsystem is restarted. If the backup fails because a subsystem is offline, the PI System Administrator should do one of the following:
  - Fix the problem with the faulty subsystem and run a backup manually. VSS backups can be done during the middle of the day because they are not disruptive to the PI Server.

  - Restart the PI Backup Subsystem, wait for all of the subsystems except the problematic one to register for backup, and then run a backup manually.
- During VSS backups, PI databases are unavailable for writes for a very brief period of time, typically on the order of milliseconds. This time period is less than the timeout period for write operations such as point edits. This means backups could potentially be done several times a day without disrupting normal server operation.
- Backups should not be done while configuration changes are being made, since the changes may not take effect properly.
- Although the disruption from a Freeze/Thaw cycle is relatively small, unnecessary Freeze/Thaw cycles should be avoided. It is possible to unintentionally put PI through a Freeze/Thaw cycle if you are using a non-component mode VSS backup application such as NTBackup. If any file on a volume is backed up with a non-component mode backup, any VSS writer that has files on that volume will go through a Freeze/Thaw cycle. This means that PI may think it is being backed up when you are really backing up a file that is completely unrelated to PI but happens to share a volume with the PI databases.

Guidelines for Non-VSS Backups

Try to prevent users from making changes to the PI System during non-VSS backups. At the very least, schedule backups to occur at a time when users do not typically make changes to the PI System. The PI databases are briefly unavailable for writes during non-VSS backups, which could cause operations such as point edits to fail. Also, non-VSS backups back up one component at a time. This means that a point edit could occur between backing up the primary archive and the Point Database, which could cause an inconsistency between the PI databases in the backup.

4.3 Guidelines for Backing Up Before Upgrading

Before and after an upgrade, do a complete backup of the PI Server by shutting PI down and copying all PI files to a backup medium. Follow these steps:

1. Make sure that your Interface Nodes are buffering your data.
2. Shut down PI, so that all files are closed.
3. Back up all files in all subdirectories under the PI directory. Since PI is not running, you can use any standard operating system utility such as copy or tar. On UNIX, include the /etc/piparams.default file. On Windows, include the registry entries under HKEY_LOCAL_MACHINE/SOFTWARE/PI System/PI.

4.4 Automating PI Backups

The exact instructions for automating PI backups depend on the operating system on which your PI Server is installed. Refer to the appropriate section for your PI Server:

- Automating Backups on Windows
- Automating Backups on Windows with a 3rd Party Backup Application
- Automating Backups on UNIX

Automating Backups on Windows

The procedure supplied below is an out-of-the-box solution that works on Windows NT, Windows 2000, and Windows 2003 Server without installing any 3rd-party software. If you wish to implement a backup solution from a particular vendor, follow the guidelines under Automating Backups on Windows with a 3rd Party Backup Application on page 72.

The procedure for automating the PI backup script can be summarized as follows:

1. Install PIBackup.bat as a scheduled task
2. View and edit the scheduled tasks
3. Customize your backup
4. Do a test backup
5. Do a test restore

Install PIBackup.bat as a Scheduled Task

The pibackup.bat script in the PI\adm directory can be used to start a backup from the command line or to install the backup task. The default backup task runs automatically every day at 2 AM. On Windows NT and Windows 2000, the default task name is Atn, where n is an integer. On Windows 2003 Server, the default task name is PI Server Backup.

When the scheduled task runs, the PI\adm\pibackuptask.bat script is executed. The pibackuptask.bat script calls the backup script pibackup.bat and redirects the standard output to a log file with a name similar to pibackup_dd-mmm-yy_hh.mm.ss.txt in the PI\backup directory. The pibackup.bat backup script automatically determines whether or not VSS is supported, and it performs either a VSS or non-VSS backup as appropriate.

The syntax for using the pibackup.bat file is as follows:

    PIbackup.bat <path> [number of archives] [archive cutoff date] [-install]

where < > indicates a required parameter and [ ] indicates an optional parameter. The command-line parameters must be specified in the above order. If the -install flag is not specified, a backup is performed immediately. The more restrictive of [number of archives] and [archive cutoff date] takes precedence. Regardless of [number of archives] and [archive cutoff date], archives that do not contain data are not backed up.

Parameter descriptions:

<path> (example: E:\PI\backup): The complete drive letter and path to a directory with sufficient space for the entire backup.
[number of archives] (example: 2): The number of archives to back up. For example, "2" backs up the primary archive and archive 1.
[archive cutoff date] (example: *-10d): The cutoff date, specified in PI time format. For example, "*-10d" restricts the backup to archives that contain data between 10 days prior to current time and current time. The more restrictive of [number of archives] and [archive cutoff date] takes precedence.
[-install]: Installs a scheduled task to run pibackup.bat daily at 2:00 AM. If the -install flag is not specified, a backup is performed immediately.

For example, the following command installs a task to back up the primary archive, archive 1, and archive 2:

    PIbackup.bat e:\pi\backup 3 1-Jan-70 -install

All archives contain data later than midnight on 1-Jan-70, so the number of archives to be backed up is not restricted by the cutoff date. Note, however, that only archives that contain data are actually backed up. This means that if archive 1 and archive 2 are empty archives, they are not backed up.

The next example installs a task to back up all archives that contain data from the last 60 days to the current time:

    PIbackup.bat e:\pi\backup 999 *-60d -install

The assumption above is that fewer than 999 archives are mounted, so 999 does not restrict the number of archives to be backed up.

The next example installs a task to back up PI using the default number of archives and the default cutoff date:

    PIbackup.bat e:\pi\backup -install

The default number of archives for backup is 3, unless otherwise specified by the Backup_NumArchives timeout parameter (see Timeout Parameters on page 87). The default cutoff date is 1-Jan-70 00:00:00, unless otherwise specified by the Backup_ArchiveCutoffDate timeout parameter.

View and Edit the Scheduled Tasks

After installing the scheduled backup tasks with the pibackup.bat script, you might want to edit the scheduled task to change the task name or to set the Run As user to a different account. For example, renaming the task is necessary if you want to install multiple scheduled tasks via the pibackup.bat script.

For VSS backups, it is recommended to change the Run As user for the PI Server Backup scheduled task from NT AUTHORITY\SYSTEM to the account of the user who will be administering the backups. This recommendation is simply for convenience in viewing the NTBackup log file, and is discussed further in Do a Test Backup on page 69. (A sketch follows.)
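On Windows 2003 Server, this change can also be made from the command line with schtasks. A sketch, in which the domain account and password are placeholders:

    schtasks /Change /TN "PI Server Backup" /RU MYDOMAIN\piadmin /RP MyPassword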

Scheduled tasks can be viewed and edited via the Scheduled Tasks control panel. Alternatively, tasks can be viewed and edited from the command line. On Windows NT and Windows 2000, the scheduled tasks can be displayed with the at command:

    C:\pi\adm>at
    Status ID Day                  Time     Command Line
           Each M T W Th F S Su   2:00 AM  C:\PI\adm\pibackuptask.bat "c:\pi\backup" 3 "*-60d"

On Windows 2003 Server, the scheduled tasks can be displayed and edited with the schtasks command. Although the at command is still available on Windows 2003 Server, the schtasks command should be used instead; for example, a task created with the at command on Windows 2003 Server cannot be edited.

    e:\pi\adm>schtasks
    TaskName                 Next Run Time            Status
    ==================== ======================== ===============
    PI Server Backup         02:00:00, 7/29/2005

Customize Your Backup

Backups are customized by creating the pisitebackup.bat and pintbackup.bat files in the PI\adm directory. These files do not exist by default. You should never customize your backup by editing the pibackup.bat or pibackuptask.bat files, because those files are overwritten during an installation or upgrade.

The pisitebackup.bat File

If the pisitebackup.bat file exists, the pibackup.bat backup script calls it right before exiting. If you have any tasks you want pibackup.bat to execute each day after the backup, add these tasks to a file called pisitebackup.bat in the PI\adm directory. Typically, PI System Managers use adm\pisitebackup.bat to move the backup directory to tape. PI System Managers may also use the script to back up specific files that are not included in the PI Server backup. (A sketch follows.)
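A minimal pisitebackup.bat sketch that copies the finished backup to a file server. The share name is hypothetical, and robocopy could be replaced with xcopy or a tape backup command:

    @echo off
    rem pisitebackup.bat - site-specific post-backup step, called by pibackup.bat.
    rem Copy the backup directory to a file server (the share is an example).
    robocopy "%PISERVER%backup" "\\fileserver\pibackups\%COMPUTERNAME%" /E /R:2 /W:5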

The pintbackup.bat File

If the pintbackup.bat file exists, the pibackup.bat backup script executes pintbackup.bat instead of executing NTBackup. By default, the backup script executes NTBackup with the following command line:

    ntbackup.exe backup "@%PISERVER%dat\pibackupfiles.bks" /d "PI Server Backup" /v:yes /r:no /rs:no /hc:off /m normal /j "PI Server Backup" /l:f /f "%BackupPath%\PI_Backup.bkf"

This command line causes the files of PI to be backed up to PI\backup\PI_Backup.bkf. Since the /a flag is not on the command line, PI_Backup.bkf is overwritten every time the backup is performed. The second argument on the command line tells NTBackup to back up the files listed in the PI\dat\pibackupfiles.bks backup selection file. The backup selection file is re-created every time the piartool -backup -identify command is run.

If you want to use different command-line options for NTBackup, or if you want to execute a different backup command, create a pintbackup.bat file in the PI\adm directory.

Note: If you want to do a non-VSS backup on a platform that supports VSS, alter the pintbackup.bat file to specify your preferred backup tool, either piartool -backup or some other backup tool.

Do a Test Backup

Unless overridden by a pintbackup.bat file, the PI backup script does a VSS backup when it is executed on Windows XP or Windows 2003 Server, and a non-VSS backup on Windows NT and Windows 2000. The following procedure applies to both VSS and non-VSS backups, except where noted.

1. Open a command prompt and type the command pigetmsg -f so that you can view messages that are written to the message log during the course of the backup.
2. Open the Scheduled Tasks control panel, right-click the PI Server Backup scheduled task, and select Run. For VSS backups, if the Run As user for the scheduled task is the same as your account, you will see NTBackup being launched and you will be able to monitor the progress of the backup via the NTBackup GUI.
3. Run the command piartool -backup -query. You should see information about the current state of the backup. If the query command indicates that the backup was not launched, the backup script may have failed to launch the backup. The output of the script is written to a log file in the PI\backup directory with a name of the form pibackup_dd-mmm-yy_hh.mm.ss.txt. If the backup script log does not reveal the source of the error, there are two additional logs that can be examined, as explained in steps 5 and 6.
4. After the backup is complete, run the piartool -backup -query command again. The command should indicate that the backup completed successfully.
5. Examine the PI message log for backup-related messages. Run pigetmsg and use pibacku* as a mask when prompted for Message Source. If the backup started and completed, you should at least see "Backup operation started" and "Backup operation completed" in the log file.
6. For non-VSS backups, skip to step 7. For VSS backups, examine the NTBackup log for errors. The procedure for examining the NTBackup log is described under Troubleshooting Backups.
7. After the backup is complete, verify that the files were successfully backed up. Files are backed up to PI\backup\. For VSS backups, the backed-up files will be in a file called PI_Backup.bkf, which can only be opened by NTBackup. For non-VSS

backups, the actual files are copied to subdirectories of PI\backup\. For example, if the original file was located in PI\dat\, the file is backed up to PI\backup\dat\.

8. For VSS backups, run vssadmin list writers. The output should indicate that the state of the OSIsoft PI Server VSS writer is stable.

Do a Test Restore

General Considerations

The following files require special treatment during a restore:

pisubsys.cfg: This file contains the full path name of the rendezvous file piv3.rdz. This path may be incorrect if the restore is not to the original location. There is no need to restore the pisubsys.cfg file after a new installation, because this file is installed by the installation kit.

piarstat.dat: The piarstat.dat file contains the full path names of all registered archives and the database security information for the archives. If the destination directory of the archives to be restored is different from the original, you may need to generate a new piarstat.dat file with the piartool -ar command: you will need to re-register your archives and re-create the database security. (See the sketch below.)
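For example, if the archives were restored to a new directory, each archive could be re-registered and the result verified along these lines (the paths are hypothetical):

    piartool -ar e:\pi\restored\piarch.001
    piartool -ar e:\pi\restored\piarch.002
    piartool -al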

Restoring from NTBackup

The NTBackup application keeps track of the files it has backed up via a catalog that is stored under:

    \Documents and Settings\All Users.WINDOWS\Application Data\Microsoft\Windows NT\NTBackup\catalogs51

All NTBackup catalogs are stored in the above directory, no matter which user account the backup is run under. When NTBackup is run in advanced mode, these catalogs can be browsed from the Restore and Manage Media tab.

To test the restore, use the following procedure:

1. Change the "Restore files to" location to Alternate Location and type in an alternate location. Typically, it is a good idea to temporarily restore files to an alternate location even if the final destination of the restored files is the original location.
2. Review the files to be restored by double-clicking the Backup Identification Label for your backup. Select all files for restore, except possibly those files discussed under General Considerations above.

3. Review the options for the restore from the Tools > Options menu.
4. Click Start Restore. The files should be restored to the alternate location.
5. After the restore is complete, click Report. The report should indicate that all files were restored successfully.

Restoring from NTBackup if the Catalog was Lost

All that is required to restore from NTBackup is the original PI_Backup.bkf file that was saved during the backup. The catalogs are not essential for restoring from backup.

1. Run NTBackup in advanced mode.
2. Select Restore and Manage Media.
3. Select Tools > Catalog a backup file.
4. Browse to the PI_Backup.bkf file that you saved and click OK. The catalog should be added to the Restore and Manage Media tab.

Once the .bkf file is cataloged again, you can restore the files with the restoration procedure described above.

Automating Backups on Windows with a 3rd Party Backup Application

If your PI Server is installed on the Windows 2003 Server operating system, you can do your PI backups with any third-party backup software that supports VSS. On Windows 2003 Server, the PI backup scripts use NTBackup to launch a VSS backup. The advantage of

However, you might want to use a third-party backup application if, for example, you already use one and you want to use the same backup strategy for all of your applications. A second reason may be that the third-party backup application has particular features that make it easier for you to maintain backups. However, in order to use any third-party backup application, the backup application must support VSS backups.

One limitation of NTBackup is that it supports only non-component mode VSS backups, which means that NTBackup cannot use the components of PI to select files for backup. A list of file names must be provided to NTBackup via the backup selection file PI\dat\pibackup.bks. A big disadvantage of non-component mode backups is that if any file is backed up on a volume where there is a PI database, PI will think that it is being backed up even though none of its files are actually affected by the backup.

Most third-party backup applications support component mode backups, which means that the backup application will be able to detect the PI Backup Subsystem as a registered VSS writer and will display the components of the PI System that are available for backup. The backup components of PI might appear in a third-party backup application as discussed in the section Selecting Files or Components for Backup.

4.5 Automating Backups on a Windows Cluster

If you are using the PI backup scripts for your backups, then the backups must be scheduled to run on both nodes in the cluster. That is, the command

pibackup.bat <path> [number of archives] [archive cutoff date] [-install]

must be run on both nodes in the cluster; an example appears at the end of this section. Installing the backup scripts as a scheduled task is discussed in detail under Automating Backups on Windows on page 66. Only one of the scheduled backup tasks will succeed at any given time because the pibackuptask.bat and pibackup.bat files are on the shared drive. Other than the need to schedule the backups on both nodes, backups on clustered and non-clustered Windows nodes are the same.
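For example, to install the backup as a scheduled task on each node, with the backup destination on the shared drive, you might run a command like the following on both nodes (the destination path, archive count, and cutoff date shown are illustrative values, not requirements):

pibackup.bat f:\pibackup 3 *-30d -install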

4.6 Automating Backups on UNIX

Contact OSIsoft technical support for updated instructions.

4.7 How the PI Backup Subsystem Works

Principles of Operation

VSS Backups Overview

VSS backups are initiated by VSS requestors, which are backup applications such as NTBackup. A VSS provider forwards the backup request, in the form of COM+ events, to the appropriate VSS writer or writers that have registered with the provider. Windows XP and Windows 2003 Server come with a default VSS provider. A VSS writer is an application that performs the necessary actions to back up a particular system; the PI Backup Subsystem is an example of a VSS writer.

Information is passed between the VSS requestor and the VSS writer(s) over the course of the backup via a sequence of VSS events. The most important of these VSS events for understanding the significance of VSS backups for backing up the PI System are the Freeze and Thaw events. During the Freeze event, the PI Backup Subsystem requests each of the PI Subsystems to suspend data writes to disk. After all subsystems have suspended data writes, the PI Backup Subsystem responds to the VSS provider. The VSS provider then takes a snapshot of all of the local disk volumes that are affected by the backup. The Backup Subsystem then receives the Thaw event, which means that it is OK for all PI Subsystems to resume writing to their databases.

Although data writes are suspended between the Freeze and Thaw events, all PI databases remain readable by client applications. This means that historical data, configuration information, and so on can be read by client applications without any disruption during a VSS backup, even between the Freeze and Thaw events. Even on busy systems, the disruption to data writes is minimal because the total time between the Freeze and Thaw events is typically on the order of a few milliseconds, and the duration must be less than 60 seconds or else the backup will be aborted. The disruption to data writes is so short that users should not even notice that a backup has occurred. For example, users should be able to edit, create, or delete PI Points without disruption, because the typical time period between the Freeze and Thaw events is less than the timeout period for these operations.

After the snapshot is taken, the backup application begins to back up the files that were requested for backup. Although data is being written to the files that are being backed up, the state of each file at the time of the snapshot is preserved via a difference file. After all files are backed up, which may take hours, the difference file is discarded.

More information about Volume Shadow Copy Service can be found online in the MSDN Library by browsing the documentation tree as follows:

Win32 and COM Development > System Services > File Services > Volume Shadow Copy Service

Non-VSS Backups Overview

Whereas VSS backups are initiated via a backup application, non-VSS backups are initiated via the piartool -backup commands, which are discussed in the next section. For non-VSS backups, the PI Backup Subsystem itself is the backup application.

As with VSS backups, all PI Subsystems remain online, and all PI databases remain readable over the entire course of the backup. However, some databases will remain in a read-only state for a significantly longer period of time than for VSS backups. For example, archives and annotation files can be very large, and writes to all archives and annotation files must be suspended for the duration of time that it takes to copy them for backup. Due to the architecture of the PI Archive Subsystem, writes must be suspended to all archives no matter which archive is being backed up.

Selecting Files or Components for Backup

The files in the PI System are divided into backup components. A backup component is a convenient way to select a group of files for backup. Typically, there is one component per subsystem, but the PI Archive Subsystem has multiple components because there is one component per archive/annotation file pair. Some subsystems have no components, either because the subsystem has no files that need to be backed up (such as the Totalizer and Alarm Subsystems) or because the files for the subsystem do not require a Freeze/Thaw cycle for the backup (such as the SQL and Shutdown Subsystems).

Non-VSS Backups (Component Mode)

All non-VSS backups are considered to be component mode backups. The piartool -backup commands allow you either to back up a particular component or to back up all currently identified components. Typically, the PI Archive Subsystem identifies only a subset of its archives, as determined by timeout parameters, by arguments to the piartool -backup commands, and by the actual number of archives that contain data. This is discussed in more detail below.

VSS Backups (Component Mode or Non-Component Mode)

VSS backups are either component mode backups or non-component mode backups. Individual files are selected for backup with non-component mode backups. To a certain extent this is an advantage because you can control exactly which files you want to back up. However, if any file is backed up on a disk volume where there is a PI database file, PI will think that it is being backed up even though none of its files are actually affected by the backup. This means that if you back up a file that is completely unrelated to PI but shares a disk volume with a PI database, PI will undergo an unnecessary VSS Freeze/Thaw cycle. This is not as bad as it sounds, because data writes are suspended only for a short amount of time during a VSS Freeze/Thaw cycle, but you should be aware that PI could be unintentionally put through a VSS Freeze/Thaw cycle when non-component mode backups are employed. The default backup solution for PI on Windows XP and Windows 2003 Server uses NTBackup, which supports only non-component mode backups.

A second disadvantage of non-component mode backups is that all VSS writers that have a disk volume in common with PI will always be backed up at the same time as PI. This could lengthen the time period between the Freeze and Thaw events, because the Thaw event cannot occur until all participating VSS writers have been frozen.

Since the system drive is associated with several VSS writers, you should not install PI or any of its databases on the system drive, especially if you plan to use a non-component mode backup application to do your backups.

You can use any third-party backup application to back up PI as long as the backup application supports VSS. Most third-party backup applications that support VSS also support component mode backups. At the request of a backup application, the PI Backup Subsystem identifies the files and components of the PI System. The backup application can use this information to display the components in a graphical user interface. If the component of a different application is selected for backup, then the PI System will not go through a Freeze/Thaw cycle, even if that component has files on a disk volume that is common to the PI databases.

A backup application that supports component mode backups has the following information available to it for display purposes.

- The registered name of the PI VSS writer. The registered name is OSISoft PI Server.
- The friendly name of each component. Each component has a name and a description. The component description is intended to be used as a friendly name for display purposes.
- A component path. The component path provides a means of logically organizing a group of components. The PI Archive Subsystem is the only subsystem that has a non-null component path.

The PI System may appear as follows in a graphical user interface of a backup application that supports component mode backups.

[Figure not reproduced: tree view with OSISoft PI Server at the root and its backup components listed beneath it]

Each entry in the above tree view is a friendly name of a component, except OSISoft PI Server, which is the registered name of the PI VSS writer, and PI Archive Subsystem, which is a component path. Typically, a backup application will use different icons to distinguish between a registered writer, a component path, and a component, but this is dependent on the particular backup application. Also, the component path PI Archive Subsystem may not be selectable in some backup applications because there is no component at that point in the tree.

A user can select one or more components for backup. However, for a typical automated backup, one should configure the backup application to back up the OSISoft PI Server as a whole, because new archive components are added after each archive shift. These new archives will not be selected for backup by default unless the OSISoft PI Server is selected for backup as a whole.

The Backup Components of PI

The following table summarizes the backup components of PI.

Component Name | Component Path | Friendly Name (Component Description)
SettingsAndTimeoutParameters | NULL | Settings and Timeout Parameters
Pilicmgr | NULL | PI License Manager
Pimsgss | NULL | PI Message Subsystem
Pibasess | NULL | PI Base Subsystem
Pisnapss | NULL | PI Snapshot Subsystem
Piarchss | PI Archive Subsystem | Archive Registry and Archive Audit Database
Piarchive_<UTCprimary>_<GUID> | PI Archive Subsystem | Archive0 dd-mmm-yy hh:mm:ss to Current
Piarchive_<UTCarch1>_<GUID> | PI Archive Subsystem | Archive1 dd-mmm-yy hh:mm:ss to dd-mmm-yy hh:mm:ss
... | ... | ...
Piarchive_<UTCarchN>_<GUID> | PI Archive Subsystem | ArchiveN dd-mmm-yy hh:mm:ss to dd-mmm-yy hh:mm:ss
Pibatch | NULL | PI Batch Subsystem

The Components in the Archive Subsystem

The Archive Subsystem has one component for each archive/annotation file pair. The name has the form piarchive_<UTCTime>_<GUID>, where <UTCTime> is the start time of the archive/annotation file in UTC seconds since midnight 01-Jan-70 and <GUID> is the GUID that uniquely identifies each archive/annotation file pair. For example, the name of an archive component with a start date of 5-Aug-05 17:56:08 might be:

piarchive_<UTCseconds>_{1a0fbff1-bfe4-45f3-82db-5cf5b64b088e}

The description of an archive component has the form Archive0 dd-mmm-yy hh:mm:ss to Current for the primary archive and ArchiveN dd-mmm-yy hh:mm:ss to dd-mmm-yy hh:mm:ss for all other archives. The name of an archive component stays the same for the lifetime of an archive/annotation file pair, but the component description, or friendly name, changes every time there is an archive shift. For example, in the above example, the archive will begin its life as a primary archive with a friendly name of

Archive0 5-Aug-05 17:56:08 to Current

If the above archive shifts at 25-Aug-05 14:23:01, the friendly name of the archive becomes

Archive0 5-Aug-05 17:56:08 to 25-Aug-05 14:23:01

The friendly names of all other components for the PI System do not change.

The Files and Components for each Subsystem

PIBackup

The PIBackup Subsystem has a single component called SettingsAndTimeoutParameters. The Settings and Timeout Parameters component contains files that are owned by various subsystems. The owning subsystems do not need to participate in the backup because there is no special need to suspend data writes to these files during a backup. The subsystems that are associated with these files are not listed in the list of registered subsystems for backup with the piartool -backup -query command. The file names are hard-coded in the PI Backup Subsystem. The files backed up with this component are:

File Name | File Description
Pe314.dfa | PI-Performance Equation Scheduler parsing rules database (1 of 2)
Pe314.llr | PI-Performance Equation Scheduler parsing rules database (2 of 2)
Pifirewall.tbl | Firewall database
Pipeschd.bat | PI-Performance Equation Scheduler command file
pisql.ini | Initialization settings for the SQL Subsystem
pisql.res | Parsing resource for the SQL Subsystem
Pisubsys.cfg | Inter-process communication information file used by PI Network Manager
PISysID.dat | Server ID data file
Pisystem.res | PI-Performance Equation Scheduler equation parsing symbol database
Pitimeout.tbl | Timeout database
Shutdown.dat | Shutdown event configuration file used by the Shutdown Subsystem (pishutev)

Pilicmgr

The Pilicmgr subsystem has a single component called Pilicmgr. The files backed up with this component are:

File Name | File Description
pilicense.dat | License information

Pimsgss

The Pimsgss subsystem has a single component called Pimsgss. The PI Message system contains all message log files from the PI\log directory. The message log file names all begin with pimsg_ and end with .dat. The files backed up with this component are:

File Name | File Description
Pimsg_*.dat | PI Message Log files

Pibasess

The Pibasess subsystem has a single component called Pibasess. The files backed up with this component are:

File Name | File Description
pidbsec.dat | PI Database Security Database
pidignam.dat | Digital State Name Database
pidigst.dat | Digital Set Database
PIModuleDb.dat | PI Module Database
pipoints.dat | PI Point Database
piptalia.dat | PI Point Aliases Database
piptattr.dat | PI Point Attributes Database
piptclss.dat | PI Point Class Database
piptsrcind.dat | PI Point Source Index Database
piptunit.dat | PI Point Unit Database
pitrstrl.dat | PI Trust Table
piusr.dat | PI User Database
piusrctx.dat | User Context Database
piusrgrp.dat | PI User Group Database
pibasessaudit.dat | Base Subsystem Audit Database

Pisnapss

The Pisnapss subsystem has a single component called Pisnapss. The files backed up with this component are:

File Name | File Description
piarcmem.dat | Snapshot Information
pisnapssaudit.dat | Snapshot Subsystem Audit Database

Piarchss

The Piarchss subsystem has a component called Archive Registry and Archive Audit Database plus one component for each archive. The files backed up with the Archive Registry and Archive Audit Database component are:

File Name | File Description
piarstat.dat | PI Archive Manager data file (contains the list of registered archives)
piarchssaudit.dat | Archive Subsystem Audit Database

The following files are backed up with the archive components:

File Name | File Description
Archive and annotation (.ann) files for all components | Archive file names can be anything. The annotation file name is the same as the archive file name with .ann appended to it. Each archive is contained in a separate component, as described in The Components in the Archive Subsystem on page 77.

PIBatch

The Pibatch subsystem has a single component called Pibatch. The files backed up with this component are:

File Name | File Description
pibaalias.nt | Batch Alias Database
pibaunit1.nt | PI Batch Unit Database

Lifetime of a Backup

Lifetime of a VSS Backup

During a backup, the PI Backup Subsystem receives a series of VSS events from the VSS provider. The Backup Subsystem takes the appropriate actions, asynchronously forwards each VSS event to every subsystem that is participating in the backup, and then waits for all subsystems to reply. The Backup Subsystem can veto the backup if a fatal error occurs during any one of the events. If the backup is vetoed, then no further events will be sent to the Backup Subsystem.

The Identify event always occurs at the beginning of every backup, but it can also occur at any time before or during a backup. The other VSS events always occur in the order specified in the following table.

Identify
  The PI Backup Subsystem requests a list of files and components from each subsystem that participates in backups. The PI Backup Subsystem returns the compiled list to the VSS provider. During a non-component mode backup, this information is simply not used by the backup application. A backup application can use the information from the Identify event to display the components of the PI System in its graphical user interface. If the PI Backup Subsystem takes a long time to reply to a particular VSS event, a backup application may send an Identify event to make sure that the PI Backup Subsystem is still alive. There is no way for the Backup Subsystem to distinguish an Identify event at the beginning of a backup from an Identify event that occurs merely for information-gathering purposes.

PrepareBackup
  This event signifies the start of a backup. For a component mode backup, this event is received only if one or more components of the PI System have been selected for backup. The PI Backup Subsystem is told which of its components have been selected. The event is forwarded to the subsystems that are affected by the backup. For non-component mode backups, the event is forwarded to every subsystem that participates in backups. When this event is received by the PI Archive Subsystem, the subsystem sets a flag to indicate that a VSS backup is in progress. The archive backup flag as displayed by the piartool -as command will be set to 5. For all other subsystems, the PrepareBackup event is ignored.

PrepareSnapshot
  For a non-component mode backup, the PI Backup Subsystem determines whether any PI databases are on one or more of the disk volumes that are affected by the backup. This cannot be determined before the PrepareSnapshot event. If none of the disk volumes corresponding to the PI databases are affected, then PI vetoes the backup and no other backup events are received. Otherwise, only those subsystems that are affected by the backup will receive subsequent VSS events.

Freeze
  Each subsystem that is participating in the backup stops writing data to its files when it receives the Freeze event. For example, any data that is sent to the PI archives will go to the queue and will not be readable until after the Thaw event. Data that is already in the archive remains readable between the Freeze and Thaw events. Similarly, configuration information (such as point configuration and the Module Database) also remains readable. After the PI Backup Subsystem and all other VSS writers that are participating in the backup have indicated that all files are frozen, the VSS provider will take a snapshot of all disk volumes that are affected by the backup.

Thaw
  Each subsystem that is participating in the backup resumes writing data to each of its files. The time between Freeze and Thaw is typically on the order of a few milliseconds and cannot be greater than 60 seconds without timing out. The time period between Freeze and Thaw is typically short enough that database configuration operations should not time out. For example, a user should be able to edit PI points during a backup without even noticing that a backup has occurred. The time period between the Freeze and Thaw events can be affected by a third-party VSS writer that is being backed up at the same time that the PI System is being backed up. The Thaw event will not occur until all VSS writers have indicated that their files have been frozen. This means that a misbehaving VSS writer, or a VSS writer that simply takes a long time to freeze, can significantly increase the time period between the Freeze and Thaw events.

PostSnapshot
  No actions are taken by the PI Backup Subsystem for the PostSnapshot event. Backup applications back up files between the PostSnapshot and the BackupComplete events. Although data is being written to the files that are being backed up, the state of the files at the time of the snapshot is preserved via a difference file. The difference file is maintained by the operating system and is completely transparent to the PI System. After the backup is complete or aborted, the difference file is discarded. Backup completion may take hours if large files are being copied.

BackupComplete
  This event indicates that a backup has completed successfully. The last backup time will be updated for each of the PI System's database files that keep track of such information. A summary of last backup times can be displayed with the piartool -backup -identify -verbose command. The last backup time for archive files is displayed by the piartool -al command. When the PI Archive Subsystem receives this event, the output of piartool -as should indicate that the archive backup flag has been reset to 0.

BackupShutdown
  If the PI Backup Subsystem gets the BackupShutdown event without getting the BackupComplete event, then the backup did not complete successfully. If the PI Archive Subsystem never receives a BackupComplete event, it will turn its archive backup flag off when it gets the BackupShutdown event.

Lifetime of a non-VSS Backup

For non-VSS backups, data writes need to be suspended for the entire time that a file is being backed up. Like a VSS backup, data that is already on disk remains readable in all databases while the databases are being backed up. Unlike a VSS backup, each component is backed up one at a time, which means that there is one Freeze/Thaw cycle for each component. Archiving is turned off to all archives whenever any of the archives is being backed up, because there is no way to tell which archive will receive the data before the data is processed. Time is allotted for the queue to be emptied between each archive component that is backed up.

The PI Backup Subsystem sends the same backup events to each subsystem for a non-VSS backup as for a VSS backup, except for the PrepareSnapshot and PostSnapshot events, which the Backup Subsystem does not send. Another difference is that the Backup Subsystem generates the events instead of responding to events from a VSS provider. Like a VSS backup, the Identify, PrepareBackup, BackupComplete, and BackupShutdown events are sent to each subsystem asynchronously. Only the Freeze and Thaw events are sent one time for each component.

The backup events are summarized in the following table.

Identify
  The PI Backup Subsystem requests a list of files and components from each subsystem that participates in backups. The Backup Subsystem uses the list of files to determine which files it should back up.

PrepareBackup
  When this event is received by the PI Archive Subsystem, the subsystem sets a flag to indicate that a non-VSS backup is in progress. The archive backup flag as displayed by the piartool -as command will be set to 21. Currently, all other subsystems ignore the PrepareBackup event.

Freeze
  Like a VSS freeze, each subsystem that is participating in the backup stops writing data to its files. Also like a VSS freeze, all PI databases remain readable after the Freeze event. On Windows, each subsystem duplicates its handles for the files that it has open so that the Backup Subsystem can copy the files. On UNIX, the Backup Subsystem can copy the open files without a duplicate handle. There is one Freeze event per component.

Thaw
  The frozen component resumes writing data to each of its files. The time between Freeze and Thaw can be long, especially for large archive files that are being copied. If all selected components have been backed up, then the BackupComplete event follows the Thaw event. Otherwise, the Freeze event follows the Thaw event to back up the next component. There is a delay between archive components that are backed up to allow the queue to be emptied. After each component is successfully backed up, the last backup time is updated for the files of that component. This is different than for a VSS backup, where the last backup time is updated for every component during the BackupComplete event. A summary of last backup times can be displayed with the piartool -backup -identify -verbose command. The last backup time for archive files is displayed by the piartool -al command.

BackupComplete
  This event indicates that a backup has completed successfully. When the PI Archive Subsystem receives this event, the output of piartool -as should indicate that the archive backup flag has been reset to 0. No action is taken by other subsystems when the BackupComplete event is received.

BackupShutdown
  If the PI Backup Subsystem gets the BackupShutdown event without getting the BackupComplete event, then the backup did not complete successfully. If the PI Archive Subsystem never receives a BackupComplete event, it will turn its archive backup flag off when it gets the BackupShutdown event.

4.8 Launching Non-VSS Backups with piartool -backup <path>

The syntax of the piartool -backup command for starting a non-VSS backup is

piartool -backup <path> [-component <comp>] [-numarch <number>] [-cutoff <date>] [-wait <sec>]

where <> indicates a required parameter and [] indicates an optional parameter.

<path>
  The complete drive letter and path to a directory with sufficient space for the entire backup.

-component <comp>
  Back up only the component specified by <comp>. For example, piartool -backup c:\pi\backup -component pibasess backs up only the files that belong to the PI Base Subsystem. A full list of the components is available from the command piartool -backup -identify -verbose. The -component flag overrides the -numarch and -cutoff flags, which are used only to restrict the number of archive components that are backed up. If the -component flag is not specified, all components are backed up except for the archive components that are restricted from backup by the -numarch and -cutoff flags.

-numarch <number>
  The number of archives to back up. For example, "2" backs up the primary archive and archive1, provided that the primary archive and archive1 contain data. In no case will an empty archive be identified for backup. The default number of archives for backup is 3, unless otherwise specified by the Backup_NumArchives timeout parameter.

-cutoff <date>
  The cutoff date, specified in PI time format. For example, "*-10d" restricts the backup to archives that contain data between 10 days prior to current time and current time. The more restrictive of -numarch <number> and -cutoff <date> takes precedence. The default cutoff date is 1-Jan-70 00:00:00, unless otherwise specified by the Backup_ArchiveCutoffDate timeout parameter.

-wait <sec>
  Wait up to <sec> seconds for the non-VSS backup to complete before returning from the piartool -backup command. The progress of the backup is reported every 15 seconds, and when the backup is complete, the status of the backup is reported via a piartool -backup -query.
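For example, the following command (the destination path is illustrative) starts a non-VSS backup of the two most recent data-containing archives plus all non-archive components, restricts the archives to those containing data from the last 10 days, and waits up to an hour for the backup to complete:

piartool -backup e:\pibackup -numarch 2 -cutoff *-10d -wait 3600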

4.9 Managing Backups with piartool

Backup Command Summary

The piartool -backup commands are typically used for troubleshooting and for monitoring the course of a backup. The piartool -backup commands can also be used to start a non-VSS backup. The pibackup.bat script, for example, uses the piartool -backup commands to start non-VSS backups when the script is run on an operating system that does not support VSS backups.

The basic syntax for the piartool -backup command is

piartool -backup <Arg1> [Arg2] [Arg3] ...

where <> indicates a required parameter and [] indicates an optional parameter. If <Arg1> does not begin with a hyphen (-), then <Arg1> is assumed to be the destination directory for a non-VSS backup. If <Arg1> begins with a hyphen (-), then <Arg1> is assumed to be a backup command. The following backup commands are valid.

-abort
  Aborts a currently running backup.
  Example: piartool -backup -abort

-query [-verbose]
  The query command does the following:
  - Reports a list of subsystems that are currently registered for backup.
  - If a backup is not in progress, reports the status of the last backup.
  - If a backup is in progress, reports the type of backup and the status of the backup.
  Example: piartool -backup -query

-identify [-numarch <number>] [-cutoff <date>] [-verbose]
  The identify command reports the list of files that PI will back up. If the -verbose flag is specified, PI will report a list of files and components. A component is a logical grouping of files. For example, all of the files for the Base Subsystem are grouped under the pibasess component. The purpose of a component is to identify a group of files for backup. The -numarch and -cutoff flags have the same meaning as for the non-VSS backup command described in the preceding section. The identify command creates a \pi\dat\pibackupfiles.bks file. This file is used in the pibackup.bat backup script to specify the list of files to back up using NTBackup. NTBackup is used by the backup script for performing VSS backups on Windows 2003 Server.
  Example: piartool -backup -identify -verbose

-trace <level>
  Debug messages are written to the log file when the trace level is nonzero. The higher the trace level, the more messages are written. The maximum number of messages is written with a trace level of 100. Tracing should be off unless you are troubleshooting a problem, to avoid unnecessary messages in the log file. If the trace level is nonzero, the trace level is displayed by the piartool -backup -query command.
  Example: piartool -backup -trace <level>

piartool -backup -query

When the PI System is first started, and whenever the PI Backup Subsystem is restarted, the output of a piartool -backup -query will appear as follows once all of the subsystems have registered for backup.

e:\pi\adm>piartool -backup -query
Backup in Progress: FALSE
Last Backup Start: NEVER
VSS Supported: TRUE

Subsystems Registered for Backup
Name, Registration Time, Version, Status
pibatch, 29-Jul-05 12:09:36, , [0] Success
pilicmgr, 29-Jul-05 12:09:52, , [0] Success
piarchss, 29-Jul-05 12:10:37, , [0] Success
pibasess, 29-Jul-05 12:11:53, , [0] Success
pisnapss, 29-Jul-05 12:11:54, , [0] Success
pimsgss, 29-Jul-05 12:11:56, , [0] Success

Last Backup Start will appear as NEVER when the Backup Subsystem is restarted, because the Backup Subsystem does not keep track of previous backups between restarts. Pibatch may not appear in your list of subsystems that are registered for backup if you are not licensed to use the old Batch Subsystem.

If the PI System is started normally, then subsystems should appear as registered within about 30 seconds of the PI Backup Subsystem startup time. Normal startup is, for example, starting the PI System with the pisrvstart.bat command file or letting the PI System services start automatically after a reboot. However, if the PI Backup Subsystem is shut down and restarted, it may take up to 10 minutes for the individual subsystems to register for backup.

All of the following subsystems must be running in order for a backup to succeed.

PI Network Manager
PI Backup Subsystem
PI License Manager (registers for backup)
PI Message Subsystem (registers for backup)
PI Snapshot Subsystem (registers for backup)
PI Archive Subsystem (registers for backup)
PI Base Subsystem (registers for backup)
PI Batch Subsystem (registers for backup if you are licensed to use the old PI Batch Subsystem)

The other subsystems either do not have files that need to be backed up or do not need to be running for a backup to succeed. The subsystems above that register for backup should appear in the list of registered subsystems in the output of the piartool -backup -query command.

After doing a backup, the query shows information about the last backup. The following is an example of a query that was done after a successful non-VSS backup.

e:\pi\adm>piartool -backup -query
Backup in Progress: FALSE
Last Backup Start: 29-Jul-05 02:00:04
End: 29-Jul-05 02:01:11
Status: [0] Success
Last Backup Type: FULL, NON-VSS
Last Backup Event: BACKUPSHUTDOWN
Last Backup Event Time: 29-Jul-05 02:01:22

VSS Supported: TRUE
Subsystems Registered for Backup
Name, Registration Time, Version, Status
pibatch, 28-Jul-05 16:09:18, , [0] Success
pisnapss, 28-Jul-05 16:09:51, , [0] Success
piarchss, 28-Jul-05 16:09:51, , [0] Success
pilicmgr, 28-Jul-05 16:09:51, , [0] Success
pibasess, 28-Jul-05 16:09:52, , [0] Success
pimsgss, 28-Jul-05 16:09:53, , [0] Success

piartool -backup -identify

The piartool -backup -identify command displays the list of files that need to be backed up for the PI System. The output has the form:

e:\pi\adm>piartool -backup -identify
<FileName_1>
<FileName_2>
<FileName_3>
...

Whenever the backup identify command is run, a backup selection file, PI\dat\pibackup.bks, is created. This backup selection file can be read by NTBackup to determine which files it should back up.

The piartool -backup -identify -verbose command identifies the components and files for backup. A component is simply a logical grouping of files. If a component is selected for backup, all of its associated files are backed up. The verbose output of the backup identify command has the following form.

e:\pi\adm>piartool -backup -identify -verbose
FileList
Name, ComponentName, LastBackup
<FileName_1>, <ComponentName_A>, <BackupDateFile_1>
<FileName_2>, <ComponentName_A>, <BackupDateFile_2>
<FileName_3>, <ComponentName_B>, <BackupDateFile_3>
...
ComponentList
Name, ComponentDescription, ComponentPath
<ComponentName_A>, <Description_A>, <ComponentPath_A>
<ComponentName_B>, <Description_B>, <ComponentPath_A>
...

The output should correspond to the expected components listed in the section The Backup Components of PI on page 77 and the expected files listed in the section The Files and Components for each Subsystem.

Timeout Parameters

The timeout parameters that are specifically related to backup operations are reproduced here for convenience. For a full list of all timeout parameters, see the PI Server Reference Guide.

archive_backupleadtime (Archive Subsystem)
  Default: 1800 seconds. Read on backup startup. The number of seconds before the predicted archive shift that a non-VSS archive backup may start. The PI Backup Subsystem waits up to 30 minutes for the archive shift to complete. This timeout parameter has no effect on VSS backups.

Archive_BSTimeout (Archive Subsystem)
  Default: 1800 seconds. Read once a minute. This timeout parameter is obsolete. It is for internal use only.

Backup_NumArchives (Backup Subsystem)
  Default: 3. Read before every backup. If the number of archives to be backed up is not explicitly specified in arguments to the pibackup.bat backup script, then this timeout parameter defines the default number of archives to back up.

Backup_ArchiveCutoffDate (Backup Subsystem)
  Default: 01-Jan-70 (minimum 01-Jan-70; no maximum). Read before every backup. If the cutoff date is not explicitly specified in arguments to the pibackup.bat backup script, then this timeout parameter defines the default cutoff date. Archives that contain any data between Backup_ArchiveCutoffDate and current time are backed up. For example, if "*-30d" is specified, then at least 30 days of data is backed up. Both Backup_NumArchives and Backup_ArchiveCutoffDate restrict the number of archives for backup. Whichever timeout parameter is more restrictive takes precedence.

Backup_TraceLevel (Backup Subsystem)
  Read at startup only. The default backup trace level. Nonzero backup trace levels cause debug messages to be written to the PI Message Log. The default trace level can be overridden while the PI Backup Subsystem is running with the piartool -backup -trace <level> command.

<subsysname>_WriteModTimesToFilePeriod (all subsystems)
  Default: 60 seconds. Read at startup only. PI internally keeps track of the last modified date of most of the files that it needs to back up. The last modified times for each subsystem are updated every WriteModTimesToFilePeriod. The smaller the period, the more accurate the last modified time is. A complete list of backed-up files along with their last modified dates can be listed with the piartool -backup -identify -verbose command. For archives, the last modified date can also be viewed with piartool -al. Note for archive files: When a value is written to a PI point, the value is not actually written to the archive until the archive cache is flushed. The maximum period between archive flushes is set by the Archive_SecondsBetweenFlush timeout parameter. After the cache is flushed, the last modified time is updated within WriteModTimesToFilePeriod seconds.

<subsysname>_BackupTimeout (all subsystems)
  Default: 1800 seconds. Read periodically when in system backup mode. The maximum number of seconds to remain in system backup mode. This timeout parameter pertains only to the piartool -systembackup command, which is used to take the audit databases offline. The parameter has no effect for VSS backups or for non-VSS backups that are started with the piartool -backup command.
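Timeout parameters such as these live in the Timeout database (pitimeout.tbl) and can be set with piconfig. The following is a minimal sketch; it assumes the parameter does not yet exist in the table (use @mode edit if it does), and the value shown is purely illustrative:

C:\pi\adm>piconfig
@table pitimeout
@mode create
@istr name,value
Backup_NumArchives,5
@quit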

Troubleshooting Backups

Log Messages

The following log files should be examined for errors.

- The backup script log. The backup script log is written to the target directory of the backup, and the log file has a name of the form pibackup_dd-mmm-yy_hh.mm.ss.txt. For example, a backup run on 5-Aug-05 produces a log named pibackup_5-Aug-05_hh.mm.ss.txt.
- The PI Message Log. Messages from the PI Backup Subsystem have a message source of pibackup. Backup-related messages from all other subsystems have a message source of pibackup_<subsysname>, such as pibackup_piarchss. You can search for all backup-related messages in the log by using pibacku* as a text mask for Message Source.

- The NT Event Log. In the Application Log, look for messages from VSS, COM+, and NTBackup sources.
- The NTBackup.exe log file (VSS backups only). If there was a problem creating a VSS shadow copy during a backup, the reason for the failure is logged at the beginning of the NTBackup.exe log file. If the Run As user for the PI Server Backup scheduled task is the same as your account, then you can view the NTBackup log from the Tools > Report menu of NTBackup. Launch NTBackup from a DOS command prompt and choose to run in advanced mode. If the PI Server Backup scheduled task was run under the system account, you must browse to the NTBackup.exe log file with Windows Explorer in the following directory:

C:\Documents and Settings\Default User.WINDOWS\Local Settings\Application Data\Microsoft\Windows NT\NTBackup\data

If the scheduled task is run under a user name other than the system account, then replace Default User.WINDOWS with the specific user name to get the path to which the NTBackup.exe log file is written.

VSSADMIN

Vssadmin.exe is the Volume Shadow Copy Service administrative command-line tool. You can use the tool to view the status of the VSS writers, VSS providers, and VSS shadow copies on the system.

VSSADMIN LIST SHADOWS

A shadow copy is created during the Freeze event. If a backup is not currently in progress, the output of vssadmin list shadows should look like the following output, which was generated on Windows XP.

e:\pi\adm>vssadmin list shadows
vssadmin - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001 Microsoft Corp.

No shadow copies present in the system.

If there were problems creating the shadow copy, look for errors in the NTBackup.exe log file and run vssadmin list shadows to check the status of the failed shadow copy.

VSSADMIN LIST PROVIDERS

Windows XP and Windows 2003 Server come with a default VSS provider. The following is sample output generated on Windows XP.

e:\pi\adm>vssadmin list providers
vssadmin - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001 Microsoft Corp.

Provider name: 'MS Software Shadow Copy provider 1.0'
Provider type: System
Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
Version:

VSSADMIN LIST WRITERS

The following is sample output for the vssadmin list writers command. When a backup is not in progress, the status of all writers should appear as stable. The name of the writer for the PI System is OSISoft PI Server; the PI Backup Subsystem registers as a VSS writer with this name. The following is sample output from Windows XP.

e:\pi\adm>vssadmin list writers
vssadmin - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001 Microsoft Corp.

Writer name: 'OSISoft PI Server'
   Writer Id: {0fd0891d-b731-4e59-a35d-48f }
   Writer Instance Id: {cf8d5ded-e a0-6b3064c756c3}
   State: [1] Stable

Writer name: 'Microsoft Writer (Bootable State)'
   Writer Id: {f2436e37-09f5-41af-9b2a-4ca2435dbfd5}
   Writer Instance Id: {c89ae65c-9f26-4bd db66757defc7}
   State: [1] Stable

Writer name: 'MSDEWriter'
   Writer Id: {f8544ac fa5-b04b-f7ee00b03277}
   Writer Instance Id: {190c16a5-d378-43d6-bdb abea2b}
   State: [1] Stable

Writer name: 'WMI Writer'
   Writer Id: {a6ad56c2-b509-4e6c-bb19-49d8f43532f0}
   Writer Instance Id: {26a1f92f-779d-45d1-900a-21b8af6f0590}
   State: [1] Stable

Writer name: 'Microsoft Writer (Service State)'
   Writer Id: {e38c2e3c-d4fb-4f4d-9550-fcafda8aae9a}
   Writer Instance Id: {f8dd1e16-4e36-4ed2-9a53-ea4e05bacdb6}
   State: [1] Stable

Chapter 5. MANAGING INTERFACES

5.1 Introduction

This chapter is split into two sections. The first section covers basic principles of interface operation. The second section describes the basic steps involved in installing, configuring, and administering a typical interface.

There are several manuals in addition to this document that provide general information about interface configuration and management.

PI API Installation Instructions
  On Windows, this manual is installed into the pipc\bin directory by the PI SDK installation kit. The manual provides several important post-installation details for configuring the PI API and buffering.

UniInt End User Manual
  Most interfaces are based on the OSIsoft Universal Interface (UniInt) and therefore share a common set of features. Certain UniInt features may be described in more detail in the UniInt End User Manual than in the interface-specific documentation. However, not all features that are described in the UniInt End User Manual are supported by all UniInt interfaces.

PI Interface Configuration Utility User Manual
  The Interface Configuration Utility provides a graphical user interface for configuring the interface command line, interface services, and various PI points that are useful for monitoring interface performance.

PI Performance Monitor Interface Manual
  The PI Performance Monitor interface, PIPerfMon, obtains Microsoft Windows performance counter data and sends it to the PI System.

PI Interface Status Utility
  The PI Interface Status Utility is an interface that runs on the PI Server node. The utility writes events such as I/O Timeout to PI points that have not received values from a particular interface for a configurable period of time.

PI AutoPointSync User Manual
  Some interfaces, such as the OPC Interface, support auto-point synchronization. PI AutoPointSync (PI APS) is a utility that synchronizes the PI Server points for an interface with the tag definitions on the interface's data source.

5.2 General Interface Principles

For the most part, this section discusses general interface principles that apply to interfaces running on Windows, UNIX, and VMS. If a particular topic in this section applies only to a particular operating system, then this is mentioned up front. For example, interfaces can use the PI Software Development Kit (PI-SDK) only on Windows, but the PI-SDK is such a fundamental part of the PI System that it is discussed under general principles.

About PI Interfaces

PI interfaces are software modules for collecting data from any computing device with measurements that change over time. Typical data sources are Distributed Control Systems (DCSs), Programmable Logic Controllers (PLCs), OPC Servers, lab systems, and process models. However, the data source could be as simple as a text file. Most interfaces can also send data in the reverse direction, from PI to the foreign system.

Typically, interfaces use the PI Application Programming Interface (PI API) to retrieve configuration information from the PI Server and to write data to the PI Server. Many interfaces also use the PI Software Development Kit (PI-SDK) to retrieve configuration information from the PI Server and to create PI Points, Digital States, and so on. A handful of interfaces use the PI-SDK to write batch data to PI, the most notable of which is the PI Batch Generator interface (PIBaGen). The PI API and the PI-SDK are described in more detail below.

Most interfaces written by OSIsoft are based on UniInt, OSIsoft's Universal Interface. UniInt performs many tasks that most interfaces need, such as loading points, parsing command-line arguments, and scheduling scans for data. As a result, most OSIsoft interfaces have a common set of features that are configured in the same way. UniInt uses the standard PI API and the PI-SDK to write and read data from the PI Server. Although UniInt itself is not publicly available, customers can use the PI-SDK and PI API to write their own custom interfaces that do the same tasks as UniInt.

About PI Interface Nodes

A PI Interface Node is a computer that runs one or more interfaces to collect data from a foreign system and send that data to a PI Server. The Interface Node might be a computer that is part of the foreign data system, such as a Foxboro AW51 workstation; it might be a stand-alone dedicated interface computer; or it might be the PI Server itself.

Typically, you should avoid running interfaces on the PI Server node. Running interfaces on a separate node allows the PI Server to be taken down for maintenance while data is still collected and buffered on the Interface Node. Also, you do not want interfaces competing for computer resources with the PI Server. As discussed later in this document, there are a few interfaces that are intended to run on the PI Server, but these interfaces are the exception.

From an administrative standpoint, the best thing about PI Interface Nodes is that they are typically configured once, backed up, and then left to run indefinitely without human intervention. Exceptions to this include software upgrades, security patches, network infrastructure changes, and some configuration changes driven by a change in the foreign data system. Interface Nodes are the first-line focus for data reliability and availability, so user interaction with Interface Nodes is usually restricted to PI system administration only.

Interface Nodes on VMS

An Interface Node on an OpenVMS-based VAX or Alpha computer is also known as a PINet Node. PINet is a stripped-down version of a PI 2 Server. PINet does not contain a Data Archive, but it does contain a local Snapshot Subsystem and a local point database. In addition, PINet provides utilities to access the point configuration information and data that reside on the PI Server. PINet automatically buffers data when it cannot connect to the PI Server. PINet sends data to PI over a TCP/IP connection.

PIonPINet is similar to PINet in that it is a subset of PI that runs on OpenVMS. PIonPINet includes all of the functionality of PINet. In addition, it includes analysis, reporting, and graphical display utilities.

About Data Buffering

Data flow from a typical interface to the PI Server can be summarized by the following diagram.

[Figure not reproduced: data flows from the interface to the buffering service, and from there to the Snapshot Subsystem on the PI Server]

When buffering is enabled, the data flows through the interface to the buffering service and from there to the Snapshot Subsystem on the PI Server. On Windows and UNIX Interface Nodes, data is buffered with the bufserv service, which must be installed and configured separately from the interface. On VMS, data buffering comes built in with the Interface Node.

The above diagram shows the buffering application sending data to only one PI Server node. As of PI API version 1.6, buffering supports collecting data from one interface and distributing the data to multiple PI Servers. On VMS Interface Nodes, buffering supports data sends to one PI Server only.

Some interfaces do not require buffering because the data source itself is buffered. For example, the Batch File Interface and the Event File Interface do not require buffering. Consult the interface-specific documentation to see whether your interfaces require buffering.

If the PI Server is not available for some reason (such as an upgrade on the Server), then the data is stored in a file buffer on the Interface Node. The size of the file buffer is configurable (up to nearly 2 GB on Windows and UNIX). When the PI Server becomes available again, the buffering application sends all the stored data from the buffer, in chronological order, back to the PI Server. If you then look in ProcessBook, for example, your data appears as a continuous flow of data, with no gaps.
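On Windows and UNIX Interface Nodes, bufserv is typically enabled through settings in the piclient.ini file on the Interface Node. The following is a minimal sketch; the server name and sizes are illustrative, and the exact parameters supported by your PI API version should be verified against the PI API Installation Instructions:

[APIBUFFER]
; 1 enables API buffering; 0 disables it
BUFFERING=1
; maximum size of the buffer file, in kilobytes
MAXFILESIZE=100000

[BUFFEREDSERVERLIST]
; PI Server(s) to which buffered data is sent (hypothetical host name)
BUFSERV1=apollo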

118 Chapter 5 - Managing Interfaces the PI Server. If you then look ProcessBook, for example, your data appears as a continuous flow of data, with no gaps About the PI API The PI Application Programming Interface (PI API) is a library of functions that allow you to read and write values from the PI Server, and also let you retrieve point configuration information. OSIsoft has used the API to create interfaces that run on a variety of platforms. The PI API also provides the ability to buffer data that is being sent to PI. This allows PI to be taken offline for software or hardware upgrades without compromising data collection. When PI becomes available again, the buffered data are then forwarded to PI. API Nodes are UNIX or Windows workstations that run programs such as interfaces that are based on the PI API. In practice, the term API Node is sometimes used as a synonym for Interface Node, because historically, most interfaces are API based. This document does not use the term API Node. You can call PI API from C, Visual Basic, or other languages. A complete list of supported platforms and functions is available in the PI API Manual About the PI SDK The PI Software Development Kit, (PI-SDK), is an ActiveX in-process server that provides COM access to OSI historians. The product provides an object-oriented approach to interacting with PI systems in contrast to the procedural methods used in the PI API. The PI-SDK can only be installed on Windows. Only interfaces that run on Windows can take advantage of the functionality provided by the PI-SDK. All interfaces written for UNIX or VMS must use the PI API exclusively for all communication with the PI Server. Some interfaces use the PI-SDK because certain functionality is not provided via the PI API. For example, the PI-SDK allows interfaces to create points, digital sets. Also, any interface that writes batch data to PI, such as the PI Batch Generator Interface (PIBaGen), must use the PI-SDK to write its data. Any data that is written to PI via the PI-SDK is not buffered via the bufserv service. For this reason all interfaces write time-series data to the PI Server via the PI API. Interfaces that connect to the PI Server with both the PI-SDK and the PI API must maintain two separate connections to the PI Server About UniInt-Based Interfaces There are hundreds of different PI interfaces and each interface is fully documented in its own dedicated manual. However, most interfaces are based on UniInt therefore share a common set of features. UniInt stands for Universal Interface. UniInt is not a separate product or file; it is an OSIsoftdeveloped template used by the OSI developers, and is integrated into many interfaces. The purpose of UniInt is to keep a consistent feature set and behavior across as many of our interfaces as possible. It also allows for the very rapid development of new interfaces. In any UniInt based interface, the interface uses some of the UniInt supplied configuration Page 96

5.3 Basic Interface Node Configuration

This section describes the basic steps for interface configuration. The steps are provided in a logical sequence for you to follow. However, there is nothing preventing you from doing many of the steps in a different order than specified below.

This discussion applies to VMS, UNIX, and Windows Interface Nodes. However, the majority of the details are provided for Windows and UNIX Interface Nodes. Differences between the platforms are pointed out as necessary. Also, for the most part, the configuration steps apply whether the interface is on the PI Server node or on a remote node. Distinctions are made as necessary.

These basic interface configuration steps can be summarized as follows.

- Install the PI-SDK and/or the PI API
- Connect to PI with apisnap.exe
- Connect with AboutPI-SDK.exe
- Configure PI Trusts for the Interface Node
- Install the Interface
- Set the Interface Node Time
- Connect with the Interface
- Configure Points for the Interface
- Configure Buffering
- Configure Interface for Automatic Startup
- Configure Site-Specific Startup Scripts
- Configure the PI3 PointSource Table
- Monitor Interface Performance
- Configure Auto Point Synchronization
- Configure the Interface Status Utility

Install the PI SDK and/or the PI API

On a Windows Interface Node you must install the PI-SDK. The PI-SDK setup kit installs the PI API. After installing the PI-SDK, you should consult the PI API Installation Instructions manual that is installed in the pihome\bin directory. Look for the file API_Install.doc. This manual has many helpful suggestions for post-installation configuration of the PI API and buffering.

On a UNIX Interface Node you must install the PI API. See the document entitled PI API Installation Instructions.

On a VMS Interface Node, the PI API comes installed with PINet.

Connect to PI with apisnap.exe

On Windows or UNIX Interface Nodes, one of the first things you should do is try to connect to the PI Server node with apisnap.exe. On VMS, the corresponding program is snap.exe; see the PI 2 documentation for details on the use of snap.exe.

The apisnap.exe program is installed with the PI API in the pihome\bin directory. On Windows, the pihome directory is determined by the pihome entry in the pipc.ini file, which is always located in the Windows directory. On UNIX, the pihome directory is determined by the $pihome environment variable.

The syntax for running the apisnap.exe program is

apisnap.exe HostName:5450
or
apisnap.exe IPAddress:5450

where HostName is the fully-qualified host name for the PI Server node (for example, apollo.osisoft.com). The :5450 after the host name or IP address is optional; it specifies the port for the connection.

If you can connect with apisnap.exe, you should be able to establish a PI API connection with your interface. If you cannot connect to the PI Server node with apisnap.exe, try the following troubleshooting steps.

- Make sure the computer running the PI Server is accessible. Ping by name in both directions.
- Try adding the Interface Node to the hosts file on the PI Server node; an example entry appears after this list. For PI API connections, the PI Server uses reverse name lookup on the IP address in the connection to look up the host name of the Interface Node. Most commonly, the lookup is done from a Domain Name Server (DNS). Alternatively, the Interface Node name can be resolved from the hosts file on the PI Server node.
- Check the firewall security on the PI Server node (see the section Managing Security).
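For example, a hosts file entry on the PI Server node for a hypothetical Interface Node named snape might look like the following. The IP address and names are illustrative; on Windows the hosts file is located under %SystemRoot%\system32\drivers\etc:

192.168.1.50    snape.osisoft.int    snape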

[-10401] No Write Access - Secure Object

Connect with apisnap.exe from the Interface Node to the PI Server Node. You will see messages similar to the following in the PI Server Message Log on the PI Server Node:

New Pinet 1 connection: snape
No Trust established for: apollo.osisoft.int snape using default login

and

Access Denied: [-10413] No trust relation for this request ID: 64. Address: Host: apollo.osisoft.int. Name: snape

The log messages tell us the following: the application name of the connection was snape, and the host name was a fully-qualified host name (apollo.osisoft.int); the IP address of the connection also appears in the message. The exact application name, IP address, and host name as displayed in the message log must be used when configuring PI Trusts. Note that the application name is snape, not apisnap.exe. Also note that using apollo in the PI Trust for the above connection would not work; the fully-qualified name apollo.osisoft.int must be used.

Connect with AboutPI-SDK.exe from the Interface Node. You will see a message similar to the following:

Trust request from: OSI\MATZEN apollo AboutPI SDK.exe failed: [-10413] No trust relation for this request (110 ms)

Unlike the connection from apisnap, the host name here is apollo, not apollo.osisoft.int. This example illustrates that configuring trusts is sometimes experimental in nature: you must examine the credentials as displayed in the PI Message Log.

Below are basic guidelines for creating simple trusts for interface connections. For a comprehensive understanding of trusts and security, read the chapter Managing Security in the PI Server System Management Guide. The discussion below assumes that you are familiar with the information in that chapter.

Interface Trusts for PI API Connections

When configuring a trust for a PI API connection, do not specify the Windows Domain or the Windows Username. These credentials are not passed in the PI API connection credentials, because PI API connections can come from UNIX and VMS nodes as well as Windows nodes; trusts with these fields configured will not work for PI API connections. If you use the PI System Management Tools to configure a trust for a PI API application, the trust configuration wizard will not let you specify these fields. That is, the wizard prevents you from over-specifying the PI API trust.

Although it is possible to use the application name in trusts for PI API applications, it is not part of the default recommendations. See the section Using the Application Name in Interface Trusts if you are interested in using the application name in your trusts.

We recommend one of two options for configuring trusts for a PI API application. Option 1 involves configuring the following two trusts (see the piconfig sketch below):

One trust based on IP address and netmask
One trust based on fully-qualified host name (e.g., apollo.osisoft.com)
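As an illustration, the following piconfig sketch creates the two Option 1 trusts. The trust names, IP address, netmask, and PI user shown here are hypothetical, and the pitrust attribute names (Trust, IPAddr, NetMask, NTMachineName, PIUser) are given from memory; verify them against the Managing Security chapter before use.

* Minimal piconfig sketch: create the two Option 1 trusts (hypothetical values).
* Attribute names are from memory; verify against the Managing Security chapter.
@table pitrust
@mode create
@istr Trust,IPAddr,NetMask,PIUser
APITrust_IP,192.168.1.50,255.255.255.255,pidemo
@istr Trust,NTMachineName,PIUser
APITrust_Host,apollo.osisoft.com,pidemo
@quit

For Option 2, you would instead define a single trust whose input structure includes Trust, IPAddr, NetMask, and NTMachineName, so that all three network credentials must match before the trust is granted.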

Option 2 provides slightly tighter security. Instead of configuring the above two trusts, set up the following trust:

One trust based on IP address, netmask, and fully-qualified host name

In Option 2, if the IP address or the fully-qualified host name changes, the trust will fail, whereas in Option 1 it will not.

Interface Trusts for PI API and PI-SDK Connections

Check your interface-specific documentation to see if your interface requires PI-SDK connections in addition to PI API connections. We recommend configuring trusts in one of two ways for applications that require both connection types.

Option 1 involves setting up the following three trusts:

One trust based on IP address and netmask. This trust works for both PI API and PI-SDK connections.
One trust based on the fully-qualified host name (e.g., apollo.osisoft.com). The PI API always uses the fully-qualified host name as opposed to an abbreviated form of the name.
One trust based on the simple host name (e.g., apollo). The PI-SDK typically uses the abbreviated simple host name. To verify the host name the PI-SDK will use, run pidiag -host on the Interface Node.

With these three trusts established, the interface first attempts to connect by IP address. If the IP address changes, a trust is granted based on the host name. Because PI-SDK and PI API host name processing may differ, you should set up two trusts, one for each form of the host name.

Option 2 provides slightly tighter security. Set up just two trusts, one for the PI API and one for the PI-SDK. In each trust, include host name, IP address, and netmask as qualifications for connection:

One trust based on IP address, netmask, and fully-qualified host name (e.g., apollo.osisoft.com). This trust works for the PI API connection.
One trust based on IP address, netmask, and simple host name (e.g., apollo). This trust works for the PI-SDK connection. To verify the host name the PI-SDK will use, run pidiag -host on the Interface Node.

Trusts for Interface Nodes with Multiple NICs

If your Interface Node has multiple NICs, you must set up a trust for each IP address, even if you see connections from only one of them. To see all IP addresses for a given computer, type ipconfig -all at a command prompt.

Using the Application Name in Interface Trusts

Although it is possible to use the application name in trusts for PI API applications, the additional security gain typically does not warrant the increase in complexity. Before adding complexity to your trusts, be aware that an interface is only as secure as the room the interface machine is in. Interfaces are configured to allow writing data to the PI Server, so physical access to the machine allows any user who can log on to that machine to run PI applications that could write data to the PI Server; a malevolent user could simply use an application name for which a trust is already configured.

Part of the complexity arises when buffering is configured on the Interface Node, because a PI Trust must be configured for the buffering application as well as for the interface. For Windows and UNIX, the buffering application (bufserv.exe) has an application name of APIBE. For VMS, the buffering application has an application name of EXCP plus a 4-digit process identifier. There is additional complexity for interfaces that connect with both the PI API and the PI-SDK, because the PI API and the PI-SDK pass different application names in their connection credentials. For example, the OPC interface uses opcie for the PI API and opcint.exe for the PI-SDK. The application name for PI API interface connections can be determined by examining connection messages in the PI Message Log on the server, whereas the PI-SDK uses the actual file name of the executable.

Install the Interface

The next step after configuring PI Trusts is to install the interface. The particulars of interface installation are highly platform dependent. For example, on Windows most interfaces can be installed from an installation kit, whereas on VMS most interfaces come preinstalled on the VMS PINet node. Consult the interface-specific documentation for details.

On Windows, the installation kit typically installs the PI Interface Configuration Utility (PI-ICU), a GUI that makes several interface configuration tasks easier. You can start the PI-ICU from Start > All Programs > PI System > PI-Interface Configuration Utility.

On Windows, interfaces are installed by default in a subdirectory of the pihome\interfaces directory. As mentioned above, the pihome directory is defined by the pihome entry in the pipc.ini configuration file. This pipc.ini file is an ASCII text file located in the Windows directory. A typical pipc.ini file contains the following lines:

[PIPC]
PIHOME=D:\Program Files\PIPC

The above lines define the D:\Program Files\PIPC directory as the root of the pihome directory tree. For example, an interface called MyInterface would be installed to D:\Program Files\PIPC\interfaces\MyInterface by default.

Set the Interface Node Time

In order for PI to accurately store and retrieve data, interface computers must have the correct settings for time, time zone, and Daylight Saving Time (DST). Most interfaces send correct timestamps even if the Interface Node is located in a different time zone than the PI Server. That is, for most interfaces you should set the clock to its correct local time, time zone, and DST setting. Also, you should configure the clock to automatically adjust for daylight saving changes.
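On a Windows Interface Node you can spot-check these settings from a command prompt. A minimal sketch, assuming the Windows Time service and its w32tm utility (available on Windows XP/Server 2003 and later) are present:

rem Display the current time zone setting of this node
w32tm /tz
rem Force an immediate synchronization with the configured time source
rem (requires the Windows Time service to be running)
w32tm /resync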

If the interface-specific documentation does not give specific instructions for setting time, time zone, and DST, these guidelines should typically result in correct timestamps being sent to PI. Exceptions are noted in the documentation for each interface. The most common exception is for Foxboro Interface Nodes, which must have their time zone settings set to GMT.

Caution: In all cases, the PI Server clock should be set to its correct local time, time zone, and DST setting.

Connect to the PI Server with the Interface

The next step is to connect to the PI Server with the interface. PI Points do not need to be configured before this is done. On Windows, it is best to try to connect by running the interface interactively instead of running it as a service.

Successful execution of the interface requires that a minimum set of command-line parameters be specified. The PI Interface Configuration Utility makes the task of configuring these command-line parameters considerably easier. Although the interface-specific documentation must be consulted, the following command-line parameters are fundamental to most UniInt-based interfaces.

/ps=x (required)
The /ps flag specifies the point source for the interface. x is not case sensitive and can be any single character. For example, /ps=P and /ps=p are equivalent. The point source that is assigned with the /ps flag corresponds to the PointSource attribute of individual PI Points. The interface will attempt to load only those PI Points with the appropriate point source. Consult the PI3 PointSource table to make sure you are not using a PointSource that is already configured for another interface.

/id=x (sometimes required)
The /id flag specifies the interface identifier, for example, /id=1. The interface identifier is a string no longer than 9 characters in length. UniInt concatenates this string to the header that is used to identify error messages as belonging to a particular interface. Many interfaces also use the /id flag to identify a particular interface copy number that corresponds to an integer value assigned to one of the Location code point attributes, most frequently Location1. For these interfaces, use only numeric characters in the identifier.

/host=host:port (recommended)
The /host flag specifies the PI home node. host is the IP address or the domain name of the PI Server Node. port is the port number for TCP/IP communication, which should always be specified as 5450 for communication to a PI 3 Server.

/f=SS or /f=SS,ss or /f=HH:MM:SS or /f=HH:MM:SS,hh:mm:ss (required for scan-based interfaces)
Each occurrence of the /f flag on the command line specifies a scan class for the interface. A particular PI Point is associated with a scan class via the Location4 PI Point attribute. The /f flag defines the time period between scans in terms of hours (HH), minutes (MM), and seconds (SS). The scans can be scheduled to occur at discrete moments in time with an optional time offset specified in terms of hours (hh), minutes (mm), and seconds (ss). If HH and MM are omitted, the time period is assumed to be in seconds.

/stopstat (recommended)
If the /stopstat flag is present on the startup command line, the digital state Intf Shut will be written to each PI Point when the interface is stopped.

After starting the interface, do the following:

Examine the PI Message Log to make sure that the interface connects with a successful trust logon.

Examine the message log on the Interface Node for error messages. On Windows, the message log is pihome\dat\pipc.log. On UNIX, the message log is $pihome/dat/pimesslogfile. On VMS, the message log is PIsysmgr:pimesslog.txt.

Configure Points for the Interface

Configure PI Points for the interface. Although you must consult the interface-specific documentation to configure your PI Points, there are a few attributes that are configured in the same way for most UniInt-based interfaces.

PointSource
The PointSource is a single- or multiple-character sequence that identifies the PI Point as a point that belongs to a particular interface. For example, you may choose the letter R to identify points that belong to the Random interface. Note that multi-character PointSources are supported only with PI API 1.6 and greater. The PointSource attribute must correspond to the /ps flag in the startup command line of the interface. Consult the PI3 PointSource table to make sure you are not using a PointSource that is already configured for another interface.

Location1
The Location1 attribute is used by many interfaces to identify an interface number. When used in this fashion, the Location1 attribute must correspond to the /id flag on the startup command line of the interface.

Location4
For interfaces that support scan-based collection of data, Location4 defines the scan class for the PI Point. For example, a PI Point with Location4=1 will be scanned according to the first occurrence of the /f flag in the startup command file; a PI Point with Location4=2 will be scanned according to the second occurrence of the /f flag, and so on.

Scan
If the Scan attribute is set to OFF for a UniInt-based interface, the point will be removed from the interface. This can be useful when troubleshooting the interface.

Shutdown
When PI is restarted, the default behavior of the PI Shutdown Subsystem is to write Shutdown events to all PI Points that have their Shutdown attribute set to 1 (true). Typically, it is undesirable for the PI Server to write shutdown events for PI Points that belong to a buffered interface on a node that is remote to the PI Server Node; the Shutdown attribute should be set to 0 for PI Points that belong to these interfaces. These remote, buffered interfaces should write their own shutdown events when the interfaces are shut down. For UniInt-based interfaces, this is typically configured with the /stopstat command-line parameter.

After configuring a few PI Points, do the following:

Restart the interface and examine the log file to make sure that the points are loaded by the interface. For UniInt-based interfaces, you will see a message similar to the following.

07-Nov-05 23:10:44 Total Number of points matching pointsource 'R' is 5

Look for error messages in the interface log. If PI Trusts are not configured correctly for the interface, you will see the following error when the interface tries to write data to PI.

[-10401] No Write Access - Secure Object

Use apisnap.exe to verify that the interface is writing data to your PI Points.

Configure Buffering

Once you have demonstrated that you can collect data with your interface, it is time to configure buffering. Some interfaces do not require buffering or work better without it; consult the interface-specific documentation to determine this. On VMS, there is no need to configure buffering because it comes as part of PINet. The information below applies only to UNIX and Windows Interface Nodes.

For more details, see the PI API Installation Instructions. On Windows, the PI API Installation Instructions manual is installed by the PI-SDK setup kit in the pihome\bin directory; the file name is API_Install.doc.

If buffering is enabled on UNIX and Windows, the pihome\dat\piclient.ini file will contain the following two lines.

[APIBUFFER]
BUFFERING=1

The piclient.ini file can be edited with a text editor; setting BUFFERING=0 turns buffering off. Any changes made to pihome\dat\piclient.ini take effect only when bufserv is restarted.

The default and maximum size limit of the buffer files is 2,000,000 KB (~2 GB). If there is not 2 GB of disk space available, the buffer will grow as large as possible before failing. The maximum size limit can be set to a value less than 2,000,000 KB with the maxfilesize parameter in the piclient.ini file. For a full list of configuration options, refer to the PI API Installation Instructions.

The following applies to buffering for PI API version 1.6 and greater on Windows and UNIX. In PI API version 1.6 and greater, data sent via the PI API can be buffered to multiple PI Server Nodes. The PI Server Nodes for which buffering is enabled are configured in the [BUFFEREDSERVERLIST] section of the pihome\dat\piclient.ini file. In addition to the parent process, one buffer server process is spawned for each buffered server specified in the list. See the PI API Installation Instructions for more details.

The connection name for the interface (usually specified in the /host parameter) must exactly match the buffer server name configured in the pihome\dat\piclient.ini file. For example, in order for buffering to work for a UniInt-based interface, the IP address or host name specified by the /host command-line parameter must be identical to a PI Server listed in one of the entries in the [BUFFEREDSERVERLIST] section of the pihome\dat\piclient.ini file. The IP address and host name are not interchangeable in this case: if the IP address is used in the interface and the host name in the pihome\dat\piclient.ini file, then the interface connection via the IP address would be unbuffered.

Bufserv now supports event distribution to multiple PI Servers. This is sometimes referred to as n-way buffering or buffering to replicated servers. Event distribution to multiple PI Servers consists of taking the data sent to one buffered server and distributing the same events to multiple PI Servers. The list of PI Servers to receive distributed events is configured in the [REPLICATEDSERVERLIST] section of the piclient.ini file. The distributed event is not manipulated in any way: neither the point ID nor the timestamp is altered. Therefore, the replicated PI Servers must all have synchronized point databases. PI Servers in the [REPLICATEDSERVERLIST] section must also be in the [BUFFEREDSERVERLIST] section.

The following applies to versions of the PI API prior to version 1.6. Prior to PI API version 1.6, buffering could be enabled only for the default PI Server Node. The default PI Server Node is specified in pihome\dat\pilogin.ini on Windows and in pihome\dat\piclient.ini on UNIX. The connection name for the interface (usually specified in the /host parameter) must exactly match the name of the default PI Server Node configured in the pihome\dat\piclient.ini file.
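To illustrate the PI API 1.6+ configuration discussed above, here is a hypothetical piclient.ini fragment for a node that buffers to two servers and replicates events to both. The server names and the maxfilesize value are examples only, and the entry key names (BUFSERV1, REPSERV1, and so on) are given as recalled from the PI API Installation Instructions; verify them there before use.

[APIBUFFER]
BUFFERING=1
MAXFILESIZE=500000

[BUFFEREDSERVERLIST]
BUFSERV1=apollo.osisoft.com
BUFSERV2=zeus.osisoft.com

[REPLICATEDSERVERLIST]
REPSERV1=apollo.osisoft.com
REPSERV2=zeus.osisoft.com

Remember that both replicated servers must also appear in [BUFFEREDSERVERLIST], and that the /host parameter of the interface must match one of these entries exactly.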

The following applies to Windows only, but to all versions of the PI API. Buffering should be installed as an automatic service; this can be accomplished through the Tools > API Buffering... menu in the PI Interface Configuration Utility. You can also enable and disable buffering from the same menu. The PI Interface Configuration Utility makes all of the necessary changes to the piclient.ini file.

The interface service must be configured to depend on bufserv to ensure that buffering starts first. Otherwise, the interface might establish an unbuffered connection to PI before bufserv starts.

Monitoring Buffering

To ensure that buffering is functioning properly, check the interface log each day. You can also use the buffering utility, bufutil.exe, to check the buffering service. For example, bufutil.exe can be used to list the number of events currently in the buffer. When the node is connected to the PI Server, the number of unprocessed events should generally be zero, but it may show a small number at any given moment because events are streaming through the buffers. For version 1.6 and greater of the PI API, when bufutil.exe is started, the currently selected buffered server is listed; there is a menu option to change the selected server.

Configure the Interface for Automatic Startup

Once you have demonstrated that you can collect data while running the interface interactively, configure the interface for automatic startup. On Windows, this is done by configuring automatic services. On UNIX, this can be handled by adding your interface to the sitestart.sh script and configuring pistart.sh for automatic startup as described in Chapter 1, Starting and Stopping PI. Site-specific startup scripts are discussed later in this chapter.

It is recommended to install services with the PI Interface Configuration Utility from the Services tab. If you plan to enable buffering, you should install the interface service to depend upon the following services:

bufserv
tcpip

You can start and stop your interface service either through the PI Interface Configuration Utility or through the command line. If the interface service is called MyInterfaceService, you could run the following commands from a command prompt.

To start the service, type:
net start MyInterfaceService

To stop the service, type:
net stop MyInterfaceService

When an interface starts as a service, the service reads the corresponding MyInterfaceService.bat file to determine its command-line arguments. ServiceIDs and the manner in which interface services get their command-line arguments are discussed in detail in the UniInt End User Manual. These details are managed by the PI Interface Configuration Utility.
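Pulling together the command-line parameters described earlier in this section, a startup file for a hypothetical interface service named MyInterfaceService might look like the following sketch. The executable name, point source, ID, host, and scan classes are examples only, not a prescription for any particular interface.

rem MyInterfaceService.bat -- hypothetical UniInt interface startup file
rem /ps=R       point source R (must match the PointSource attribute of the points)
rem /id=1       interface copy number 1 (matches Location1 for many interfaces)
rem /host       fully-qualified PI Server Node name and port 5450
rem /f          two scan classes: every 5 seconds, and every minute on the minute
rem /stopstat   write Intf Shut to the points when the interface stops
MyInterface.exe /ps=R /id=1 /host=apollo.osisoft.com:5450 /f=00:00:05 /f=00:01:00,00:00:00 /stopstat

If you are not using the PI Interface Configuration Utility, the bufserv dependency can be added to an existing service with the Windows sc utility, for example: sc config MyInterfaceService depend= bufserv/tcpip (note the required space after depend=).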

Configure Site-Specific Startup Scripts

The following discussion is limited to UNIX and Windows nodes. On each node, there is at least one shutdown script that calls a site-specific shutdown script. Likewise, there is at least one startup script that calls a site-specific startup script. You should modify only the site-specific scripts, because the main startup and shutdown scripts are overwritten by the installation script when PI is upgraded. In all cases, the site-specific startup scripts must be edited manually with a text editor.

PI Server Node on Windows

If for some reason you must run an interface on the PI Server, you can configure the interface to start and stop with the PI Server by adding it to the site-specific startup and shutdown scripts. For interfaces that connect only via the PI API, the scripts provide a convenient means of shutting down all programs that use the PI API. For an interface that depends on the PI-SDK, however, it is even more important to add the interface to the site-specific stop and start scripts, because PI-SDK programs depend upon pinetmgr.exe, and pinetmgr.exe cannot be shut down until all services that depend upon it are shut down. The most common PI-SDK interface configured to run on the PI Server Node is the PI Batch Generator Interface. By default, the PI Batch Generator Interface is installed on the PI Server Node, but it is configured as a manual service and is not stopped and started by the site-specific scripts.

The startup and shutdown scripts, their corresponding site-specific scripts, and their locations are as follows.

Startup script    Shutdown script   Site-specific startup    Site-specific stop    Script location
pistart.bat       None              pisitestart.bat          None                  \PI\adm
pisrvstart.bat    pisrvstop.bat     pisrvsitestart.bat       pisrvsitestop.bat     \PI\adm

On Windows, the difference between pistart.bat and pisrvstart.bat is that the former is used to start PI interactively and the latter is used to start PI as a service.

To start your interface interactively, add a line similar to the following to pisitestart.bat:

program files\pipc\interfaces\myinterface\myinterface.bat

To start your interface as a service, add a line similar to the following to pisrvsitestart.bat:

net start myinterface

PI Interface Node on Windows

The site-specific startup scripts are named differently on an Interface Node than on the PI Server Node. The scripts provide a convenient means of shutting down all programs that use the PI API and PI-SDK to communicate with the PI Server Node.

Startup script    Shutdown script   Site-specific startup    Site-specific stop    Script location
pistart.bat       pistop.bat        sitestrt.bat             sitestop.bat          pihome\bin\

PI Server Node on UNIX

The following are the startup and shutdown scripts on a UNIX PI Server Node.

Startup script    Shutdown script   Site-specific startup    Site-specific stop    Script location
pistart.sh        pistop.sh         pisitestart.sh           pisitestop.sh         $pihome/adm/

Interface Nodes on UNIX

The following are the startup and shutdown scripts on a UNIX Interface Node.

Startup script    Shutdown script   Site-specific startup    Site-specific stop    Script location
pistart.sh        pistop.sh         sitestart.sh             sitestop.sh           $pihome/bin/

Configure the PointSource Table

When you create a PI Point with a given PointSource, the PointSource is automatically added to the PIPTSRC table. The PIPTSRC table tells you how many PI Points there are for each PointSource. As mentioned above, you should consult the PIPTSRC table before configuring points for a new interface to make sure that you are not using a PointSource that is already being used by another interface. Automatically generated entries in the PIPTSRC table have a blank description; you should edit the PIPTSRC table with the piconfig utility to add a description.
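For example, a minimal piconfig sketch that adds a description to one PIPTSRC entry might look like the following. The attribute names shown (ptsrc, description) are assumptions; confirm the actual column names of the PIPTSRC table against the PI Server Reference Guide before use.

* Minimal piconfig sketch: describe a PointSource entry (hypothetical values).
* Attribute names are assumptions; verify against the PI Server Reference Guide.
@table piptsrc
@mode edit
@istr ptsrc,description
R,Random simulator interface
@quit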

Monitor Interface Performance

Most UniInt-based interfaces have built-in ways of monitoring interface performance. Most of these require PI Points to be configured; however, performance summary log messages are written to the interface log file by default.

For interfaces that run on Windows, all of the performance statistics for an interface can be collected via Windows Performance Counters. That is, there are Windows Performance Counters that correspond to the performance summary log messages, I/O Rate Points, and Performance Points for scan classes. There are also additional Windows Performance Counters that extend the number of interface statistics beyond what can be gathered on non-Windows platforms.

Windows Performance Counters

Many UniInt-based interfaces now support Windows Performance Counters. Without any PI Point configuration, the counters for the interface can be viewed from the Windows Control Panel under Administrative Tools > Performance. The one restriction is that the interface must be running as a service in order for the counters to be visible. The following diagram shows how the counters may appear for the PI Random Interface on the PI Server Node. [Diagram not reproduced.]

In order to save Windows Performance Counter data to PI, you must configure the PI Performance Monitor Interface, which is available in basic and full versions. The basic version is installed on the PI Server Node by default. You can collect performance counter data with the interface from local and remote Interface Nodes. For more information, see the PI Performance Monitor Interface documentation.

The following performance counters are available for most UniInt-based interfaces that run as a service.

Interface up-time (seconds)
The number of seconds since the interface started. This counter is incremented once a second.

IO Rate (events/second)
The number of events per second received by the interface. If this counter is viewed from the NT performance monitor, one should increase the update time of the performance monitor to the minimum scan period for the interface. For example, say that the minimum scanning period for the interface is 5 seconds (/f=5); one can then set the update time of the performance monitor to 5 seconds.
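If you prefer the command line to the Performance snap-in, the typeperf utility (included with Windows XP/Server 2003 and later) can enumerate and sample these counters. The counter path in the second command is purely hypothetical; use the first command to discover the actual names registered by your interface.

rem List all counter paths registered on this node, then search for the interface
typeperf -q > counters.txt
findstr /i "random" counters.txt

rem Sample a counter every 5 seconds (hypothetical counter path; substitute
rem whatever the first command actually reports for your interface)
typeperf "\PI Random Interface\IO Rate (events/second)" -si 5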


More information

NetIQ AppManager for NetBackup UNIX

NetIQ AppManager for NetBackup UNIX NetIQ AppManager for NetBackup UNIX Management Guide January 2008 Legal Notice NetIQ AppManager is covered by United States Patent No(s): 05829001, 05986653, 05999178, 06078324, 06397359, 06408335. THIS

More information

Workflow Templates Library

Workflow Templates Library Workflow s Library Table of Contents Intro... 2 Active Directory... 3 Application... 5 Cisco... 7 Database... 8 Excel Automation... 9 Files and Folders... 10 FTP Tasks... 13 Incident Management... 14 Security

More information

System Administration of Windchill 10.2

System Administration of Windchill 10.2 System Administration of Windchill 10.2 Overview Course Code Course Length TRN-4340-T 3 Days In this course, you will gain an understanding of how to perform routine Windchill system administration tasks,

More information

McAfee SMC Installation Guide 5.7. Security Management Center

McAfee SMC Installation Guide 5.7. Security Management Center McAfee SMC Installation Guide 5.7 Security Management Center Legal Information The use of the products described in these materials is subject to the then current end-user license agreement, which can

More information

WhatsUp Gold v16.1 Database Migration and Management Guide Learn how to migrate a WhatsUp Gold database from Microsoft SQL Server 2008 R2 Express

WhatsUp Gold v16.1 Database Migration and Management Guide Learn how to migrate a WhatsUp Gold database from Microsoft SQL Server 2008 R2 Express WhatsUp Gold v16.1 Database Migration and Management Guide Learn how to migrate a WhatsUp Gold database from Microsoft SQL Server 2008 R2 Express Edition to Microsoft SQL Server 2005, 2008, or 2008 R2

More information

How To Use A Microsoft Networker Module For Windows 8.2.2 (Windows) And Windows 8 (Windows 8) (Windows 7) (For Windows) (Powerbook) (Msa) (Program) (Network

How To Use A Microsoft Networker Module For Windows 8.2.2 (Windows) And Windows 8 (Windows 8) (Windows 7) (For Windows) (Powerbook) (Msa) (Program) (Network EMC NetWorker Module for Microsoft Applications Release 2.3 Application Guide P/N 300-011-105 REV A03 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright

More information

Migrating to vcloud Automation Center 6.1

Migrating to vcloud Automation Center 6.1 Migrating to vcloud Automation Center 6.1 vcloud Automation Center 6.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a

More information

Dell Active Administrator 8.0

Dell Active Administrator 8.0 What s new in Dell Active Administrator 8.0 January 2016 Dell Active Administrator 8.0 is the upcoming release of Dell Software's complete solution for managing Microsoft Active Directory security auditing,

More information